
The LLM Hype Bubble, and What Comes Next

Nobody disputes the usefulness of large language models. LLMs can do "intellectual grunt work", such as suggesting a well-placed word for an essay or filling in the background of an artwork. This is a huge value add and a genuine breakthrough because it can cut the effort needed to produce a creative work in half, even if the model sometimes makes mistakes (hallucinations).

The problem is that LLMs are generative, not creative. They cannot think originally; they rely on situations they have already seen to navigate the world, and they struggle when they encounter a new or unexpected scenario. This is because they possess no logical or intuitive understanding of it, whether they need to complete the structure of a paper or determine how to architect a complex service that keeps track of weeds to be zapped with lasers. And because the world is infinitely complex, the heart of any creative process is responding to new situations with novel and original thinking.

If I read 100 papers on the central message of Shakespeare's Macbeth, I can probably plagiarize a good paper on the subject. But if I try to write an analysis of why Shakespeare used iambic pentameter and how it relates to the development of British poetry in the late 16th century by reading those same 100 papers, I will flounder.

Eventually, all the "intellectual grunt work" use cases will run out, which is when I believe the LLM hype bubble will burst. But will the computing field have peaked after this event? If not, what will become the next big computer breakthrough?

To answer that question, we have to go back to the fundamentals of what a computer is. Computers are special in that they are machines developed not to help the human body solve physical problems, but to help the human brain solve cognitive ones. From rudimentary slide rules to the first mainframes to personal computers, the computing field has expanded by iteratively automating parts of the human thought process. Therefore, the next big computing advancement lies in a part of this thought process that hasn't been automated yet: getting computers to intuitively understand the physical and logical world around them.

This is something our brains do continuously throughout the day, using a biological neural network to "understand" certain electrical signals. For example, signals fired by the optic nerves are interpreted as a "picture". If you cover your eyes for months or wear mirrors that flip your vision upside down, your brain forgets how to see or learns to see upside down.

However, even humans don't get turnkey awareness on day 1. Human brains are born with the necessary "hardware", sensory capabilities, and instincts, but they still have to grow a "logic and intuition" engine to process those inputs into a practical and emotional understanding of the world around them. Babies take time to develop vision, attention span, and logic. Infants need tens of examples to learn what a "dog" is and why it is different from a "cat". A toddler needs to be taught to process their rage without throwing a temper tantrum. This process does not fully finish until roughly age 25.

I am excited about systems that help computers understand logic, such as program synthesis and verification tools, solvers, and expert systems, as well as systems that help them understand real-world phenomena, such as computer vision algorithms and signal processors. This list is not all-encompassing, but advances in these kinds of systems will push the computing field toward finally creating machines with that logical and intuitive understanding of their surroundings.
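As a rough illustration of the "understand logic" side, here is a minimal sketch of the kind of inference a classic expert system performs: it chains hand-written if-then rules over known facts until nothing new can be derived. The rules and facts below are hypothetical toy examples, and real systems (Prolog engines, SMT solvers, modern verifiers) are far more capable, but the core idea is the same.

```python
# Toy forward-chaining inference: apply (premises -> conclusion) rules
# to a set of facts until no new facts can be derived.
def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and all(p in facts for p in premises):
                facts.add(conclusion)  # rule fires, derive a new fact
                changed = True
    return facts

if __name__ == "__main__":
    # Hypothetical rules for illustration only.
    rules = [
        ({"has_fur", "barks"}, "is_dog"),
        ({"has_fur", "meows"}, "is_cat"),
        ({"is_dog"}, "is_mammal"),
    ]
    print(forward_chain({"has_fur", "barks"}, rules))
    # -> {'has_fur', 'barks', 'is_dog', 'is_mammal'}
```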

Posted on Sat, 29 Jun 2024.


© 2024 Daniel Rashevsky. All rights reserved.