Large Language Models: The New Computing Paradigm
The nature of computation is fundamentally evolving. Today, we're at the cusp of a rare and significant shift, reminiscent of the computing revolution of the 1980s. Back then, we saw the rise of central processing units (CPUs) that executed instructions over bytes. Now, we're transitioning into a new paradigm driven by Large Language Models (LLMs).
Imagine an LLM as a modern equivalent of the CPU—but instead of processing instructions through bytes, these models operate using tokens, small strings of data. Unlike traditional RAM that handles bytes, we now deal with a context window made of tokens, fundamentally changing how memory and data processing work.
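The bytes-versus-tokens contrast can be made concrete with a toy sketch. The vocabulary and tokenizer below are invented for illustration; real LLMs use learned subword vocabularies (such as byte-pair encodings) with on the order of 100,000 entries.

```python
text = "new computing"

# Classical view: memory as a sequence of bytes.
as_bytes = list(text.encode("utf-8"))

# LLM view: the same text as a sequence of tokens -- small strings
# mapped to integer IDs from a fixed vocabulary. This toy vocabulary
# is invented for the example.
toy_vocab = {"new": 0, " comput": 1, "ing": 2}

def toy_tokenize(s, vocab):
    """Greedily match the longest vocabulary entry at each position."""
    tokens = []
    while s:
        for piece in sorted(vocab, key=len, reverse=True):
            if s.startswith(piece):
                tokens.append(vocab[piece])
                s = s[len(piece):]
                break
        else:
            raise ValueError(f"no vocabulary entry matches {s!r}")
    return tokens

as_tokens = toy_tokenize(text, toy_vocab)
print(len(as_bytes), "bytes ->", len(as_tokens), "tokens")  # 13 bytes -> 3 tokens
```

The same thirteen bytes become just three tokens, and it is these token sequences, not raw bytes, that fill an LLM's context window.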
Moreover, just as traditional computing has relied on disk storage and other peripheral hardware, this new computing model has complementary storage and retrieval systems, enabling more dynamic and sophisticated information handling.
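The "disk" half of the analogy can be sketched the same way: a document store plus a retrieval step that pages relevant text into the limited context window. The naive word-overlap scoring below is a stand-in for the embedding-based search used in practice; the documents and function names are invented for the example.

```python
# A tiny document store standing in for external storage.
documents = [
    "CPUs execute instructions over bytes held in RAM.",
    "LLMs process tokens held in a context window.",
    "Retrieval systems fetch relevant documents on demand.",
]

def retrieve(query, docs, k=1):
    """Return the k documents sharing the most words with the query.

    Real systems score with vector embeddings; word overlap is used
    here only to keep the sketch self-contained.
    """
    q_words = set(query.lower().split())
    scored = sorted(
        docs,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

# Only the retrieved snippet is loaded into the context window.
context = retrieve("how do tokens fit in the context window?", documents)
print(context[0])
```

The point of the design is the division of labor: the store can hold far more than the context window ever could, and retrieval decides what gets paged in for each query.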
The term "LLM OS," or Large Language Model Operating System, was coined by Andrej Karpathy to describe this emerging paradigm, where language models function as orchestrators akin to operating systems managing traditional computing resources.
Right now, we're collectively discovering how best to program LLMs, exploring their strengths, recognizing their limitations, and learning innovative ways to integrate them into practical products and services. The next wave of technology will belong to those who master this paradigm, extracting maximum efficiency and functionality from it.
Welcome to the dawn of LLMs—computing reimagined.