The Future of Computing
How AI has the potential to change how we interact with technology forever

This will be the first article in a two-part series about the future of computing and computers. I’ll start with what I think was an under-discussed speech from one of OpenAI’s founders. In the following article, I’ll discuss the inherently disruptive nature of this idea, whether it’s possible to bring to market, and how that may happen.
Apologies
Before we begin, apologies for the late article. Our account was falsely flagged for suspicious activity late last week, so we were unable to post on Friday.
Now back to our regularly scheduled programming.
Andrej Karpathy’s Speech
For those who don’t know, Andrej Karpathy was on the founding team of OpenAI. He then left for a few years to lead the autonomous driving division at Tesla, returned to OpenAI in 2022, and has since departed again to start his own AI education company, Eureka Labs. It’s sufficient to say that he’s been one of the most important people in the development of modern AI, so what he says carries some weight. That said, the smartest people usually have the most ideas, some groundbreaking and some completely off-target. As the saying goes, ‘there’s a thin line between genius and madness,’ so we’ll see where this one falls. Now for the speech.
Andrej gave a speech at a UC Berkeley hackathon that spanned a few subjects, but the most interesting were his ideas on the future of computing.
Modern Computing
Before we get into the future of computing, we need to understand the state of computing we’re currently in. Andrej mentions that a large language model (LLM), think ChatGPT, will essentially replace our normal computer architecture, so what does that look like?
In all computers, we store information in bits. Each bit is either a 0 or a 1, true or false, and we scale everything up from there (I know, very cool). This is where binary comes from. This information is held in memory, more specifically random access memory (RAM for short). This is often an important spec to look at when buying a phone or computer: 8GB, 16GB, etc. of RAM. More memory means the machine can hold more information at any given time and therefore multi-task better. And lastly we have the CPU, the multi-tasker: it reads from and writes to the general information (bits) held in memory (RAM) to operate our computers.
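To make the bit idea concrete, here’s a tiny Python sketch (purely illustrative) that shows how even ordinary text reduces to 0s and 1s under the hood:

```python
# Every character is stored as a byte, and every byte is just 8 bits.
text = "Hi"
bits = " ".join(f"{byte:08b}" for byte in text.encode("ascii"))
print(bits)  # → 01001000 01101001
```

Two characters, sixteen bits; everything a computer holds, from photos to spreadsheets, scales up from exactly this.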
So how does this work with AI (or LLMs to be more specific)?
The Future of Computing
In his talk, Andrej laid out what he sees as a potential new computing paradigm. He calls it the Large Language Model Operating System (LLM OS). So how does this work?
In the world of LLMs, we communicate through tokens. These are chunks of text whose vocabulary is fixed when the model is built. Essentially, it’s a black-box representation of language, but it’s in these increments that all LLMs read in and write out text. Analogous to a bit. Then every LLM has a context window, the total number of tokens it can hold in memory and work with at once. Analogous to memory. And lastly, the LLM itself is the orchestrator of this information: the multi-tasker that uses the tokens provided through the context window to read, write back in tokens, and cause actions to happen. Analogous to the CPU.
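The analogy can be sketched in a few lines of Python. This is a toy, not a real tokenizer: the vocabulary, chunk boundaries, and `CONTEXT_WINDOW` size here are all made up for illustration.

```python
# Toy "vocabulary" mapping text chunks to integer token IDs,
# standing in for the vocabulary a real LLM learns when it's built.
vocab = {"The ": 0, "future ": 1, "of ": 2, "comput": 3, "ing": 4}

def toy_encode(text):
    """Greedily split text into known chunks and return their token IDs."""
    tokens = []
    while text:
        for chunk, tid in vocab.items():
            if text.startswith(chunk):
                tokens.append(tid)
                text = text[len(chunk):]
                break
        else:
            raise ValueError(f"no chunk matches {text!r}")
    return tokens

CONTEXT_WINDOW = 4  # a real model can only "see" a fixed number of tokens

ids = toy_encode("The future of computing")
print(ids)                    # [0, 1, 2, 3, 4]
print(ids[-CONTEXT_WINDOW:])  # what fits in the window: [1, 2, 3, 4]
```

Tokens play the role of bits, the context window plays the role of RAM, and the model that reads those IDs and writes new ones plays the role of the CPU.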
This is what Andrej sees as the future.
So how realistic is this?
The real answer is a bit of a mixed bag. Completely changing the architecture of the computers we use in our everyday lives is a massive paradigm shift. One that, if even possible, would take decades of innovation and execution to accomplish.
BUT there’s a less ambitious, similar idea already happening that seems quite realistic. And if proven useful and plausible, it could open the way for the more ambitious goal. This is the idea of moving away from the current smartphone paradigm toward one closer to a true virtual assistant. Which, if we think about it, isn’t that far off.
We initially got modern virtual assistants like Siri and Alexa with the idea that they could help us perform all sorts of tasks. But for quite some time, they’ve been stuck on some pretty basic things like taking notes, sending texts, and requesting music. A real assistant can do much more: given fairly high-level requirements, it can complete complex tasks. Maybe scheduling a dinner reservation at a busy place with time and party-size constraints.
So what if this new way of thinking about AI can unlock that?
While this doesn’t require changing the physical hardware of a computer, it does require bringing an LLM to a lower level of the user experience. Instead of simply being an app on your phone or a chatbot in a web browser, it’s the centerpiece. Essentially, the next generation of voice assistants like Siri, Alexa, and Google Assistant, but with much more intelligence and control over what can be done.
This is already starting to come to fruition with early companies like Rabbit and their r1 device. Without any apps or a traditional smartphone experience, you’re able to order a ride through Uber, order food through DoorDash, or request music from Spotify. While the product release was mostly considered a disappointment because a lot more was promised than delivered, it does begin to pave the way for this idea of a true virtual assistant.
Thanks For Reading
As always, thanks for reading! 😊 In the following edition of Subconscious, I’ll be talking about how realistic these new virtual assistants are through the lens of disruptive technologies, inspired by The Innovator’s Dilemma.