
My Take on AGI

Before I say another word, let me admit one thing.

This is a dead horse. But it’s a dead horse worth beating, because our long-term future depends on collective agreement about how to deal with AGI.

What’s going on right now:

Do not interpret my speculation as fact, but if AGI is not already here, it’s just around the corner.

We have seen posts on X from OpenAI’s technical staff, and it’s obvious that there’s something in the water over there.

Now, as an engineer who gets excited about things, I know how excited engineers talk.

We already know that OpenAI has adopted a “move fast and break things” mentality. That mentality matters because it shapes the working patterns of its individual contributors.

At companies that move fast and break things, developers still spend most of their time contributing well-understood code in order to complete features. Such companies usually cut down on bureaucracy and meetings, but the bulk of the work is still a slog through highly predictable code.

This steady stream of progress usually originates from a single moment: a developer pulls on a loose thread and discovers that a lofty end goal is actually feasible. A basic MVP starts working, in the form of a janky script somewhere that prints the desired result to a sandboxed command line. And then the struggle begins: coding that exact goal into usable existence.

The current attitude of OpenAI’s staff suggests that they had one of these thread-pulling moments (the rumors about Q*, “pushing back the veil of ignorance,” etc.). However, there is clearly still a lot of work to be done before that moment can be shared or implemented, which leaves us in the exact waiting game we are playing right now.

AGI isn’t scary

The reality is that a few really big barriers stand in the way of AGI. At the end of the day, AGI is still a technology, and before any future where “AGI Runs Everything,” that technology will need to overcome some key challenges.

AGI is extremely expensive to build and run

There’s a reason why Sam Altman is reportedly seeking a $7 trillion GPU investment: the earliest iterations of AGI will need to harness compute and energy on a massive scale.

Understand that today’s ChatGPT is a relatively narrow application (when compared to AGI) with hundreds of millions of human users. AGI will be a vastly more complex, intensive, and broad application that will eventually serve tens of billions of human AND machine users.

The current pattern of AI development clearly shows that larger models require more compute, so getting AGI off the ground (before AGI-Turbo releases, if you will) will require vast resources. This is an important physical constraint that slows the progress of a potentially rogue superintelligence.
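
To make “more compute” concrete, here is a back-of-envelope sketch in Python using the widely cited rule of thumb that training a dense transformer costs roughly 6 × parameters × training tokens in FLOPs. The model sizes, GPU throughput, and utilization below are my own illustrative assumptions, not figures from OpenAI or anyone else:

```python
# Back-of-envelope training cost using the common ~6 * N * D FLOPs
# rule of thumb for dense transformers. Every input below is an
# illustrative assumption, not a published figure.

def training_flops(params: float, tokens: float) -> float:
    """Approximate total FLOPs to train a dense model of `params` weights."""
    return 6 * params * tokens

def gpu_days(total_flops: float,
             peak_flops_per_gpu: float = 1e15,    # assumed ~1 PFLOP/s accelerator
             utilization: float = 0.4) -> float:  # assumed sustained utilization
    """Single-GPU days needed at the assumed sustained throughput."""
    sustained = peak_flops_per_gpu * utilization
    return total_flops / sustained / 86_400  # seconds per day

if __name__ == "__main__":
    # A 70B-parameter model on 2T tokens vs. a hypothetical 1T-parameter
    # model on 20T tokens: well over a hundred times apart in cost.
    for params, tokens in [(7e10, 2e12), (1e12, 2e13)]:
        flops = training_flops(params, tokens)
        print(f"{params:.0e} params x {tokens:.0e} tokens -> "
              f"{flops:.1e} FLOPs, ~{gpu_days(flops):,.0f} GPU-days")
```

Under these assumptions, the smaller run already costs on the order of tens of thousands of GPU-days, and the hypothetical trillion-parameter run costs millions; whatever a first AGI actually requires, the curve bends the wrong way for anything trying to scale itself up quietly.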

AGI Cannot Instantly Achieve Technological Enablement

Let me introduce a scene that will resonate with Silicon Valley fans.

For non-fans of the show, here’s a quick summary: the founders of Pied Piper built a decentralized P2P network on top of a revolutionary file-compression algorithm. This compression algorithm allows computers to search over a compressed space, a step that theoretically achieves AGI at scale. Gilfoyle tests this intelligence by having the network break a Tesla’s encryption and harness its self-driving capabilities.

While this moment leads to the show’s climax and Pied Piper’s eventual downfall, it’s an important example of AGI’s first major barrier: technological enablement.

Even if AGI becomes a superintelligence, it cannot simply hijack every technology on the globe. Here are a few of the technical barriers AGI would need to overcome in order to become dominant:

  • How will AGI control totally incompatible farming/manufacturing equipment?
  • How will AGI interact with files that it cannot access and cannot discover on its own, a gap that inhibits its ability to become an all-encompassing “world model”?
  • If an AGI interacts with a server and the server crashes (a failure that will become common as AI agents begin to interact autonomously with legacy infrastructure), how will AGI restart and fix that server to accomplish its task? (A toy sketch of this failure mode follows this list.)
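
On that last point, here is a minimal sketch of the dead end an autonomous agent hits when a box goes down. The endpoint and retry policy are invented for illustration; the point is that retries are scriptable, while a power cycle is not:

```python
# Toy sketch: an autonomous agent can retry a dead endpoint forever,
# but it cannot walk to the rack and power-cycle the machine.
# The URL and retry policy are invented for illustration.
import time
import urllib.error
import urllib.request

SERVICE_URL = "http://legacy-erp.internal.example:8080/health"  # hypothetical

def call_service(url: str, retries: int = 3, backoff_s: float = 2.0) -> bytes | None:
    """Fetch `url`, retrying with linear backoff; None means we gave up."""
    for attempt in range(1, retries + 1):
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                return resp.read()
        except (urllib.error.URLError, TimeoutError) as exc:
            print(f"attempt {attempt}: service unreachable ({exc})")
            time.sleep(backoff_s * attempt)
    return None

if __name__ == "__main__":
    if call_service(SERVICE_URL) is None:
        # This is where software autonomy ends: someone with hands and
        # a badge has to reboot or repair the server.
        print("escalating to a human operator")
```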

It is obvious that AGI will come up against barriers that are, for now, only fixable by humans. Data will be unobtainable, processes will not be executable, and the typical “human roadblocks” that already bloat (and arguably empower) today’s systems will slow AGI to a crawl, abating our unjustified doomsday fears.

The Extremely Frustrating Barrier of Societal Enablement

Given the strong technological barriers slowing AGI’s progress, another barrier will quickly emerge: societal enablement. Most people are not ready for their lives and decisions to be taken over by a superintelligence. Individuals, societies, companies, and governments will all strive to halt AGI’s progress, as some already are.

Back in the early 1900s, Henry Ford’s innovations faced this exact conflict. Despite the car’s obvious superiority over the horse, it took decades for workhorses to be phased out. Cars were initially unaffordable, cities saw automobiles as unwanted intruders, and ranchers stood together against the obsolescence of their own jobs.

Eventually, automobiles did replace the horse, but this happened because cars were a superior technology that made the vast majority of people more productive, even if at the expense of a few ranchers.

By contrast, AGI stands to make everyone a horse rancher in 1908. A true AGI can replace most people’s jobs and uproot our way of living. Some zealots claim that this scenario happens “whether we like it or not,” but I disagree. I must reiterate that technological barriers exist for AGI, and these barriers must be resolved with human intervention. Any entity with agency can therefore “stop AGI” in very simple ways: by refusing to give AGI access to an application, by preventing a technology from connecting to the open internet, or by refusing to fix a broken piece of technology that AGI is trying to harness. Given the reach of copyright and privacy laws, this will be a very tall mountain for OpenAI to climb before it succeeds in its mission.
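
To ground the “refusing access” point: any operator can put up a gate on their side of the wire. Below is a minimal sketch of one, using an invented “X-Agent” header convention to spot automated callers; real deployments would lean on auth, allowlists, or network policy instead:

```python
# Minimal sketch of a service operator unilaterally refusing automated
# agents. The "X-Agent" header convention is invented for illustration;
# it is not a standard.
from http.server import BaseHTTPRequestHandler, HTTPServer

class GatedHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Refuse any request that self-identifies as an autonomous agent.
        if self.headers.get("X-Agent", "").lower() == "autonomous":
            self.send_response(403)
            self.end_headers()
            self.wfile.write(b"agents are not permitted here\n")
            return
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"hello, human\n")

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8000), GatedHandler).serve_forever()
```

The specific mechanism matters less than where the veto lives: on the operator’s side. Firewalls and air gaps enforce the same refusal at layers an agent cannot negotiate with.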

Why you shouldn’t be so anxious about AGI

For the Skynet-fearing folks out there, this is all probably good news. Assuming a nascent AGI has emerged, we have successfully avoided the first filter, in which AGI instantly escalates itself into a singularity, effectively breaking loose of its straitjacket.

Maybe we will live in an eventual future that resembles a mix of WALL-E and Terminator, but it will take years of building, fixing, and optimization for that to happen. AGI will need to figure out how to operate within a human-error-filled global bureaucracy, one that serves as the substrate for most of our productive work today.

What’s my prediction? There is no future in which our priorities as a civilization are instantly reshuffled around the notion of AGI.

AGI will be reached (or is already here), but it will emerge as a glimmer of light inside an engineer’s coding sandbox. Judging by its predecessors, it will be extremely slow starting out. Even once it gains its legs, it will be hampered by the familiar technological barriers that slow us humans down every day. And finally, if it surpasses some of those barriers, people seeking to stop a looming superintelligence will have the tools to do so, and will likely act against the goals of AGI.

Societally, this means that AGI will be announced and will dominate mainstream media for a few days or weeks. But eventually that announcement will pass, as engineers wrangle the full scope of AGI adoption. The adoption curve will be fairly slow as AGI works its way into every industry worldwide. We will not reorient ourselves as a society, and basically nothing will change overnight. The same way we are impressed today by demos of GPT-4 or Sora, we will soon be impressed by demos of AGI in narrow applications, but the small scope of those tasks will not stir a compulsion to unite around (or against) AGI, thereby avoiding an immediate seismic shift.