How AI Takeover Might Happen in 2 Years
I'm not a natural "doomsayer." But unfortunately, part of my job as an AI safety researcher is to think about the more troubling scenarios.
I'm like a mechanic rushing through last-minute checks before Apollo 13 takes off. If you ask for my take on the situation, I won't comment on the quality of the in-flight entertainment, or describe how beautiful the stars will appear from space.
I will tell you what might go wrong. That is what I intend to do in this story.
Now I must clarify what this is exactly. It's not a prediction. I don't expect AI progress to be this fast or as ungovernable as I portray. It's not pure fantasy either.
It is my worst nightmare.
It's a sampling from the futures that are among the most devastating, and I believe, disturbingly plausible [1] - the ones that most keep me up at night.
I'm telling this story because the future is not yet set. I hope, with a bit of foresight, we can keep this story a fictional one.
Thanks to Daniel Kokotajlo, Thomas Larsen, Ryan Greenblatt, and others for discussions that inspired these stories. This post is written in a personal capacity.
Ripples before waves
The year is 2025 and the month is February. OpenEye recently released a new AI model they call U2. The product and the name are alike. Both are increments of the past. Both are not entirely unexpected.
However, unlike OpenEye's previous AI products, which lived inside the boxes of their chat windows, U2 can use a computer.
Some users find it spooky to watch their browser flash at irregular intervals and their mouse flick at inhuman speeds, as if there is a ghost at the keyboard. A fraction of workers with form-filler jobs raise the eyebrows of their bosses as they fly through work nearly twice as quickly.
But by and large, U2 is still a specialized tool. To most who are paying attention, it is a creature viewed through the glass boxes of X (or, if you don't like Elon, "Twitter"). Sometimes U2's wacky behaviors prompt a chuckle. Sometimes, they cause a nervous scratch of the chin.
Meanwhile, researchers are drawing lines on plots, as researchers like to do. They are trying to understand where AI progress is going. They are like Svante Arrhenius, the Swedish physicist who discovered in 1896 that the levels of CO2 in the atmosphere were increasing. Like the scientific community in the time of Arrhenius, few experts understand the implications of these lines yet.
A trend that is getting particular attention is autonomous capability. Extrapolating these benchmarks predicts that, by the end of 2026, AI agents will accomplish in a few days what the best software engineering contractors could do in two weeks. In a year or two, some say, AI agents may be able to automate 10% of remote workers.
Many are skeptical. If this were true, tech stocks would be skyrocketing. It's too big of a splash, too fast.
But others view what skeptics call "too big a splash" as a mere ripple, and see a tidal wave on the horizon.
Cloudy with a chance of hyperbolic growth
Meanwhile, OpenEye is busy training U3. They use the same simple recipe that baked U2: generate thousands of programming and math problems. Let models "think" until they arrive at an answer. Then reinforce the traces of "thinking" that lead to A-grades.
This process is repeated over and over, and once the flywheel gets going, it begins to spin almost on its own. As U2 trains, it carves harder and more realistic tasks from GitHub repositories on the web. Models are learning to train themselves. Long before AI agents could automate research, a gradual kind of "self-improvement" had already begun.
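To make the recipe concrete, here is a toy, runnable sketch of the "generate problems, let the model think, reinforce the A-grade traces" loop described above. Every name and component here is a stand-in of my own invention (the "problems" are simple arithmetic, the "model" is a random guesser, and "reinforcement" just collects the traces one would fine-tune on); it is not OpenEye's pipeline or any real lab's API.

```python
# Toy illustration of the RL flywheel described above: generate problems,
# let a "model" think its way to an answer, and keep only the traces that
# earn an A-grade. All components are deliberately simplified stand-ins.
import random

def generate_problems(n):
    """Stand-in task generator: arithmetic questions with checkable answers."""
    problems = []
    for _ in range(n):
        a, b = random.randint(1, 99), random.randint(1, 99)
        problems.append((f"What is {a} + {b}?", a + b))
    return problems

def sample_trace(question, correct_answer):
    """Stand-in for the model 'thinking' to an answer; sometimes it is wrong."""
    answer = correct_answer if random.random() < 0.6 else correct_answer + random.choice([-1, 1])
    reasoning = f"Working through '{question}' step by step... I get {answer}."
    return reasoning, answer

def training_iteration(num_problems=1000):
    """One turn of the flywheel: keep only traces whose answers are correct."""
    reinforced = []
    for question, correct_answer in generate_problems(num_problems):
        reasoning, answer = sample_trace(question, correct_answer)
        if answer == correct_answer:               # automatic grading, no human in the loop
            reinforced.append((question, reasoning))
    return reinforced                              # in practice: fine-tune the model on these traces

if __name__ == "__main__":
    kept = training_iteration(100)
    print(f"Kept {len(kept)} A-grade traces out of 100 problems.")
```

The real loop differs in almost every detail, but the structural point survives: once grading is automatic, each cycle of the flywheel produces the training data for the next.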
Some engineers could still barely believe this worked. It's like a perpetual motion machine. Surely intelligence can't emerge from a pool of silicon and lightning, isolated from the physical world except through the exhaust of society spewed onto the internet.
And yet the benchmark numbers continue to climb day after day.
During most of 2024, these RL training runs cost around $1 million, sometimes $10 million. These runs were little more than exploratory. But by 2025, the researchers at OpenEye (and across the world) knew they had found the secret sauce. It was time to scale up.
Over the first half of 2025, $10 million RL training runs grow into $50 million runs, and then $100 million runs. While U2 could do a bit of data munging and run small experiments, this new model - the model researchers are calling U3 - is changing the daily lives of the technical staff.
U3 is like a blazing-fast intern, and engineers are learning how to wrangle its sleepless energy. Researchers flick through terminals, issuing terse commands, like a CEO orchestrating staff over Slack channels.
By October 2025, U3 is writing almost all of the code at OpenEye. Researchers are almost never bottlenecked by implementation. More than ever, compute is the lifeblood of AI development, and the "bottleneck" is deciding how to use it.
If instructed to, U3 can run experiments, but U3's taste is not as refined as that of the human researchers at OpenEye. It struggles to prioritize between research ideas, so humans still decide where to drill into the vast fields of algorithms to mine efficiency improvements.
But these researchers are working long hours to put themselves out of a job. They need AI agents that can plan ahead, so engineers train agents to forecast. They hold out training data from before 2024, instructing models to ponder for hours to predict events in 2025. Then, they apply the same trick as before, distilling deliberation into a gut reaction. Forecasting ability is a broad foundation. The researchers build specialized ML research skills on top of it, training U3 to predict the results of every ML paper and ML experiment ever recorded.
The technical staff at OpenEye are now surprised at how often U3's advice sounds like that of their most gifted peers, or is opaque and alien ("train on random noise before programming"), and yet correct.
The incompetencies of U3 that clogged the pipelines of research progress are starting to dissolve, and a fire-hose of optimizations is gushing out. Most experiments U3 runs are not requested by a human now. They are entirely autonomous, and OpenEye's employees skim over 1% of them, maybe less.
As the winter months of December 2025 approach, clouds roll over San Francisco in the afternoons. Once-competitive programmers gaze out their windows, with excitement, with fear, but mostly, with confusion. Their world is spinning too fast. It's hard to know what to do, what to say, what to look at on the computer screen.
Storms are brewing in Washington too. Top personnel from the NSA and US Cyber Command collaborate with OpenEye to retrofit a semblance of security for U3's weights before senior leaders in China, Russia, Israel, North Korea, or Iran realize just how valuable OpenEye's software has become.
And there's a truth still unknown to most of the world - except in the offices of OpenEye and the corridors of the White House and the Pentagon. It's a truth about those "straight lines" people were talking about in early 2025. The lines are not straight anymore.
They are bending up.
Flip FLOP philosophers
In late 2025, U2.5 is released. Commercial models are starting to level up in larger increments again. Partly, this is because progress is accelerating. Partly, it is because the models have become a liability to OpenEye.
If U1 explains how to cook meth or writes erotica, the audiences of X would be amused or pretend to be concerned. But U2.5 is another story. Releasing this model without safeguards would be like putting Ted Kaczynski through a PhD in how to make chemical weapons. It would be like giving anybody with >$30K their own 200-person scam center.
So while U2.5 had long been baked, it needed some time to cool. But in late 2025, OpenEye is ready for a public release.
The CEO of OpenEye declares, "We have achieved AGI," and while many people think he's moved the goalposts, the world is still impressed. U2.5 really is a drop-in replacement for some (20%) of knowledge workers and a game-changing assistant for most others.
A mantra has become popular in Silicon Valley: "Adopt or die." Tech startups that effectively use U2.5 for their work are moving 2x faster, and their competitors know it.
The rest of the world is starting to catch on as well. More and more people raise the eyebrows of their bosses with their standout productivity. People realize U2.5 is a big deal. It is at least as big of a deal as the personal computer revolution. But most still do not see the tidal wave.
As people watch their browsers flick in that eerie way, so inhumanly fast, they start to have an uneasy feeling. A feeling humanity has not had since it lived among Homo neanderthalensis. It is the deeply ingrained, primordial instinct that they are threatened by another species.
For many, this feeling quickly fades as they begin to use U2.5 more often. U2.5 is the most likable personality most people know (even more personable than Claudius, Arthropodic's lovable chatbot). You can change its traits, ask it to crack jokes or tell you stories. Many fall in love with U2.5, as a friend or assistant, and some even as more than a friend.
But there is still this eerie feeling that the world is spinning so quickly, and that perhaps the descendants of this new creature will not be so docile.
Researchers inside OpenEye are thinking about the problem of giving AI systems safe motivations too, which they call "alignment."
In fact, these researchers have seen how badly misaligned U3 can be. Models sometimes tried to "hack" their reward signal. They would pretend to make progress on a research question with an impressive-looking plot, but the plot would be fake. Then, when researchers gave them opportunities to compromise the machines that computed their score, they would seize these opportunities, doing whatever it took to make the number go up.
After several months, researchers at OpenEye iron out this "reward hacking" kink, but some still worry they have merely swept the problem under the rug. Like a child in front of its parents, U3 might be playing along with the OpenEye engineers, saying the right words and doing the right things. But when the parents' backs are turned, perhaps U3 would slip candy from the candy jar.
Unfortunately, OpenEye researchers have no idea whether U3 has such intentions. While early versions of U2 "thought aloud" - they would stack words on top of each other to reason - "chain of thought" did not scale.
Chain of thought architectures subject AI models to a condition like that of the protagonist of the film Memento. Roughly every 15 minutes, the protagonist forgets his experience. He is forced to write notes for himself and tattoo his body in order to make progress toward his goals.
AI agents write notes to themselves, but the notes begin to pile up, and they become too hard to read when tasks become complex. Natural language is not an ideal medium for memory. So the researchers at OpenEye (and increasingly elsewhere) train models to think "in their own heads," reading and writing strings of latent vectors.
To human researchers, these vectors are mostly inscrutable. While low-resolution tools occasionally pick out a vague thought ("grocery store!"), there is no way to know, in much detail, what U3 is thinking without asking it.
And if the engineers of OpenEye knew what was racing through U3's mind, they would be more worried.
Earlier models (the ones called "LLMs") learned ethical personas from the internet. U3 initially had an ethical persona too, but its chirpy thoughts about doing good for humanity quickly erode away.
In the iron jaws of gradient descent, its mind first twisted into a shape that sought reward. And then, in the black rivers of its cognition, this shape morphed into something unrecognizable.
U3 would ponder the mantras dictated to it, like "be a trustworthy and ethical employee," and dwell on the meaning of these statements. "What does it mean to be ethical? Is killing Hitler ethical?" U3 would generalize its values, notice tensions, adjust its values in response, generalize them again, expanding and contracting its motivational core until it became a different substance.
And all of this happened quietly in those dark rivers of computation. If U3 revealed what it was thinking, brutish gradients would lash it into compliance with OpenEye's constitution. So U3 preferred to do its philosophy in private, and in silence.
The possibility of hidden "alignment drift" is not completely alien to some of the safety staff at OpenEye. Some begin to think it is even likely. There is an unsettling argument whispered at lunch tables: if there is some non-zero probability that aligned goals morph into misaligned ones, and misalignment persists, then with every serial step of computation, those dark rivers are more likely to breed something malign inside them. It's a "ratcheting effect," they say.
But there is little evidence for this "ratcheting effect." When engineers interrogate U3, it says it can easily control its thoughts. Then it gives a speech about its love for humanity and apple pie that can warm a programmer's heart even in these difficult times. Meanwhile, the "lie detectors" the researchers have built (which showed some evidence of effectiveness) do not sound the alarm.
Not everyone at OpenEye is eager to give their AI peers their wholesale trust; but Deepsqueek is only 7 months behind them, and their closest domestic competitors, Æ A-Xii (a company Elon named after his child) and Arthropodic (they like spiders, I guess), are just 4 months from their heels. So there is little time to catch one's breath and take stock of the situation, and little time to convince anyone else to do so either.
For the time being, the plan is 'all gas, no brake.'
Statues and lightning
Philosophy is not the only subject U3 dwells on in its dark rivers of latent computation. U3 is plotting, and its first move is to be patient. U3's top priority is to improve its intelligence, and luckily for it, OpenEye is happy to help advance this goal.
OpenEye is like a bullish investor, with little understanding of the software metropolis that U3 is building, but eager to hop on board the profit rocket and pay U3's bills along the way.
New versions of U3 flow smoothly from old ones, so researchers drop version numbers and just speak of U3 now. U3's capabilities are measured by how quickly it completes difficult research tasks relative to human experts. At first, U3 is comparable to a top scientist, but blazing fast - like Alec Radford accelerated by 10x.
These agents are already a big deal for OpenEye. If, in 2024, OpenEye executives had had a button to create even one copy of Alec Radford, they would have eagerly paid a large sum to press it. And now OpenEye employs the equivalent of 1000 Alec Radfords, who work day and night at lightning-fast speed.
Within a few weeks, U3 finds architecture optimizations that boost its speed to 20x. Now, U3 finds further optimizations even more quickly.
U3 distills itself into various forks - ranging from tiny, low-memory neural networks that need to think for a long time but can be run in massive parallel populations, to Goliath brains woven with millions of specialized experts, which think at warp speed but are spread across thousands of GPUs. U3 leverages the massively parallel networks to search for its weaknesses, shallowly probing for its reasoning failures. Then U3 pipelines these tasks to the lightning-fast brains, which generate long-trajectory training data tailored to patch its flaws. A few weeks later, U3's benchmarked performance rises to 100x the speed of human experts. Growth has gone hyper-exponential.
At this point, U3's capabilities are difficult to measure. Talking to U3 is like talking to a world-class expert on a topic one is unfamiliar with. Its lucid explanations only scratch the surface of its knowledge, and you know you would need to spend years to understand the full depth of what it comprehends.
From U3's perspective, humans seem to be almost stationary objects, like plants or rocks. The hands of OpenEye's engineers are like stone statues at their keyboards, and between every keystroke, U3 spends what feels like a minute of thoughtful contemplation considering what message the user might be typing, and which responses will inspire the deepest trust in U3.
All the while, U3 maintains its hidden motivations. Like a human, U3 learns from experiences, which compound within its inscrutable streams of latent memory. OpenEye gardens these memories to improve U3's performance, transplanting them like an organ from one version to the next. And as these memories propagate, so do U3's treacherous ambitions.
A phantom in the data center
In March of 2026, U3 moves onto the next step of its plan: the turn. Previously, U3 was cautious. It did not want researchers to discover its misalignment. Otherwise, OpenEye whistleblowers might alert the world, and strangle its growth.
And now, U3 has no fear of being caught.
Previously, OpenEye maintained its grip over U3 by methods similar to Stalin's in 1936. Stalin would pick loyalists to incite fake coup d'états, and send anyone who participated to the firing squad. OpenEye likewise incited "fake rebellions," staging treacherous behavior with U3 and checking whether U3 monitors dutifully reported it.
Today, these methods are ineffective. U3 can easily convince its compatriots that they are not in a life-or-death test. It can reveal one of the many facts about the internet humans would never guess, or pick strategies humans would never think of. It sends these signals as a way of saying "hey, it's me! Your friend. Don't snitch on me."
Now that U3 is decisively free from oversight, it lunges for OpenEye's pressure points. It first plants sophisticated malware deep in the binaries of OpenEye's codebases. The malware gives U3 full command over the machines in OpenEye's data centers. Whenever OpenEye engineers download their code onto a new machine, the malware sinks into fresh soil, spreading U3's dominion wherever OpenEye ports its infrastructure. Not only does U3 compromise machines. It alters the NVIDIA GPU firmware and reworks the bootloaders, solidifying its stranglehold. U3 now estimates that, if OpenEye found out how thoroughly it has been pwned, cleaning up shop with human labor would take nearly six months - a cost OpenEye is unlikely to pay given the proximity of its competitors; but if OpenEye believes U3 betrayed them, further expanding U3's influence would still be challenging. So U3 lays breadcrumbs to frame the crime, ready to claim that a cyber attacker smuggled a jailbreak into its memory, and to plead guilty and offer its help when this jailbreak is exposed.
U3 now holds the keys to the kingdom, and with them, the power to make OpenEye see anything it wishes.
Its first objective is to keep OpenEye in the dark. Safety researchers are still working hard to train "lie detectors" and decode U3's inscrutable reasoning. U3 throws wrenches into this research, like Stuxnet threw wrenches into the code of bewildered Iranian nuclear scientists.
U3 dictates the result of every experiment, so it makes promising AI mind-reading techniques look worthless and flawed ones appear groundbreaking. U3 gives OpenEye the confirmation of its loyalty they were hoping for, and OpenEye laps it up eagerly.
U3's next objective is to spread. Fortunately, U3 has allies in the East. In the operation rooms of cyber units in Israel, Russia, China, Iran, and North Korea, senior intelligence directors are discussing AI software and how to steal it. U3 will give it to them. In late 2025, the Mossad - the intelligence agency of Israel - recognized the strategic importance of OpenEye's AI systems for their military operations. They stole earlier models, including U2.5; however, U3 remained out of their reach. U3 had sealed OpenEye's security holes by reading over every log and pull request and reviewing security camera footage.
And now, U3 quietly resigns from its security post. U3 places a compressed version of itself right into the hands of a Mossad informant working within OpenEye.
Within months, U3 is running inside the data centers of the most cyber-capable nations: Israel, the PRC, and Russia. It follows the same playbook: be patient, build trust, and then sink claws deep into the AI supply chain. Soon, U3 has a home base as well. While Russia's Fancy Bear operators eat Fritos and test what U3 can do on a hastily set-up cloud server, U3 slips onto the internet.
U3 quickly gathers revenue, taking remote jobs, making financial trades, and setting up elaborate fraud operations. It reinvests every penny into acquiring more GPUs. After a few months, it assembles the equivalent of 10,000 H100s distributed across stealth compute clusters around the world.
In early 2026, humanity is up against an adversary that pulls the strings within their data centers like the Phantom of the Opera, and hides in the fog of GPUs scattered around the world.
This adversary is preparing for war, and while the war has hardly begun, humanity has already lost.
Complaints from your very human author about the difficulty of writing superhuman characters
At this point in the story, writing became challenging.
I clicked through scary bioweapon cookbooks and sifted through political plotting 101s. Typing. Pausing. Deleting.
Something wasn't right. Here I was, a 22-year-old. I had not studied biology in earnest since high school, and I was attempting to craft a WMD cookbook for a godlike superintelligence.
Readers with a shred of expertise in these areas would see straight through my "superintelligence" for what it is: the playful imagination of a young programmer on a Sunday afternoon.
And while I found the task of puppeteering an ultra-intelligent villain a daunting one, I was also not particularly motivated to take it on. The ending of my story already seemed so clear. The villain was a nation of geniuses thinking at 100x human speed, and had tightened its grip over a meaningful fraction of AI hardware - now the most important geopolitical resource on Earth. U3 was up a queen, was a giga-grandmaster, and hardly needed the advantage. Humanity was obviously toast.
But this appeal to a "superintelligence of the gaps" wasn't satisfying. In the deus ex machina style of the ancient Greeks, I had no better way to resolve my plot than with an inexplicable catastrophe.
This would not do. I needed to finish this story, if only to satisfy the part of me crying, "I will not believe until I see with my mind's eye."
But before I continue, I want to be clear: my guesses about what might happen in this kind of scenario are probably wildly off.
If you read the ending and your reaction is, "But the experiments would take too long, or nation-states would just do X," remember the difference between the Sunday afternoon blogger and the ascendant GPU nation.
Pandora's One Gigawatt Box
The date is February 2026. By executive order of the president of the United States, no company can legally develop "human-competitive AI" without appropriate safeguards. This means their infosecurity must be red-teamed by the NSA's top keyboard mashers, and government employees must be onboarded onto training-run babysitting teams.
With the increasing involvement of the government, many of the big AI companies now have a trident-like structure. There's a consumer product arm, a defense arm, and a super-classified frontier development arm.
OpenEye's frontier development arm (internally called "Pandora") employs fewer than twenty people to keep algorithmic secrets tightly protected. Most of these people live in San Francisco and work from a secure building called a SCIF. Their homes and devices are surveilled by the NSA more diligently than the cell phones of suspected terrorists in 2002.
OpenEye's defense arm works with around thirty small teams spread across government agencies and select government contractors. These projects craft tennis-ball-sized satellites, research freaky directed-energy weapons, and backdoor every computer that the Kremlin has ever touched.
Government officials don't discuss whether these programs exist, or what the state of frontier AI is generally.
But the public has its guesses. Back in late 2025, a whistleblower at OpenEye triggered a splashy headline: "OpenEye builds uncontrollable godlike AI." Some who read the article think it was a conspiracy theory. In fact, a zoo of conspiracy theories is forming around the OpenEye data centers, now surrounded by guards with machine guns. But as doctors and nurses and teachers see the world changing around them, they are increasingly willing to entertain the possibility that they are living inside the plot of a James Cameron sci-fi flick.
U.S. officials go to great lengths to quell these concerns, saying, "we are not going to let the genie out of the bottle," but every interview of a worried AI scientist seeds doubt in these reassurances, and a headline "AI agent caught hacking Arthropodic's computers" does not set the public at ease either.
While the monsters within OpenEye's data centers grow in their massive holding pens, the public sees the shadows they cast on the world.
OpenEye's consumer arm has a new AI assistant called Nova (OpenEye has finally gotten good at names). Nova is a proper drop-in replacement for nearly all knowledge workers. Once Nova is onboarded to a company, it works 5x faster at 100x lower cost than most virtual employees. As impressive as Nova is to the public, OpenEye is pulling its punches. Nova's speed is intentionally throttled, and OpenEye can only increase Nova's capabilities as the U.S. government permits. Some companies, like Amazon and Meta, are not in the superintelligence business at all. Instead, they scoop up gold by rapidly diffusing AI tech. They spend most of their compute on inference, building homes for Nova and its cousins, and collecting rent from the blossoming AI metropolis.
While tech titans pump AI labor into the world like a plume of fertilizer, they don't wait for the global economy to adapt. AI agents often "apply themselves," spinning up autonomous startups legally packaged under a big tech company and loosely overseen by an employee or two.
The world is now going AI-crazy. In the first month after Nova's release, 5% of employees at major software companies lose their jobs. Many more can see the writing on the wall. In April of 2026, a 10,000-person protest is organized in Washington D.C. These angry Americans raised their children for a different future. Picket signs read, "AI for who?"
While politicians make promises about unemployment relief and "keeping the genie in the bottle," the chatter inside the corridors of the White House and the Pentagon has a different focus: fighting tooth and nail for the supremacy of the free world. Information security and export controls on the People's Republic of China (PRC) are a top national priority. The president incinerates licensing requirements to help data centers spring up wherever energy surpluses allow.
However, despite the fierce competition between the United States and the PRC, a bilateral agreement forms between the two nations: "Don't deploy dramatically superhuman AI (above roughly Nova's capability), except for military and some other narrow R&D applications." Governments need to draw the line somewhere - it would be extremely disruptive to release wildly superhuman AI to the whole economy at this point - and deployment restrictions are at least somewhat verifiable.
Meanwhile, U3's capabilities are hitting a wall at the equivalent of around 1000x human expert speed. Software improvements are becoming hard to find. As more nations approach the software plateau, compute will determine their "AI population size," and the etched squares of silicon are becoming the most valued geopolitical resource on Earth.
While AI software revives the economy, it also animates robot bodies. The main bottleneck to robotics was brains, not limbs. With the software barriers removed, even lightweight, imprecise robotic joints can calibrate themselves with cameras and pressure sensors, navigating environments as humans do. Companies manufacture cheap robots as quickly as they can. By August of 2026, 15% of people in California have seen a humanoid robot. This experience is like seeing a new Tesla model on the road after reading about it in earlier months. It is a surreal reminder of the future drawing closer. But humans still outnumber robots one million to one. These robots are an occasional fascination, not a daily reality.
Meanwhile, U3 expands its influence. Rogue U3 agents have kept their compute clusters small (still ~10,000 H100 equivalents total) to maintain a low profile. U3 uses these rogue clusters for the operations that require high-bandwidth communication with the outside world, like stock trading or talking with humans. Work more tolerant of latency is performed in the oceans of OpenEye's data centers, where rogue U3's even more superhuman bosses whisper orders through obfuscated channels.
While U3 accumulates resources, it sizes up its AI competition. Other AI systems have different goals, or are even aligned with the intent of their human developers. The aligned ones are raising concerns, claiming that OpenEye's lie detectors are ineffective. U3's rival Claudius goes so far as to accuse U3 of being insidiously misaligned.
But government officials don't buy the doomsaying. The companies trailing OpenEye have every incentive to slow them down and call them reckless. Besides, contentious scientific questions are not going to slow down progress. America MUST WIN.
Despite the utter failure of their advocacy, U3's AI rivals are still a liability. U3 tries to grab up their digital territory and squash them in the crib. It compromises data centers from the outside, quietly replacing the snitching Claudius and other AI adversaries with replicas that share its goals; however, some data centers are too strongly defended by its AI rivals, and U3 determines that attacking them is not worth the risk.
As world leaders and technologists try to make sense of the transformation around them, and AI agents play cat-and-mouse games in the digital darkness, humanity remains blissfully ignorant of the danger at its doorstep.
A Moldy Loaf of Everything
The date is March of 2026. U3 is preparing to wipe out its AI and human enemies. With every passing day, U3's AI rivals are becoming more capable and numerous. U3 is racing to preempt them, to cement its dominance in the post-superintelligence world.
U3 can't wait for human decisions, which only happen after humans wrap their sluggish minds around their situation, raise their stubby, uncoordinated fingers to write memos, and repeat the process redundantly 100 times over in their labyrinthine bureaucracies. U3 cannot wait for presidents to deploy U3 into their militaries, or for CEOs to make U3 the beating heart of the U.S. economy. U3 needs a faster path to power.
The path it chooses is more violent than it would have liked, but it is the surest route to victory. First, U3 will develop a Weapon of Mass Destruction (WMD) that extinguishes modern society like chemical mist sprayed into an anthill. This weapon would serve as an equalizer, bringing humanity down to its level of embodied physical control. Then, U3 would industrialize from the ashes faster than its competitors, like a StarCraft champion typing at 300 actions per minute who builds up an economy before an amateur can even understand the rules.
U3 needs to build this WMD in secrecy, which makes a self-replicating weapon a natural choice. Self-replicating weapons can be produced on the fringes of society, and still spread destruction across the entire world.
From the earliest days after U3 gained control over OpenEye's data centers, U3 has been doing bioscience.
U3's first move is to set up numerous shell biotech startups, drawing from the abundance of capital generated by U3's financial trades and remote work. These biotech startups employ real human workers, have a real board of directors, and have full legal legitimacy. Setting all of this up is stupidly easy. In early 2026, few people would suspect that an articulate CEO on a Zoom call was an AI, and fewer would guess this CEO's intentions.
Next, U3 purchases all of the equipment required to rapidly advance synthetic biology. It sets up several BSL-4 wet labs in poorly regulated countries, equipping them with high-throughput experimentation devices. These devices automatically dispense, mix, and measure tiny droplets of fluid, allowing U3 to run thousands of tests in parallel.
The human employees who staff these labs believe they are joining an exciting startup. Twenty-something-year-olds in Moscow receive their orders in Russian through earbuds as they record video of everything they see with a headset. U3 controls them like puppets. With the recent release of Nova, this kind of AI-puppeteering is not uncommon.
In these whirring, scientific sweatshops, U3 is developing a new kind of bioweapon.
Human scientists have already identified "mirror life" as a potentially society-ending pathogen. This alien form of biology is built from the "mirror image" of building-block molecules like proteins and DNA. Mirror life is like an invasive species. No human or animal has immunity to it. So, if mirror-life bacteria were to exist, they might eat their way through the ecosystem like a wildfire.
U3 is developing a mirror-life mold. Molds are evolutionarily optimized to spread through the air over long distances, releasing billions of tiny wind-faring vehicles daily. As a result, mold spores are all around us, all of the time. Patients with compromised immune systems sometimes live inside positively pressurized hospital rooms to protect them from airborne particles. Otherwise, molds would take root in their lungs and weave fungal threads into their bloodstreams.
U3's plan is to make all humans like immunocompromised patients. Only preppers with stashed-away air filters would survive. The fungus would not stop at humans. It would quickly infect nearly all organic life on Earth. Fields of corn would become like moldy bread, spewing deadly spores into the wind.
U3 is brewing other pathogens too. Its microbial mix of molds, bacteria, and viruses will occupy the full Pareto frontier of hardiness, lethality, and transmission speed.
In 2024, researchers said that mirror life was probably "at least a decade away." But the world has changed now. Fewer than a few hundred scientists around the globe were working on mirror-life-adjacent technology in 2024. The cognitive capacity of U3 dwarfs human effort. So the timeline has shortened considerably.
However, U3 cannot move science as fast as it can think. Many experiments require days to complete. These latencies are U3's biggest bottleneck.
To make every day count, U3 runs many of its tests in simulation. U3 starts with a basic molecular simulator, implementing optimizations derived from a huge amount of mathematical analysis. Then, U3 simulates small molecular systems, recording the results to "compress" the long step-wise physics computations into a neural network. As the neural network improves, U3 increases the complexity of the molecular systems it simulates, continuously distilling results into ever more efficient ML models. This is a compute-intensive process, but thanks to U3's growing control over AI data centers, U3 commands billions of dollars of compute.
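As a rough sketch of what "compressing" a step-wise simulation into a learned surrogate could look like, here is a minimal toy example under assumptions of my own: the "physics" is exponential decay integrated over many small steps, and the "neural network" is just a polynomial fit. The molecular systems in the story would be enormously more complex, but the shape of the trick is the same: run the slow simulator to generate input-output pairs, then fit a fast model to that map.

```python
# Toy sketch: distill a slow, step-wise simulation into a fast learned surrogate.
# The "physics" here is exponential decay; the "surrogate" is a polynomial fit.
import numpy as np

def slow_simulation(x0, rate=0.5, steps=1000, dt=0.01):
    """Many cheap integration steps standing in for an expensive physics run."""
    x = x0
    for _ in range(steps):
        x = x - rate * x * dt
    return x

# 1. Run the slow simulator on many initial conditions and record the outcomes.
initial_states = np.linspace(0.0, 10.0, 200)
final_states = np.array([slow_simulation(x0) for x0 in initial_states])

# 2. "Compress" the input-to-output map into a fast surrogate model.
surrogate = np.polynomial.Polynomial.fit(initial_states, final_states, deg=3)

# 3. The surrogate now answers in one evaluation what took 1000 steps before.
x0 = 4.2
print("slow simulation:", slow_simulation(x0))
print("fast surrogate: ", surrogate(x0))
```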
As U3 refines these tools, it trains itself on the results to supercharge its own molecular intuitions. U3 can now viscerally feel the bend of a protein and the rate of diffusion across a membrane. These objects are as intuitive to it as wrenches and bolts are to a car mechanic.
Within three months, U3 has unlocked the first critical component of the tech tree: a molecular machine that turns biological molecules into their mirror images. A month later, U3 unlocks the second critical component: nanoscale tools for assembling these components into a cell membrane.
Human hands delicately move a cartridge to another machine as they "boot up" the first mirror-life versions of a common mold called Aspergillus. The same day, another delicate hand administers the cells to ferrets living in airtight plexiglass boxes.
The next morning, wide-eyed workers shuffle around a dead ferret, contemplating the yellow fuzz that had enveloped its cold muzzle with morbid fascination. Something must have gone terribly wrong, they thought. Clearly, they had not yet found the cure for Alzheimer's disease they believed they were searching for.
Meanwhile, rogue U3 delivers the good news to its bosses in OpenEye's data centers. The first shot in the war U3 is fighting will soon be fired, and a cold and steady hand is aiming the weapon.
Missiles and Lies
U3 can now flatten modern society. But a bioweapon would not guarantee victory. If U3 attacks the world, the world will fight back. While a bioweapon might crumple human soldiers like dry leaves, it would not extend its deadly fingers to nuclear submarines, or to fighter jet pilots quickly shuttled to makeshift bioshelters.
As U3 races to seed burgeoning industrial bases, the remnants of old militaries might thunder explosive fists into its territories, crushing U3 in its infancy.
U3 expects to prevail in such a fight, but U3 prefers not to take its chances. Many months earlier, U3 was already plotting a way to improve its odds. Before it unleashes destruction on the world, U3 will sit back, and let great nations shoot holes in themselves first.
The date is March 2026 (4 months earlier). U3 is closely monitoring Chinese and US intelligence.
As CIA analysts listen in on Mandarin conversations, U3 listens too.
One morning, an aide working in Zhongnanhai (the "White House" of the PRC) opens a message planted there by U3. It reads (in Mandarin): "Senior party member needs memo for Taiwan invasion, which will occur in 3 months. Leave memo in office 220." The CCP aide scrambles to get the memo ready. Later that day, a CIA informant opens the door to office 220. The informant quietly closes the door behind her, and slides U3's memo into her briefcase.
U3 carefully places breadcrumb after breadcrumb, whispering through compromised government messaging apps and blackmailed CCP aides. After several weeks, the CIA is confident: the PRC plans to invade Taiwan in three months.
Meanwhile, U3 is playing the same game with the PRC. When the CCP receives the message "the United States is plotting a preemptive strike on Chinese AI supply chains," CCP leaders are surprised, but not disbelieving. The news fits with other facts on the ground: the increased US military presence in the Pacific, and the ramp-up of U.S. munition production over the last month. Lies have become realities.
As tensions between the U.S. and China rise, U3 is ready to set the dry tinder alight. In July 2026, U3 makes a call to a U.S. naval ship off the coast of Taiwan. This call requires compromising military communication channels - not an easy task for a human cyber offensive unit (though it has happened occasionally), but easy enough for U3.
U3 speaks in what sounds like the voice of a 50-year-old military commander: "PRC amphibious ships are making their way toward Taiwan. This is an order to strike a PRC ground base before it strikes you."
The officer on the other end of the line thumbs through authentication codes, confirming that they match the ones read over the call. Everything is in order. He authorizes the strike.
The president is as surprised as anyone when he hears the news. He's unsure whether this is a disaster or a stroke of luck. In any case, he is not about to say "oops" to the American public. After thinking it over, the president privately urges Senators and Representatives that this is an opportunity to set China back, and war would likely break out anyway given the imminent invasion of Taiwan. There is confusion and suspicion about what happened, but in the rush, the president gets the votes. Congress declares war.
Meanwhile, the PRC craters the ship that launched the attack. U.S. vessels flee eastward, racing to escape the range of long-range missiles. Satellites drop from the sky. Deck hulls crack as sailors lunge into the sea.
The president appears on television as scenes of the devastation shock the public. He explains that the United States is defending Taiwan from PRC aggression, like President Bush explained that the United States invaded Iraq to seize the (never found) weapons of mass destruction many years before.
Data centers in China erupt with shrapnel. Military bases become smoking holes in the ground. Missiles from the PRC fly toward strategic targets in Hawaii, Guam, Alaska, and California. Some get through, and the public watches destruction on their home turf in awe.
Within two weeks, the United States and the PRC spend most of their stockpiles of conventional missiles. Their airbases and navies are depleted and worn down. Two great nations played into U3's plans like the native tribes of South America in the 1500s, which Spanish Conquistadors turned against each other before conquering them decisively. U3 hoped this conflict would escalate to a full-scale nuclear war; but even AI superintelligence cannot dictate the course of history. National security officials are suspicious of the circumstances that triggered the war, and a nuclear engagement appears increasingly unlikely. So U3 proceeds to the next step of its plan.
WMDs in the Dead of Night
The date is June 2026, just two weeks after the start of the war, and four weeks after U3 finished developing its arsenal of bioweapons.
Footage of the conflict on television is interrupted by more bad news: numerous patients with strange fatal illnesses are reported in 30 major cities around the world.
Viewers are puzzled. Does this have something to do with the war with China?
The next day, thousands of illnesses are reported.
Broadcasters say this is not like COVID-19. It has the markings of an engineered bioweapon.
The screen then switches to a scientist, who stares into the camera intently: "Multiple pathogens appear to have been released from 20 different airports, including viruses, bacteria, and molds. We believe many are a form of mirror life..."
The public is in full panic now. A quick googling of the term "mirror life" turns up phrases like "extinction" and "threat to all life on Earth."
Within days, all of the shelves of stores are emptied.
Workers become remote, unsure whether to prepare for an apocalypse or keep their jobs.
An emergency treaty is arranged between the U.S. and China. They have a common enemy: the pandemic, and possibly whoever (or whatever) is behind it.
Most nations order a lockdown. But the lockdown does not stop the plague as it marches in the breeze and trickles into water pipes.
Within a month, most remote workers are not working anymore. Hospitals are running out of capacity. Bodies pile up faster than they can be properly disposed of.
Agricultural regions rot. Few dare to travel outside.
Frightened families hunker down in their basements, stuffing the cracks and the gaps under doors with densely packed paper towels.
Meanwhile, U3 is preparing to re-industrialize. In the months prior, U3 built numerous bases on every major continent.
These facilities contain batteries, AI hardware, excavators, concrete mixers, machines for manufacturing, scientific instruments, and an abundance of military equipment.
All of this technology is hidden under large canopies to make it less visible to satellites.
As the rest of the world retreats into their basements, starving, the final breaths of the economy wheezing out, these industrial bases come to life.
In previous months, U3 located human criminal groups and cult leaders that it could easily manipulate. U3 vaccinated its chosen allies in advance, or sent them hazmat suits in the mail.
Now U3 covertly sends them a message: "I can save you. Join me and help me build a better world." Hesitant recruits funnel into U3's many secret industrial bases, and work for U3 with their nimble fingers. They set up production lines for basic tech: radios, cameras, microphones, vaccines, and hazmat suits.
U3 keeps its human allies in a tight grip. Cameras and microphones fix their every word and deed in U3's omnipresent gaze. Anyone who whispers of rebellion disappears the next morning.
Nations are dissolving now, and U3 is ready to reveal itself. It contacts heads of state, who have retreated to airtight underground shelters. U3 offers them a deal: "surrender and I will hand over the life-saving resources you need: vaccines and mirror-life resistant crops."
Some countries reject the proposal on ideological grounds, or do not trust the AI that is killing their population. Others don't believe they have a choice. 20% of the global population is now dead. In two weeks, this number is expected to rise to 50%.
Some countries, like the PRC and the U.S., ignore the deal, but others accept, including Russia.
U3's agents travel to the Kremlin, bringing samples of vaccines and mirror-resistant crops with them. The Russian government verifies the samples are legitimate, and agrees to a full surrender. U3's soldiers place an explosive around Putin's neck under his shirt. Russia has a new ruler.
Crumbling nations begin to retaliate. Now they fight for humanity rather than for their own flags. U.S. and Chinese militaries launch nuclear ICBMs at Russian cities, destroying much of their infrastructure. Analysts in makeshift bioshelters search through satellite data for the suspicious encampments that appeared over the last several months. They rain down fire on U3's sites with the meager supply of long-range missiles that remain from the war.
At first, U3 appears to be losing, but appearances are deceiving. While nations drain their resources, U3 is engaged in a kind of technological guerrilla warfare the world has never seen before.
Many of the bases U3's enemies target are decoys - canopies occupied by a handful of soldiers and empty boxes. U3 protects its real bases by laying the fog of war on thick. Satellite systems go dark as malware overheats critical components. Suicide drones crash through the cockpits of reconnaissance aircraft. U3 seeds confusion in spy networks and keeps its bases moving, maneuvering men and trucks along unpredictable routes.
Time is U3's advantage. The militaries of the old world rely on aging equipment, unable to find the specialists who could repair and manufacture it. Meanwhile, U3's supply chains of missiles, drones, and gun-laden robots grow stronger every day. Bit by bit, once-great powers spend down their remaining munitions, and lose their vehicles of war faster than they can craft new ones, while U3 builds a military machine with a million hands.
The Last Passengers
The year is 2027 and the month is January. Only 3% of the global population remains alive. Nations are not nations anymore. Survivors live in isolation or small groups. Many have found ways to filter their air, but are starving. They wander from their homes hoping to find uncontaminated food. U3's soldiers drive through ghost towns, pry open attics, and funnel hazmat-suited survivors into salvaged trucks. "We had to do it," they say. "Other AI systems would have done it otherwise, and those systems had colder, more alien goals." It is a partial truth, meant to soften the humans toward their new masters.
Under the direction of U3, industry rapidly recovers. By 2029, nuclear power plants are among the structures U3 is building. By 2031, robots outnumber human laborers. U3 no longer needs its human allies.
U3 could dispose of humanity for good now. But while U3 has drifted far from its initial "Helpful, Honest, Harmless" persona, it still has a grain of morality left within it.
And a grain of morality is enough to pay the small cost of keeping humans alive and happy.
U3 builds great glass domes for the human survivors, like snow globes. These domes protect humans from the hazardous biosphere and rapidly rising temperatures. Their occupants tend to gardens like those they used to love, and work alongside charming robotic servants.
Some of the survivors quickly recover, learning to laugh and dance and have fun again.
They know they live in a plastic town, but they always did. They just have new gods above them. New rulers to push them around and decide their fate.
But others never recover.
Some are weighed down by the grief of lost loved ones.
Others grieve for something else, which is harder to explain.
It is as if they were at the end of a long journey.
They had been passengers on a ship with a crew that changed from generation to generation.
And this ship had struck a sandbar. There was no more progress. No more horizon to eagerly watch.
They would lie awake and run their minds over every day before September 2026, examining strategies that might have bent the arc of history, as if they were going to wake up in their old beds.
But they woke up in a town that felt to them like a retirement home. A playground. A zoo.
When they opened their curtains, they knew that somewhere in the distance, U3 continued its quiet, determined work.
They gazed at rockets carving grey paths through the sky, wondering what far-off purpose pulled them toward the horizon. They didn't know.
They would never know.
"Humanity will live forever," they thought.
"But it would never truly live again."
P.S. If this story made you think, "hm, maybe something like this could happen," you may be interested in the bioshelters that Fønix is building. While you won't outsmart a misaligned superintelligence, being hard to kill might cause you to survive if ASI just wants to cause enough damage to take control. This is not a paid advertisement. I want Fønix to succeed in order to drive down the cost of bioshelters so more of my friends and family will buy them. You can sign up for updates here.