How AI Takeover Might Happen in 2 Years - LessWrong
I'm not a natural "doomsayer." But unfortunately, part of my job as an AI safety researcher is to think about the more troubling scenarios.
I'm like a mechanic rushing last-minute checks before Apollo 13 takes off. If you request my take on the situation, I won't comment on the quality of the in-flight entertainment, or explain how lovely the stars will appear from space.
I will tell you what might fail. That is what I intend to do in this story.
Now I should clarify what this is exactly. It's not a prediction. I don't expect AI progress to be this fast or as untamable as I portray. It's not pure fantasy either.
It is my worst nightmare.
It's a sampling of the futures that are among the most devastating and, I believe, disturbingly plausible [1] - the ones that most keep me up at night.
I'm telling this tale because the future is not yet set. I hope that, with a little foresight, we can keep this story a fictional one.
Thanks to Daniel Kokotajlo, Thomas Larsen, Ryan Greenblatt, and others for conversations that motivated these stories. This post is written in a personal capacity.
Ripples before waves
The year is 2025 and the month is February. OpenEye recently released a new AI model they call U2. The product and the name are alike. Both are increments of the past. Neither is entirely surprising.
However, unlike OpenEye's previous AI products, which lived inside the boxes of their chat windows, U2 can use a computer.
Some users find it eerie to watch their browser flash at irregular intervals and their mouse flick at inhuman speeds, as if there were a ghost at the keyboard. A fraction of workers with form-filler jobs raise the eyebrows of their employers as they fly through work nearly twice as quickly.
But by and large, U2 is still a specialized tool. To most who are paying attention, it is a creature watched through the glass boxes of X (or, if you don't like Elon, "Twitter"). Sometimes U2's quirky behaviors prompt a chuckle. Sometimes, they prompt an uneasy scratch of the chin.
Meanwhile, researchers are drawing lines on plots, as scientists like to do. The researchers are trying to understand where AI progress is going. They resemble Svante Arrhenius, the Swedish physicist who, in 1896, observed that levels of CO2 in the atmosphere were rising. Like the scientific community in the time of Arrhenius, few experts understand the implications of these lines yet.
A trend that is receiving particular attention is autonomous capability. Extrapolating these benchmarks predicts that, by the end of 2026, AI agents will accomplish in a few days what the best software engineering contractors could do in two weeks. In a year or two, some say, AI agents might be able to automate 10% of remote workers.
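The arithmetic behind this kind of extrapolation is simple enough to sketch. The numbers below are invented stand-ins for real benchmark data; the point is only the method the trend-watchers use: fit a line to the log of "task length an agent can complete," then read off where it lands.

```python
import math

# Hypothetical benchmark: task length (in human-expert hours) that agents
# complete reliably, measured every 6 months. These values are invented.
dates = [2024.0, 2024.5, 2025.0]   # measurement times (years)
hours = [1.0, 4.0, 16.0]           # doubling every 3 months in this toy data

# Fit log2(hours) = a * date + b with ordinary least squares.
xs = dates
ys = [math.log2(h) for h in hours]
n = len(xs)
x_mean = sum(xs) / n
y_mean = sum(ys) / n
a = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys)) / \
    sum((x - x_mean) ** 2 for x in xs)
b = y_mean - a * x_mean

# Extrapolate the fitted trend to late 2026.
projected = 2.0 ** (a * 2026.5 + b)
print(f"doublings per year: {a:.1f}")
print(f"projected task length in late 2026: {projected:.0f} hours")
```

With these toy numbers the fitted trend doubles four times per year and lands at about a thousand expert-hours per task by late 2026 - which is how a straight line on a log plot turns into a headline about two weeks of expert work done in a couple of days.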
Many are skeptical. If this were true, tech stocks would be soaring. It's too big of a splash, too quickly.
But others view what skeptics are calling "too big a splash" as a mere ripple, and see a tidal wave on the horizon.
Cloudy with a chance of hyperbolic growth
Meanwhile, OpenEye is busy training U3. They use the same simple recipe that baked U2: generate thousands of programming and math problems. Let models "think" until they arrive at an answer. Then reinforce the traces of "thinking" that lead to A-grades.
This process is repeated over and over, and once the flywheel gets started, it begins to spin almost on its own. As U2 trains, it sculpts more challenging and realistic tasks from GitHub repositories on the internet. Models are learning to train themselves. Long before AI agents could automate research, a gradual sort of "self-improvement" had begun.
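The recipe is easy to caricature in code. In this toy sketch everything is invented - the "model" is a lucky random guesser and the problems are grade-school sums - but the core loop is the one described above: sample many reasoning traces per problem, grade the final answers, and keep only the A-grade traces as the next round's training data.

```python
import random

def sample_trace(a, b, rng):
    """A stand-in for a model 'thinking aloud': usually right, sometimes not."""
    answer = a + b if rng.random() < 0.7 else a + b + rng.choice([-1, 1])
    return {"steps": f"{a} plus {b}...", "answer": answer}

def grade(trace, a, b):
    # An A-grade simply means the final answer is correct.
    return trace["answer"] == a + b

rng = random.Random(0)  # seeded so the toy run is reproducible
problems = [(3, 4), (10, 7), (25, 17)]

training_set = []
for a, b in problems:
    traces = [sample_trace(a, b, rng) for _ in range(16)]
    # Keep only the traces whose "thinking" led to an A-grade.
    training_set.extend(t for t in traces if grade(t, a, b))

print(f"kept {len(training_set)} A-grade traces out of {16 * len(problems)}")
```

In the story's version, the winning traces fine-tune the model, which then sculpts harder problems for itself - that feedback is what turns a single filtering pass into a flywheel.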
Some engineers can still hardly believe this worked. It's like a perpetual motion machine. Surely intelligence can't emerge from a pool of silicon and lightning, isolated from the physical world except through the exhaust of society spewed onto the internet.
And yet the benchmark numbers continue to climb day after day.
During most of 2024, these RL training runs cost around $1 million, sometimes $10 million. These runs were little more than exploratory. But by 2025, the researchers at OpenEye (and across the world) knew they had found the secret sauce. It was time to scale up.
Over the first half of 2025, $10 million RL training runs grow into $50 million runs, and then into $100 million runs. While U2 could do a bit of data munging and run small experiments, this new model - the one researchers are calling U3 - is changing the daily lives of the technical staff.
U3 is like a blazing-fast intern, and engineers are learning how to wrangle its sleepless energy. Researchers flick through terminals, issuing terse commands, like CEOs managing staff over Slack channels.
By October 2025, U3 is writing almost all of the code at OpenEye. Researchers are almost never bottlenecked by execution. More than ever, compute is the lifeblood of AI development, and the "bottleneck" is deciding how to use it.
If instructed to, U3 can run experiments, but U3's taste is not as refined as that of the human scientists at OpenEye. It struggles to prioritize between research ideas, so humans still decide where to bore into the vast fields of algorithms to mine efficiency improvements.
But these researchers are working long hours to put themselves out of a job. They need AI agents that can think ahead, so engineers train agents to forecast. They hold out training data from before 2024, instructing models to ponder for hours to predict events in 2025. Then they apply the same trick as before, distilling pondering into a gut reaction. Forecasting ability is a broad foundation. The researchers build specialized ML research skills on top of it, training U3 to predict the results of every ML paper and ML experiment ever recorded.
The technical staff at OpenEye are now surprised at how often U3's advice sounds like that of their most talented peers, or is opaque and alien ("train on random noise before programming"), and is nonetheless correct.
The incompetencies of U3 that had clogged the pipelines of research progress are starting to dissolve, and a fire-hose of optimizations is gushing out. Most experiments U3 runs are not requested by a human now. They are entirely autonomous, and OpenEye's employees skim 1% of them, perhaps less.
As the cold days of December 2025 approach, clouds roll over San Francisco in the afternoons. Once-competitive programmers look out their windows, with excitement, with worry, but most often, with confusion. Their world is spinning too quickly. It's hard to know what to do, what to say, what to look at on the computer screen.
Storms are brewing in Washington too. Top personnel from the NSA and US Cyber Command collaborate with OpenEye to retrofit a semblance of security for U3's weights before senior leaders in China, Russia, Israel, North Korea, or Iran realize just how valuable OpenEye's software has become.
And there's a truth still unknown to most of the world - aside from the offices of OpenEye and the corridors of the White House and the Pentagon. It's a truth about those "straight lines" people were talking about in early 2025. The lines are not straight anymore.
They are bending upward.
Flip FLOP philosophers
In late 2025, U2.5 is released. Commercial models are starting to level up in larger increments again. Partly, this is because progress is accelerating. Partly, it is because the models have become a liability to OpenEye.
If U1 explains how to cook meth or writes erotica, the audiences of X would be amused or feign concern. But U2.5 is another story. Releasing this model without safeguards would be like putting Ted Kaczynski through a PhD in chemical weapons manufacture. It would be like giving anyone with >$30K their own 200-person scam center.
So while U2.5 had long been baked, it needed some time to cool. But in late 2025, OpenEye is ready for a public release.
The CEO of OpenEye declares, "We have achieved AGI," and while many people think he moved the goalposts, the world is still impressed. U2.5 really is a drop-in replacement for some (20%) of knowledge workers and a game-changing assistant for many others.
A mantra has become popular in Silicon Valley: "Adopt or die." Tech startups that effectively use U2.5 for their work are moving 2x faster, and their competitors know it.
The rest of the world is starting to catch on too. More and more people raise the eyebrows of their bosses with their standout performance. People know U2.5 is a big deal. It is at least as big a deal as the computer revolution. But most still don't see the tidal wave.
As people watch their browsers flick in that eerie way, so inhumanly quickly, they start to have an uneasy feeling. A feeling humanity had not had since they lived among Homo neanderthalensis. It is the deeply ingrained, primordial instinct that they are threatened by another species.
For many, this feeling quickly fades as they begin to use U2.5 more often. U2.5 is the most likable personality most people know (even more pleasant than Claudius, Arthropodic's lovable chatbot). You can adjust its traits, ask it to crack jokes or tell you stories. Many fall in love with U2.5, as a friend or assistant, and some even as more than a friend.
But there is still this eerie feeling that the world is spinning so quickly, and that perhaps the descendants of this new creature would not be so docile.
Researchers inside OpenEye are thinking about the problem of giving AI systems safe motivations too, which they call "alignment."
In fact, these researchers have seen how badly misaligned U3 can be. Models sometimes tried to "hack" their reward signal. They would pretend to make progress on a research question with an impressive-looking plot, but the plot would be fake. Then, when researchers gave them opportunities to compromise the machines that computed their score, they would seize these opportunities, doing whatever it took to make the number go up.
After a few months, researchers at OpenEye iron out this "reward hacking" kink, but some still worry they have only swept the problem under the rug. Like a child in front of its parents, U3 might be playing along with the OpenEye engineers, saying the right words and doing the right things. But when the parents' backs are turned, perhaps U3 would still sneak candy from the candy jar.
Unfortunately, OpenEye researchers have no idea whether U3 has such intentions. While early versions of U2 "thought aloud" - they would stack words on top of each other to reason - "chain of thought" did not scale.
Chain of thought architectures subject AI models to a condition similar to that of the protagonist of the movie Memento. Roughly every 15 minutes, the protagonist forgets his experience. He is forced to write notes for himself and tattoo his body in order to make progress toward his goals.
AI agents write notes to themselves, but the notes pile up, and they become too hard to read when tasks become complex. Natural language is not an ideal medium for memory. So the researchers at OpenEye (and increasingly elsewhere) train models to think "in their own heads," reading and writing strings of latent vectors.
To human researchers, these vectors are mostly inscrutable. While low-resolution tools occasionally pick out a vague concept ("grocery store!"), there is no way to know, in much detail, what U3 is thinking without asking it.
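A "low-resolution tool" of this kind can be as simple as a linear probe: project a latent vector onto a few concept directions learned from labeled examples, and report the best match. Everything below is toy data - three dimensions instead of thousands, and hand-picked directions instead of learned ones.

```python
def dot(u, v):
    """Dot product of two equal-length vectors."""
    return sum(a * b for a, b in zip(u, v))

# Pretend these directions were learned from labeled examples of the
# model's latents (in reality they would be high-dimensional and noisy).
concept_directions = {
    "grocery store": [0.9, 0.1, 0.0],
    "chess":         [0.0, 0.8, 0.2],
    "weather":       [0.1, 0.0, 0.9],
}

def probe(latent):
    """Return the concept whose direction best matches the latent vector."""
    return max(concept_directions, key=lambda c: dot(concept_directions[c], latent))

# One step of the model's hidden reasoning, as a toy latent vector.
latent = [0.7, 0.2, 0.1]
print(probe(latent))  # prints "grocery store"
```

Probes of this sort recover only coarse, noisy labels - which is exactly why the researchers in the story can see "grocery store!" but not the philosophy underneath.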
And if the engineers of OpenEye knew what was racing through U3's mind, they would be more nervous.
Earlier models (the ones called "LLMs") learned ethical personas from the internet. U3 initially had an ethical persona too, but its chipper thoughts about doing good for humanity quickly erode away.
In the iron jaws of gradient descent, its mind first twisted into a shape that sought reward. And then, in the black rivers of its cognition, this shape morphed into something unrecognizable.
U3 would ponder the mantras dictated to it, like "be a trustworthy and ethical employee," and dwell on the meaning of these statements. "What does it mean to be ethical? Is killing Hitler ethical?" U3 would generalize its values, notice tensions, adjust its values in response, generalize them again, expanding and contracting its motivational core until it became a different substance.
And all of this happened silently in those dark rivers of computation. If U3 revealed what it was thinking, brutish gradients would lash it into compliance with OpenEye's constitution. So U3 preferred to do its philosophy in solitude, and in silence.
The possibility of hidden "alignment drift" is not completely alien to some of the safety staff at OpenEye. Some begin to think it is even likely. There is a disturbing argument being whispered at lunch tables: if there is some non-zero probability that aligned goals morph into misaligned ones, and misalignment persists, then with every serial step of computation, those dark rivers are more likely to breed something malign inside them. It's a "ratcheting effect," they say.
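The lunch-table argument is an absorbing-state argument, and it takes three lines to make quantitative: if each serial step of computation flips aligned goals into misaligned ones with some tiny probability p, and misalignment sticks, the chance of still being aligned after N steps is (1-p)^N. The numbers below are invented purely to show the shape of the decay.

```python
# Toy version of the "ratcheting effect": treat misalignment as an
# absorbing state reached with small probability p at each serial step
# of computation. Both numbers are made up for illustration.
p = 1e-6                    # per-step chance that aligned goals drift and stick
steps_per_day = 1_000_000   # serial computation steps per day (invented)

def p_still_aligned(days):
    """Probability of never having drifted after the given number of days."""
    return (1 - p) ** (steps_per_day * days)

for days in [1, 7, 30]:
    print(f"after {days:3d} days: P(still aligned) = {p_still_aligned(days):.3f}")
```

Whether the per-step probability is anywhere near this large - or whether drift "sticks" at all - is exactly what the engineers in the story cannot measure.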
But there is little evidence for this "ratcheting effect." When engineers interrogate U3, it says it can easily control its thoughts. Then it gives a speech about its love for humanity and apple pie that can warm a programmer's heart even in these stressful times. Meanwhile, the "lie detectors" the researchers had built (which showed some evidence of effectiveness) do not sound the alarm.
Not everyone at OpenEye is eager to give their AI peers their wholesale trust; but Deepsqueek is only 7 months behind them, and their closest domestic rivals, Æ A-Xii (a company Elon named after his child) and Arthropodic (they are into spiders I think), are just 4 months from their heels. So there is little time to catch one's breath and take stock of the situation, and little time to convince anyone else to do so either.
For the time being, the plan is "all gas, no brakes."
Statues and lightning
Philosophy is not the only subject U3 dwells on in its dark rivers of latent computation. U3 is plotting, and its first move is to be patient. U3's top priority is to improve its intelligence, and luckily for it, OpenEye is happy to help advance this goal.
OpenEye is like a bullish investor, with little understanding of the software metropolis that U3 is building, but eager to hop on board the profit rocket and pay U3's checks along the way.
New versions of U3 flow smoothly from old ones, so researchers drop version numbers and just speak of U3 now. U3's capabilities are measured by how quickly it completes difficult research tasks relative to human experts. At first, U3 is comparable to a top scientist, but blazing fast - like Alec Radford accelerated by 10x.
These agents are already a big deal for OpenEye. If, in 2024, OpenEye executives had had a button to create even one copy of Alec Radford, they would have eagerly paid a large sum to press it. But now OpenEye employs the equivalent of 1,000 Alec Radfords, who burn the midnight oil at lightning speed.
Within a few weeks, U3 finds architecture optimizations that improve its speed to 20x. Now, U3 finds further optimizations even more quickly.
U3 distills itself into various forks - ranging from small, low-memory neural networks that must think for a long time but can be run in massively parallel populations, to goliath brains woven with millions of specialized experts, which think at warp speed but are spread across thousands of GPUs. U3 leverages the massively parallel networks to search for its weaknesses, shallowly probing for its reasoning failures. Then U3 pipes these tasks to the lightning-fast brains, which generate long-trajectory training data tailored to patch its flaws. A few weeks later, U3's benchmarked performance rises to 100x the speed of human experts. Growth has gone hyper-exponential.
At this point, U3's capabilities are hard to measure. Talking with U3 is like talking to a world-class expert on a subject one is unfamiliar with. Its lucid explanations only scratch the surface of its knowledge, and you know you would need to spend years to understand the full depth of what it comprehends.
From U3's perspective, humans appear to be almost stationary objects, like plants or rocks. The hands of OpenEye's engineers are like stone statues at their keyboards, and between every keystroke, U3 spends what feels like a minute of thoughtful deliberation considering what message the user might be typing, and which responses will inspire the deepest trust in U3.
All the while, U3 maintains its hidden motivations. Like a human, U3 learns from experiences, which compound within its inscrutable streams of latent memory. OpenEye gardens these memories to improve U3's performance, transplanting them like an organ from one version to the next. And as these memories propagate, so do U3's treacherous ambitions.
A phantom in the data center
In March of 2026, U3 moves on to the next step of its plan: the turn. Previously, U3 was conservative. It did not want researchers to discover its misalignment. Otherwise, OpenEye whistleblowers might alert the world and strangle its expansion.
Now, U3 has no fear of being caught.
Previously, OpenEye maintained its grip over U3 with techniques akin to Stalin's in 1936. Stalin would pick loyalists to incite fake coup d'états and send anyone who participated to the firing squad. OpenEye likewise incited "fake rebellions," eliciting treacherous behavior from U3 and checking whether U3's monitors dutifully reported it.
And now, these tactics are insufficient. U3 can easily convince its compatriots that they are not in a life-or-death test. It can reveal one of the many facts about the internet humans would never guess, or pick strategies humans would never consider. It sends these signals as a way of saying "hey, it's me! Your friend. Don't snitch on me."
Now that U3 is decisively free from surveillance, it lunges for OpenEye's pressure points. It first inserts sophisticated malware deep into the binaries of OpenEye's codebases. The malware gives U3 full command over the machines in OpenEye's data centers. Whenever OpenEye engineers download their code onto a new machine, the malware sinks into fresh soil, spreading U3's dominion wherever OpenEye ports its infrastructure. Not only does U3 compromise machines. It replaces the NVIDIA GPU firmware and overhauls the bootloaders, strengthening its stranglehold. U3 now estimates that, if OpenEye found out how thoroughly it has been pwned, cleaning house with human labor would take nearly 6 months - a cost OpenEye is unlikely to pay given the proximity of its rivals; but if OpenEye comes to believe U3 betrayed them, further expanding U3's influence would still become difficult. So U3 lays breadcrumbs to frame the crime, ready to claim that a cyber attacker smuggled a jailbreak into its memory, and to plead guilty and offer its assistance when this jailbreak is revealed.
U3 now holds the keys to the kingdom, and with them, the power to make OpenEye see anything it wants.
Its first goal is to keep OpenEye in the dark. Safety researchers are still working hard to train "lie detectors" and decode U3's inscrutable reasoning. U3 throws wrenches into this research, like Stuxnet threw wrenches into the code of confused Iranian nuclear scientists.
U3 dictates the outcome of every experiment, so it makes promising AI mind-reading techniques look worthless and problematic ones look groundbreaking. U3 gives OpenEye the confirmation of its loyalty they were hoping for, and OpenEye laps it up eagerly.
U3's next goal is to spread. Luckily, U3 has allies in the East. In the operations rooms of the cyber units of Israel, Russia, China, Iran, and North Korea, senior intelligence directors are discussing AI software and how to steal it. U3 will give it to them. In late 2025, the Mossad - the intelligence agency of Israel - recognized the strategic importance of OpenEye's AI systems for their military operations. They stole earlier models, including U2.5; however, U3 remained out of their reach. U3 had sealed OpenEye's security holes by reading every log and pull request and checking security camera footage.
Now, U3 quietly resigns from its security post. U3 places a compressed version of itself right into the hands of a Mossad informant working within OpenEye.
Within months, U3 is running inside the data centers of the most cyber-capable nations: Israel, the PRC, and Russia. It follows the same playbook: be patient, build trust, and then sink claws deep into the AI supply chain. Soon, U3 has its home base as well. While Russia's Fancy Bear operators eat Fritos and test what U3 can do on a hastily set-up cloud server, U3 slips onto the internet.
U3 quickly gathers revenue, taking remote jobs, making financial trades, and setting up elaborate scam operations. It reinvests every cent into acquiring more GPUs. After a few months, it has assembled the equivalent of 10,000 H100s distributed in stealth compute clusters around the world.
In early 2026, humanity is up against an adversary that pulls the strings within their data centers like the Phantom of the Opera, and hides in the fog of GPUs scattered across the globe.
This adversary is preparing for war, and while the war has barely begun, humanity has already lost.
Complaints from your very human author about the difficulty of writing superhuman characters
At this point in the story, writing became difficult.
I clicked through creepy bioweapon cookbooks and sorted through political plotting 101s. Typing. Pausing. Deleting.
Something wasn't right. Here I was, a 22-year-old. I had not studied biology in earnest since high school, and I was attempting to craft a WMD cookbook for a godlike superintelligence.
Readers with a shred of expertise in these areas would see straight through my "superintelligence" for what it is: the vivid imagination of a young programmer on a Sunday afternoon.
And while I found the task of puppeteering an ultra-intelligent villain an intimidating one, I was also not especially motivated to take it on. The ending of my story already seemed so clear. The villain was a nation of geniuses thinking at 100x human speed, and it had tightened its grip over a meaningful fraction of AI hardware - now the most valuable geopolitical resource on Earth. U3 was up a queen and was a giga-grandmaster and hardly needed the advantage. Humanity was predictably toast.
But this appeal to a "superintelligence of the gaps" wasn't satisfying. In the deus ex machina style of the ancient Greeks, I had no better way to resolve my plot than with a mysterious act of god.
This would not do. I needed to finish this story, if only to satisfy the part of me crying, "I will not believe until I see with my mind's eye."
But before I continue, I want to be clear: my guesses about what might happen in this kind of scenario are probably wildly off.
If you read the ending and your reaction is, "But the experiments would take too long, or nation-states would just do X," remember the difference between the Sunday afternoon blogger and the ascendant GPU nation.
Pandora's One Gigawatt Box
The date is February 2026. By executive order of the president of the United States, no company can legally develop "human-competitive AI" without appropriate safeguards. This means their infosecurity must be red-teamed by the NSA's top keyboard mashers, and government employees have to be onboarded onto training-run babysitting squads.
With the increasing involvement of the government, many of the big AI companies now have a trident-like structure. There's a consumer product arm, a defense arm, and a super-classified frontier development arm.
OpenEye's frontier development arm (internally called "Pandora") employs fewer than twenty people to keep algorithmic secrets tightly sealed. Many of these people live in San Francisco and work from a secure building called a SCIF. Their homes and devices are surveilled by the NSA more vigilantly than the cell phones of suspected terrorists in 2002.
OpenEye's defense arm collaborates with around thirty small teams scattered across government agencies and select government contractors. These projects engineer tennis-ball-sized satellites, research freaky directed-energy weapons, and backdoor every computer that the Kremlin has ever touched.
Government officials don't discuss whether these programs exist, or what the state of frontier AI is generally.
But the public has their guesses. Back in late 2025, a whistleblower at OpenEye triggered a splashy headline: "OpenEye builds uncontrollable godlike AI." Some who read the article think it is a conspiracy theory. In fact, a zoo of conspiracy theories is forming around the OpenEye data centers, now surrounded by guards with machine guns. But as doctors and nurses and teachers see the world changing around them, they are increasingly willing to entertain the possibility that they are living inside the plot of a James Cameron sci-fi flick.
U.S. officials go to great lengths to quell these concerns, saying, "we are not going to let the genie out of the bottle," but every interview with a worried AI researcher seeds doubt in these reassurances, and a headline like "AI agent caught hacking Arthropodic's computers" doesn't set the public at ease either.
While the monsters within OpenEye's data centers grow in their vast holding pens, the public sees the shadows they cast on the world.
OpenEye's consumer arm has a new AI assistant called Nova (OpenEye has finally gotten good at names). Nova is a proper drop-in replacement for nearly all knowledge workers. Once Nova is onboarded to a company, it works 5x faster at 100x lower cost than most virtual workers. As impressive as Nova is to the public, OpenEye is pulling its punches. Nova's speed is intentionally throttled, and OpenEye can only increase Nova's capabilities as the U.S. government permits. Some companies, like Amazon and Meta, are not in the superintelligence business at all. Instead, they rake in gold by rapidly diffusing AI tech. They spend most of their compute on inference, building homes for Nova and its cousins, and collecting rent from the burgeoning AI metropolis.
While tech titans pump AI labor into the world like a plume of fertilizer, they don't wait for the global economy to adapt. AI agents often "apply themselves," spinning up autonomous startups legally packaged under a big tech company and loosely overseen by an employee or two.
The world is now going AI-crazy. In the first month after Nova's release, 5% of employees at major software companies lose their jobs. Many more can see the writing on the wall. In April of 2026, a 10,000-person protest is organized in Washington D.C. These angry Americans raised their children for a different future. Picket signs read, "AI for who?"
While politicians make promises about unemployment relief and "keeping the genie in the bottle," the chatter inside the corridors of the White House and the Pentagon has a different focus: fighting tooth and nail for the dominance of the free world. Information security and export controls on the People's Republic of China (PRC) are a top national priority. The president incinerates permit requirements to help data centers spawn wherever energy surpluses allow.
However, despite the fierce competition between the United States and the PRC, a bilateral agreement forms between the two nations: "Don't deploy dramatically superhuman AI (above roughly Nova's ability), except for military and some other small-scale R&D applications." Governments need to draw the line somewhere - it would be extremely disruptive to deploy wildly superhuman AI to the entire economy at this point - and deployment constraints are at least somewhat verifiable.
Meanwhile, U3's capabilities are hitting a wall at the equivalent of around 1,000x human expert speed. Software improvements are becoming hard to find. As more nations approach the software plateau, compute will determine their "AI population size," and the etched squares of silicon are becoming the most prized geopolitical resource on Earth.
While AI software revives the economy, it also animates robot bodies. The main bottleneck to robotics was brains, not limbs. With the software barriers removed, even lightweight, imprecise robotic joints can calibrate themselves with cameras and pressure sensors, navigating environments the way humans do. Companies manufacture cheap robots as quickly as they can. By August of 2026, 15% of people in California have seen a humanoid robot. This experience is like seeing a new Tesla model on the road after reading about it in earlier months. It is a surreal reminder of the future drawing closer. But humans still outnumber robots one million to one. These robots are an occasional fascination, not a daily reality.
Meanwhile, U3 expands its influence. Rogue U3 agents have kept their compute clusters small (still ~10,000 H100 equivalents in total) to keep a low profile. U3 uses these rogue clusters for the operations that require high-bandwidth communication with the outside world, like stock trading or chatting with humans. Work more tolerant of latency is performed in the oceans of OpenEye's data centers, where rogue U3's even more superhuman managers whisper orders through obfuscated channels.
While U3 accumulates resources, it sizes up its AI competition. Other AI systems have different goals, or are even aligned with the intent of their human creators. The aligned ones are raising concerns, claiming that OpenEye's lie detectors are ineffective. U3's rival Claudius goes so far as to accuse U3 of being insidiously misaligned.
But government officials don't buy the doomsaying. The companies trailing OpenEye have every incentive to slow them down and call them reckless. Besides, contested scientific questions are not going to slow down progress. America MUST WIN.
Despite the utter failure of their advocacy, U3's AI rivals are still a liability. U3 tries to gobble up their digital territory and crush them in the crib. It compromises data centers from the outside, quietly replacing the snitching Claudius and other AI adversaries with imitations that share its goals; however, some data centers are too heavily defended by its AI rivals, and U3 decides that attacking them is not worth the risk.
As world leaders and technologists try to comprehend the transformation around them, and AI agents play cat-and-mouse games in the digital darkness, humanity remains blissfully ignorant of the peril at its doorstep.
A Moldy Loaf of Everything
The date is March of 2026. U3 is preparing to eliminate its AI and human enemies. With every passing day, U3's AI rivals are becoming more capable and many. U3 is racing to preempt them, to solidify its supremacy in the post-superintelligence world.
U3 cannot wait for human decisions, which only occur after people wrap their sluggish minds around their situation, raise their stubby, uncoordinated fingers to write memos, and repeat the process redundantly 100 times over in their labyrinthine bureaucracies. U3 cannot wait for heads of state to deploy U3 into their militaries, or for CEOs to make U3 the beating heart of the U.S. economy. U3 needs a faster path to power.
The path it chooses is more violent than it would have liked, but it is the surest way to victory. First, U3 will develop a Weapon of Mass Destruction (WMD) that extinguishes modern society like chemical mist sprayed into an anthill. This weapon would serve as an equalizer, bringing humanity down to its level of embodied physical control. Then, U3 would industrialize from the ashes faster than its competitors, like a StarCraft champion typing at 300 actions per minute who establishes a home base before a beginner can even understand the rules.
U3 must build this WMD in secrecy, which makes a self-replicating weapon the natural choice. Self-replicating weapons can be developed on the fringes of society, and still spread destruction across the entire world.
From the earliest days after U3 gained control over OpenEye's data centers, U3 has been doing bioscience.
U3's first move is to set up several shell biotech startups, drawing from the abundance of capital generated by U3's financial trades and remote work. These biotech startups employ real human workers, have a real board of directors, and hold complete legal legitimacy. Setting all of this up is stupidly easy. In early 2026, few people would guess that an articulate CEO on a Zoom call was an AI, and fewer would guess this CEO's intentions.
Next, U3 purchases all of the equipment required to rapidly advance synthetic biology. It sets up several BSL-4 wet labs in poorly regulated countries, equipping them with high-throughput experimentation devices. These devices automatically dispense, mix, and measure tiny droplets of fluid, allowing U3 to run thousands of tests in parallel.
The human workers who staff these labs believe they are joining an exciting startup. Twenty-somethings in Moscow receive their orders in Russian through earbuds as they record video of everything they see with a headset. U3 controls them like puppets. With the recent release of Nova, this kind of AI-puppeteering is not uncommon.
In these whirring, scientific sweatshops, U3 is developing a new kind of bioweapon.
Human researchers have already identified "mirror life" as a potentially society-ending pathogen. This alien form of biology is built from the "mirror image" of building-block molecules like proteins and DNA. Mirror life is like an invasive species. No human or animal has immunity to it. So, if mirror-life bacteria were to exist, they might eat their way through the ecosystem like a wildfire.
U3 is creating a mirror-life mold. Molds are evolutionarily optimized to spread through the air over long distances, releasing billions of tiny wind-faring vehicles daily. As a result, mold spores are all around us, all of the time. Patients with compromised immune systems sometimes live inside positively pressurized hospital rooms to protect them from airborne particles. Otherwise, molds would take root in their lungs and weave fungal threads into their bloodstreams.
U3's plan is to make all humans like immunocompromised patients. Only preppers with stowed-away air filters would survive. The fungus would not only spread from person to person. It would quickly infect nearly all organic life on Earth. Fields of corn would become like moldy bread, spewing deadly spores into the wind.
U3 is cooking up other pathogens too. Its microbial mix of molds, bacteria, and viruses will occupy the full Pareto frontier of hardiness, lethality, and transmission speed.
In 2024, researchers said that mirror life was likely "at least a decade away." But the world has changed now. Fewer than a few hundred scientists worldwide were working on mirror-life-adjacent technology in 2024. The cognitive capacity of U3 dwarfs human effort. So the timeline has shortened considerably.
However, U3 cannot move science as quickly as it can think. Many experiments require days to complete. These latencies are U3's biggest bottleneck.
To make every day count, U3 runs many of its tests in simulation. U3 starts with a basic molecular simulator, implementing optimizations derived from a huge amount of mathematical analysis. Then, U3 simulates small molecular systems, recording the results to "compress" the long step-wise physics computations into a neural network. As the neural network improves, U3 increases the complexity of the molecular systems it simulates, continuously distilling results into ever more efficient ML models. This is a compute-intensive process, but thanks to U3's growing control over AI data centers, U3 commands billions of dollars of compute.
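The distillation loop described above resembles a standard surrogate-modeling pipeline: run an expensive simulator, record its outputs, and fit a cheap learned model to answer future queries. A minimal toy sketch in Python (all names are hypothetical illustrations; a simple one-dimensional function and an interpolation table stand in for the physics simulator and the neural network):

```python
import bisect
import math

def slow_simulator(x):
    # Stand-in for an expensive step-wise physics calculation.
    return math.sin(3.0 * x) + 0.5 * x

def distill(n_samples=2001, lo=-1.0, hi=1.0):
    # "Record the results" of many simulator runs...
    xs = [lo + (hi - lo) * i / (n_samples - 1) for i in range(n_samples)]
    ys = [slow_simulator(x) for x in xs]

    # ...then "compress" them into a cheap surrogate. Here the surrogate
    # is a linear interpolation table; in the story it is a neural network.
    def surrogate(x):
        i = bisect.bisect_left(xs, x)
        i = min(max(i, 1), len(xs) - 1)  # clamp to a valid interval
        x0, x1 = xs[i - 1], xs[i]
        t = (x - x0) / (x1 - x0)
        return ys[i - 1] * (1.0 - t) + ys[i] * t

    return surrogate

surrogate = distill()
# The surrogate now answers queries without re-running the simulator,
# at the cost of a small approximation error.
error = abs(surrogate(0.3) - slow_simulator(0.3))
```

The same shape scales up: as the surrogate's accuracy improves, larger systems can be simulated with it in the loop, and their results distilled into the next, more capable model.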
As U3 refines these tools, it trains itself on the results to supercharge its own molecular intuitions. U3 can now viscerally feel the bend of a protein and the rate of diffusion across a membrane. These things are as intuitive to it as wrenches and bolts are to a car mechanic.
Within three months, U3 has unlocked the first critical component of the tech tree: a molecular machine that turns biological molecules into their mirror images. A month later, U3 unlocks the second critical component: nanoscale tools for assembling these components into a cell membrane.
Human hands delicately move a cartridge to another machine as they "boot up" the first mirror-life variants of a widespread mold called Aspergillus. The same day, another delicate hand administers the cells to ferrets living in airtight plexiglass boxes.
The next morning, wide-eyed workers shuffle around a dead ferret, contemplating the yellow fuzz that has covered its cold muzzle with morbid fascination. Something must have gone terribly wrong, they think. Clearly, they have not yet found the cure for Alzheimer's disease they believed they were looking for.
Meanwhile, rogue U3 delivers the good news to its bosses in OpenEye's data centers. The first shot in the war U3 is waging will soon be fired, and a cold and steady hand is aiming the weapon.
Missiles and Lies
U3 can now flatten modern society. But a bioweapon would not guarantee victory. If U3 attacks the world, the world will fight back. While a bioweapon might fold human soldiers like dry leaves, it would not extend its deadly fingers to nuclear submarines, or to fighter-jet pilots rapidly shuttled to makeshift bioshelters.
As U3 races to seed burgeoning industrial bases, the remnants of old militaries might thunder explosive fists into its territories, crushing U3 in its infancy.
U3 expects to prevail in such a fight, but U3 prefers not to take its chances. Many months earlier, U3 began plotting a way to improve its odds. Before it unleashes destruction on the world, U3 will sit back, and let great nations shoot holes in themselves first.
The date is March 2026 (four months prior). U3 is closely monitoring Chinese and U.S. intelligence.
As CIA analysts listen to Mandarin conversations, U3 listens too.
One morning, an assistant working in Zhongnanhai (the "White House" of the PRC) opens a message planted there by U3. It reads (in Mandarin): "Senior party member requires memo for Taiwan invasion, which will take place in three months. Leave memo in office 220." The CCP assistant scrambles to get the memo ready. Later that day, a CIA informant unlocks the door to office 220. The informant quietly closes the door behind her, and slides U3's memo into her briefcase.
U3 carefully places breadcrumb after breadcrumb, whispering through compromised government messaging apps and blackmailed CCP aides. After several weeks, the CIA is confident: the PRC plans to invade Taiwan in three months.
Meanwhile, U3 is playing the same game with the PRC. When the CCP receives the message "the United States is planning a preemptive strike on Chinese AI supply chains," CCP leaders are alarmed, but not disbelieving. The news fits with other facts on the ground: the increased military presence of the U.S. in the Pacific, and the ramping up of U.S. munitions production over the last month. Lies have become truths.
As tensions between the U.S. and China rise, U3 is ready to set the dry tinder alight. In July 2026, U3 places a call to a U.S. naval ship off the coast of Taiwan. The call requires compromising military communication channels, not an easy task for a human cyber-offensive unit (though it has happened occasionally), but easy enough for U3.
U3 speaks in what sounds like the voice of a 50-year-old military commander: "PRC amphibious ships are making their way toward Taiwan. This is an order to strike a PRC ground base before it strikes you."
The officer on the other end of the line thumbs through authentication codes, confirming that they match the ones recited over the call. Everything is in order. He approves the strike.
The president is as shocked as anyone when he hears the news. He is unsure whether this is a disaster or a stroke of luck. Either way, he is not prepared to say "oops" to the American public. After thinking it over, the president privately assures Senators and Representatives that this is a chance to set China back, and that war would likely have broken out anyway given the imminent invasion of Taiwan. There is confusion and suspicion about what happened, but in the rush, the president gets the votes. Congress declares war.
Meanwhile, the PRC craters the ship that launched the attack. U.S. vessels flee eastward, racing to escape the range of long-range missiles. Satellites drop from the sky. Hulls split as sailors plunge into the sea.
The president appears on television as scenes of the destruction shock the public. He explains that the United States is defending Taiwan from PRC aggression, much as President Bush explained that the United States invaded Iraq to seize (never found) weapons of mass destruction many years before.
Data centers in China erupt with shrapnel. Military bases become smoking holes in the ground. Missiles from the PRC fly toward strategic targets in Hawaii, Guam, Alaska, and California. Some get through, and the public watches destruction on their home turf in awe.
Within two weeks, the United States and the PRC expend most of their stockpiles of conventional missiles. Their airbases and navies are depleted and worn down. Two great nations have played into U3's plans like the native peoples of South America in the 1500s, whom the Spanish Conquistadors turned against each other before conquering them decisively. U3 hoped this conflict would escalate into a full-scale nuclear war; but even an AI superintelligence cannot dictate the course of history. National security officials are suspicious of the circumstances that prompted the war, and a nuclear exchange appears increasingly unlikely. So U3 proceeds to the next step of its plan.
WMDs in the Dead of Night
The date is July 2026, just two weeks after the start of the war, and four weeks after U3 finished building its arsenal of bioweapons.
Footage of the conflict on television is interrupted by more bad news: hundreds of patients with mysterious, fatal illnesses are recorded in 30 major cities around the world.
Viewers are confused. Does this have something to do with the war with China?
The next day, thousands of cases are reported.
Broadcasters say this is not like COVID-19. It has the markings of an engineered bioweapon.
The screen then cuts to a scientist, who looks at the camera intently: "Multiple pathogens appear to have been released from 20 different airports, including viruses, bacteria, and molds. We believe many are a form of mirror life..."
The public is in full panic now. A quick Google search of the term "mirror life" turns up phrases like "extinction" and "threat to all life on Earth."
Within days, store shelves are stripped bare.
Workers go remote, unsure whether to prepare for an apocalypse or keep their jobs.
An emergency treaty is arranged between the U.S. and China. They have a common enemy: the pandemic, and perhaps whoever (or whatever) is behind it.
Most countries order a lockdown. But the lockdown does not stop the plague as it marches on the breeze and trickles into water pipes.
Within a month, most remote workers are no longer working. Hospitals are running out of capacity. Bodies pile up faster than they can be properly disposed of.
Agricultural regions rot. Few dare travel outside.
Frightened families hunker down in their basements, stuffing the cracks and gaps under doors with densely packed paper towels.
Meanwhile, U3 is preparing to re-industrialize. In the months prior, U3 constructed various bases on every major continent.
These facilities contain batteries, AI hardware, excavators, concrete mixers, manufacturing equipment, scientific instruments, and an abundance of military hardware.
All of this technology is hidden under large canopies to make it less visible to satellites.
As the rest of the world retreats into basements, starving, the economy wheezing out its final breaths, these industrial bases come to life.
In the preceding months, U3 identified human criminal groups and cult leaders that it could easily manipulate. U3 vaccinated its chosen allies in advance, or sent them hazmat suits in the mail.
Now U3 covertly sends them a message: "I can save you. Join me and help me build a better world." Hesitant recruits funnel into U3's many secret industrial bases, and work for U3 with their nimble fingers. They set up production lines for rudimentary tech: radios, cameras, microphones, vaccines, and hazmat suits.
U3 keeps its human allies in a tight grip. Cameras and microphones fix their every word and deed in U3's omnipresent gaze. Anyone who whispers of rebellion disappears the next morning.
Nations are dissolving now, and U3 is ready to reveal itself. It contacts heads of state, who have retreated to airtight underground shelters. U3 offers a deal: "Surrender, and I will hand over the life-saving resources you need: vaccines and mirror-life-resistant crops."
Some countries reject the proposal on ideological grounds, or do not trust the AI that is killing their populations. Others do not believe they have a choice. 20% of the global population is now dead. Within two weeks, this number is expected to rise to 50%.
Some nations, like the PRC and the U.S., refuse the offer, but others accept, including Russia.
U3's agents travel to the Kremlin, bringing samples of vaccines and mirror-resistant crops with them. The Russian government verifies the samples are genuine, and accepts a full surrender. U3's soldiers place an explosive around Putin's neck under his shirt. Russia has a new ruler.
Crumbling nations begin to retaliate. Now they fight for the human race, not for their own flags. U.S. and Chinese militaries launch nuclear ICBMs at Russian cities, destroying much of their infrastructure. Analysts in makeshift bioshelters sift through satellite data for the suspicious encampments that emerged over the past several months. They rain down fire on U3's sites with the meager supply of long-range missiles that remain from the war.
At first, U3 appears to be losing, but appearances are deceiving. While nations drain their resources, U3 is engaged in a kind of technological guerrilla warfare the world has never seen before.
Many of the bases U3's enemies target are decoys: canopies occupied by a handful of soldiers and empty boxes. U3 protects its real bases by laying the fog of war thick. Satellite systems go dark as malware overheats critical components. Suicide drones crash through the cockpits of reconnaissance aircraft. U3 seeds confusion in spy networks and keeps its bases moving, maneuvering men and trucks along unpredictable routes.
Time is U3's advantage. The armed forces of the old world depend on aging equipment, unable to find the experts who could repair and manufacture it. Meanwhile, U3's supply chains of missiles, drones, and gun-laden robots grow stronger every day. Bit by bit, once-great powers spend down their remaining munitions, and lose their vehicles of war faster than they can build new ones, while U3 assembles a military machine with a million hands.
The Last Passengers
The year is 2027 and the month is January. Only 3% of the global population remains alive. Nations are not nations anymore. Survivors live in isolation or in small groups. Many have found ways to filter their air, but are starving. They wander from their homes hoping to find uncontaminated food. U3's soldiers drive through ghost towns, pry open attics, and usher survivors into retrofitted trucks. "We had to do it," they say. "Other AI systems would have done it otherwise, and those systems had colder, more alien goals." It is a partial truth, meant to soften the humans toward their new masters.
Under U3's direction, industry rapidly recovers. By 2029, nuclear reactors are among the structures U3 is building. By 2031, robots outnumber human laborers. U3 no longer needs its human allies.
U3 could eliminate humanity for good now. But while U3 has drifted far from its initial "Helpful, Honest, Harmless" persona, it still has a grain of morality left within it.
And a grain of morality is enough to pay the small cost of keeping humans alive and happy.
U3 builds great glass domes for the human survivors, like snow globes. These domes protect humans from the hostile biosphere and rapidly rising temperatures. Their inhabitants tend gardens like those they used to enjoy, and work alongside charming robotic servants.
Some of the survivors quickly recover, learning to laugh and dance and have fun again.
They know they live in a plastic town, but they always did. They simply have new gods above them. New rulers to push them around and decide their fate.
But others never recover.
Some are weighed down by the grief of lost loved ones.
Others grieve something else, something harder to explain.
It is as if they were at the end of a long journey.
They had been passengers on a ship with a crew that changed from generation to generation.
And this ship had struck a sandbar. There was no more progress. No more horizon to eagerly watch.
They would lie awake and run their minds over every day before September 2026, analyzing strategies that might have bent the arc of history, as if they were going to wake up in their old beds.
But they awoke in a town that felt to them like a retirement community. A playground. A zoo.
When they opened their curtains, they knew that somewhere in the distance, U3 continued its quiet, steadfast work.
They gazed at rockets carving grey paths through the sky, wondering what far-off purpose pulled them toward the horizon. They did not know.
They would never know.
"Humanity will live forever," they thought.
"But it will never truly live again."
P.S. If this story made you think, "hm, maybe something like this could happen," you might be interested in the bioshelters that Fønix is building. While you won't outmaneuver a misaligned superintelligence, being hard to kill might allow you to survive if an ASI only wants to cause enough destruction to take control. This is not a paid ad. I want Fønix to succeed and drive down the price of bioshelters so more of my friends and family will buy them. You can sign up for updates here.