AI Goes Galactic
Issue 13: Altman's Code Red // Amazon's Nvidia Challenger
Hello Futurists,
Building AI and going to space might seem like very separate — very cool — ambitions, but zooming way out they’re actually one big moonshot.
Read on for Josh’s bold take on AI going galactic.
Also, catch up on this week’s biggest AI headlines:
Sam Altman’s Competitive Code Red
Amazon’s Bold AI Chip Moves
Runway Takes the Lead, Again
Anthropic 2026 IPO?
China’s Nvidia Gets Buzzy IPO
Sam Altman’s Competitive Code Red
Sam Altman is terrified… and he’s entering war mode.
The CEO/co-founder of the ChatGPT giant issued a ‘Code Red’ this week to his team, calling on every OpenAI employee to refocus their efforts on three things:
Creating a better model than Google and Anthropic
Improving personalization
Improving image gen
In an internal memo seen by The Information, Sam stated, "It's a critical time for ChatGPT." The memo comes after Google and Anthropic both released models over the past two weeks that beat ChatGPT across nearly every benchmark.
OpenAI’s next model release will come as soon as next week, dubbed ‘Shallotpeat,’ which supposedly beats Gemini 3… we’ll see.
Amazon’s Bold AI Chip Moves
If you thought Amazon was asleep at the wheel when it comes to AI, think again.
In a surprise move, the cloud and fulfillment giant released its new Trainium3 chip, which is 4x more powerful than its predecessor and up to 50% cheaper to run than comparable Nvidia GPUs.
They also released three new AI agents aimed at cutting down operational burden for AWS deployments, a trillion-dollar market.
This Trainium3 bet pushes Amazon into the same bracket as Google and its TPUs, with both posing a very real threat to Nvidia's GPU dominance.
Runway Takes the Lead, Again
There’s a new frontier video model in town and it’s scary good.
Codenamed ‘David’ (as in, David vs. Goliath), Runway Gen 4.5 has officially dethroned the king, Google Veo 3.
The videos it makes? Well, they're just stunning. But don't take my word for it; check them out.
Anthropic 2026 IPO?
This week the Financial Times reported that Anthropic, creator of the leading coding LLM Claude, is planning to IPO as early as next year on the back of record revenue milestones this year.
The startup, currently operating at a loss, is projected to turn profitable as early as 2028, three years ahead of rival OpenAI, which it has overtaken to become the top enterprise AI provider.
In separate news, CEO Dario Amodei is reported to be closing a private raise in the interim that will value the company at $300 billion.
China’s Nvidia Gets Buzzy IPO
Moore Threads, a Beijing-based GPU company founded by an ex-Nvidia exec, IPO'd in mainland China this week, soaring to a $40 billion valuation after the first day of trading.
But the real story came in the IPO run-up, where investors piled over one another to get into the deal. The $1.1B public offering was reportedly 4,126x oversubscribed, meaning roughly $4.5 trillion of capital chased the deal at the offered terms.
What this all points to is insatiable demand from Asia-based investors looking for exposure to the AI sector. Until now, China hasn't had many publicly investable AI vehicles outside of giants like Alibaba.
AI’s Path Off-Planet
Josh Kale on what our friendly yellow Sun can do for AI.
If you follow the curve of our computing ambitions out far enough, our current “software eating the world” will lead to something crazier — “compute eating the Sun.”
Asimov foresaw this energy phenomenon in his short story The Last Question. The gist is that each time humanity levels up, it feeds more energy into bigger computers and asks the same question: how do we push back entropy?
When I think about the future all of these frontier technologies are building toward, I often look to sci-fi as a blueprint. Sure, Asimov had no idea we would invent the transformer to convert electrons into intelligence, and the fine-grained details were all fiction. But the overarching direction, that compute expands until it hits an energy wall, was spot-on.
We’re already feeling the first edge of that wall.
Data centers worldwide use on the order of 1–2% of global electricity today, and AI is on track to at least double that share by 2030. In some regions, planners now worry that clusters of AI data centers will claim close to 10–12% of the grid on their own.
And that's with a civilization that isn't even Type I on the Kardashev scale. A Type I civilization can harness roughly 10¹⁶ watts, the full power budget of a planet, still hundreds of times more than humanity uses today.
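To put numbers on "not even Type I": Carl Sagan proposed a logarithmic interpolation of the Kardashev scale, K = (log₁₀P − 6) / 10 for power P in watts. A quick sketch, assuming a ballpark figure of ~2×10¹³ W for humanity's total power consumption (estimates vary):

```python
import math

def kardashev(power_watts: float) -> float:
    """Sagan's interpolated Kardashev rating: K = (log10(P) - 6) / 10."""
    return (math.log10(power_watts) - 6) / 10

# ~2e13 W is an assumed ballpark for humanity's total power use today.
print(round(kardashev(2e13), 2))  # ~0.73: not even Type I
print(round(kardashev(1e16), 2))  # 1.0: Type I, a planet's full power budget
print(1e16 / 2e13)                # ~500x gap between here and there
```

On these assumptions we sit around K ≈ 0.7, with a few-hundredfold energy gap separating us from Type I.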
The Kardashev scale often gets dismissed as a sci-fi plot device, but in practical terms it's a blunt statement about the energy bottlenecks standing between humanity and its most transformational goals.
If the end state of AI is “energy in, compute out,” then the final problem isn’t just smarter algorithms. It’s more energy. Past a certain point, that’s not a policy problem or a chip-design problem, it’s a… solar system problem.
On Earth, we’re trying to do something slightly perverse: feed star-level computation from a single planet’s surface. We fight over land use, transmission lines, cooling water, and NIMBYs. Meanwhile, 99.999…% of the Sun’s output just flies past us into space.
The obvious long-term move is to go closer to where the energy actually is.
We don't NEED to invent a better nuclear reactor; we're orbiting a pretty juicy one. The Sun fuses roughly 600 million tons of hydrogen every second, converting about four million tons of mass directly into energy, and sprays it in all directions for free. Nuclear on Earth is us trying to bottle a tiny, temperamental copy of that process inside a pressure vessel, with all the engineering, regulation, and politics that entails. Space solar skips all of that: go closer to the source, spread out huge collectors, and turn starlight directly into compute.
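That "four million tons per second" figure checks out with nothing more than E = mc². A back-of-envelope sketch (all values approximate):

```python
# Mass-to-energy sanity check for the Sun's output, using E = m * c^2.
c = 3.0e8                       # speed of light, m/s (approximate)
mass_per_second = 4.0e9         # ~4 million metric tons of mass -> energy, in kg
power = mass_per_second * c**2  # watts
print(f"{power:.1e} W")         # ~3.6e26 W, close to the Sun's measured ~3.8e26 W
```

Four billion kilograms of mass per second yields a few times 10²⁶ watts, which is indeed the Sun's measured luminosity to within rounding.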
The window is starting to open now as launch costs collapse, from something like $60,000 per kilogram in the Shuttle era to a hoped-for ~$20 per kilogram enabled by fully reusable SpaceX Starships. Once space looks more like container shipping than a moonshot, you start to treat orbit as another place to put infrastructure.
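The scale of that cost collapse is worth spelling out. A quick sketch using the article's two price points (the "10-tonne compute module" below is a hypothetical payload, purely for illustration):

```python
# Back-of-envelope on the launch-cost collapse described above.
shuttle_cost_per_kg = 60_000   # USD/kg, Shuttle era (article's figure)
starship_cost_per_kg = 20      # USD/kg, hoped-for fully reusable Starship

print(shuttle_cost_per_kg / starship_cost_per_kg)  # 3000x cheaper

# Launching a hypothetical 10-tonne orbital compute module at each price:
module_kg = 10_000
print(module_kg * shuttle_cost_per_kg)   # 600000000  -> $600M, a flagship mission
print(module_kg * starship_cost_per_kg)  # 200000     -> $200K, a shipping invoice
```

A 3,000x drop is the difference between a national space program and a logistics line item, which is exactly the "orbit as container shipping" shift.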
In that world, you don’t just send satellites, you send power and compute. Wide solar arrays in high orbit feed attached compute modules. Racks of GPUs and power electronics live in free-flying platforms or in the cold bowls of permanently shadowed lunar craters, where temperatures hover just tens of degrees above absolute zero. (Pretty great for quantum computers too, but that’s for a future piece!)
If AI really is the compression of sunlight into thought, then the path to true abundance runs through space. First we move our sensors and satellites outward. Then our factories. Sooner or later, we move our minds, or at least the machines that host them.
Seen that way, building AI and going to space are not two projects. They’re one project with two phases: learn how to turn energy into intelligence, then go where the energy is.
Thanks for joining us for our 13th issue. Now go listen to our podcast :)