Nvidia-backed startup Starcloud has trained an artificial intelligence model in space for the first time, signaling a new era for orbital data centers. The company’s Starcloud-1 satellite is now running Gemma, an open large language model from Google, and answering queries in orbit, marking the first time an AI model has operated on high-powered computing hardware beyond Earth’s atmosphere.
Last month, the Washington-based company launched a satellite carrying an Nvidia H100 graphics processing unit, a chip roughly 100 times more powerful than any GPU previously flown in space. The achievement is more than a technical milestone: it lays the groundwork for a broader industry shift toward orbital computing infrastructure.
The Historic Achievement Defining Space Computing
In addition to running Gemma, Starcloud trained nanoGPT, a small GPT-style language model created by OpenAI founding member Andrej Karpathy, on the H100 using the complete works of Shakespeare. The run demonstrates that training workloads previously confined to terrestrial data centers can operate successfully in orbit.
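For readers curious what such a training run looks like in code, the sketch below trains a deliberately tiny character-level language model on a local copy of Shakespeare’s complete works using PyTorch, in the spirit of Karpathy’s nanoGPT. The file name, hyperparameters, and model size are illustrative assumptions, not Starcloud’s flight configuration.

```python
# Minimal character-level language-model training sketch (PyTorch).
# Assumes a local text file "shakespeare.txt"; all sizes are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

device = "cuda" if torch.cuda.is_available() else "cpu"
block_size, batch_size = 128, 64

text = open("shakespeare.txt", encoding="utf-8").read()
chars = sorted(set(text))
stoi = {c: i for i, c in enumerate(chars)}
data = torch.tensor([stoi[c] for c in text], dtype=torch.long)

def get_batch():
    # Sample random windows; targets are the inputs shifted by one character.
    ix = torch.randint(len(data) - block_size - 1, (batch_size,))
    x = torch.stack([data[i:i + block_size] for i in ix])
    y = torch.stack([data[i + 1:i + block_size + 1] for i in ix])
    return x.to(device), y.to(device)

class TinyCharLM(nn.Module):
    """A very small GPT-style model: embeddings, one causal
    self-attention block, a feed-forward layer, and a linear head."""
    def __init__(self, vocab, d_model=128, n_head=4):
        super().__init__()
        self.tok = nn.Embedding(vocab, d_model)
        self.pos = nn.Embedding(block_size, d_model)
        self.attn = nn.MultiheadAttention(d_model, n_head, batch_first=True)
        self.ff = nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                                nn.Linear(4 * d_model, d_model))
        self.head = nn.Linear(d_model, vocab)

    def forward(self, idx):
        T = idx.size(1)
        x = self.tok(idx) + self.pos(torch.arange(T, device=idx.device))
        # Boolean causal mask: True marks positions that may NOT be attended to.
        mask = torch.triu(torch.ones(T, T, dtype=torch.bool, device=idx.device), 1)
        attn_out, _ = self.attn(x, x, x, attn_mask=mask)
        x = x + attn_out
        x = x + self.ff(x)
        return self.head(x)

model = TinyCharLM(len(chars)).to(device)
opt = torch.optim.AdamW(model.parameters(), lr=3e-4)

for step in range(2000):
    xb, yb = get_batch()
    logits = model(xb)
    loss = F.cross_entropy(logits.view(-1, logits.size(-1)), yb.view(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()
    if step % 200 == 0:
        print(step, loss.item())
```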
The satellite’s first message to Earth read, “Greetings, Earthlings! Or, as I prefer to think of you — a fascinating collection of blue and green.” Beyond the playful greeting lies real technical significance: the model can process queries and respond with sophisticated reasoning while orbiting hundreds of miles above Earth.
Starcloud CEO Philip Johnston, who co-founded the company in 2024, said Starcloud-1’s operation of Gemma is proof that space-based data centers can work and can run a variety of AI models in the future, particularly those that require large compute clusters. “This very powerful, very parameter-dense model is living on our satellite,” Johnston said. “We can query it, and it will respond in the same way that when you query a chatbot from a data center on Earth, it will give you a very sophisticated response.”
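As a rough illustration of what querying an open-weights model looks like, the snippet below uses the Hugging Face transformers pipeline with a public Gemma checkpoint. The model ID, prompt, and serving details are assumptions for illustration; Starcloud has not published its on-board inference stack.

```python
# Minimal sketch of querying an open Gemma checkpoint via Hugging Face
# transformers. The checkpoint is gated: you must accept the license on
# Hugging Face and be authenticated before it will download.
import torch
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="google/gemma-2-2b-it",   # an open Gemma checkpoint (assumed choice)
    torch_dtype=torch.bfloat16,
    device_map="auto",              # place on the GPU if one is available
)

prompt = "Where is the satellite right now, and what can it see?"
out = generator(prompt, max_new_tokens=128)
print(out[0]["generated_text"])
```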
Revolutionary Market Potential
The in-orbit data center market is projected to reach $1.77 billion in 2029 and, growing at a compound annual growth rate (CAGR) of 67.4%, $39.09 billion by 2035. That trajectory would make orbital computing one of the fastest-expanding technology sectors globally.
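As a quick sanity check on those figures, compounding the 2029 base at the stated growth rate for six years lands very close to the 2035 projection:

```python
# Compound the 2029 market size at the stated CAGR for six years.
base_2029 = 1.77           # USD billions
cagr = 0.674               # 67.4% compound annual growth rate
years = 2035 - 2029
projected_2035 = base_2029 * (1 + cagr) ** years
print(f"${projected_2035:.2f}B")   # ≈ $38.95B, close to the cited $39.09B
```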
The electricity consumption of data centers is projected to more than double by 2030, according to the International Energy Agency. Johnston told CNBC that the company’s orbital data centers will have energy costs 10 times lower than those of terrestrial data centers.
The economic advantages extend beyond energy savings. According to Starcloud’s white paper, the solar array powering a gigawatt-scale compute cluster of the size the company envisions would produce more power than the largest power plant in the U.S., while being substantially smaller and cheaper than a terrestrial solar farm of the same capacity.
Technical Innovation Behind Running AI in Space
Operating an AI model in orbit presents unprecedented engineering challenges. Starcloud CTO Adi Oltean said on X that getting the H100 running in space required “a lot of innovation and hard work” from the engineering team.
Launched in early November 2025, Starcloud-1 has to keep a data-center-class GPU working in conditions far harsher than those inside any terrestrial facility.
The space environment demands specialized solutions. Continuous sunlight in orbit removes the need for grid electricity, but cooling remains a problem because the vacuum provides no airflow; Starcloud says it is exploring air-based and liquid-based cooling and plans to fly the largest radiators ever deployed in space.
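The radiator sizing problem follows directly from the Stefan-Boltzmann law: in vacuum, heat can only leave by radiation, so the required area scales with the waste-heat load and the fourth power of radiator temperature. The heat load, emissivity, and temperature below are illustrative assumptions, not Starcloud’s published design values, and the calculation ignores absorbed sunlight and Earth’s infrared for simplicity.

```python
# Why orbital radiators have to be big: P = eps * sigma * A * T^4.
SIGMA = 5.67e-8      # Stefan-Boltzmann constant, W/(m^2·K^4)
heat_load_w = 1e6    # assumed 1 MW of waste heat from a GPU cluster
emissivity = 0.9     # typical for a coated radiator surface (assumed)
temp_k = 300         # assumed radiator surface temperature

area_m2 = heat_load_w / (emissivity * SIGMA * temp_k ** 4)
print(f"Radiator area needed: ~{area_m2:.0f} m^2")   # ≈ 2,400 m^2
```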
Competitive Landscape Intensifies
Major technology companies recognize the transformative potential of orbital computing. Google’s Project Suncatcher is a moonshot aimed at equipping solar-powered satellite constellations with TPUs and free-space optical links to one day scale machine learning compute in space. Google’s next milestone is a learning mission in partnership with Planet, slated to launch two prototype satellites by early 2027, which will test how its models and TPU hardware operate in space and validate optical inter-satellite links for distributed ML workloads.
SpaceX has joined the race with ambitious plans. Elon Musk announced in November 2025 that SpaceX would build orbital data centers using next-generation Starlink satellites, calling them the lowest-cost AI compute option within five years. Musk wrote, “Starship should be able to deliver around 300 GW per year of solar powered AI satellites to orbit, maybe 500 GW. The per year part is what makes this such a big deal.” He added that average U.S. electricity use is around 500 GW, which means orbital AI capacity could surpass that amount every two years.
Amazon founder Jeff Bezos also recognizes the opportunity. Bezos laid out his vision in October, predicting gigawatt-scale space data centers within 10 to 20 years. “We will be able to beat the cost of terrestrial data centers in space in the next couple of decades,” Bezos explained. “These giant training clusters … will be better built in space, because we have solar power there, 24/7.”
Real-World Applications Emerge
On-orbit computing enables new applications across multiple industries. Starcloud is developing customer applications that analyze satellite imagery from Capella Space; the system could identify lifeboats from capsized vessels at sea and detect wildfires the moment they start. Johnston said these capabilities would give first responders real-time intelligence.
The satellite can answer queries about its location and operational status in real time. Users can ask where it is positioned and receive responses like “I’m above Africa and in 20 minutes, I’ll be above the Middle East.” This demonstrates practical in-orbit functionality for location-aware services.
Earth observation represents another significant opportunity. Methods include optical imaging with cameras, hyperspectral imaging using light wavelengths beyond human vision, and synthetic-aperture radar (SAR) imaging to build high-resolution, 3D maps of Earth. SAR in particular generates enormous volumes of data, about 10 gigabytes per second according to Johnston, so in-space inference would be especially valuable when creating these maps.
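A back-of-the-envelope comparison shows why processing SAR data in orbit is attractive. The 10 GB/s figure comes from Johnston; the downlink rate and processed-product size below are hypothetical assumptions for illustration only.

```python
# Raw SAR data rate vs. an assumed downlink: the gap is what on-orbit
# inference is meant to close by sending down only derived products.
raw_rate_gbps = 10 * 8              # 10 gigabytes/s of raw SAR ≈ 80 Gbit/s
downlink_gbps = 1.0                 # hypothetical ground-station downlink
print(f"Raw SAR outpaces the assumed downlink by {raw_rate_gbps / downlink_gbps:.0f}x")

# If on-board inference reduces each second of raw data to, say, 10 MB of
# detections and map tiles, the required downlink shrinks dramatically.
product_rate_gbps = 10 / 1000 * 8   # 10 MB/s ≈ 0.08 Gbit/s
print(f"Processed products need only ~{product_rate_gbps:.2f} Gbit/s")
```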
Future Development Roadmap
Starcloud plans aggressive expansion. The company, a member of the Nvidia Inception program and a graduate of Y Combinator and the Google for Startups Cloud AI Accelerator, plans to build a 5-gigawatt orbital data center whose solar and cooling panels would measure roughly 4 kilometers on each side.
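A rough cross-check of those numbers: treating the whole 4 km by 4 km area as solar collection at an assumed cell efficiency puts the array’s output in the right ballpark for 5 gigawatts. The efficiency figure is an assumption, and in practice part of the area would be radiators rather than panels.

```python
# Order-of-magnitude check on a 5 GW array measuring ~4 km on a side.
solar_constant = 1361              # W/m^2 above the atmosphere
panel_efficiency = 0.23            # assumed cell efficiency
array_side_m = 4000
array_area_m2 = array_side_m ** 2  # ~16 million m^2
power_w = solar_constant * panel_efficiency * array_area_m2
print(f"~{power_w / 1e9:.1f} GW")  # ≈ 5.0 GW
```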
The next Starcloud launch is scheduled for October 2026 and will carry multiple Nvidia H100 GPUs. For future launches, the startup is looking to integrate Nvidia’s Blackwell platform, which Johnston expects will offer even greater in-orbit AI performance, with improvements of up to 10x over the Hopper architecture.
Investment Opportunity and Economic Transformation
As launch costs decline and AI compute demand surges, the orbital data center market is poised to grow into a roughly $40 billion industry by 2035. For investors seeking high-growth, high-risk opportunities, the sector offers a compelling mix of technological innovation and strategic value.
Google says its research shows that launch costs are falling fast enough that, by the mid-2030s, the running cost of orbital data centers could be competitive with terrestrial ones, with advances like SpaceX’s Starship projected to push launch prices below $200 per kilogram.
Addressing Environmental and Sustainability Concerns
Running AI in orbit offers environmental benefits that terrestrial facilities cannot match. “In space, you get almost unlimited, low-cost renewable energy,” said Johnston, whose startup is based in Redmond, Washington. “The only cost on the environment will be on the launch, then there will be 10x carbon-dioxide savings over the life of the data center compared with powering the data center terrestrially on Earth.”
Data centers in space would capture near-constant solar energy to power next-generation AI models, unhindered by Earth’s day-night cycle and weather. Space offers something Earth cannot: virtually unlimited solar power and radiative cooling to deep space. Solar panels can generate up to eight times more energy in orbit than on the surface of Earth.
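The “up to eight times” figure can be roughly reproduced from first principles: panels in a continuously sunlit orbit see the full solar constant around the clock, while terrestrial panels are limited by night, weather, and the atmosphere. The capacity factor below is a typical utility-scale assumption, and the exact ratio depends heavily on site and orbit.

```python
# Rough comparison of annual solar yield in orbit vs. on the ground.
solar_constant = 1361          # W/m^2 above the atmosphere
hours_per_year = 8766          # near-continuous sunlight assumed in orbit
orbital_yield = solar_constant * hours_per_year / 1000          # kWh/m^2/yr

ground_peak = 1000             # W/m^2 at standard test conditions
capacity_factor = 0.20         # typical utility-scale solar (assumed)
ground_yield = ground_peak * hours_per_year * capacity_factor / 1000

print(f"Orbit: ~{orbital_yield:.0f} kWh/m^2/yr, ground: ~{ground_yield:.0f}")
print(f"Ratio: ~{orbital_yield / ground_yield:.1f}x")   # ≈ 6.8x
```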
Technical Challenges and Solutions
Operating AI hardware in orbit faces unique obstacles. Analysts from Morgan Stanley have noted that orbital data centers could face hurdles such as harsh radiation, the difficulty of in-orbit maintenance, debris hazards, and regulatory issues tied to data governance and space traffic.
Building orbital infrastructure presents formidable technical hurdles. The radiation environment in space is hostile to delicate electronics such as GPUs, which require shielding and redundancy to withstand cosmic rays and solar events. Space debris, including spent rocket stages and dead satellites, poses a collision risk to large space-based structures.
Despite the challenges, solutions are emerging quickly. The satellites will have a five-year operational lifespan, based on the Nvidia chips’ expected lifetime. The main technical challenges (high launch costs, the need for radiation-tolerant hardware, and orbital debris) are being addressed through partnerships and innovation. For example, Axiom Space is testing its AxDCU-1 prototype on the International Space Station (ISS) to validate hybrid cloud applications and AI/ML in microgravity.
Industry Transformation Implications
Orbital computing infrastructure could reshape the cloud services landscape, potentially disrupting the terrestrial data center networks of providers like AWS, Azure, and Google Cloud. The successful demonstration of in-orbit AI training sets a precedent for migrating enterprise workloads beyond Earth.
Starcloud CEO Philip Johnston predicts that “in ten years, nearly all new data centers will be being built in outer space.” This bold prediction reflects growing confidence in orbital infrastructure viability.
“Anything you can do in a terrestrial data center, I’m expecting to be able to be done in space. And the reason we would do it is purely because of the constraints we’re facing on energy terrestrially,” Johnston said in an interview.
Strategic Implications and Conclusion
Starcloud’s success in operating an AI model in orbit represents more than a technological advance; it lays the foundation for a fundamental shift in computing infrastructure. For now, a single GPU has trained a language model on Shakespeare in space, and that alone marks a turning point in how AI may scale over the next decade.
The convergence of declining launch costs, improving space technology, and escalating terrestrial energy constraints creates favorable conditions for orbital data center adoption. As companies demonstrate practical on-orbit AI capabilities, the technology is transitioning from experimental concept to viable business.
Enterprise leaders should prepare for a future in which their most critical workloads may run hundreds of miles above Earth, powered by continuous solar energy and unconstrained by terrestrial infrastructure limits. The question is no longer whether orbital AI computing will succeed, but how quickly organizations will adapt to the shift.
Frequently Asked Questions
What makes Starcloud’s achievement historically significant?
Starcloud became the first company to train and operate an AI model in space on an Nvidia H100 GPU, demonstrating that complex computing tasks previously limited to Earth can be performed in orbit, where the company expects far lower energy costs.
How do orbital data centers compare to traditional data centers economically?
Starcloud projects energy costs roughly 10 times lower than terrestrial facilities thanks to continuous solar power unaffected by weather, and the orbital data center market is projected to reach $39.09 billion by 2035.
What technical challenges do orbital AI operations face?
Key challenges include harsh space radiation requiring specialized shielding, cooling systems adapted for the vacuum of space, space debris risks, and the difficulty of in-orbit maintenance, though companies are developing solutions for each obstacle.
Which major companies are investing in orbital AI technology?
Google has announced Project Suncatcher with prototype satellites launching by 2027, SpaceX plans AI-capable Starlink satellites, Amazon’s Jeff Bezos predicts gigawatt-scale space data centers, and Nvidia actively backs multiple orbital computing initiatives.
What real-world applications can orbital AI computing enable?
Applications include real-time wildfire detection, search and rescue operations identifying lifeboats at sea, enhanced Earth observation processing 10 gigabytes per second of SAR imagery data, and location-aware services for emergency response.
How will orbital AI computing impact environmental sustainability?
Orbital data centers can achieve 10x carbon-dioxide savings compared to terrestrial facilities by using unlimited solar energy without day-night cycles, eliminating water cooling requirements, and reducing strain on Earth’s power grids.
When will orbital AI computing become commercially viable?
Industry experts project commercial viability by the mid-2030s as launch costs drop below $200 per kilogram, with some companies like Starcloud planning 5-gigawatt orbital data centers and others scheduling prototype launches by 2026-2027.
