On-Demand GPU Startup Andromeda Raises $60M From Paradigm at $1.5B Valuation

Andromeda AI Inc., a startup that helps companies rent artificial intelligence infrastructure on a flexible basis, just closed a new funding round at a $1.5 billion valuation, cementing its place in the explosive on-demand GPU market. Venture capital firm Paradigm led the deal, bringing its total investment in Andromeda to $60 million. The timing is no accident. In 2025, AI startups alone captured 52.7% of all global venture capital — the first time artificial intelligence outpaced every other sector combined. Infrastructure plays like Andromeda sit at the very center of that frenzy, offering GPU infrastructure for AI startups that cannot afford or justify long-term cloud contracts.

Why the On-Demand GPU Model Matters Now

Securing GPU compute used to mean signing multi-year agreements with hyperscalers. That model works for tech giants. It doesn’t work for a two-person lab that needs to train a small language model for a few days or weeks. Andromeda solves this mismatch by operating a marketplace where companies can rent capacity from providers who have unused GPUs — without the administrative nightmare.

The numbers back up the shift. The GPU-as-a-Service market is estimated at $7.34 billion in 2026 and is projected to reach $25.94 billion by 2031, growing at a blistering 28.74% CAGR. Artificial intelligence workloads already account for 46.78% of that market’s revenue. Meanwhile, over 40% of new AI startups in 2024 ran entirely on cloud-based GPU clusters. These teams need on-demand GPU access — not rigid, locked-in contracts. They need cloud GPU instances for LLM training that spin up fast and shut down the moment a job finishes.

Inside Andromeda: How the Platform Works

Andromeda’s origin story reads like a Silicon Valley fever dream. The project began before Thanksgiving 2023 when investor Daniel Gross called entrepreneur Wil Moushey with a job offer. Gross and Nat Friedman, his co-founder at venture firm NFDG, had accidentally built what resembled a GPU trading desk. A small engineering team already worked on the project, then called the Andromeda Cluster. Somebody needed to run it full-time.

Today, Andromeda operates with a team of about 20 from a new San Francisco office and hopes to scale its compute-under-management to as many as 100,000 GPUs this year. The platform lets customers purchase capacity from multiple providers through a single interface and consolidated billing. Before any provider’s hardware goes live, Andromeda evaluates not only graphics cards but supporting equipment such as storage arrays — because malfunctioning flash drives and network issues can derail AI training runs worth millions of dollars.

That quality-control layer is what separates Andromeda from a bare marketplace. Teams searching for the best GPU rental for AI projects don’t just want cheap silicon. They want reliability. They want an affordable GPU server for machine learning that actually completes the job.

The On-Demand GPU Cloud Pricing Landscape in 2026

Understanding on-demand GPU cloud pricing helps explain why Andromeda’s marketplace model is gaining traction. The spread between providers is enormous. NVIDIA H100 GPUs — the workhorse of modern AI training — rent for as low as $1.49 per hour on Vast.ai and as high as $6.98 per hour on Azure. Specialist providers like Lambda Labs offer H100 instances at roughly $2.99 per GPU-hour, while hyperscaler on-demand rates hover between $3 and $5 per hour.

That pricing chaos is actually Andromeda’s opportunity. A researcher looking to rent an NVIDIA H100 for deep learning currently has to compare dozens of providers manually. Andromeda consolidates that process, standardizing quality and letting buyers shop across providers seamlessly. Paradigm co-founder Matt Huang described compute as a commodity in the making, similar to how wheat is graded and traded at scale. Technical standardization of GPU quality is the key enabler.

For organizations that need to rent NVIDIA H100s for deep learning without locking in capital, the comparison to purchasing is stark. Buying a single H100 card costs approximately $25,000 to $40,000, with a full 8-GPU server reaching $200,000 to $400,000 before you factor in power, cooling, and staffing. Cloud rental delivers better economics unless utilization exceeds 60–70% continuously — a threshold few startups hit.
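The break-even claim is easy to sanity-check with back-of-the-envelope math. The sketch below uses illustrative figures drawn from the ranges in this article — a $3/hour rental rate, a $30,000 card, a three-year depreciation horizon, and an assumed 1.8× overhead multiplier for power, cooling, and staffing; none of these are vendor quotes.

```python
# Rent-vs-buy break-even for a single H100-class GPU.
# All inputs are illustrative assumptions, not quoted prices.

HOURS_PER_YEAR = 24 * 365  # 8,760 hours

def rental_cost(rate_per_hour: float, utilization: float) -> float:
    """Annual on-demand rental cost: you pay only for hours actually used."""
    return rate_per_hour * HOURS_PER_YEAR * utilization

def ownership_cost(purchase_price: float, lifetime_years: float,
                   overhead_factor: float) -> float:
    """Annual ownership cost: amortized hardware times an overhead
    multiplier for power, cooling, and staffing. Fixed regardless of use."""
    return purchase_price / lifetime_years * overhead_factor

def breakeven_utilization(rate_per_hour: float, purchase_price: float,
                          lifetime_years: float, overhead_factor: float) -> float:
    """Utilization fraction at which renting and owning cost the same."""
    annual_own = ownership_cost(purchase_price, lifetime_years, overhead_factor)
    return annual_own / (rate_per_hour * HOURS_PER_YEAR)

rate = 3.00        # $/GPU-hour, mid-range specialist pricing
price = 30_000     # $ per card (article cites $25,000-$40,000)
years = 3          # depreciation horizon (assumption)
overhead = 1.8     # power/cooling/staffing multiplier (assumption)

be = breakeven_utilization(rate, price, years, overhead)
print(f"Break-even utilization: {be:.0%}")
for u in (0.2, 0.5, 0.8):
    print(f"  at {u:.0%}: rent ${rental_cost(rate, u):,.0f}/yr "
          f"vs own ${ownership_cost(price, years, overhead):,.0f}/yr")
```

With these assumptions the break-even lands at roughly 68% sustained utilization, consistent with the 60–70% threshold cited above; lower rental rates or pricier hardware push that threshold higher still.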

Andromeda’s Competition: A Crowded but Booming Field

Andromeda enters a market teeming with well-funded rivals. CoreWeave, the poster child for GPU-first cloud platforms, raised $1.5 billion in its March 2025 IPO and saw its revenue surge 737% to $1.92 billion in 2024. FluidStack raised $450 million in equity financing in January 2026 and is in talks for a further round at a $7.5 billion valuation. Both serve as GPU infrastructure for AI startups at massive scale.

Then there are the hyperscalers — AWS, Google Cloud, and Azure — which are losing on pure GPU compute pricing to specialist providers but winning on compliance, global reach, and 99.99% SLAs. GPU-first providers are offering 50–70% cost savings compared to the big three, a gap that means the best GPU rentals for AI projects are increasingly found outside traditional clouds.

Andromeda isn’t trying to out-build CoreWeave or out-spend FluidStack on data centers. Its play is more subtle. By aggregating supply from diverse providers and standardizing quality, it acts as a broker — a “market maker for compute,” as CEO Wil Moushey describes it. This asset-light model means lower capital requirements and faster scaling. It also opens the door to financial instruments like GPU futures and hedging, as the compute market matures toward commodity-like trading dynamics.

Why GPU Infrastructure for AI Startups Is the Defining Investment Theme of 2026

The venture capital data is unambiguous. Global investors poured $425 billion into startups in 2025, the third-highest year on record. AI captured close to 50% of all global funding, up from 34% in 2024. Five companies alone — OpenAI, Scale AI, Anthropic, Project Prometheus, and xAI — raised $84 billion, or 20% of all venture capital in 2025.

Every one of those companies needs cloud GPU instances for LLM training at staggering scale. That demand cascades directly to infrastructure providers. The whole sector is experiencing what Paradigm described as “a massive new market that really needs a lot of infrastructure to help it work smoothly.”

An affordable GPU server for machine learning isn’t a luxury anymore. It’s oxygen for any AI company. Small and medium enterprises are adopting GPU cloud services at a 29.02% CAGR through 2031, drawn by pay-per-use pricing that starts as low as $0.66 per hour. Andromeda’s on-demand GPU marketplace sits at exactly the right intersection of supply fragmentation and exploding demand.

What This Means for AI Builders and Founders

If you’re building AI products today, the message is clear: the on-demand GPU ecosystem is maturing fast, and the days of overpaying for underutilized hardware are numbered. Andromeda’s $1.5 billion valuation — achieved with a 20-person team — signals that investors see compute brokerage as a fundamental layer of the AI stack. The best GPU rentals for AI projects will increasingly come from platforms that aggregate supply, enforce quality standards, and offer transparent on-demand GPU cloud pricing.

For founders evaluating where to rent an NVIDIA H100 for deep learning or find cloud GPU instances for LLM training, the action items are practical. Compare specialist providers against hyperscalers. Look for platforms that verify hardware quality. Factor in the total cost of ownership — not just hourly rates, but networking fees, storage costs, and support.
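That total-cost-of-ownership point can be made concrete with a simple effective-rate comparison. The provider labels and storage/network fee figures below are hypothetical placeholders; only the headline GPU rates echo the ranges quoted earlier in this article.

```python
# Effective $/GPU-hour comparison: list price plus amortized extras.
# Provider names and fee figures are hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class Quote:
    name: str
    gpu_rate: float          # $/GPU-hour, on-demand list price
    storage_per_hour: float  # $/hour for attached training storage (assumed)
    network_per_hour: float  # $/hour amortized egress/interconnect fees (assumed)

    def effective_rate(self) -> float:
        """Total cost per GPU-hour once extras are folded in."""
        return self.gpu_rate + self.storage_per_hour + self.network_per_hour

quotes = [
    Quote("marketplace", 1.49, 0.40, 0.25),  # cheap silicon, extras billed separately
    Quote("specialist", 2.99, 0.10, 0.05),   # storage/network mostly bundled
    Quote("hyperscaler", 4.50, 0.30, 0.60),  # premium SLAs, metered egress
]

for q in sorted(quotes, key=lambda q: q.effective_rate()):
    print(f"{q.name:12s} list ${q.gpu_rate:.2f}/hr -> "
          f"effective ${q.effective_rate():.2f}/hr")
```

The ranking can shift once extras are included: a bargain hourly rate with à-la-carte storage and egress fees may cost more per useful GPU-hour than a pricier bundled offering, which is exactly why comparing list prices alone is misleading.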

The race for GPU infrastructure for AI startups is accelerating. Andromeda just raised the stakes. Whether it becomes the definitive marketplace for compute or gets overtaken by larger competitors with deeper pockets remains an open question. But one thing is certain: the era of the affordable GPU server for machine learning — elastic, quality-assured, and instantly accessible — has arrived.


Frequently Asked Questions

What is Andromeda AI and what does it do?

Andromeda AI is a startup that operates a marketplace where companies can rent GPU compute from multiple providers through a single interface. It vets hardware for performance and security before making it available, streamlining the procurement process for AI teams that need flexible, short-term access to GPUs.

Who funded Andromeda’s latest round?

Paradigm, a venture capital firm focused on crypto and emerging technologies, led the round. The deal brought Paradigm’s total investment in Andromeda to $60 million and valued the startup at $1.5 billion.

How much does it cost to rent an NVIDIA H100 GPU in 2026?

H100 rental prices vary significantly by provider. Rates range from approximately $1.49 per hour on marketplace platforms like Vast.ai to $6.98 per hour on Azure. Specialist GPU cloud providers generally charge between $2 and $3 per GPU-hour, while major hyperscalers charge $3–$5 on-demand.

How big is the GPU-as-a-Service market?

According to Mordor Intelligence, the GPU-as-a-Service market is estimated at $7.34 billion in 2026 and projected to reach $25.94 billion by 2031, growing at a 28.74% compound annual growth rate.

Who are Andromeda’s main competitors?

Andromeda competes with GPU-first cloud providers like CoreWeave (which went public in 2025 and serves OpenAI) and FluidStack (which raised $450M in January 2026), as well as hyperscalers AWS, Google Cloud, and Azure. Decentralized marketplaces like Vast.ai also overlap with its model.

Should AI startups rent or buy GPUs?

For most startups, renting is the better option. Purchasing a single H100 costs $25,000 to $40,000, and full server setups can exceed $400,000. Cloud rental only loses its cost advantage when GPU utilization exceeds 60–70% continuously — a threshold most startups don’t reach.

Who founded Andromeda?

The project originated with AI investors Nat Friedman and Daniel Gross, who were running a GPU procurement operation through their firm NFDG. Wil Moushey was recruited as CEO in late 2023 to spin the project into a standalone company.