Silicon Data's Carmen Li on Building the Bloomberg of GPU Pricing
INTERVIEW
Five Minutes with Silicon Data’s Carmen Li

When I first heard about trading “compute,” it struck me as odd. Why would we need market infrastructure for GPU chips the same way we do for oil?
Computing capacity, powered by GPU chips that run AI models, is reusable, unlike oil or wheat. But the more I thought about it, the more it started to make sense.
The closest analogy is the freight market. Like a cargo ship that hauls different loads on different routes, the same compute can train different models.
The freight parallel goes deeper than just reusability. When freight markets matured in the 1990s, they became fully financialized—traders could buy and sell exposure to shipping rates, hedge price swings, and trade Baltic Dry Index futures.
New commodities often become financialized once volatility and market demand make risk hedging necessary.
That moment may be arriving for compute, with some well-known Wall Street traders believing compute demand will rival and then exceed the demand for oil.
Back in May, I highlighted this quote from DRW founder Don Wilson, who has a history of bringing new markets to the mainstream:
"The total dollars spent on compute will, over the next 10 years, exceed total dollars spent on oil.”
Wilson is betting on that future.
He invested in Silicon Data, a company that provides GPU pricing data and benchmarking services to hedge funds, banks, and AI firms.
I spoke with Silicon Data’s founder, Carmen Li, who’s just as bullish. After stints at Bloomberg, Citi, and DRW Trading, she launched the company in April 2024 to bring financial infrastructure to compute. The company raised $4.7 million in May, which I covered here.
Full disclosure: Carmen and I worked at Bloomberg in 2022, but never crossed paths. This interview has been edited for clarity and length.

Compute is sort of like freight, moving digital tokens through AI models
You've worked at some major financial firms—DRW, Citi, Bloomberg. What made you take the entrepreneurial leap?
"I wasn't a 20-year-old genius with a great idea in college. I kept thinking about ideas throughout my career, but this was the first idea where I thought, 'I'm the only person who can do this.' I was completely biased, crazily overconfident about the whole situation. When I told everybody, I started my own company, everyone was like, 'I knew it'—Everyone was like, 'that's who you are.'"
What made you commit?
"I strongly believe compute will be the largest human resource going forward—it will surpass any energy product in a few years. We need all the traditional financial infrastructure: indexes, data benchmarking, futures, options, swaps for compute. I felt like I was one of the few people who understood both the trading side and GPUs. People who understand GPUs have limited experience with derivative products—options, futures, indexes. Those with expertise in derivatives aren’t necessarily fully attuned to the GPU space.”
What does Silicon Data do?
"Think of me as the Bloomberg for GPU pricing. You cannot buy compute from Silicon Data, just like you cannot buy actual stocks from Bloomberg. We're different from the spot exchanges—you can actually get physical compute from them. They do spot, we do the data layer. We're working with futures exchanges to launch products based on our indexes."
What problem are you solving in the GPU market?
"Even if you're a sophisticated user—say, a PM or machine learning engineer at a hedge fund—it's not your job to config and double-check GPUs. It's almost like being a great driver—it's not your job to fix the engine.
So let's say you're looking for GPU clusters and I tell you, 'Hey, I have 20 nodes in New Jersey, all H100s with the same configuration and Linux environment. You can run your workflow right away with good latency.' You pay for 10 days, maybe a month, and it's expensive. Very expensive. You then discover it's 20 nodes with a different setup than promised, or some of the Linux environments are inconsistent, so it takes a lot of time to synchronize.
There's no insurance, no guarantee, no standardization. It's mind-blowing. If you or I buy a T-shirt on Amazon, we can return it. But GPUs are freaking expensive and there's no insurance policy, no guarantees, nothing."
How do you solve that?
"One of our benchmarking services helps clients verify everything before they start their workload. Think of it like Carfax for GPUs. A third party verifies everything the provider promised—the chip UIDs, connectivities, latencies, performance within expected distributions. If performance is 20% below spec, you can negotiate a lower price. There's a price for everything—it doesn't mean it's worthless, but you need transparency."
You mentioned building a family of indexes. What does that look like?
"Right now we have H100. We're pushing out A100 in the next two weeks, which has longer price history. Then we'll have a token index. What's fascinating is H100 and A100 have almost zero correlation. They have completely different use cases and client bases—great for financial products. You'll see this chart with Bloomberg and Refinitiv very soon.”
Who's using your data today?
"All my clients are inbound so far. If you look at my client list, it's almost like you took a screenshot of the top hedge funds in the world. The calls are different from my Bloomberg days. Usually it's me, the CTO of the hedge fund, the head of AI, the head of machine learning, some PMs, and people who cover semis—all with different perspectives and questions about the data."
How are hedge funds actually using this pricing data?
"They're using our 4 million pricing points globally as leading indicators—for earnings next quarter, for supply-demand shifts in manufacturing cycles, for benchmarking against hyperscalers. For banks financing GPU clusters, they need to mark risk to market every day. They're not going to take your word that you're renting GPUs for $9 per hour—they'll look at public indexes to verify it's actually $3.20."
Where do you see token pricing heading?
Li explains that right now, token pricing is completely static across different platforms. But she thinks the future should have floating markets where tokens become tradeable currency.
"What I envision is pretty much like what DeepSeek started experimenting with," Li says. "You want a floating market for everyone's benefit. Say a new model comes out and people love to try it out, they pay a premium. Then they realize it is not the greatest. So I bought some tokens but I don't want them anymore."
"Token becomes a currency—like stock credit that can be exchanged."
Are infrastructure providers passing GPU cost savings to customers?
"No. Their costs are going down as GPU prices fall, but they're not reducing token prices. Their margins are supposedly increasing. DeepSeek is experimenting—they charge lower prices during off-peak hours. But most providers aren't doing dynamic pricing yet, even though it makes sense."
Any final thoughts?
"Every single stack is changing every day—GPU level, inference, models. Everything can be disrupted, even Nvidia. For us at Silicon Data, this constant change is exciting because we create benchmarking and data sets. But if you're in the industry, you really have to be extremely vigilant."

ADDENDUM
Future “Five Minutes with” Interviews
This is the first “Five Minutes with” interview I’ve done this year. I had sunsetted written Q&As once I started my podcast because I thought they would be duplicative. But I keep having organic conversations where I think, ‘this would make for a great interview’ (that’s what happened with what started as an intro chat with Carmen). And luckily, with AI recording, I can quickly transcribe a conversation and share it with you. I plan on doing more in the future.
ICYMI: Here’s a list of the interviews I did last year covering many evergreen topics:
Fitch’s Jayeeta Putatunda on causal AI reducing hallucinations
Former JPM Exec Tucker Balch on scaling investment analysis with AI
USC's Matthew Shaffer on using ChatGPT to estimate “core earnings”
Moody’s Sergio Gago on scaling AI at the enterprise level
Ravenpack | Bigdata.com’s Aakarsh Ramchandani on AI and NLPs
PhD candidate Alex Kim on signals with executive tone in earnings calls
MDOTM’s Peter Zangari, on AI for portfolio management
Arta’s Chirag Yagnik on AI-powered wealth management
Finster’s Sid Jayakumar on AI agents for Wall Street
Sov.ai's Derek Snow on AI for fundamental investors
Bain’s Richard Lichtenstein on AI adoption in private equity
Snowflake’s Jonathan Regenstein on AI building novel datasets
Skadden’s Dan Michael on the SEC’s AI stance
Stardog’s Matt Lucas on hallucination-free AI
Celent’s Monica Summerville on AI Adoption in capital markets
Aveni's Joseph Twigg on building a finance LLM
Persado’s Assaf Baciu on tailored AI marketing at banks
Professor Alejandro Lopez-Lira on AI-driven stock predictions