Kirk McKeown spent about 15 years running what he calls the “factory”: some of the largest fundamental channel-check and data-driven research operations on the Street, first at Glenview, and later at SAC Capital and Point72. At its peak, his team was conducting several thousand calls each year. Kirk’s role evolved until, ultimately, he ran all proprietary research at Point72 across calls and data. After years of managing this massive human-capital engine, he realized the “moat” in institutional finance was shifting from access to data toward the architecture used to structure it.
In 2021, he co-founded Carbon Arc, a platform built to structure data so it can be sold by consumption rather than locked up as a long-term asset. Carbon Arc unlocks data trapped on balance sheets and is fresh off a period of rapid institutional adoption. Now the company is betting that the future of alpha resides in a “refinery” capable of structuring 100 trillion transactions for the coming wave of 30 billion AI agents, solving problems not only on the Street but for all types of businesses around the world.
I spoke with Kirk about his journey from the “manual” Wall Street factory to Carbon Arc’s “agentic” refinery. Here is what readers will learn:
How the “several thousand calls per year” grind forged a mathematical framework for global market structures.
Why Carbon Arc treats data as a derivative with time decay (Black-Scholes for data).
The transition from “drilling” (data collection) to “refining” (knowledge graphs).
Why the next 12 months belong to “automated agent onboarding” over retail chat.
This interview has been edited for length and clarity.
Matt: You had a long history in the hedge fund industry. What made you decide to jump ship and start something of your own?
Kirk: In 2012, I went to SAC to build what is now called Canvas. I had run a similar operation at Glenview Capital, where I built a large fundamental research business collecting information in supply chains. From 2006 to 2014, I did thousands of calls a year myself. That volume forged strong principles for me around scaling research problems.
Research frameworks are patterns. The same story happens over and over. What’s happening in the U.S. government right now has happened three or four times. It’s just different names and different clothes. The world follows rules based on market structure, business models, management teams, and personality types.
I started to learn that hospitals and hotels are the same business because they both get paid on length of stay. I learned that TSMC and US Steel are the same business because while they have different end markets, they make money the same way. In the Global 2000, there aren’t 2,000 companies. I believe that there are four market structures and nine business models.
You start to find these scale points. For example, Tractor Supply is like a home center for rural areas. 25% of their business is animal feed and 25% is Texas. If you get a handle on animal feed in Texas, you get a handle on that business and can make a better risk-adjusted bet. I developed mental hacks and mathematical decision frameworks. It wasn’t because I was smart. It was because the “n” was so big.
By the time I left Point72, I was running proprietary research, managing the people responsible for generating actionable insights for the Firm’s investment teams to use as inputs in their process.
While working in research at these great firms, I started looking at the friction in the data market. It traded like 1930s equities: big block trades for bags of cash for market insiders with massive balance sheets. I looked at the legal and compliance frictions—it takes forever to get a data set approved. There were technical frictions. In 2016, Snowflake had been around for two years and Databricks had just come onto the scene. The infrastructure to manage this data at scale didn’t exist before 2010.
The pricing and commercial construct was built for 500 qualified buyers, not five million, so the clearing price was very high. If you could bring supply and demand closer together, smash down the cost of the insight, and sell the insight rather than the asset, you could achieve density and velocity of consumption.
Matt: How has the business evolved since you started it in 2021?
Kirk: We started building in 2021. Fast forward five years, and we have a two-sided consumption-based platform. Data asset owners bring their data, and we structure and graph it for the AI economy. We created composable infrastructure that allows people and agents to plug into the front of the stack. You can hit a modularized data structure to request an entity (like Lululemon), a framework (like revenue growth), or an asset (like credit card data). You compose those into an element and buy it for $5. We built an ontology that manages the modular analytical framework and a payment processor to meter it.
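To make that composition model concrete, here is a minimal sketch of what an entity/framework/asset request and its metering might look like. The class, the field names, and the flat $5 price are hypothetical illustrations of the pattern Kirk describes, not Carbon Arc’s actual API.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class Element:
    """A hypothetical composable element: entity + framework + asset."""
    entity: str       # e.g. a company such as "Lululemon"
    framework: str    # e.g. an analytical frame such as "revenue_growth"
    asset: str        # e.g. an underlying data asset such as "credit_card_panel"
    granularity: str  # e.g. "monthly"

def price_element(element: Element, unit_price: float = 5.00) -> float:
    """Meter the request at a flat per-element price (illustrative only)."""
    return unit_price

request = Element(
    entity="Lululemon",
    framework="revenue_growth",
    asset="credit_card_panel",
    granularity="monthly",
)

# In a consumption model the buyer pays for this one element,
# not for a license to the whole underlying data set.
print(json.dumps(asdict(request), indent=2))
print(f"metered price: ${price_element(request):.2f}")
```

The point of the sketch is the unit of sale: the buyer composes a single question-sized element and pays per consumption, rather than licensing an entire data asset up front.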
We started the stack in 2021, but when ChatGPT came along, we recognized we needed to be in graph. We tore down what we had built and started fresh. We rebuilt the platform as a knowledge graph. We have 100 trillion transactions structured in graphs. We modularize the entity structure around companies, brands, people, and locations.
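The entity modularization can be pictured as a small property graph. The node and edge types below (OWNS_BRAND, SOLD_IN, SPENDS_AT) and the sample values are illustrative assumptions about how companies, brands, people, and locations might be linked over transaction data; they are not Carbon Arc’s actual schema.

```python
from collections import defaultdict

# Toy property graph: nodes keyed by (type, name), edges stored as adjacency lists.
nodes = {
    ("company", "Lululemon"): {"ticker": "LULU"},
    ("brand", "Align"): {"category": "apparel"},
    ("location", "zip:78701"): {"state": "TX"},
    ("cohort", "income_50k_75k"): {"size": 1_000_000},
}

edges = defaultdict(list)
edges[("company", "Lululemon")].append(("OWNS_BRAND", ("brand", "Align")))
edges[("brand", "Align")].append(("SOLD_IN", ("location", "zip:78701")))
edges[("cohort", "income_50k_75k")].append(("SPENDS_AT", ("company", "Lululemon")))

def neighbors(node, relation):
    """Follow edges of one relation type out of a node."""
    return [dst for rel, dst in edges[node] if rel == relation]

# Walk the graph instead of scanning raw transactions:
# which brands roll up to the Lululemon entity, and where are they sold?
for brand in neighbors(("company", "Lululemon"), "OWNS_BRAND"):
    print(brand, "->", neighbors(brand, "SOLD_IN"))
```

Structuring transactions this way is what lets an agent ask for one aggregated slice of the graph rather than pulling and re-deriving the raw data each time.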
For example, if someone wants to buy the average salary in 40,000 zip codes monthly to understand wallet structure, they can buy that aggregated from us. They don’t have to buy the whole data set for several hundred thousand dollars. I’m making a market for them and running the business like Goldman in the 90s.
Matt: What is happening to the price of data now that the cost of intelligence is decreasing?
Kirk: Companies like FactSet or Bloomberg have valuable data, but they don’t monetize it the right way. The structural commercial relationship between how agents interface with data and how they value it is changing. If you’re selling case law, like Westlaw, and an AI model ingests it once, the model company effectively owns it. Rewriting that licensing construct is hard because IP rules and laws were not built for agents.
Analytical platforms and database companies get hit because they get paid on compute. Compute is going to be socialized and optimized. It’s the wrong pole in the tent to get paid on. In the oil business, you don’t want to be a driller. You want to own the field or the refinery. Drilling is a bad business. Refining is a fixed-cost, high-volume framework. We are building a refinery.
Matt: How do your former colleagues on Wall Street react to these concepts like knowledge graphs and ontologies?
Kirk: I’ve been evangelizing this for a long time. This is just Wall Street from 1984 to 2025 on a truncated time horizon.
In 1973, the Black-Scholes paper was published. In 1983, Goldman launched the first quant desk, the beginning of the quant age on Wall Street. Between 1984 and 1990, early quant shops competed on models and saw 80% per annum alpha returns. Alpha degraded through the ‘90s as models competed. After the 1998 LTCM crash, ETFs formed because quants needed bigger liquidity profiles. Following the 2007 quant crash, factors came along, rates went to zero, and the factor market formed. Over that time, commissions went from $2 a share in 1983 to less than a penny today, while volumes went up 10,000x. In 1984, 50% of New York Stock Exchange trades were blocks. Today, it is 7%. Our stack is built to remove frictions to allow models to engage.
In 1985, when models started to proliferate, the traditional guys pooh-poohed them. In 1995, when electronic trading came along, floor traders said it would never work because people liked talking to people. It’s the same thing as Blockbuster. There are resisters, but everyone uses data.
To us, OpenAI and Anthropic are just hedge funds. They are writing models to create lift in decisioning and competing on that lift. They are buying and selling scientists the way Millennium and Citadel do. I’d argue they are on the wrong capital structure—they should be raising GP/LP stakes rather than VC money. They don’t have a moat other than capital. Wall Street ends up winning the AI wars over the medium term because of regulation and their historical relationship with modeling the world.
When I worked at Point72, I had to find simple analogies to manage a big group. Data is a content business. You can’t own data end-to-end as an individual. You need engineers, scientists, analysts, and salespeople. Content must be relevant, differentiated, and accessible. Accessibility is asymptotic and relevance is table stakes. Differentiation is the only thing that separates us, and in content, that means more data and better questions.
OpenAI and Anthropic have trained on a relatively small amount of data, mostly scraped from the web. To manage the world’s inventories and inform global decisions, you need access to transaction data that shows how people spend their time and money, and what sits on their balance sheets. That is what Carbon Arc has built. We have 75 assets, three petabytes of data, and daily granularity for $150,000 a month in compute. We smashed the cost down. Now we are scaling both the supply and distribution sides.
Matt: Can large hedge funds or banks build this themselves, or do they face structural issues?
Kirk: They can build it, but they have a competitive problem. They monetize data through the market, so they won’t share their alpha back with data providers. We built a data transaction processor that creates liquidity for data providers and opens up their distribution. Data providers are coming to us because they can distribute broadly rather than doing one big exclusive check with a firm like Two Sigma, Citadel or D.E. Shaw.
Hedge funds want exclusive data and don’t want it proliferated. Data providers just want to get paid. Because data has historically been expensive and hard to work with, only global businesses and large funds could buy a million-dollar data asset. That market structure is what we are changing.
We launched platform 2.0 in mid-2025. We started last year with 35 customers and ended with 75, quadrupling revenues. Half of our customers are Wall Street buy-side and sell-side. We work with five of the top eight consulting firms, and we have good coverage in media and Hollywood. Companies like Paychex are both suppliers and customers. We are launching automated onboarding for agents on February 17th. We didn’t build this platform for eight billion people. We built it for 30 billion agents.
Matt: How do you see the market for small and mid-sized businesses (SMBs) and retail users?
Kirk: We are launching retail in March 2026. We will launch our MCP server for people with Robinhood or Kalshi accounts. We’re going on Reddit to offer a hedge fund data stack for $20 a month. For SMBs, a VP of Finance at a small healthcare business can pay $200 a month to do competitive analysis by plugging Claude into our stack via MCP to query credit card, paycheck, and healthcare claims data.
A consumption-based model needs volume. We give away publicly sourced data, like SEC data, for free. This is a cost game. Models are democratizing analytics. If you are building on top of public data and overpricing it, you will lose. We think about the business in terms of cost per megabyte and price accordingly.
Markets are forming for things that seem bizarre, like Kalshi’s contracts, but the real differentiation is composable contracts on anything, anytime, anywhere. We are in the third inning of a doubleheader. This technology cuts friction and makes things economically viable that weren’t before.
I am an AI bull, but I am concerned about the next 10 to 15 years. The dislocation in the labor market will take time to absorb. There are massive regulatory and ethical issues. Civilization-changing situations—like electricity in the 1880s or the rise of quants in the ‘80s—always involve these cycles.
Matt: Where does the “moat” for your business exist in the long term?
Kirk: The moat ends up being regulatory. Our General Counsel came from Schulte Roth & Zabel, the largest data compliance shop. She is standardizing compliance as a product. As agents proliferate, we must ensure legal and regulatory standards are met. Right now, it’s the Wild West, with major publishers suing Silicon Valley shops for scraping.
We are leading with scalable compliance frameworks. It’s like Stripe. You need regulatory infrastructure to scale. We model the business after Goldman Sachs. Someone once said to me that Goldman is a regulatory wrapper that allows you to do cool stuff in 100 countries and apply capital against it. They are an enablement platform that marries regulatory access and capital.
In the future, someone will emerge as the “Moody’s of data,” scoring inputs. Centralization will happen around core scale points, just as it did with Coinbase in blockchain. Maintaining a competitive advantage when things move this fast is hard. It’s an infrastructure build. Some people stand up application layer businesses on top of OpenAI and hit $10 million in revenue in six months, but that’s a gold rush. It’s not sustainable because there’s no underlying moat once others join. We are building the infrastructure. We think that scales. We think that sustains. We think that’s permanent.
We aren’t going anywhere any time soon.