How This AI Hedge Fund Updates Itself: Q&A with XAI’s Aric Whitewood

Hey, it’s Matt. You’re receiving this email after signing up for AI Street, which covers how investors are using AI to spot trading opportunities. This week:

🎤 A Q&A with Aric Whitewood, CEO of XAI Asset Management, on building an evolving AI hedge fund.

Forwarded this? Subscribe here. Join readers from McKinsey, JPMorgan, BlackRock & more.

INTERVIEW

Aric Whitewood runs XAI Asset Management, a systematic hedge fund built on self-updating AI that trades with minimal human intervention.

In this interview, we discuss:

  • How his fund “evolves” with markets through a closed-loop, Bayesian framework that updates relationships as new data arrives while keeping human tuning rare.

  • His background from aerospace and radar systems to leading early ML at Credit Suisse to launching a fully systematic fund.

  • Why LLMs should be treated as a tool, not the center of “intelligence,” and where they fit alongside time series models, information theory, and neuro-symbolic methods.

  • His concerns that too little focus on compute efficiency can inflate costs, encourage synthetic-data shortcuts, and lead to stretched valuations.

Here’s how the strategy has performed since launch. The emerging manager's XAI Systematic Macro strategy returned 38.6% in 2022, 0.2% in 2023, 16.3% in 2024, and 20.3% so far in 2025. The results reflect a steady level of risk aimed at keeping annual volatility near 15% and are reported in U.S. dollars before fees.
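
To make the volatility-targeting part concrete, here is a minimal sketch, in Python, of how a systematic strategy can scale its positions so realized volatility stays near an annualized target like 15%. It is illustrative only, not XAI’s code; the lookback window, leverage cap, and simulated return stream are assumptions.

```python
import numpy as np

def vol_target_scale(daily_returns, target_annual_vol=0.15, lookback=60, max_leverage=3.0):
    """Scale factor that pulls recent realized volatility toward the target.

    Illustrative only; a real fund's risk model is far more involved.
    """
    recent = np.asarray(daily_returns)[-lookback:]
    realized_annual_vol = recent.std(ddof=1) * np.sqrt(252)  # annualize daily vol
    if realized_annual_vol == 0:
        return max_leverage
    return min(target_annual_vol / realized_annual_vol, max_leverage)

# Example: a return stream running at roughly 20% annualized volatility
# gets scaled down to about 0.75x to pull risk back toward the 15% target.
rng = np.random.default_rng(0)
simulated_returns = rng.normal(0, 0.20 / np.sqrt(252), 250)
print(round(vol_target_scale(simulated_returns), 2))
```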

This interview has been edited for clarity and length. 

Matt: Tell me about yourself.

Aric: I started my career in the aerospace industry, after completing a PhD in radar systems and electronic engineering. I spent a number of years working on drones, back when drones weren’t yet mainstream, studying how swarms fly together. I also worked on ship-based sensor systems, which was very interesting, as well as some jet and helicopter projects.

Then I moved into banking, which was extremely interesting as well. I ran one of the first machine learning teams at Credit Suisse, and eventually became Head of Data Science across sales, trading, and all kinds of other functions.

I moved around—London, New York, then Zurich. And then I left at the beginning of 2017. It was a while ago.

Matt: Tell me about your fund.

Aric: The vision of the firm is to create a kind of multi-strat, but with AI creating all the pods. I know other people have claimed, ‘Oh, we have LLM traders, they do everything for you,’ but I’m not convinced by that. What we have is a real track record: actual trading of real assets, with double-digit returns over multiple years.

We’ve done it for macro assets, to some extent for stocks, and we’re now looking at options and other asset classes. The idea is to have pods, but all powered by a very consistent underlying framework—what I call a causal reasoning platform.

This platform pulls together elements of signal processing from my aerospace days, combined with AI and ML. It handles regime shifts and uncertainty. In fact, it embraces uncertainty. That was one of my early realizations: many quants see uncertainty as a nuisance. They widen their distributions, or they avoid it altogether because it doesn’t fit neatly into an equation.

But in signal processing, uncertainty is everywhere. In radar systems, you’re trying to detect targets with imperfect data, constant noise, and competing signals. Sometimes even your own radar system interferes with what you’re trying to see. In finance, the signal-to-noise ratio is just as bad, and worse still, it changes over time. That’s the challenge, but also the opportunity.

Our system makes uncertainty a feature, not a bug. It’s fundamentally Bayesian in nature. When you fly drones, you often use Markov decision processes to control them. The environment is uncertain, you never fully know what’s going on, but as you observe more data, you refine your understanding. That’s exactly what we’re doing in financial markets: continuously observing, updating, and adapting as prices come in and regimes shift.
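
For a flavor of what that kind of Bayesian, regime-aware updating can look like in code, here is a toy sketch in Python, not XAI’s platform: it keeps a belief over two hypothetical regimes ("calm" and "stressed") and updates it as each new daily return arrives. The regime parameters and transition probabilities are made-up assumptions.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical regimes: (name, daily mean return, daily volatility). Numbers are illustrative.
REGIMES = [("calm", 0.0004, 0.006), ("stressed", -0.0010, 0.020)]
TRANSITION = np.array([[0.97, 0.03],   # P(next regime | currently calm)
                       [0.05, 0.95]])  # P(next regime | currently stressed)

def update_belief(belief, observed_return):
    """One Bayesian filtering step: predict via the transition matrix, then
    reweight by how likely the new return is under each regime."""
    predicted = TRANSITION.T @ belief
    likelihood = np.array([norm.pdf(observed_return, mu, sigma) for _, mu, sigma in REGIMES])
    posterior = predicted * likelihood
    return posterior / posterior.sum()

belief = np.array([0.5, 0.5])              # start agnostic between the two regimes
for r in [0.002, -0.015, -0.025, 0.001]:   # a short stream of daily returns
    belief = update_belief(belief, r)
    print({name: round(p, 2) for (name, _, _), p in zip(REGIMES, belief)})
```

The point is not the specific numbers but the loop: observe, update the posterior, act, and repeat, which is the same filtering logic used to control drones under uncertainty.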

We’re now considering raising VC funding to expand and commercialize this causal reasoning engine.

Matt: How might that work? Applying the structure you have in finance to these other scenarios?

Aric: Yeah, exactly. When you think about drones—and autonomous vehicles more broadly—you can represent their behavior in terms of regimes. For example, a car can be in a regime where it’s approaching a junction and turning, and that turn can have its own subtypes. For a drone, landing is one regime, taking off is another, and there are many others.

The same framework we use in finance, handling uncertain information in a Bayesian way, fits these scenarios naturally.

But the key point is that the underlying representation engine, the way we encode causal relationships and make predictions, can be shared across these domains. It’s a common framework that works whether you’re dealing with financial markets or autonomous systems.

Matt: How would you characterize your mix of AI? It sounds like it’s mostly traditional techniques. Are you using LLMs for some of the causal aspects?

Aric: We use a combination of time series techniques, other ML methods, information theory, compression, all kinds of things mixed together. The LLM work is a bit more recent, but more generally, we’ve done language processing work—data pipelines, creating our own sentiment indicators from high-quality news data. That worked fine. Now we’re experimenting with open-source LLMs to infer information from text.

The issue with applying LLMs to time series is that we don’t have enough data. There’s some research on how small an LLM can be while still producing reasonable output. You can get down to around a million or a few million parameters and still get text that’s not bad. But training a system with millions of parameters on our data would be a disaster — an overfitting mess. Time series are very high-dimensional.
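
Rough, assumed numbers make the mismatch clear: even a few decades of daily data across many series yields far fewer observations than a "small" LLM has parameters.

```python
# Back-of-the-envelope arithmetic with assumed figures (not XAI's actual data inventory).
years_of_history = 25
trading_days_per_year = 252
series_tracked = 100                 # e.g. futures, FX, and rates series

observations = years_of_history * trading_days_per_year * series_tracked
model_parameters = 2_000_000         # "a few million parameters", small by LLM standards

print(f"observations: {observations:,}")        # 630,000
print(f"parameters:   {model_parameters:,}")    # 2,000,000
print(f"parameters per observation: {model_parameters / observations:.1f}")  # ~3.2
```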

I’d say the AI we rely on is closer to neuro-symbolic AI. It’s not new from a component perspective — but then again, LLMs aren’t really new either. They’re the result of years of iterative progress on architectures.

My point is that using the LLM as the center of reasoning for the whole architecture is risky. It’s not explainable, it’s not deterministic, it hallucinates — all the issues people have been talking about for a while. Whether that will change, I don’t know.

Matt: How do you view traditional AI versus LLMs?

Aric: When I ran a team at Credit Suisse, we looked at all kinds of different techniques. My view has always been that you match the technique to the problem you have. You don’t just take the “best” technique you think there is and try to apply it to everything. Sometimes a rule engine works very well—even though it’s from the 1980s and no one likes them anymore. But they can work extremely well.

At other times, an ontological system is really great because it can represent some of the knowledge stores you have. I’ve always been agnostic about these things.

I think a lot of AI researchers tend to go too far into pure computer science and don’t look at cognitive psychology or the human-driven research that’s out there, which is very interesting. Our approach is to actually draw some elements from that research.

Matt: So you’re reading psychology papers?

Aric: Yes, think about it. It’s not that we as humans are just continually pulling in data, processing it, and immediately spitting out an answer. There are more kinds of representations and reasoning going on. So why not be flexible in that approach?

I’ve seen people recently talking about world models [a machine’s internal “brain” modeling how the world works]. We’ve been doing that for years. We’ve effectively had a world model for financial markets running and doing trading for us for years. That seems like a very obvious thing to do. Why wouldn’t you?

Why would you assume that it’s just going to appear implicitly because you add more and more layers to your neural network? I’ve never understood that approach.

It almost feels like going down the world-model route is quite a lot of work. You have to think: how do I represent the information? Should I put it this way? How do I join it with everything else? It’s not easy at all. The other route — more compute, more data — feels simpler. You’re using the same techniques, just scaling up. But I don’t think that gets us anywhere. We see that with common sense failures, hallucinations, and all the other issues LLMs have.

Matt: How often do you tweak your model?

Aric: We don’t. We are completely systematic. Once we’ve trained the model, we run it — and it just keeps running. We might tune it after a year or two, but tuning doesn’t mean changing it completely. It might just mean adjusting parameters slightly if the model has drifted.

But generally, the system — the framework — takes in new information, updates its own parameters, updates its knowledge representation as new data comes in, and then trades. It’s completely a closed-loop system.

Matt: It’s evolving by itself?

Aric: Exactly. It’s an expanding window of knowledge, but it’s not like the number of relationships is exploding. It does increase, but because we compress information, some of the new data strengthens existing relationships and some creates new ones. The new ones might fade over time, becoming less relevant, or they might get reinforced and become long-term relationships that actually generate alpha. So, we have a system that continually updates its knowledge of financial markets.
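
A toy sketch of that strengthen-or-fade dynamic, in illustrative Python rather than the fund’s actual representation: each candidate relationship carries a weight that decays a little on every update and is reinforced when new evidence supports it; relationships that fade below a floor get pruned. The decay rate, learning rate, and example relationships are all assumptions.

```python
# Toy evolving knowledge store: relationships gain weight with supporting evidence and fade without it.
DECAY = 0.98          # per-update fade applied to every existing relationship
LEARNING_RATE = 0.1   # how strongly one piece of supporting evidence reinforces
PRUNE_BELOW = 0.05    # drop relationships that have faded to irrelevance

def update_relationships(weights, evidence):
    """weights: {relationship: strength}; evidence: {relationship: support score in [0, 1]}."""
    updated = {}
    for relationship, w in weights.items():
        support = evidence.get(relationship, 0.0)
        new_w = DECAY * w + LEARNING_RATE * support
        if new_w >= PRUNE_BELOW:
            updated[relationship] = new_w
    for relationship, support in evidence.items():     # brand-new relationships enter small
        if relationship not in updated and support > 0:
            updated[relationship] = LEARNING_RATE * support
    return updated

weights = {"oil up -> airline stocks down": 0.6}
new_evidence = {"oil up -> airline stocks down": 1.0, "rate cut -> gold up": 0.8}
print(update_relationships(weights, new_evidence))
```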

Matt: That’s not true of all systematic hedge funds, right?

Aric: No, it’s not true of all of them. Many have a human in the loop somewhere. Many funds use a bit of systematic trading and a bit of human trading. We’ve really gone in and built the fund as an end-to-end, fully systematic system.

It’s quite rare. One large investor said to us, “We haven’t seen many firms like you.”

Matt: Do you think AI shrinks the number of discretionary money managers?

Aric: People have been predicting the death of the discretionary trader for a long time, but it still hasn’t happened. There’s still value there. There are things machines can’t do well.

Matt: Any predictions on how AI will evolve over the next few years?

Aric: We’re going to carry on with our neuro-symbolic approaches and, hopefully, keep making money, and, as I said, maybe move into some other areas. But my view: there’s obviously been a lot of money spent on people to do superintelligence, with some people saying there are only 200 people in the world who can deliver on this, and so on and so forth. I view that as bubble mentality. There are literally thousands of good researchers in the world who know all about AI and can deliver all kinds of interesting things. There are loads of different techniques you could use. The problem is that we, as an industry, have parked ourselves in this little cul-de-sac: deep learning and transformer-based architectures, which are certainly powerful, but have their limitations.

Matt: What did you make of DeepSeek’s progress?

Aric: The main story is that they trained their model with far less compute power by being clever about how they trained it and introducing reinforcement learning.

So can you train a similar-quality model with less money? Yes, you definitely can. And there’s more and more research showing that you don’t need as large a model. You can get rid of about half of the weights and the model accuracy is only minimally affected. And you think, how much redundancy is there in this model? You really don’t need such a huge model, and you don’t need so much money to train it to produce that output.
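
A quick, generic illustration of that redundancy point (magnitude pruning on a random layer, not DeepSeek’s method): zeroing the half of a layer’s weights with the smallest magnitude removes only a small share of the layer’s total weight energy, and the output shifts far less than "half the weights gone" would suggest. The layer size and inputs here are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(0, 1, (256, 256))   # one dense layer; random stand-in for trained weights
x = rng.normal(0, 1, 256)

# Magnitude pruning: zero the 50% of weights with the smallest absolute value.
threshold = np.quantile(np.abs(weights), 0.5)
pruned = np.where(np.abs(weights) >= threshold, weights, 0.0)

energy_removed = 1 - (pruned ** 2).sum() / (weights ** 2).sum()
output_shift = np.linalg.norm(weights @ x - pruned @ x) / np.linalg.norm(weights @ x)
print(f"share of squared weight mass removed: {energy_removed:.0%}")  # only a small fraction
print(f"relative change in layer output:      {output_shift:.0%}")    # well under 50%
```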

Matt: That’s the assumption everyone’s making: oh, this is just what it costs.

Aric: It is an assumption. And then the other dangerous thing is going into this simulated-data area where you’re saying, I don’t have any more internet data to train on. I’ve used all the Reddit posts that exist and all the Wikipedia articles that are out there, and now I’ve run out. So what do I do? I’m going to generate some more data. I think that’s an incredibly dangerous path to take, because I really think the quality of the models will go down, not up. I think you need to use the data you have much more carefully and efficiently, not add more data. If you reach a brick wall and you’ve used all the data on the internet or whatever it is, there must be something wrong.

My aerospace background shapes how I think about compute. Back then, we were getting systems to run on FPGAs—embedded devices with limited memory—so efficiency was everything. We spent enormous effort getting fairly complex algorithms to work under tight compute constraints. Today, it feels like that isn’t a consideration. Instead of optimizing, we just throw massive GPU clusters at the problem and spend hundreds of billions on compute. I worry that this might not be good scientific practice. There should be more focus on efficiency rather than brute force. Does that not set off alarm bells?

We still don’t know if this approach will truly work out. My own gut feeling is that pure LLM approaches won’t lead to “superintelligence” or whatever term you want to use. I’m just not a believer in that outcome.
