
AI Stack with Harry Mamaysky of QuantStreet

Hey, it’s Matt. This week on AI Street Markets:

🎙️ An interview with QuantStreet co-founder, Harry Mamaysky, on how he’s using both traditional AI and LLMs to guide investment decisions.


INTERVIEW

Harry Mamaysky straddles two worlds: academia and markets.

A professor at Columbia Business School, he teaches and researches in areas spanning quantitative investing, fixed income, and machine learning. At the same time, he’s co-founder of QuantStreet, a wealth management and analytics firm that applies systematic, data-driven models to asset allocation. Before academia, Mamaysky spent more than a decade trading credit on Wall Street, an experience that helped shape his view of risk, modeling, and the limits of traditional investing approaches.

In our conversation, Mamaysky, who has a PhD in finance from MIT, explains how QuantStreet uses AI in practice, where the models fall short, and why he sees them as tools to make investors more efficient rather than replacements for human judgment. He also publishes research on Substack, which you can subscribe to here.

How did you get started with QuantStreet? 

I started QuantStreet like three and a half years ago—actually, probably four years ago—with my brother. And our idea was to have a systematic asset allocation strategy for individual investors as well as potentially institutional investors who allocate to public markets.

I am currently a professor at Columbia Business School, in addition to what I do at QuantStreet. I traded single-name credit for about 13 years—at QuantStreet we don’t do anything with single names.

We’re not in a position to have the information we need to be competitive in trading single names, so we don’t do any of that stuff. We work at the level of ETFs, very liquid. We do asset allocation. The way we do it is we use mean-variance optimization.

We have targeted risk levels. At each risk level, we want to find the highest expected return portfolio. The implementation of that idea is much more complex. We track about 40 different asset classes. For each one, we have a machine learning model that creates a return forecast.

We feed it a lot of data. It selects the elements of the data that it thinks are relevant to that asset class. We look at the trend, and we combine the trend with the model—that forms the basis of our return signal.

For each of our asset classes, we have estimates of historical volatility, correlation, and tail risk. And then we say, okay, we want a 60/40 portfolio, which for us means the same risk level as the 60/40 stock-bond portfolio over the prior year. That’s the risk budget. Within that risk budget, let’s construct the highest expected return portfolio we can, subject to only using up that much risk.

The process has a lot of constraints. You can’t have too much exposure in this sector or that sector. There are limits on tail risks. So not only does it have to be a 60/40 portfolio, but it can’t have excessive tail risk at that risk level. So that’s our process.
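To make that concrete, here is a minimal sketch of a risk-budgeted mean-variance step of the kind described above, with made-up return forecasts, a made-up covariance matrix, and a single illustrative position cap; the actual process includes many more constraints (sector limits, tail-risk limits) that aren't shown.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical inputs: per-asset-class return forecasts and a covariance
# matrix estimated over the prior year (stand-ins, not QuantStreet's data).
mu = np.array([0.06, 0.03, 0.08, 0.04])
cov = np.array([[0.0400, 0.0020, 0.0300, 0.0010],
                [0.0020, 0.0025, 0.0015, 0.0005],
                [0.0300, 0.0015, 0.0625, 0.0010],
                [0.0010, 0.0005, 0.0010, 0.0100]])

# Risk budget: the volatility of a 60/40 stock-bond mix over the same window
# (assumes assets 0 and 1 are the stock and bond ETFs).
w_6040 = np.array([0.60, 0.40, 0.0, 0.0])
risk_budget = np.sqrt(w_6040 @ cov @ w_6040)

constraints = [
    {"type": "eq",   "fun": lambda w: w.sum() - 1.0},                        # fully invested
    {"type": "ineq", "fun": lambda w: risk_budget - np.sqrt(w @ cov @ w)},   # vol <= budget
]
bounds = [(0.0, 0.35)] * len(mu)  # illustrative per-position cap

# Maximize expected return subject to the risk budget and the caps.
res = minimize(lambda w: -(w @ mu), x0=np.full(len(mu), 0.25),
               bounds=bounds, constraints=constraints, method="SLSQP")
print("weights:", np.round(res.x, 3), "expected return:", round(res.x @ mu, 4))
```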

It’s very data-driven. In that process, we use some non-traditional data, but data that we get from other sources.

What kind of data?

We throw in the kitchen sink.

It’s realized volatility, implied volatility, the level of interest rates, inflation, GDP growth, valuation metrics, profitability metrics if it’s an equity asset class, price-to-earnings multiples, price-to-book ratios—kind of everything. We have some other series, like we use Economic Policy Uncertainty as a measure of the craziness in government. So that’s one of the inputs. The San Francisco Fed publishes a series called Economic News Sentiment, which we can just get from their website. It’s an average sentiment of economic news from 16 or 20—I forget the exact number—major regional U.S. newspapers.

So that’s an input. For each asset class, we use a machine learning approach to whittle down the 30 different forecasting variables. Not all of them are relevant to every asset class, so the machine learning part of it throws out 27 of the 30 and says these are irrelevant for this asset class.

It keeps the three it thinks matter, and then it creates a return forecast. We have the same process for every asset class, even though every asset class ends up having its own distinctive model, because the forecasting variables selected for each one are different.

The coefficients with which the forecast returns are combined are different, but the process is the same for every asset class.
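The interview doesn't name the selection algorithm, but an L1-penalized (Lasso) regression is one common way to get this "keep a few, discard the rest" behavior; the sketch below runs it on synthetic data as a stand-in for the real forecasting variables.

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Hypothetical panel: ~30 candidate forecasting variables (realized and implied
# vol, rates, valuation ratios, policy uncertainty, news sentiment, ...) and the
# asset class's subsequent return. Synthetic data, not real series.
n_obs, n_vars = 240, 30
X = rng.standard_normal((n_obs, n_vars))
true_beta = np.zeros(n_vars)
true_beta[[2, 11, 25]] = [0.5, -0.3, 0.4]          # only three variables actually matter
y = X @ true_beta + 0.5 * rng.standard_normal(n_obs)

# Standardize, then let the L1 penalty zero out the variables it deems irrelevant.
Xs = StandardScaler().fit_transform(X)
model = LassoCV(cv=5).fit(Xs, y)

kept = np.flatnonzero(model.coef_)
print("variables kept for this asset class:", kept)
print("return forecast for latest observation:", model.predict(Xs[-1:, :])[0])
```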

So that’s our framework.

It is AI in the traditional sense—it’s machine learning in the traditional sense—it’s statistical AI. A few years ago, this was just called data, and now it’s called AI. Today AI is thought of as LLMs, but AI isn’t the right solution everywhere. You don’t need a sledgehammer to drive a small nail into a table. There’s the right tool for the right problem.

For this problem, this is the right tool. We use AI in different ways. For example, one of the other supporting pieces of evidence we have for our portfolio construction process is a model that tries to forecast the probability of a large sell-off.

We want to know, for the assets we’re invested in, how likely we think it is there’s going to be a large sell-off. What we’re asking the model to do is generate a probability measure between zero and one of the likelihood that a given asset class experiences a greater than 20% sell-off over the next year, let’s say.

For that, you need to use some kind of machine learning tool. We actually use two different ones. One is logistic regression, which is essentially a one-layer neural network. In addition, we use a deeper neural network with more layers to create this forecast of a large sell-off.
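As a rough illustration of that two-model setup, the sketch below fits a logistic regression and a small multilayer network to synthetic features labeled 1 when the asset class fell more than 20% over the following year; the feature set and data are invented for the example.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)

# Synthetic features and labels: y = 1 if the asset class dropped more than
# 20% over the next year, else 0 (invented data, purely illustrative).
X = rng.standard_normal((500, 10))
y = (X[:, 0] - X[:, 3] + 0.5 * rng.standard_normal(500) > 1.2).astype(int)

# Logistic regression: effectively a one-layer network with a sigmoid output.
logit = LogisticRegression(max_iter=1000).fit(X, y)

# A deeper network trained on the same classification problem.
mlp = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000,
                    random_state=0).fit(X, y)

x_today = X[-1:, :]
print("P(>20% sell-off), logistic:", logit.predict_proba(x_today)[0, 1])
print("P(>20% sell-off), deep net:", mlp.predict_proba(x_today)[0, 1])
```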

I would imagine there would be investor demand for that forecast.

Yes, whether we can answer it in a definitive way is another matter. For example, we completely missed the sell-off this year. Nothing in our model told us we’d have a massive trade war, and we had no variable that captured that risk. It was simply outside the scope of our model.

When the sell-off happened, I told clients that if you strip out the geopolitics, the market fundamentals looked fine. Some media narratives compared it to the dot-com bubble, but that was total nonsense. The variables that were flashing red before the dot-com bubble weren’t elevated this time. Everything was benign.

While the model failed to predict the 20% correction, it did help us evaluate it. We could say with confidence this was not a fundamentally driven correction—it was policy noise. Based on that, we made no portfolio changes. That restraint paid off when the market bounced back.

That’s the point of these models. They’re not crystal balls, but they can rule out bad explanations. In this case, they showed the sell-off didn’t resemble the dot-com bubble, which gave us the confidence to sit tight—a decision that proved right in retrospect.

What do you think is the right use case for AI?

Perplexity is an AI engine tied into a lot of financial data. You go to Perplexity’s site or chat, upload your portfolio spreadsheet, and say: here are my positions, now find me all the news flow over the past month. It scrapes the web, pulls the headlines, and you ask, “How does this news impact each of my positions?”

That’s the idea. You can already do it with consumer-facing tools—ChatGPT, Gemini, Perplexity. It’s a trust-but-verify process: I trust the output, but I always check it. If it flags a headline, I confirm it exists. Perplexity alone can sift through thousands of headlines and surface the few that matter. You still have to decide on the impact, but it saves you from hiring an analyst.

We also use this in financial planning. A client might say, “My employer’s offering a new tax-advantaged plan. Should I invest?” I’ll do the analysis, then double-check with an LLM: here’s the scenario, here are my assumptions, how would you analyze it?

I’ve been in finance nearly 25 years, and these models often replicate what I do. Sometimes they make algebra mistakes—because they guess rather than calculate—but when I correct them, they fix it. That’s valuable. They can also generate a two-page client explanation in seconds, something that would take me hours to write. I review it, verify it, and send it along with attribution. Occasionally, the model even catches a nuance I missed.

For example, Gemini 2.5 Pro once flagged a different cost-basis assumption than mine. It didn’t change the overall result, but it showed a detail I had overlooked. That’s the kind of backstop that makes these tools useful.

Six months ago Gemini couldn’t do this. 2.5 Flash still can’t. But 2.5 Pro, with its chain-of-thought reasoning, can. And in another six months, the models will handle even more.

It’s remarkable.

The gap between AI capability and adoption is widening. The technology is moving faster than the humans using it.

I rely on it more every day than I did three months ago. I’m constantly asking, “How can I use this to be more efficient?” For example, the scripting language under Google Sheets is JavaScript; for Excel it’s Visual Basic. I’m not an expert in either. In the past, if I wanted Excel to do option pricing, I’d need to write a Black-Scholes pricer. I knew the math, but not the syntax—it was tedious and could take me an hour and a half.

Now I just go to Gemini: “Write me a JavaScript module for Google Sheets that does Black-Scholes pricing.” It spits it out in seconds. I copy, paste, and it works. What used to take hours is now instant.
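The module he describes is Google Apps Script (JavaScript); the sketch below shows the same Black-Scholes call formula in Python, just to illustrate the math such a pricer implements.

```python
from math import erf, exp, log, sqrt

def norm_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(s: float, k: float, t: float, r: float, sigma: float) -> float:
    """European call price under Black-Scholes, no dividends.
    s: spot, k: strike, t: years to expiry, r: risk-free rate, sigma: volatility."""
    d1 = (log(s / k) + (r + 0.5 * sigma ** 2) * t) / (sigma * sqrt(t))
    d2 = d1 - sigma * sqrt(t)
    return s * norm_cdf(d1) - k * exp(-r * t) * norm_cdf(d2)

# Example: at-the-money one-year call, 20% vol, 3% rates.
print(round(black_scholes_call(100, 100, 1.0, 0.03, 0.20), 4))
```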

How do you see AI impacting jobs? 

People will keep doing the same jobs, but with new tools. These models aren’t at a point where they can make decisions on their own—humans have to learn how to use them. It’s like when Excel first came out. Before that, people used calculators, or Lotus 1-2-3. Spreadsheets didn’t eliminate analysts; analysts just had to learn spreadsheets. That’s where we are with AI today. The jobs won’t disappear, but the people doing them will need to adapt.

I don’t anticipate mass layoffs in the financial sector. Our students at Columbia already use AI for everything, and professionals will do the same. The tools will make people more productive.

As a species, we’re heading toward greater complexity. We may have people on the Moon or Mars in my lifetime. Society isn’t what it was a thousand years ago—it’s more complex every year. We need tools like AI to navigate that complexity and expand the frontier of what’s possible.

It’s the same in finance. Companies themselves are more complex. Microsoft today doesn’t look anything like Microsoft 30 years ago. The scale, the supply chains—it’s a different order of complexity. And when we’re mining the Moon for resources, analyzing companies will get even harder. That’s why we need these tools. 
