AI Turns Plain English Into Backtests: Lord Abbett’s Tal Fishman
Two months ago, vague prompts failed about 80% of the time. With the latest models, they now often work on the first try, he says.
INTERVIEW
For Tal Fishman, AI was little more than autocomplete a year ago.
That changed in December. Vague prompts that once failed began producing correct results.
Now, AI can turn a plain-English trading idea into a full backtest report that includes data cleaning, code, and analytics, says Fishman, head of fixed income quantitative research at the $248 billion asset manager Lord Abbett.
“The error rate from a vague prompt used to be 70–80%. In December that flipped: vague prompts started working correctly on the first try about 80% of the time,” he told me in an interview.
For Fishman, AI is not infallible, but it makes testing quant ideas dramatically cheaper and faster. Projects that once required weeks of quant time can now be attempted in days or hours.
Counterintuitively, he sees demand for quant work rising, not falling.
“If testing an idea used to take a month, you might say it’s not worth it. But if AI cuts that to a week or a day, suddenly there are a lot more projects you want to do. So far it hasn’t reduced headcount. It’s just increased how much we tackle.”
In our conversation, Fishman discusses:
Why December’s model releases marked an inflection point for quant research
How models use internal documentation to reproduce a firm’s research process
Why cheaper research is increasing demand for quants
What makes fixed income difficult to systematize and where AI actually helps
Why some finance professionals underestimate how much AI has improved
This interview has been edited for clarity and length.
Matt: When did you realize how big an impact AI was going to have on your job?
Tal: It was a JPMorgan conference in the city for quants, I think last spring. Prior to that conference, I had started using AI as autocomplete, basically, for coding. The vast majority of the day-to-day work that I do and that my team does is done via code. Its capabilities were starting to slowly get better — it would go from completing a line to completing a block of code, maybe three or four lines at a time.
What I saw at that conference was that Man Group had put their own AI model on display. It could go from a very basic research idea — like, “here is a new dataset, and I would like to test whether the momentum effect can be found within this dataset” — a relatively short paragraph they submitted to the LLM. From there, you push go, and the prompt said something like, “I would like you to produce a backtest report with our usual graphs and tables.” Of course, it was hooked up to a lot of stuff on the backend for them. You push go, and it’s just churning and producing code. They showed a fast-forwarded video of it literally doing everything, and out comes the report. At the time I was like, whoa — if this is real, this is a game changer.
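Man Group’s system itself isn’t public, but the kind of task the prompt describes — test a momentum effect in a dataset and report results — is a standard quant exercise. The sketch below, using synthetic data, shows roughly what the generated code would have to do: build a trailing-return signal, rank assets, go long winners and short losers each period, and summarize the strategy’s returns. Every name and parameter here (the function, the 12-1 lookback, the quintile cutoff, the synthetic returns) is illustrative, not their pipeline.

```python
import numpy as np
import pandas as pd

def momentum_backtest(returns: pd.DataFrame, lookback: int = 12, skip: int = 1,
                      quantile: float = 0.2) -> pd.Series:
    """Cross-sectional momentum: long top-quantile, short bottom-quantile assets,
    rebalanced every period. `returns` is periods x assets."""
    # Signal at t: cumulative return over the trailing `lookback` periods,
    # lagged by `skip` periods (the classic 12-1 construction).
    signal = (1 + returns).rolling(lookback).apply(np.prod, raw=True).shift(skip) - 1
    pnl = []
    for t in range(lookback + skip, len(returns)):
        s = signal.iloc[t].dropna()
        n = max(int(len(s) * quantile), 1)
        winners = s.nlargest(n).index   # strongest trailing performers
        losers = s.nsmallest(n).index   # weakest trailing performers
        # Equal-weight long-short return for this period
        pnl.append(returns.iloc[t][winners].mean() - returns.iloc[t][losers].mean())
    return pd.Series(pnl, index=returns.index[lookback + skip:])

# Synthetic "new dataset": 60 periods of returns for 50 assets (illustrative only)
rng = np.random.default_rng(0)
rets = pd.DataFrame(rng.normal(0.005, 0.04, (60, 50)),
                    columns=[f"A{i}" for i in range(50)])
pnl = momentum_backtest(rets)
print(f"periods={len(pnl)}  ann_ret={pnl.mean() * 12:.3f}  "
      f"ann_vol={pnl.std() * 12 ** 0.5:.3f}")
```

The point of the anecdote is that the model writes this scaffolding (plus the data cleaning and the firm’s usual tables) itself, from the paragraph-long prompt.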
That really changed my thinking from “AI is going to be a type of model we use when we want to do sentiment analysis” to “this is going to fundamentally change how we do our work.” I tried to replicate what they had done, and I think they must have had a really advanced model for that day back then, because I tried and failed to get that working on my end — until December of last year.
Matt: What changed in December?


