What Works in AI and Investing
CFA Institute's Brian Pisaneschi on workflows, skill files, and where AI is actually useful.
Brian Pisaneschi, Senior Investment Data Scientist at CFA Institute, works with institutional investors to identify what actually works in AI and investing—not what merely sounds impressive.
One pattern shows up again and again: many investors tried AI a year or so ago, had a bad experience, and wrote it off because it got facts wrong, misstated figures, or invented citations.
Yet they keep hearing about AI in finance. That creates a different kind of pressure: not wanting to be left behind, without a clear sense of what has changed.
In this interview, he explains why product overload is slowing adoption, why "skill files" matter more than model training, and how to structure workflows so outputs can be trusted.
He also points to areas like fixed income, where these approaches may matter more than people expect.
The conversation also covers something that doesn’t get enough attention in finance: how bias shows up in ways that aren’t obvious. Not just demographic bias. Positional bias (the same information, presented in a different order, can produce different outputs), framing effects (the same odds stated two ways lead to different decisions), and the fact that models reflect the biases in the data they’re trained on.
We cover:
Why many investors are still anchored to early AI failures
Why comparing models is the wrong approach
How “skill files” and workflows actually drive results
Where early ROI is showing up (including fixed income)
How bias shows up in model outputs
This interview has been edited for clarity and length.
Matt: You talk to a lot of investment professionals just getting started with AI. What’s your sense of adoption among investors?
Brian: There’s a large group that tried ChatGPT when it first went mainstream, had a bad experience — it made something up, got a calculation wrong — and wrote it off. That’s a reasonable response based on what it was at the time. The problem is anchoring. They’ve frozen their view of the technology at that moment, and the tools are genuinely different now. I tell them: forget everything you knew about this 18 months ago. You have to be experimenting again.
The other group has FOMO, but the anxiety from not knowing where to start is actually keeping them from doing anything. My advice to both groups is the same: treat it like a new employee. Give it a task. Check the output. See what it can do.
From Models to Workflows
Matt: We’re seeing total product overload right now. It isn’t like comparing phones based on pixel counts; it’s very difficult to compare these AI models side by side. How should people navigate this?
Brian: It’s very hard to compare them, and taking them all on at once can be overwhelming. I recommend trying to understand what you can do with the frontier models and with Claude’s skills (as well as the skills OpenAI is developing), and what can be achieved with connectors. For example, Notion already acts as an agnostic transcript writer that can connect to Claude. Many investment professionals are not yet aware of tools that are ubiquitous in computer science.
Matt: I’ve had Claude skills on my radar for a few months, but I’m still trying to get my arms around them. How are you using them currently?



