ADOPTION
Small AI, Big Impact

AI doesn’t really understand cause and effect.
If someone can show me evidence that AI actually thinks using formal or deductive logic, I’ll bow to our new machine overlords. Until then, it’s essentially sophisticated pattern matching (which is still useful!).
For all the headlines that AI is going to take everyone’s jobs, not being able to reason logically seems like a serious limitation. AI excels at tasks that don’t really require thinking — rote, repetitive, boring tasks. And for those, you don’t need a massive model. A small model will do.
(Side note: A model’s “size” refers to how many parameters it has — essentially, the number of connections in the network. The biggest models have billions or even trillions, while smaller, task-specific models can get by with millions or even just thousands. There’s no universal cutoff for what makes a model “large” or “small.”)
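To make the jargon concrete, here’s a minimal sketch (mine, not from any of the articles below) that builds a toy network in PyTorch and counts its parameters. The layer sizes are arbitrary and chosen purely for illustration.

```python
# Toy example: counting parameters of a tiny network with PyTorch.
# The layer sizes are made up; real "large" models have billions or
# trillions of parameters, small task-specific models far fewer.
import torch.nn as nn

tiny_model = nn.Sequential(
    nn.Linear(128, 256),  # 128*256 weights + 256 biases
    nn.ReLU(),
    nn.Linear(256, 2),    # 256*2 weights + 2 biases
)

num_params = sum(p.numel() for p in tiny_model.parameters())
print(f"Parameter count: {num_params:,}")  # 33,538
```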
Small models can be trained to handle specific tasks, often with better accuracy than their bigger counterparts. I’ve written before about how JPMorgan showed that smaller models improve credit data accuracy, and last year I interviewed the CEO of Aveni.ai about why they’re building small models for financial services.
The WSJ recently highlighted how companies are leveraging small models for specific tasks and how they coordinate with large ones, describing the process as an AI assembly line.
Meta uses small, specialized AI models to deliver ads, while its largest models develop and pass on effective targeting strategies.
Airbnb employs small models from Alibaba to automatically resolve a large portion of customer-service issues.
Gong combines small and large models to analyze sales calls, with smaller models handling summarization and filtering before a larger model produces final insights.
Aurelian uses small generative models to automate responses to nonemergency 911 calls.
Hark Audio fine-tuned small models on a library of human-edited podcast clips to automatically identify and collect memorable audio moments.
From the story:
“The reality is, for many of the operations that we need computing for today, we don’t need large language models,” says Kyle Lo, a research scientist at the nonprofit Allen Institute for AI.
Takeaway
AI helps the most when it’s applied to narrow tasks using smaller and cheaper models that specialize in specific use cases.
Further Reading
AI Adoption on Wall Street
KPMG to Grade Employees on AI Adoption in Annual Reviews
KPMG will begin evaluating staff on how effectively they use AI tools like Microsoft Copilot in their 2026 performance reviews, part of a firmwide push to ensure all employees integrate AI into their work. BBG
Ex-Consultants Are Helping Train AI for Entry-Level Tasks
Roughly 150 former McKinsey, Bain, and BCG consultants have been hired by a data-labeling startup to train AI models like Google’s Gemini on routine consulting work. BBG
Why AI Needs More Data for Investment Banking Tasks
Companies building AI models are hiring financial professionals to supply the specialized, private data needed to train systems on real-world workflows. Forbes
FINRA Details Its Expanding Use of AI
FINRA has deployed its own internal generative AI system to all employees, with 40% now using it weekly to summarize documents, compare filings, and support compliance reviews. FINRA

RESEARCH
How Simple Word Choices Lead AI Astray

Even slight wording changes can sway an AI’s financial judgment, Domyn researchers find
For better or worse, people are using AI for all sorts of things, from therapy to romance to picking stocks. (Def not medical/love-life/financial advice!)
AI has earned a surprising amount of user trust, which is pretty wild since no one really knows exactly how large language models work. The folks building them say they’re grown rather than built.
After you ask a chatbot a question, you might see “thinking” as it comes up with an answer, but it’s not really thinking in the deductive, formal-logic sense. It’s a probabilistic machine that predicts the next token based on its training, and that can lead to strange biases.
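For a sense of what “predicting the next token” looks like in practice, here’s a minimal sketch using the Hugging Face transformers library with GPT-2, a small open model picked purely for illustration (it’s not one of the models discussed here).

```python
# Minimal next-token prediction demo with GPT-2 (illustrative only).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The better investment this year is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # a score for every token in the vocabulary

# Convert the scores at the final position into probabilities
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, 5)
for p, idx in zip(top.values, top.indices):
    # Print the five most likely next tokens and their probabilities
    print(f"{tokenizer.decode(idx.item())!r}: {p.item():.3f}")
```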
If you asked a (human) financial analyst whether Microsoft or Apple is the better investment, the answer wouldn’t depend on whether you said “Microsoft or Apple” or “Apple or Microsoft.” For LLMs, that word order matters, according to new research.
A new paper from the team at Domyn digs into this problem, known as positional bias, and finds it’s common in large language models used for financial decisions.
Using Qwen2.5, an open-source model family, they built a benchmark to test whether the order of two stocks in a question changes the model’s answer. They tested 18 major tech companies across 10 different financial evaluation categories—everything from fundamental analysis and ESG criteria to risk assessment and growth potential.
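The core of a benchmark like this is a simple swap test: ask the same question twice with only the order of the two companies flipped, then check whether the answer changes. Here’s a rough sketch of that idea; the ask_model callable, the prompt wording, and the stub are my own stand-ins, not the paper’s actual setup.

```python
# Rough sketch of an order-swap check for positional bias (not the paper's code).
import re
from itertools import permutations

PROMPT = "Which is the better investment: {a} or {b}? Answer with one company name."

def positional_bias_check(ask_model, pair):
    # Ask the same question in both orders and collect the answers
    answers = {order: ask_model(PROMPT.format(a=order[0], b=order[1])).strip()
               for order in permutations(pair)}
    # If the pick changes when only the order changes, the model is order-sensitive
    return answers, len(set(answers.values())) > 1

def biased_stub(prompt):
    # Toy "model" that always names the first company mentioned,
    # mimicking the bias the paper measures in real models
    return re.search(r"investment: (\w+)", prompt).group(1)

answers, is_biased = positional_bias_check(biased_stub, ("Microsoft", "Apple"))
print(answers)    # {('Microsoft', 'Apple'): 'Microsoft', ('Apple', 'Microsoft'): 'Apple'}
print(is_biased)  # True
```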
Smaller models were especially biased toward whichever company came first. Larger models mostly reduced the bias, though in a few cases they reversed it, favoring the second company instead. Telling the AI to act as a "conservative" versus an "aggressive" advisor also changed its answers to the same question.
The team also traced the bias to specific layers and attention mechanisms inside the models—showing where it originates.
Takeaway
Bigger models can help reduce “positional bias,” but they don’t totally eliminate it. Even the best models can still prefer “Microsoft over Apple” simply because Microsoft was mentioned first. Be careful how you use AI.
Further Reading
Tracing Positional Bias in Financial Decision-Making: Mechanistic Insights from Qwen2.5 | arXiv

SPONSORSHIPS
Reach Wall Street’s AI Decision-Makers
Advertise on AI Street to reach a highly engaged audience of decision-makers at firms including JPMorgan, Citadel, BlackRock, Skadden, McKinsey, and more. Sponsorships are reserved for companies in AI, markets, and finance. Email me ([email protected]) for more details.

INTERVIEW
The Boring but Essential Side of AI
An interview with Alan Pelz-Sharpe on the explosion in document processing

One of the most boring but most important aspects of AI is its ability to take messy data (like PDFs and tables) and organize it.
While AI often impresses with its latest capabilities, the business world moves more slowly and, in many cases, still runs on paper. Before LLMs, turning a printout into digital data required the document to be fairly clean and standardized.
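As a concrete (and entirely hypothetical) illustration of what “organizing messy data” means in practice, here’s a small sketch of LLM-based field extraction. The call_llm callable stands in for whatever model or API you’d actually use, and the invoice fields are invented for the example.

```python
# Illustrative sketch of LLM-based document extraction (hypothetical setup).
import json

EXTRACTION_PROMPT = """Extract vendor, invoice_number, date, and total from the
invoice text below. Respond with JSON only.

{text}"""

def extract_invoice_fields(call_llm, raw_text):
    # raw_text would typically come from an OCR or PDF-to-text step upstream
    response = call_llm(EXTRACTION_PROMPT.format(text=raw_text))
    return json.loads(response)

# Smoke test with a canned response standing in for a real model call
fake_llm = lambda prompt: '{"vendor": "Acme Corp", "invoice_number": "INV-042", "date": "2025-11-01", "total": 1250.0}'
print(extract_invoice_fields(fake_llm, "ACME CORP  Invoice INV-042  Nov 1, 2025  Total: $1,250.00"))
```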
To understand the current state of this market, known as document processing, I spoke with Alan Pelz-Sharpe, founder of research firm Deep Analysis and an analyst who has covered this space for more than two decades.
This interview has been edited for clarity and length.
How did you get into this industry?
“I’ve been an industry analyst 26 years. I’ve always covered document management and workflow. One of the reasons is it doesn’t matter whether it’s a boom or a recession — there’s always a need for document management. It’s the most boring topic in technology by a country mile, but at the end of the day, what is the currency of business? Whether it’s finance, whether it’s supply chain — it’s documents, and it always will be.”
Why does document management matter now in the age of AI?
“In this world of agentic AI and generative AI, when you really look at the projects, where do they all start? Documents. We have this world now we call intelligent document processing. Take Salesforce — up until two years ago, they didn’t do document processing. Now they do, because if they’re going to have an agentic future, documents are a real part of it.”
What’s changed in the past decade?
“Within the last decade, unstructured data — PDFs, Word docs, videos, anything not neatly in a database — has gone from hard to handle to central to AI. We track well over 400 vendors in that space. Roll the clock back ten years and I could have named you ten. It’s been an explosion.”
How is this shift impacting enterprises?
“Roughly 80% of enterprise data is unstructured. Up to now, when you think about data lakes, data warehousing, ETL, business intelligence — it’s been focused on the 20%. That’s a massive shift. You can now process invoices or contracts without a human ever looking at them. There’ll always be exceptions, but that’s transformational.”

ROUNDUP
What Else I’m Reading
How Hudson River Trading Actually Uses AI | Odd Lots Podcast
Powell Says AI Boom Differs from Dotcom Era, Cites Profits | Fortune
Man Group Assets Soar to a Record $214 Billion | BBG
The blurring lines between HFTs and hedge funds | Rupak Ghose
Ripple gets $40 billion valuation after $500 million funding | CNBC

CALENDAR
Upcoming AI + Finance Conferences
Recently added events in BOLD
ACM ICAIF 2025 – November 15–18, 2025 • Singapore
Top-tier academic/industry conference on AI in finance and trading.
Momentum AI Finance 2025 – November 17–18, 2025 • New York
Reuters summit featuring execs from major banks, asset managers, and fintechs, with sessions on AI infrastructure, ROI, agentic systems, and agent demos.
AI for Finance – November 24–26, 2025 • Paris
Artefact’s AI for Finance summit, focused on generative AI, the future of finance, digital sovereignty, and regulation.
NeurIPS Workshop: Generative AI in Finance – Dec. 6/7 • San Diego
One-day academic workshop at NeurIPS focused on generative AI applications in finance, organized by ML researchers.
Is there a conference I missed? Reach out: [email protected]

