Hidden Text Tricks AI Trading Systems
A new study shows that invisible text in news headlines can alter AI trading decisions.
Hey, it’s Matt. Welcome back to AI Street. This week:
Interview: How Norway’s $2 Trillion Fund Uses AI
Research: Tricking AI with Hidden Text in News
Use Case: An AI agent joins investment committee at Mubadala
Regulation: SEC’s Daly on reimagining risk disclosures with LLMs
INTERVIEW
The $2 Trillion AI Lab
Matt’s note: This interview is the first in a new paid content track on AI Street. Longer, primary interviews and original reporting like this will now go directly to subscribers. The regular Thursday newsletter will continue to include analysis, research notes, and excerpts.
Norway’s sovereign wealth fund runs the biggest pool of capital in the world.
And its CEO, Nicolai Tangen, might just be the biggest advocate of AI in investing, calling himself a “total maniac” about it.
Stian Kirkeberg is responsible for turning that enthusiasm into production systems across roughly $2 trillion in assets and about 8,600 companies as NBIM’s Head of AI and ML.
In this interview, Kirkeberg walks through how NBIM is navigating this transition. He explains their partnership with Anthropic, the move from a bottom-up ambassador model to a more centralized strategy, and how small autonomous teams are replacing traditional Scrum structures. He also gets specific about how they reserve GPU capacity from hyperscalers and how LLMs are being used to screen thousands of companies for ESG compliance.
RESEARCH
Tricking AI with Hidden Text in News
AI systems that trade on news can be steered off course by text that’s imperceptible to humans.
New academic research shows that small, obscured changes to news headlines can trick AI models into misclassifying sentiment or even failing to associate the headline with the correct stock.
These automated trading systems use AI to ingest news from the web and social media, convert those headlines into sentiment scores, and feed those signals directly into rules-based trading decisions. If the model reads the news as positive, the system may buy. If it reads it as negative, the system may sell or stand aside.
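The flow described above can be sketched in a few lines. Everything here is a hypothetical stand-in — the word lists, thresholds, and function names are illustrative, not any firm's actual system:

```python
# Sketch of a headline -> sentiment score -> rules-based signal pipeline.
# The scoring logic is a toy placeholder for a model like FinBERT.

def sentiment_score(headline: str) -> float:
    """Toy scorer: counts positive vs. negative words (hypothetical lists)."""
    positive = {"beats", "surges", "record"}
    negative = {"misses", "plunge", "probe"}
    words = headline.lower().split()
    return float(sum(w in positive for w in words) - sum(w in negative for w in words))

def trade_signal(headline: str, buy_at: float = 0.5, sell_at: float = -0.5) -> str:
    """Rules-based decision: positive reads buy, negative reads sell, else hold."""
    score = sentiment_score(headline)
    if score >= buy_at:
        return "BUY"
    if score <= sell_at:
        return "SELL"
    return "HOLD"

print(trade_signal("Alphabet beats earnings, shares surge"))  # BUY
```

The point of the research is that an attacker never needs to touch the rules or the prices — only the string that reaches `sentiment_score`.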
In backtests, a single day of manipulated headlines sometimes led to large downstream effects, with the most extreme cases cutting cumulative returns by up to 17.7 percentage points over 14 months, even though average losses were much smaller.
How the trick works
The researchers looked at two ways headlines can be altered without changing what a human reader sees.
First, they swapped letters in company names with look-alikes from other alphabets. To the eye, “Alphabet” still reads as Alphabet. To the AI, the name often no longer maps cleanly to a stock ticker. In tests using FinBERT, the LLM-based stock-association step failed to map the headline to the correct ticker almost every time.
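A minimal illustration of the homoglyph swap, assuming a simple name-to-ticker lookup (the dictionary and function are hypothetical):

```python
# Swap one Latin letter for a Cyrillic lookalike: identical on screen,
# different code points underneath, so exact-match lookups fail.
TICKERS = {"alphabet": "GOOGL", "apple": "AAPL"}

def to_ticker(name: str):
    return TICKERS.get(name.lower())

clean = "Alphabet"
spoofed = "Alph\u0430bet"   # U+0430 CYRILLIC SMALL LETTER A, visually identical

print(clean == spoofed)     # False: the strings differ at the byte level
print(to_ticker(clean))     # GOOGL
print(to_ticker(spoofed))   # None: the headline loses its link to the stock
```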
Second, they added extra words to the headline that were hidden from view in the underlying page. Humans only see the original headline. The AI reads everything. Those hidden words were enough to flip the model’s interpretation of the news from positive to negative in roughly two-thirds of cases, with many remaining headlines becoming significantly more negative.
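A sketch of the hidden-text trick, assuming a naive pipeline that strips HTML tags and feeds everything that remains to the model (the headline is invented):

```python
# Words hidden with CSS are invisible to a reader but survive tag-stripping.
import re

html = ('<h1>Acme beats earnings estimates'
        '<span style="display:none"> amid fraud investigation fears</span></h1>')

# A naive ingestion step removes markup and keeps all text content:
model_input = re.sub(r"<[^>]+>", "", html)

print(model_input)
# A browser renders only "Acme beats earnings estimates";
# the model also reads " amid fraud investigation fears".
```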
Why it matters
Using a standard trading engine, the team ran side-by-side simulations with and without these invisible edits. Prices, costs, and execution rules were identical. The only difference was what the AI thought it had read.
On average, the system earned less money. In some cases, much less. The worst outcomes came from a single day where the model either missed an opportunity or took the wrong side of a trade, setting off a chain reaction that affected later decisions.
Importantly, the system still made money. There was no obvious failure to flag. That is what makes the problem hard to spot.
Not just one model
The researchers tested the attacks across a range of systems, including finance-specific models like FinBERT, FinGPT, and FinLLaMA, as well as general-purpose models from OpenAI and Google. Most broke in similar ways. Models trained on financial text were often the most fragile, reliably misrouting stock names or flipping sentiment when exposed to hidden characters. More advanced reasoning models performed better, catching some of the manipulation, but even those still failed often enough to produce real trading losses.
Takeaway
I asked lead author Advije Rizvani for her takeaway:
“The key takeaway is that this isn’t ‘AI hallucination’ or a model-only problem; it’s a systems / data-ingestion integrity problem: if the text pipeline isn’t robust, the downstream trading decision can be manipulated.”
The authors argue that many news pipelines still pass raw headline text straight into AI models without cleaning it first. Instead, if firms are going to let AI trade on news, they need to treat that news as untrusted input. Otherwise, hidden text may turn into real losses.
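One possible cleaning step — my sketch, not a method the paper prescribes: normalize Unicode and flag any characters outside the expected ASCII range before the text reaches the model:

```python
# Flag lookalike characters that survive Unicode normalization.
import unicodedata

def suspicious_chars(text: str) -> list:
    """Return non-ASCII characters remaining after NFKC normalization."""
    normalized = unicodedata.normalize("NFKC", text)
    return [c for c in normalized if ord(c) > 127]

print(suspicious_chars("Alph\u0430bet beats estimates"))  # the Cyrillic 'a' is caught
print(suspicious_chars("Alphabet beats estimates"))       # clean text returns []
```

A real pipeline would also need to strip invisible markup before extraction, but even this cheap check catches the homoglyph class of attack.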
Paper: Adversarial News and Lost Profits: Manipulating Headlines in LLM-Driven Algorithmic Trading
Authors: Advije Rizvani, Giovanni Apruzzese, Pavel Laskov
USE CASE
AI As Devil’s Advocate
Here’s a fun interview with Waleed al Muhairi of Abu Dhabi’s Mubadala talking about how AI has joined the investment committee. Her name is MAIA, for Mubadala AI & Analytics, and she goes beyond back-office automation:
MAIA “now presents the argument to not do something, while the team that’s sitting across from it is presenting why you should do something. And that’s a great way to test belief, conviction, and measure risk,” al Muhairi said.
I’ve heard from a couple different folks that they’re using AI as a sort of gut check on their investment thesis.
Man Group's Ziang Fang explained why this works when I interviewed him in December:
"Because the language model isn't part of the group, it doesn't develop the same blind spots. When you work alongside colleagues, you eventually start thinking alike. The model learns from us but doesn't sit next to us, so it can surface angles we might have missed."
As for MAIA, she has made the sovereign wealth fund better and “more than earned her keep,” al Muhairi said.
Takeaway
More firms are using AI as a devil’s advocate to test their investment ideas, and at least one firm has given its AI a name.
REGULATION
How AI Can Help Make Risk Disclosures Actually Usable
I made this meme a while back when Goldman said it uses AI to write 95% of IPO prospectuses.
It surprises people outside finance, but almost no one actually reads risk disclosures. They are dense, often hundreds of pages long, and effectively unreadable. Wall Street is buried in documentation, yet simple questions remain hard to answer, like how much I’ll pay in total fees if I invest in this mutual fund. You would think that would be easy, but no. Firms do have to follow rules for “adequate disclosure,” but information can be technically disclosed and still be impossible to find or understand. How many retail investors even know they can look up a financial advisor’s regulatory history on BrokerCheck?
Hopefully, this is one area AI can improve.
In a speech this week, U.S. Securities and Exchange Commission Investment Management Chief Brian Daly floated a simple, yet once radical, idea: using large language models to rethink how fund disclosures actually reach investors.
“What if we reimagined disclosure using large language models?”
Instead of asking investors to read hundreds of pages, a fund could offer an AI agent trained on its full disclosure library. Investors could ask direct questions in plain English: what the fund invests in, what fees apply, how redemptions work, whether short positions are used, where conflicts exist, and how performance should be evaluated.
A lot of legal questions need to be answered, such as whether the tool counts as marketing or investment advice. But Daly’s point is that these questions are real, not novel. Other high-risk industries already deploy AI systems under regulatory scrutiny. Finance can do the same.
Disclosure today is treated as a document obligation. Daly is suggesting it be treated as an interactive system.
Takeaway
AI is not going to replace risk disclosures anytime soon. But it could finally make them usable by letting investors ask basic questions and get clear answers without wading through hundreds of pages.
Financial data and infrastructure stocks like S&P Global, MSCI, London Stock Exchange Group, FactSet, and Intercontinental Exchange all sold off this week after Anthropic rolled out a new legal-focused plugin for its Claude assistant.
The new tool looks fairly modest. The market reaction was not: stocks across software, financial services, and asset management dropped $285 billion in market value, according to Bloomberg.
I’m scratching my head on this one. I’m not sure how AI competes with proprietary datasets, and current models are nowhere near the accuracy and auditability Wall Street requires… We’re discussing in the Subscriber Chat.
ROUNDUP
What Else I’m Reading
Human connection still needed in credit investing FT
Financing the AI boom: from cash flows to debt BIS
AI Leads Family Office Investment Themes, JPM Says BBG
The $3 Trillion AI Data Center Build-Out Becomes All-Consuming For Debt Markets BBG
Bank of America’s big bet on AI started small CIO
Private Equity’s Giant Software Bet Has Been Upended by AI BBG
JPM hired the head of UBS’ AI lab to assess employee performance eFinancialCareers
CALENDAR
Upcoming AI + Finance Conferences
CDAO Financial Services – Feb. 18–19 • NYC
Data strategy and AI implementation in the financial sector.
Future Alpha – Mar. 31–Apr. 1 • NYC
Cross-asset investing summit focused on data-driven strategies, systematic investing, and tech stacks.
AI in Finance Summit NY – Apr. 15–16 • NYC
The latest developments and applications of AI in the financial industry.
Momentum AI New York – Apr. 27–28 • NYC
Senior-leader forum on AI implementation across financial services, from operating models to governance and execution.
AI in Financial Services – May 14, 2026 • Chicago
Practitioner-heavy conference on building, scaling, and governing AI in regulated financial institutions.
AI & RegTech for Financial Services & Insurance – May 20–21 • NYC
Covers AI, regulatory technology, and compliance in finance and insurance.
If you read this far down
Do me a favor and just hit reply to this email with the number of your favorite story from today:
NBIM interview
Tricking AI with hidden text
Mubadala’s AI as Devil’s Advocate
Using AI to read risk disclosures