Fed Economist Tests If AI Can Do His Job

Hey, it's Matt. Here’s what’s up in AI + Wall Street.
Join CEOs, CTOs, quants, and more from firms like Bloomberg, BlackRock, JPMorgan, Two Sigma, and Millennium.
RESEARCH
Buy AI, Sell the Human

A Fed economist, concerned that he might lose his job to AI, asked AI: How do I stay relevant?
Andrew Y. Chen, PhD, was rattled by how fast AI was advancing. Models could reason, prove theorems, and write code.
"Six months ago, I had thought each of these things is impossible," he said. "What will happen in the next six years?! Will my entire job be replaced by AI?"
So he did what economists do: research.
But instead of teaming up with human co-authors, he tested if the technology could perform academic-level work.
The result was a working paper in theoretical finance, “Hedging the AI Singularity,” co-written with ChatGPT and Claude, modeling what happens if AI progress destroys labor income but boosts the value of AI firms.

He used OpenAI's o1 model (via ChatGPT) for theory, Claude for writing, and OpenAI's Deep Research for literature review—iterating across multiple prompt files until the math, citations, and prose clicked.
If AI harms workers but boosts AI companies, then owning AI stocks could help offset the damage. In that scenario, investors may bid up AI stocks not in spite of the risk, but because of it, paying more for them even if they're scared of where AI is going.
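The hedging intuition can be shown with a toy calculation. This is an illustrative sketch only, with made-up numbers; it is not the paper's actual model, and the function and figures below are hypothetical.

```python
# Toy illustration of the hedging idea: two states of the world.
# In the "AI disaster" state, labor income falls while AI firms gain,
# so holding AI stock partially offsets the household's loss.

def total_income(labor, ai_shares, ai_price_change):
    """Household resources = labor income + gain on AI holdings."""
    return labor + ai_shares * ai_price_change

# Hypothetical numbers chosen purely for illustration.
normal = total_income(labor=100, ai_shares=10, ai_price_change=0)    # 100
disaster = total_income(labor=40, ai_shares=10, ai_price_change=5)   # 90

# Without the AI shares, disaster-state income would drop to 40;
# with them, the drop is cushioned to 90. That hedging value is why
# investors might pay a premium for AI stocks in this stylized setup.
print(normal, disaster)
```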
To be sure, Chen admits this wasn’t the fastest way to write a paper.
“Writing this paper would have been much easier if I had done more of the work myself,” he wrote on GitHub.
Claude “often fails to recognize that an economic model does not capture an important channel,” and AI’s literature discussions were “insufficiently careful.”
It can write clean prose, solve equations, and passably mimic academic voice, but it still struggles to understand what makes a model economically meaningful.
But he asks: “How long will these limitations last?”
In the future, “2024-style economic analysis will be on tap.” Ask a chatbot for a paper about hedging AI disaster risk, and you’ll get something like this—“probably something much better.”
He’s not making predictions, just thinking through an uncomfortable thought experiment.
“I’m not saying I expect a disaster for the economics labor market. But even if it’s highly unlikely, it’s still a scenario that economists should consider.”

NEWS ROUNDUP
Big Tech Spends Billions on AI—And Still Hits Compute Limits
Meta, Microsoft, and Alphabet are all ramping up AI spending. Meta alone is targeting up to $72B in capex this year. On the company’s Q1 earnings call, CFO Susan Li said they’re having a “hard time” meeting compute demands across the business.
Regulators See AI as the New Search: Google Trial
The DOJ is treating generative AI, including tools like Google’s Gemini, as the next gateway to the web, and wants to prevent the company from dominating it like it did search. It's a telling sign of how regulators view the next phase of web dominance. (Bloomberg Law) $
RELATED: As users shift from Google to AI chatbots for search, brands and ad agencies are aiming to show up in chatbot answers. (FT) $
Morningstar Founder Says AI Finance Models Lag in Accuracy
Financial models based on artificial intelligence are a long way from being able to compete with market research firms, said Morningstar Inc. Chairman Joe Mansueto, who built a fortune by providing investment reports, research and management. (Bloomberg)
Big Tech Wants Power, Chips, and Clarity in U.S. AI Plan
Amazon, Meta, Anthropic, and others are urging the White House to boost AI infrastructure, unify state and federal rules, and secure chip supply. While companies differ on open-source policy and copyright, most agree the U.S. needs more energy and less regulatory fragmentation to stay competitive. (PYMNTS)
Morgan Stanley CTO on Getting a Tech Job on Wall Street
Morgan Stanley tech chief Hina Shamsi says comp-sci grads who want to work on Wall Street should learn how the business works, talk to real users, and show they can solve problems—not just write code. (Business Insider)
External Context Can Make LLMs Riskier: Bloomberg
In a new study, models like GPT-4o and Claude 3.5 were more likely to produce harmful or misleading outputs when using Retrieval-Augmented Generation (RAG)—a common technique that adds outside context to boost accuracy. A second paper finds standard guardrails miss finance-specific risks like data leaks and investment-like language. Bloomberg’s fix: a custom risk framework for capital markets. (Bloomberg)
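For context, RAG boils down to pasting retrieved documents into the model's prompt. The sketch below is a minimal, hypothetical illustration of that assembly step, not Bloomberg's or any vendor's actual pipeline; `retrieve` is a naive keyword-overlap stand-in for real vector search.

```python
# Minimal sketch of Retrieval-Augmented Generation (RAG) prompt assembly.
# Illustrative only: the retrieval and prompt format are placeholders.

def retrieve(query, corpus, k=2):
    """Naive keyword-overlap retrieval standing in for vector search."""
    words = query.lower().split()
    scored = sorted(corpus, key=lambda doc: -sum(w in doc.lower() for w in words))
    return scored[:k]

def build_prompt(query, corpus):
    context = "\n".join(retrieve(query, corpus))
    # The study's core observation: everything in `context` reaches the
    # model unfiltered, so retrieved text can carry risky or misleading
    # content past guardrails that were tuned on the bare query alone.
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer using the context."

docs = [
    "Rates rose 25bps in March.",
    "The fund's exposure is confidential.",
    "Weather was mild.",
]
print(build_prompt("What happened to rates in March?", docs))
```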

PODCAST 🎙️
New Episode of Alpha Intelligence Out Next Week
The Alpha Intelligence Podcast returns next week with a conversation with Alicia Vidler and my co-host Francesco Fabozzi.
Alicia brings a rare blend of deep trading experience and AI expertise. She started her career in equity derivatives at Deutsche Bank and later worked as a proprietary trader on the macro desk at Merrill Lynch during the 2008 financial crisis—a formative time in markets.
Since then, she’s co-founded an AI-driven hedge fund in London, served as a consultant across AI and finance, and advised multiple fintech ventures. She’s currently on the advisory board of 6Star Capital, a VC firm focused on AI safety and disinformation, and is also completing a PhD on agentic AI in capital markets at UNSW.
Check out our previous episodes below:
Bin Ren on how AI is changing financial analysis.
Ren, who spun SigTech out of Brevan Howard’s Systematic Investment Group in 2019, holds a PhD in computer science from Cambridge and has experience in quant investing and equity exotics trading. He shares insights on the expanding role of AI agents in hedge funds, banking, and equity research. Listen: YouTube, Spotify

First published March 6
Our guest was Tharsis Souza, PhD, a former Senior VP of Product Management at Two Sigma, Lecturer at Columbia University, and author of an upcoming O'Reilly book on LLM pitfalls.
Our chat focuses on DeepSeek’s latest model, R1. While DeepSeek claims to have trained R1 for just $6 million without Nvidia’s most advanced chips, the reality is more complex.

FUNDRAISING
Fortress Backs Dataminr With $100M for Real-Time AI
Dataminr, whose platform scans public data for emerging threats, raised $100M from Fortress in a convertible bond deal, weeks after an $85M round last month. With tariffs and volatility rising, real-time AI is edging closer to Minority Report territory. (Bloomberg, Press Release)
AI Analyst Rogo Raises $50M at $350M Valuation
Rogo, which builds domain-specific LLMs for bankers, raised a $50M Series B from Thrive, JPMorgan, and Tiger Global—boosting its valuation from $80M to $350M in seven months. This is one of the largest raises since Hebbia's $100M round last summer. (FT, Press Release)

WHAT ELSE I’M READING
Bank of America’s Big Bet on AI Started Small (CIO)
With So Many Tech Choices, What's an Advisor to Do? (Think Advisor)
Morgan Stanley: AI Chip Demand Is Only Getting Stronger (Business Insider)
Deepfakes Now Target Finance Leaders (Fortune/Yahoo)
JPMorgan, Wells Fargo, Citigroup double down on AI hires (TechInformed)
Anthropic Forms Council to Explore AI’s Economic Implications (PYMNTS)
Exploring potential of large language models for investment firms (PWM)
How did you like today's newsletter?