Inside Manulife's Early AI Adoption
Manulife’s Robi Krempus on Adopting AI Early
When generative AI began gaining traction on Wall Street, many firms responded cautiously, often firewalling off the technology from their employees. At Manulife Investment Management, the reaction was different. After years of investing in cloud, data, and machine learning infrastructure, the firm moved early to establish an AI framework across the organization, building governance, risk controls, and a process for prioritizing use cases.
I recently interviewed Robi Krempus, who leads AI for Manulife's global wealth and asset management business, which oversees $1.3 trillion in assets under management and administration. Earlier in his career, Krempus was a control systems engineer in the energy sector, working on nuclear power thermodynamics and other high-stakes modeling problems. That background now shapes his role overseeing Manulife’s AI platform. Rather than committing to a single vendor, his team has built a model-agnostic framework that allows the firm to move between providers such as OpenAI, Anthropic, and Google as the technology evolves.
In our chat below, Krempus explains why Manulife moved quickly, how his team co-designs tools with portfolio managers, and why the firm shifted from project-based experimentation to a platform strategy. He also discusses how Manulife evaluates large versus small language models, how it manages tech debt as models change, and where AI is already proving useful, particularly in extracting qualitative signals from standard financial disclosures.
This interview has been edited for length and clarity.
Matt: Manulife moved quickly when generative AI first emerged. At the time, many big firms were banning or firewalling it. What drove that decision?
Robi: We truly saw the opportunity. We had already established a strong data science and machine learning community at Manulife. When you build traditional machine learning models, it is often about forecasting or predicting a variable, which still requires massive infrastructure. When generative AI came into the mix, we quickly understood that this is much broader and will impact everything—decision-making and how you think about intelligence. Organizationally, we saw the opportunity, and together with the CTO, CIO, and chief AI officer, we were certain this technology was not going to go away. Unlike emerging technologies such as blockchain that take time to embed, it was quite apparent that this would be transformational.
Matt: How do you architect this technology? How do you organize it to get started?
Robi: In asset management, there were three ingredients we believed would really make a difference. One is strong leadership support. We had huge support from Colin Purdie, the Global Chief Investment Officer for Public Markets, and his leadership team. Secondly, we co-designed solutions with the investment professionals. We have CFAs on my team, but we are not managing the money; the investment professionals are. That co-design allowed us to tackle specific pain points together.
Thirdly, our mindset shifted from being project-based to a platform mindset. We wanted to establish a platform so that whenever we have an additional use case, we can give AI to the end user through that platform. We have seen adoption over 70%, and we hold weekly office hours where investment professionals can stay on top of new features.
Matt: Some firms use various models as an engine and build an application layer on top. Can you walk me through your thinking on building those applications?
Robi: From a Manulife perspective, we have a robust model risk management process in place. Before anything goes into production, it is vetted against hallucinations and quality. In working with investment professionals, quality matters a great deal. If the LLMs do not produce an output that fits the investment context, it will not work. We architected our AI with feedback loops and tested various systems to increase output quality and reduce hallucinations. It is not a straight-through process to a reasoning model; spending time on the AI architecture to increase quality was really impactful.
Matt: Are you agnostic to the model? Can you swap different models in and out of your infrastructure?
Robi: Yes. That goes back to the ten years of investment we put into infrastructure and cloud. What is amazing now is the availability of all these models. Even when OpenAI released GPT-3.5, we had access to it quite fast. The idea was to create a data framework that allowed us to put models into production in a responsible way. We have a broad lineup available, whether it is OpenAI, Anthropic, or Google. It is fast and responsible.
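Krempus doesn't describe the implementation, but a model-agnostic framework of the kind he outlines is typically built around a thin provider interface, so that moving between OpenAI, Anthropic, or Google models is a configuration change rather than an application rewrite. A minimal, hypothetical sketch (the adapter classes and model names here are illustrative, not Manulife's code, with the actual API calls stubbed out):

```python
from dataclasses import dataclass
from typing import Protocol


class ChatModel(Protocol):
    """Minimal interface every provider adapter must satisfy."""

    def complete(self, prompt: str) -> str: ...


@dataclass
class OpenAIAdapter:
    model: str = "gpt-4o"

    def complete(self, prompt: str) -> str:
        # Real code would call the OpenAI API here; stubbed for illustration.
        return f"[{self.model}] {prompt}"


@dataclass
class AnthropicAdapter:
    model: str = "claude-sonnet"

    def complete(self, prompt: str) -> str:
        # Real code would call the Anthropic API here; stubbed for illustration.
        return f"[{self.model}] {prompt}"


# Swapping providers is a registry lookup, not an application rewrite.
REGISTRY: dict[str, ChatModel] = {
    "openai": OpenAIAdapter(),
    "anthropic": AnthropicAdapter(),
}


def complete(provider: str, prompt: str) -> str:
    """Route a prompt to the configured provider."""
    return REGISTRY[provider].complete(prompt)
```

Because the application layer only depends on the `ChatModel` interface, adding a new vendor means writing one adapter and registering it.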
Matt: What are the most common use cases right now, and how have they evolved?
Robi: We started with discrete use cases. One that seems obvious is analyzing earnings call transcripts. We co-designed solutions with investment professionals to build standard prompts for things like red flags, concerns, or bull-and-bear situations. This was managed in a prompt library and helped support investment conviction.
What we did next was aggregate that data. If you take earnings calls and add outside reports or notes, it allows you to search across the board. You can search across your portfolio or a sector for specific topics. That has been very helpful for deeper intelligence across coverage. We also use it for sustainability, which is an efficiency play to quickly get information out of very long documents.
Matt: How are you thinking about small models versus larger ones?
Robi: We have teams that constantly test out new models. We have looked into small language models for operations or distribution areas and have seen a fit there. We are not deploying small language models into asset management right now because we are very pleased with what we can do with large language models. Organizationally, we work strategically on small language models regarding cost and scale, but it hasn’t impacted asset management yet.
Matt: How do you decide between building something internally versus using a third-party source?
Robi: We look at it as “buy, build, or reuse.” Because Manulife is a large organization, we first see if we can reuse something already built. We have a stream that constantly evaluates vendors to see if a solution makes sense. The last thing we want to do is manage internal tech debt. In some cases, we bring in vendors; in others, we build. The platform mindset matters here because it reduces tech debt while allowing us to fine-tune and differentiate ourselves in the marketplace.
Matt: What do you think is currently overhyped or underhyped?
Robi: I am particularly curious about autonomous coders in the bigger tech space. In the past, if you built a machine learning model and a new algorithm came out, you had big expectations for accuracy, but you still had to do so much feature engineering to improve it. Now, innovation and design matter because you have so many options in how you architect data and AI.
As for what is underhyped, the real impact is AI’s ability to deeply analyze structured and unstructured data in an automated way. In asset management, with the enormous amount of qualitative and quantitative data, that is where it gets interesting. When these things merge toward AGI, AI will be able to figure out insights and analysis from any sort of data using natural language. You won’t need to know Python to dig that information out.
Matt: How do you see AI impacting alternative data sets and how people use them?
Robi: There is a progression and a cultural change involved. Our mindset was not to wait and establish a perfect, integrated, scalable data infrastructure before building AI. Instead, we put AI into the hands of end users to learn and adjust based on feedback. While we haven’t fully tackled alternative data sets in our AI journey yet, there is an opportunity within standard data sets. For example, in sustainability reports or the footnotes of documents, there can be instrumental nuggets that take a lot of time to find manually. We ask ourselves how we can get critical information out of the data sets we already have readily available.
Matt: What has been surprising to you in building this out?
Robi: The agility really matters. Two years ago, ChatGPT, running on GPT-3.5, came out, and now the world is talking about the AI workforce and humanoids. What was initially a surprise is how fast you have to rethink things. We might build a solution in January, and then a new model like Claude comes out that is an excellent execution engine. The surprise is the constant reimagining and adapting. If you asked me a year ago, I would have been surprised by how far we have come.


