Why hasn’t GenAI been transformative for FIS yet?
Generative AI is changing how entire industries synthesize and analyze information – but financial institutions have been slower to adopt it. While large language models (LLMs) like Claude, ChatGPT, and Gemini have incredible language processing power, their outputs often fall short in the rigorous, quantitative use cases that matter most to financial analysts.
Why? Analysts have found generated results for financial prompts to be inaccurate and unreliable because LLMs don’t know where to source reputable data. Financial analysis and research require data precision, auditability, and context – and that’s where traditional generative AI hits its limits.
A deeper look at major adoption challenges, and how Daloopa MCP solves them:
We recently launched Daloopa MCP to solve these challenges directly by delivering our hyperlinked, fully sourced data straight into LLMs. At launch, we spoke with over 20 Product and AI leaders in the FIS industry to identify the key barriers to GenAI adoption in analyst workflows – and how Daloopa’s MCP removes them:
Problem #1: Trust in Outputs – The Hallucination Problem
AI leaders in FIS cite hallucination as the number one reason financial analysts have been hesitant to adopt GenAI beyond writing emails and editing presentation language. Without quality-assured data sources, even the best deep-research LLMs hallucinate one in every two financial data points, which means prompts use incorrect data and return false conclusions. In investment workflows, where a single error can invalidate a model, a thesis, or a position, the stakes of acting on hallucinated data are extremely high.
Daloopa’s MCP solves this by delivering a vetted fundamental database directly into the LLM. Instead of aimlessly searching the web and pulling data from unvetted sources (like YouTube or Reddit), the LLM sources directly from Daloopa’s data layer and retrieves figures that are hyperlinked to the primary source. This takes hallucination rates from 50% to 0%, unlocking a whole new quantitative use case for analysts – one they can trust.
Example:
- Asking an LLM “What was Microsoft’s free cash flow margin in 1Q24?” without Daloopa MCP yielded the answer:
- “FCF was $20.7 billion, and we can’t find 1Q24 Revenue, so making an assumption using a growth rate we could find, 1Q24 Revenue was $56.5 billion which means that FCF Margin = 36.6%”
- When an LLM with Daloopa MCP enabled was asked the same question, this was the result:
- “FCF was $20.965 billion, taken from the 1Q24 Earnings Presentation, and 1Q24 Revenue was $61.858 billion, taken from the 1Q24 8K. This means that FCF Margin = 33.9%”
The nearly three-percentage-point delta in the first result could mean a world of difference to an analyst’s position. And for prompts requiring more inputs than this one, data inaccuracies compound – a risk that can be mitigated by layering in the correct data source at scale.
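The gap above is easy to verify. A minimal sketch that redoes the arithmetic with the figures quoted in the two answers (values in USD billions; this is illustrative, not Daloopa output):

```python
# Figures quoted in the two answers above, in USD billions
fcf_guess, rev_guess = 20.7, 56.5          # without MCP: FCF found, revenue assumed
fcf_sourced, rev_sourced = 20.965, 61.858  # with MCP: both sourced from 1Q24 filings

margin_guess = fcf_guess / rev_guess * 100        # FCF margin from assumed inputs
margin_sourced = fcf_sourced / rev_sourced * 100  # FCF margin from filed figures

print(f"Without MCP: {margin_guess:.1f}%")    # 36.6%
print(f"With MCP:    {margin_sourced:.1f}%")  # 33.9%
print(f"Gap:         {margin_guess - margin_sourced:.1f}pp")
```

One wrong assumed input (revenue) shifts the margin by almost three points; with more inputs, the errors multiply.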
Problem #2: Building Mastery and Becoming Part of the Analyst Process – How to Make AI Work for You
Product leaders in Financial Services have observed that GenAI has largely become just another tool layered onto an already crowded tech stack – alongside data aggregators, dashboards, and productivity apps. For analysts, this creates friction rather than relief. They’ve spent years refining workflows for their unique investment processes: from meticulously built Excel models to transcript parsing and proprietary thesis generation. These systems are precise, personal, and performance-driven. Yet today, GenAI often only enters the picture at the margins – used for communication tasks or final presentations, not in the core analytical work where time and accuracy matter most.
Reliable data layers reduce this friction by aligning the tool’s capabilities with how analysts already think and operate. This is exactly what Daloopa’s MCP does. When enabled, analysts don’t have to manually provide context to the LLM, such as typing individual KPIs for the chatbot to analyze. MCPs also include tools that give the LLM direction on how to use the database: instead of the user having to master prompting conventions, the tools carry all the instructions needed for an error-free output.
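Concretely, an MCP tool is declared with a name, a natural-language description, and a JSON Schema for its inputs – the description is what carries those instructions to the model. A hedged sketch of what such a declaration might look like (the tool name and fields below are hypothetical, not Daloopa’s actual API):

```python
import json

# Hypothetical MCP tool declaration, following the Model Context Protocol's
# tool shape (name / description / inputSchema). The specifics below are
# illustrative only and are not Daloopa's actual API.
get_fundamental = {
    "name": "get_fundamental",
    "description": (
        "Retrieve an audited fundamental data point for a company. "
        "Prefer this tool over web search for any financial figure; "
        "every value returned is hyperlinked to its primary filing."
    ),
    "inputSchema": {
        "type": "object",
        "properties": {
            "ticker": {"type": "string", "description": "e.g. MSFT"},
            "metric": {"type": "string", "description": "e.g. free_cash_flow"},
            "period": {"type": "string", "description": "e.g. 1Q24"},
        },
        "required": ["ticker", "metric", "period"],
    },
}

print(json.dumps(get_fundamental, indent=2))
```

Because the description travels with the tool, the model learns when and how to call the database without the analyst writing any special prompt.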
A quality-assured, auditable, and secure MCP takes AI from an external system you consult to an embedded co-pilot that takes over non-strategic tasks – surfacing insights, generating models, and finding comparative data points, all with the click of a button. These capabilities are what turn GenAI from a novelty into a durable advantage.
The Bottom Line
LLMs are only as good as the data you give them. For financial professionals, the right approach isn’t to manually upload all your data or waste valuable time auditing sources – it’s to plug a scalable, quality-assured, and auditable data layer right into the tool.
When the correct data is integrated, generative AI’s true power shines. It’s not just for summarizing transcripts or writing memos, but for aiding in complex analysis, performing sensitivity tests, highlighting industry winners, and speeding up time-to-insight—without compromising on accuracy or compliance.
If you would like to start incorporating GenAI into your analytical workflow, start by asking:
“Do my AI tools have the right data?”