Anna Schoff – An MSc graduate in Speech and NLP with expertise in deep learning, data science, and machine learning. Her research interests include neural decipherment of ancient languages, low-resource machine translation, and language identification. She has extensive experience in computational linguistics, AI, and NLP research across academia and industry.
Bhushan Joshi – Competency Leader for Banking ISV, Financial Markets, and Wealth Management with extensive experience in digital banking, capital markets, and cloud transformation. He has led business strategy, consulting, and large-scale financial technology implementations for global banks, focusing on microservices, process optimization, and trading systems.
Kenneth Schoff – Open Group Distinguished Technical Specialist at IBM AI Applications with over 20 years of experience in banking, financial markets, and fintech. He specializes in IBM Sterling solutions, technical sales, and advising C-suite executives on AI-driven transformations in supply chain and financial services.
Raja Basu – A product management and innovation leader with expertise in AI, automation, and sustainability in financial markets. With a strong background in banking technology transformation, he has led global advisory and implementation projects across the U.S., Canada, Europe, and Asia. Currently a doctoral scholar at XLRI, he focuses on AI's impact on financial systems and sustainability.
AI technology for FinTech is advancing with great potential, but growth may be slower than in other application areas because of the complexity of the problem.
Because AI systems can consume very large quantities of data in many structured and unstructured forms, they can capture patterns and anomalies that humans typically miss.
However, the human brain, with over 600 trillion synaptic connections, has been called the most complex object we know of anywhere on Earth or beyond. AI can augment human analysis through its ability to process many details at volume, but it cannot think.
In AI classes at Yale many years ago, AI was defined as “the study of cognitive processes by means of computational models”. This definition still applies. Often the resulting computational models are useful on their own, and these have advanced in capability from Expert Systems and small Artificial Neural Networks to the Deep Learning techniques used to build Large Language Models (LLMs) and the Foundation Models used in Generative AI. Hardware advances have made much of this possible, and we are sure there is more to come.
Back in the 1990s, we knew that the lack of general knowledge in AI systems was a significant limiting factor; we are now able to provide that knowledge in large AI models. Early AI technology was limited to very specific tasks, rather like a savant – capable of doing one very specific task well, but useless for anything else.
That said, they did, and still can, provide value for their specialized tasks at much lower computing cost. For sustainability reasons, these technologies can still fulfill their roles in the AI landscape.
The Natural Language Processing (NLP) and Speech Processing capabilities provided by LLMs can now accurately capture perhaps 90% of the content of a Natural Language exchange, which is of very high value for human-machine interaction.
In the current state of the art, the models used for NLP run at a very high computational cost (read: a very high electric bill), which runs counter to sustainability considerations. Keep in mind that an experienced librarian or similar professional can provide 100% accurate results and only requires lunch. We should use the appropriate resource at the appropriate time.
More recently, with developments such as DeepSeek, we see optimizations gained by building smaller, application-specific models using the same technologies as the larger comprehensive models. This is a win-win: it provides robust AI technology for a problem domain while reducing computing costs. For example, a FinTech AI system supporting wealth management does not need a background in English literature.
AI Assisted Wealth Management Advisory
Let’s consider wealth management as an example application.
A client interview to create a client profile could be driven by basic AI techniques such as a decision tree or Expert System. However, based on our previous experience with some Expert System driven interviews, a well-qualified advisor will get better results just through a conversation. There is no substitute for people who know what they are doing. AI should assist but not drive.
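As a sketch of the "AI should assist" point, the code below shows how a basic decision-tree interview might produce a coarse client profile for an advisor to refine. All questions, thresholds, and risk labels here are hypothetical assumptions, not a real profiling methodology.

```python
# Hypothetical sketch: a minimal decision-tree client interview.
# Every question, branch, and risk label is an illustrative assumption.

def interview(answers):
    """Walk a tiny decision tree over pre-collected answers and
    return a coarse risk profile for the advisor to refine."""
    profile = {"risk": "moderate", "horizon": answers.get("horizon_years", 10)}
    if answers.get("horizon_years", 10) < 3:
        profile["risk"] = "conservative"   # short horizon -> low risk
    elif answers.get("loss_tolerance_pct", 10) >= 20:
        profile["risk"] = "aggressive"     # tolerates deep drawdowns
    if answers.get("needs_income", False):
        profile["income_focus"] = True     # flag for income-oriented products
    return profile

print(interview({"horizon_years": 2, "loss_tolerance_pct": 5}))
```

A qualified advisor would treat such output only as a conversation starting point, not a decision.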
Portfolio Analysis
If the client has a current portfolio, this needs analysis, and AI can assist here as well. How have the investments performed over time? Does the client tend to focus on specific industries? What is the prognosis as to how these are likely to perform in the future? What is the history of the client’s trades?
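The questions above could be sketched in code. The holdings, sectors, and figures below are made-up illustrative data; a real system would draw on the client's actual trade history.

```python
# Hypothetical sketch: simple portfolio analysis for an advisor.
# Holdings, sectors, and dollar figures are illustrative assumptions.
from collections import Counter

holdings = [
    {"ticker": "AAA", "sector": "tech",   "cost": 10_000, "value": 14_000},
    {"ticker": "BBB", "sector": "tech",   "cost": 8_000,  "value": 7_000},
    {"ticker": "CCC", "sector": "energy", "cost": 5_000,  "value": 6_500},
]

def total_return(hs):
    """Aggregate gain/loss relative to cost basis."""
    cost = sum(h["cost"] for h in hs)
    value = sum(h["value"] for h in hs)
    return (value - cost) / cost

def sector_weights(hs):
    """Share of current value per sector -- reveals concentration."""
    total = sum(h["value"] for h in hs)
    weights = Counter()
    for h in hs:
        weights[h["sector"]] += h["value"] / total
    return dict(weights)

print(round(total_return(holdings), 4))  # overall performance
print(sector_weights(holdings))          # does the client favor one industry?
```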
Based on the client profile and portfolio analysis, the advisor may introduce specific limits as to what the analysis should consider for the proposed investment portfolio. These may include personal preferences, limits of risk, limits of available funds, and any other consideration that may constrain the choices.
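Such constraints can be modeled as simple filters over candidate assets. The candidates, risk scores, and thresholds below are illustrative assumptions only.

```python
# Hypothetical sketch: client constraints applied as filters over
# candidate assets. All data and thresholds are illustrative assumptions.

candidates = [
    {"ticker": "AAA", "risk": 0.9, "price": 250, "sector": "tech"},
    {"ticker": "BBB", "risk": 0.3, "price": 60,  "sector": "energy"},
    {"ticker": "CCC", "risk": 0.5, "price": 40,  "sector": "health"},
]

def apply_constraints(assets, max_risk, budget, excluded_sectors=()):
    """Keep only assets within the client's risk limit, available
    funds, and personal preferences (e.g. excluded sectors)."""
    return [
        a for a in assets
        if a["risk"] <= max_risk
        and a["price"] <= budget
        and a["sector"] not in excluded_sectors
    ]

print(apply_constraints(candidates, max_risk=0.6, budget=100,
                        excluded_sectors=("energy",)))
```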
Several companies use AI models to provide guidance on which stocks or market segments are likely to do well or poorly. This is framed either as a prediction problem, in which the movement of a trend is forecast, or as a classification problem, an area where AI excels. An advisor may use these existing services to provide this type of information.
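The classification framing can be illustrated with a toy example that maps daily returns to movement classes. The 0.5% band is an arbitrary assumption; a real service would learn such labels from market data rather than using a fixed rule.

```python
# Hypothetical sketch: framing market movement as classification.
# The 0.5% band is an arbitrary illustrative threshold.

def classify_move(return_pct, band=0.5):
    """Map a daily return (in percent) to an up/flat/down class."""
    if return_pct > band:
        return "up"
    if return_pct < -band:
        return "down"
    return "flat"

returns = [1.2, -0.1, -2.4, 0.3]
print([classify_move(r) for r in returns])
```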
Environmental, Social, and Governance (ESG) considerations may also influence the outcome. These may already be included as input into the AI model used to do the analysis. The advisor and client will need to discuss what specifics to include in the portfolio model.
Strawman Architecture
A strawman conceptual view may look something like the diagram below. Many variations are possible.
One very common implementation would be based on a single GenAI foundation model doing all that we describe below, but we think partitioning the tasks is a better approach.
Each model would address a portion of the problem domain and could, therefore, be smaller than one comprehensive model. Some systems might run continuously while others would run on demand.
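The partitioning idea can be sketched as a simple router that dispatches tasks to smaller purpose-specific models, stubbed here as plain functions. The domain names are illustrative assumptions, not a prescribed set of models.

```python
# Hypothetical sketch: routing tasks to smaller purpose-specific
# "models" (stubbed as functions) instead of one comprehensive model.

def equities_model(task):
    return f"equities analysis for {task}"

def fx_model(task):
    return f"fx analysis for {task}"

# Each entry could be a separately trained, smaller model.
ROUTER = {"equities": equities_model, "fx": fx_model}

def handle(domain, task):
    model = ROUTER.get(domain)
    if model is None:
        raise ValueError(f"no model for domain {domain!r}")
    return model(task)

print(handle("equities", "semiconductor sector outlook"))
```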
In the diagram, we are assuming that there would be Predictive Generative AI models serving as advisory systems to other purpose-specific AI models. These GenAI models would do most of the market analysis and would be trained for the various markets and financial instruments.
They would consume data feeds and, combined with other data from the data lake, produce market predictions for growth and perform anomaly detection, which may mitigate risks. We are not convinced that such systems have yet matured to the point of being reliable, but they are advancing.
The results of each Predictive GenAI model would be recorded in the data lake. In addition, the analysis models could push notifications to other models to perform specific tasks. These models might be run on a periodic basis or possibly continuously for the period when the market of interest is active.
Autonomous trading systems might use the status feeds from the market analyses to trigger trades. Classification systems would periodically rate assets and keep a running history of asset classifications in the data lake. Finally, we come to the GenAI Portfolio Assistant.
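A minimal sketch of such a trigger, assuming a status feed of predicted-growth scores from the market-analysis models. The signals and thresholds are illustrative assumptions, not a trading strategy.

```python
# Hypothetical sketch: an autonomous trading rule driven by status
# feeds. Scores and thresholds are illustrative assumptions.

def decide_trades(status_feed, buy_above=0.7, sell_below=0.3):
    """Map each asset's predicted-growth score to a trade action."""
    actions = {}
    for asset, score in status_feed.items():
        if score >= buy_above:
            actions[asset] = "buy"
        elif score <= sell_below:
            actions[asset] = "sell"
        else:
            actions[asset] = "hold"
    return actions

feed = {"AAA": 0.82, "BBB": 0.12, "CCC": 0.55}
print(decide_trades(feed))
```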
The Portfolio Assistant would be the AI-backed Recommender system that has access to current market data and history. The advisor could interact with the assistant to provide the client’s profile and request recommendations. This could best be done with the client present. The interaction of the advisor with the client should be captured and recorded in the data lake as input to the analysis.
The advisor’s access to the AI systems is through an NLP interface which could be text or speech based.
The Portfolio Assistant would respond to the advisor using information in the model, from the data lake, or from API queries into the Market Analysis models. The NLP interface provides a powerful assistant, but, based on experience, the advisor will need to know how to ask questions to get useful results.
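A minimal sketch of that lookup order, with both the data lake and the Market Analysis API stubbed; all names and data are illustrative assumptions.

```python
# Hypothetical sketch: the Portfolio Assistant answers from cached
# data-lake results first, falling back to a stubbed API query.
# All names and data are illustrative assumptions.

DATA_LAKE = {"AAA": {"rating": "A", "trend": "up"}}

def market_analysis_api(ticker):
    # Stand-in for a real API query into a Market Analysis model.
    return {"rating": "unrated", "trend": "unknown"}

def assistant_lookup(ticker):
    return DATA_LAKE.get(ticker) or market_analysis_api(ticker)

print(assistant_lookup("AAA"))   # served from the data lake
print(assistant_lookup("ZZZ"))   # falls back to the API stub
```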
Without that human intermediary, the experience of interacting with an NLP system for such a complex topic may be frustrating for the novice. Large Language Models are far more capable than any earlier technology in this area, but they are still not likely to pass the Turing Test.
The Turing Test requires that a human being be unable to distinguish a machine from another human being using the replies to questions put to both. These machines are not human and cannot respond exactly as a human might. Many companies now hire people whose job is literally to interact with LLMs and GenAI systems, crafting prompts to get better responses from the model.
According to a 2021 report from Juniper Research, 40% of global banking customers will use NLP chatbots for transactions by 2025. Adding NLP in front of a customer-facing application is often where a company starts. Other AI systems focus on automating common tasks; the latter approach has been very successful in Supply Chain applications.
AI based automation can eliminate many manual processes and make workflows more efficient. NLP and task automation can benefit nearly any industry application. AI development for Financial Market analysis is a relatively difficult task.
One example is StockGPT; see “StockGPT: A GenAI Model for Stock Prediction and Trading” at https://arxiv.org/abs/2404.05101.
Conclusion
Analysis of the financial markets is somewhat more complex than applications such as Supply Chain or even Banking. There are many more variables and complex behaviors driven in part by the market numbers, regulations, and the emotional responses of the participants.
Some of this can be captured using statistics to reduce risk, but predictions for the financial markets fall into the category of underdetermined algebra problems: too many variables and not enough equations. AI can look for patterns and anomalies in addition to just doing the math.
Quantum Computing is another technology worth exploring. It is already showing value in certain scientific applications and, as one financial example, has been suggested for risk management via Monte Carlo simulation.
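For context, the sketch below is a classical Monte Carlo estimate of portfolio Value-at-Risk, the kind of computation quantum approaches aim to accelerate. The portfolio value, return distribution, and parameters are illustrative assumptions.

```python
# Hypothetical sketch: classical Monte Carlo Value-at-Risk under a
# normal-returns assumption. All parameters are illustrative.
import random

def monte_carlo_var(value, mu, sigma, confidence=0.95,
                    trials=100_000, seed=42):
    rng = random.Random(seed)
    # Simulate one-period P&L; a loss is the negative of the gain.
    losses = sorted(-value * rng.gauss(mu, sigma) for _ in range(trials))
    # The loss exceeded only (1 - confidence) of the time.
    return losses[int(confidence * trials)]

print(round(monte_carlo_var(1_000_000, mu=0.0005, sigma=0.01), 2))
```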
We shall see what the future holds.