AI In Finance: Navigating the Fourth Industrial Revolution
James Watt did not invent the steam engine, but his innovation of creating a separate condenser for cooling in 1765 resulted in a more efficient, powerful, and commercially successful design. The Watt engine laid the foundation for the widespread adoption of steam engines in industry and transportation, revolutionizing the way work was done in the 18th and 19th centuries and helping to catalyze the First Industrial Revolution.
Today, we live during what many consider to be the Fourth Industrial Revolution. Simply put, it is the Industrial Revolution for knowledge workers. We’ve witnessed rapid advances in telecommunications, data, and digital technologies that have dramatically altered the landscape. Much like the steam engine, the ubiquitous availability of high-speed networks, cloud computing, and smartphones has perhaps been the most significant catalyst of this revolution. These tools enable massive amounts of data to be collected, processed, and accessed from nearly anywhere and nearly anything.
THE EMERGENCE OF AI
Just as the Watt engine was not the first steam engine, today’s technologies are also not entirely new. Artificial Intelligence (AI), another exemplar of the Fourth Industrial Revolution, first established its formal research roots in the 1950s. To be sure, it has had its share of fits and starts. With finite resources, early AI endeavors focused on narrow, domain-specific applications. In contrast, today’s nearly unlimited access to cloud compute and storage has enabled AI to employ neural networks and deep learning models that deliver a range of advanced applications. This includes natural language processing (NLP), computer vision, speech recognition, predictive analytics…and, of course, generative AI.
It is this last category – generative AI – that has recently, and understandably, drawn the most excitement. The ability of a machine to quickly respond to free-form prompts and generate new contextual content is remarkable. This is a step beyond the more structured conversational AI utilized by voice assistants like Alexa and Siri. OpenAI’s ChatGPT is perhaps the best-known example of generative AI. It is to the field of AI what the original Mosaic browser, and later Netscape, were to the Internet. Simply put, it has moved AI beyond the province of research scientists and made it more accessible to the masses. This is a big deal!
POWERING THE ENGINE
The technology is powered by large language models (LLMs), which are trained on a massive corpus of language, syntax, and context assembled by scouring data that largely exists in the public domain. This includes websites, books, articles, journals, news, blogs, forums, social media, and other diverse sources. The current state of the art utilizes what is known as a transformer architecture, which tracks and weighs the relationships between sequential data elements, like words in a sentence. In effect, it enables the model to understand inputs and generate contextual outputs based on the large number of semantically similar patterns it encountered during training. Still, despite exposure to a vast number of topics, these models are largely foundational, in that they’re general-purpose and not inherently tuned for a specific industry or application.
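To make the transformer idea a bit more concrete, here is a minimal sketch of scaled dot-product attention, the mechanism that lets each token weigh its relationship to every other token in a sequence. It is illustrative only, written in Python with NumPy over toy matrices rather than a trained model.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Toy scaled dot-product attention over a sequence of token vectors."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # pairwise token-to-token affinities
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability for the softmax
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax: attention weights per token
    return weights @ V                              # each output row is a context-weighted mix

# Illustrative only: four "tokens", each an 8-dimensional embedding.
tokens = np.random.randn(4, 8)
contextualized = scaled_dot_product_attention(tokens, tokens, tokens)
print(contextualized.shape)  # (4, 8)
```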
The sheer scale of data ingested by these models also makes human-supervised training impractical, so the efficacy of self-supervised training depends largely on the scale and diversity of the samples. The more variations, the better. Additionally, not all training is contemporaneous, so models may not have been exposed to the most recent information. Ask ChatGPT a question about current events, for example, and it will respond that it doesn’t have access to information beyond September 2021. And, like any system, the data is subject to errors, gaps, and bias. So-called “hallucinations” occur when the model produces a seemingly convincing, but wrong or incoherent output, often a byproduct of such data gaps and errors.
Since most publicly and commercially available foundation models are generalized and trained only on public data sets, there’s also a need to construct and maintain more specialized industry and institutional corpora. After all, maximum value to an organization is achieved by incorporating its own proprietary and industry-specific data. In descending order of complexity, the options for adapting models are, generally:
- Build a new foundational model, though few companies have the wherewithal or data assets to accomplish this;
- Fine-tune an existing foundational model by further training it on a more specialized corpus of data and adjusting model parameters to make it more proficient in the target domain; or
- Utilize prompt-tuning to guide the model to produce desired outputs by applying a carefully crafted and curated set of input prompts (a minimal example is sketched after this list).
In all cases, an effective data strategy, architecture, processes, and governance are required to collect, create, trace, curate, tag, train, and secure both structured and unstructured data assets.
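For a sense of what the lightest-weight option can look like in practice, the sketch below steers a general-purpose model by prepending curated, domain-specific passages to each prompt rather than modifying the model itself. It is a hypothetical illustration: `call_llm`, the sample passages, and the prompt format are all placeholders, not any particular vendor’s API.

```python
# Hypothetical sketch: prompt-level domain adaptation of a general-purpose LLM.

CURATED_CONTEXT = [
    "Firm policy: municipal bond recommendations require a current credit review.",
    "Glossary: 'duration' refers to modified duration unless stated otherwise.",
]

def call_llm(prompt: str) -> str:
    # Placeholder: in practice this would call the organization's chosen model endpoint.
    return f"[model response to a {len(prompt)}-character prompt]"

def build_prompt(question: str) -> str:
    """Prepend curated, vetted domain passages so the model answers in context."""
    context = "\n".join(f"- {passage}" for passage in CURATED_CONTEXT)
    return (
        "You are assisting a financial advisor. Use only the reference notes below.\n"
        f"Reference notes:\n{context}\n\n"
        f"Question: {question}\n"
    )

def answer(question: str) -> str:
    return call_llm(build_prompt(question))

print(answer("What should be confirmed before recommending a municipal bond?"))
```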
AI IN FINANCIAL SERVICES
I’m increasingly asked about the impact AI will have on financial services. Some believe that the value of AI in finance is not yet proven, and that a “wait and see” posture is most prudent. Others believe that recent developments in the space are game changers that you ignore at your own peril. The present-day truth lies somewhere in the middle, but the trend and trajectory certainly appear to favor the believers.
There are many financial organizations that have already embraced AI, though at varying levels of maturity and sophistication. For systematic investment firms that rely on quantitative models, for example, the use of AI in the investment decision process is a natural progression. Fundamental managers are less likely to use AI bots to make automated investment decisions, but some have begun to leverage NLP to quickly pore through large volumes of financial reports and media to more efficiently extract performance metrics, sector analysis, event-driven catalysts, asset allocation, and sentiment.
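As a rough illustration of that kind of NLP workflow, the snippet below scores the sentiment of a few sentences lifted from a hypothetical earnings release, using the open-source Hugging Face transformers library. A production workflow would use a finance-tuned model and far more rigorous document extraction; this is only a sketch.

```python
# Illustrative only: sentiment scoring on snippets from a hypothetical earnings release.
# Requires the open-source `transformers` package; the default pipeline model is
# general-purpose, not finance-specific.
from transformers import pipeline

sentiment = pipeline("sentiment-analysis")

snippets = [
    "Net revenue grew 12% year over year, ahead of guidance.",
    "Margins contracted due to elevated input costs and FX headwinds.",
]

for text, result in zip(snippets, sentiment(snippets)):
    print(f"{result['label']:>8}  {result['score']:.2f}  {text}")
```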
A number of large banks and insurance companies are also exploring, or already using AI for market research, KYC/AML, customer support, exception processing, underwriting, claims processing, cybersecurity, and other applications. Robotic Process Automation (RPA) is also used by many of these large financial institutions to automate repetitive processes, but this is largely rule-based and does not itself depend on AI.
Bloomberg has built its own LLM, BloombergGPT, training the model on its extensive archive of over four decades’ worth of financial data, news, and documents, combined with large volumes of other publicly available financial filings and Internet data. The model draws on a corpus of over 700 billion tokens, contains 50 billion parameters, and required over 1.3 million hours of GPU compute to train. Clearly an impressive feat, but also not one that is achievable by most organizations.
Morgan Stanley has taken a much simpler route, employing prompt-tuning techniques to tailor a foundational OpenAI GPT-4 model. The objective is to better equip the firm’s financial advisors with crucial information that can be quickly referenced when advising clients. To do so, the firm meticulously curated a collection of over 100,000 documents covering a wide range of investment insights, general business practices, and investment procedures.
Founded in 2015, P&C insurer Lemonade was built from the ground up to leverage technology, including AI. It uses a data-driven approach to provide insurance products and services at lower cost, while also improving the overall customer experience. It operates an end-to-end digital platform that collects 100x more data points than traditional insurance companies, and utilizes AI for underwriting, risk assessment, claims processing, fraud detection, personalization, and customer service. When processing a claim, AI is used to analyze claim details, policy documents, and even video submissions. Its computer vision capabilities can not only assess damage but also identify non-verbal cues that may indicate fraud. The process is often fully automated and has settled claims in as little as 2 seconds, a world record, while also improving global loss ratios as the AI models continue to learn.
APPLICATION DEVELOPMENT, TOO
Application development teams and desktop Python jocks are also beginning to use generative AI for programming, though the use cases tend to be applicable to simpler, standalone software components and coding exercises. We’re a long way from having AI generate a comprehensive, enterprise-class system, or maintain and upgrade a legacy code base. But that’s less an indictment of the capabilities of AI and more a reflection of the limited corpus of knowledge that’s available to train the model for such tasks. Still, ask ChatGPT to generate a Security Master database schema and you’ll be surprised by what it delivers. It’s rudimentary, of course, but fundamentally sound and completed by a readily available, general-purpose machine model in seconds. Impressive!
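For a flavor of what such a prompt produces, here is a deliberately simplified security master table, built in SQLite from Python. It is illustrative only; the column set is an assumption, and a real security master would carry listings, identifier history, corporate actions, audit columns, and much more.

```python
# Illustrative only: a pared-down "security master" of the kind a general-purpose model
# can sketch on request. Real schemas are considerably richer.
import sqlite3

DDL = """
CREATE TABLE IF NOT EXISTS security_master (
    security_id    INTEGER PRIMARY KEY,
    ticker         TEXT NOT NULL,
    isin           TEXT UNIQUE,
    cusip          TEXT,
    name           TEXT NOT NULL,
    asset_class    TEXT NOT NULL,   -- e.g. equity, fixed_income, fx
    currency       TEXT NOT NULL,
    exchange       TEXT,
    issue_date     TEXT,            -- ISO-8601 date string
    maturity_date  TEXT,            -- NULL for non-dated instruments
    is_active      INTEGER DEFAULT 1
);
"""

with sqlite3.connect(":memory:") as conn:
    conn.executescript(DDL)
    conn.execute(
        "INSERT INTO security_master (ticker, isin, name, asset_class, currency, exchange) "
        "VALUES (?, ?, ?, ?, ?, ?)",
        ("ACME", "US0000000001", "Acme Corp Common Stock", "equity", "USD", "XNYS"),
    )
    print(conn.execute("SELECT ticker, name, asset_class FROM security_master").fetchall())
```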
IT’S ALL ABOUT THE DATA
A comprehensive data strategy is the most important prerequisite to pursuing AI, or any data-driven business model. A proper data strategy, it should be stated, is a business-first endeavor that requires leadership engagement; it is definitively not a backroom IT exercise or a tactical deliverable intended to support a single application. The data strategy articulates the “what and why” of an organization’s data pursuits. It embodies the full range of business objectives, operating principles, and functional disciplines. It considers internal and external factors affecting business growth, profitability, client experience, operations, and change management. It is developed not only for the here and now, but also for the future…basically, for where the puck is moving.
The data architecture, in contrast, addresses the “how”. It is strongly influenced by the data strategy, and covers the technical design, interfaces, governance, and operational components of the data platform. It’s important to keep in mind that there is no one-size-fits-all solution, and a sound data architecture requires bringing the right toolset to the job. Here, there are some best practices to follow.
Data pipelines should be developed to orchestrate the ingestion, cataloguing, processing, enrichment, delivery, and even purging of data over the course of its full lifecycle. Data warehouses are well-suited for structured and semi-structured data that supports more traditional transactional, business intelligence, and reporting requirements. Data lakes are better suited to house massive amounts of raw “big” data, but governance, curation, and usability present challenges that must be considered, lest the data lake turn into a data swamp. Some vendors have sought to bridge the divide between warehouses and lakes by combining capabilities of both – the so-called “Lakehouse”. Ultimately, each has its place, but also its limits. And, despite the proliferation of stored procedures, logic simply does not belong in the database…or should at least be kept to a minimum.
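To make the lifecycle point concrete, the sketch below expresses a pipeline as a chain of small, composable stages. The stage names and record fields are assumptions for illustration; in production, each stage would be a task in a proper orchestration framework with retries, lineage tracking, and alerting.

```python
# Illustrative only: a data pipeline expressed as composable stages.
from typing import Callable, Iterable

Record = dict
Stage = Callable[[Iterable[Record]], Iterable[Record]]

def ingest(source: Iterable[Record]) -> Iterable[Record]:
    yield from source                            # pull raw records from a source system

def catalogue(records: Iterable[Record]) -> Iterable[Record]:
    for r in records:
        r["_catalogued"] = True                  # register metadata / lineage (stubbed)
        yield r

def enrich(records: Iterable[Record]) -> Iterable[Record]:
    for r in records:
        r["notional_usd"] = r["qty"] * r["px"]   # derive fields used downstream
        yield r

def run_pipeline(source: Iterable[Record], stages: list[Stage]) -> list[Record]:
    data: Iterable[Record] = source
    for stage in stages:
        data = stage(data)                       # lazily chain the stages
    return list(data)

trades = [{"trade_id": 1, "qty": 100, "px": 10.5}]
print(run_pipeline(trades, [ingest, catalogue, enrich]))
```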
Traditional ETL processes that seek to replicate and centralize data into an enterprise database can also introduce schema gaps, timing inconsistencies, exceptions, and greater cost. In contrast, a modern enterprise architecture embraces a more decentralized, domain-driven design that leverages microservices, APIs, and a data mesh, rather than a single, all-singing, all-dancing repository. Data virtualization is a logical choice to provide interoperability across federated data boundaries, especially if the mesh spans heterogeneous data platforms. It can also better support high-volume/high-velocity streaming data that would otherwise choke a traditional database.
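The following sketch gestures at that federated idea: two domain services stay where they are, and a thin virtual layer joins their data at query time rather than copying everything into one repository. Both services, their fields, and the join logic are hypothetical stand-ins, not a real data-virtualization product.

```python
# Illustrative only: a thin "virtual" layer that federates two domain-owned data services
# at query time instead of replicating them into a central warehouse.

# Domain 1: positions service (stubbed; in reality an API over the domain's own store).
def get_positions(account: str) -> list[dict]:
    return [{"account": account, "security_id": 101, "qty": 500}]

# Domain 2: reference-data service (stubbed).
def get_security(security_id: int) -> dict:
    return {"security_id": security_id, "ticker": "ACME", "asset_class": "equity"}

def positions_with_reference(account: str) -> list[dict]:
    """Join across domains at query time -- no centralized copy of either dataset."""
    return [
        {**pos, **get_security(pos["security_id"])}
        for pos in get_positions(account)
    ]

print(positions_with_reference("ACCT-001"))
```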
Notably, all of these concepts are important to any data-driven, or data-dependent, organization, whether it pursues AI or not. Far too many firms struggle with basic data issues that result in sizable productivity gaps and outright errors. Indeed, the lack of a coherent data strategy and architecture increases operational risk, reduces operating leverage, and results in a subpar customer and employee experience. In 2023, there are few businesses, especially in financial services, that can afford to operate, let alone thrive, under these conditions.
RISK, GOVERNANCE, AND POLICY
Of course, significant questions and challenges remain. Information security and data privacy are critically important considerations for public and private models alike. The risk of exposing personally identifiable information (PII) or other sensitive data is significant and real. Some financial institutions have disabled access to public tools, like ChatGPT, for this very reason. Even for internal models, governance frameworks and controls must be established, and data should be anonymized where appropriate. This is especially true for large financial institutions that often require information firewalls between different business lines or departments. It’s also highly advisable to engage risk, legal, compliance, and information security professionals from the outset of any corporate AI initiative.
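As one small example of the anonymization point, the snippet below masks a couple of common PII patterns before a prompt ever leaves the firm. The patterns (email addresses and U.S.-style Social Security numbers) are assumptions chosen for illustration; real controls would rely on vetted detection tooling, allow-lists, and audit trails rather than a few regular expressions.

```python
# Illustrative only: naive PII masking applied before text is sent to any external model.
import re

PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

prompt = "Client John (john.doe@example.com, SSN 123-45-6789) asked about 529 plans."
print(redact(prompt))
# -> Client John ([EMAIL REDACTED], SSN [SSN REDACTED]) asked about 529 plans.
```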
While the privacy and security of sensitive data is top of mind, there are also ethical and policy questions that deserve attention. How will an expansion of AI, particularly in its generative and cognitive forms, perpetuate inappropriate bias and profiling? What jobs will be displaced, and how will people be retrained in a world where AI is a more dominant participant in the labor force? Various regulatory regimes are already considering the broad impact of AI, along with associated legislative action, oversight bodies, and evolving legal frameworks.
CONCLUSION
As financial services firms consider their positioning and investment in AI, it’s critical to first address any nagging data gaps. Data has always been the lifeblood of finance, but the ever-increasing volumes, velocity, and variability make a coherent data platform all the more important.
Much like the early web browsers ushered in an era of widespread Internet growth, transforming individual and societal behaviors, ChatGPT and similar tools are poised to be catalysts for massive AI adoption. Yes, there are still challenges to overcome, but the already sizable commitments to R&D, oversight, and investment will surely improve upon current capabilities, limitations, and policy gaps. It simply feels like we are at the heel of the hockey stick, preparing for rapid and exponential growth.
As has been the case for every preceding revolution, industrial or otherwise, massive socio-economic change is an unavoidable outcome. There will be winners and losers. And, with the intensified pace of advancement in AI, standing on the sidelines is not likely to be a viable option.
About the Author
Gary Maier is Managing Partner and Chief Executive Officer of Fintova Partners, a consultancy specializing in digital transformation and business-technology strategy, architecture, and delivery within financial services. Gary has served as Head of Asset Management Technology at UBS; as Chief Information Officer of Investment Management at BNY Mellon; and as Head of Global Application Engineering at BlackRock. At BlackRock, Gary was instrumental in the original concept, architecture, and development of Aladdin, an industry-leading portfolio management platform. He has additionally served as CTO at several prominent hedge funds and as an advisor to fintech companies.