ChatGPT – how can banks leverage generative AI?

ChatGPT is currently on everyone’s lips. The chatbot’s versatile abilities fire the imagination, but also fuel concerns about the future use of such an easily accessible and broadly applicable artificial intelligence in schools, the world of work and science.


The obvious question is what effects the latest surge in development in the field of generative artificial intelligence (AI) could have on the banking world.

A quick look at ChatGPT

“ChatGPT is an advanced AI model developed by OpenAI that allows users to communicate with a computer program in a natural way. The model is based on the ground-breaking GPT (Generative Pretrained Transformer) technology, which has been widely used in AI research in recent years.

ChatGPT allows users to communicate in a variety of ways, such as by entering text, voice input, or even using images and videos. The model can be used in various ways due to its advanced technology such as customer care, customer support, personalized advertising and many more.

Overall, ChatGPT has the potential to revolutionize many industries and help businesses work more effectively and efficiently.”

The basis for ChatGPT is the powerful language model GPT-3, a deep neural network with around 175 billion parameters.

The model was trained on an enormous amount of data, equivalent to around 25 billion printed pages of freely available texts from the Internet and books.

It thus exceeds all previously trained language models by a factor of 20.

The recently introduced successor, GPT-4, is rumoured to be based on as many as 100 trillion parameters. It is even more human-like in logic and language, can also describe images and even recognises humour.

Since the system is essentially statistical and generates its output from the probabilities of word sequences, the algorithm can invent claims and present them as facts.
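To see why this happens, it helps to picture the underlying mechanism: at each step the model assigns probabilities to possible next words and samples one of them. The toy Python sketch below uses an invented, hard-coded distribution for a single context (it is in no way the actual GPT implementation), simply to show that an implausible continuation can occasionally be drawn, which is the statistical root of such fabricated statements.

```python
import random

# Toy illustration only: a language model repeatedly predicts a probability
# distribution over possible next words and samples from it. The distribution
# below is invented for one fixed context; real models compute it with a
# deep neural network over the whole preceding text.
next_word_probs = {
    "loan": 0.40,
    "deposit": 0.25,
    "mortgage": 0.20,
    "unicorn": 0.15,  # unlikely, but can still be drawn
}

def sample_next_word(probs: dict) -> str:
    """Draw one word according to the given probabilities."""
    words = list(probs)
    weights = list(probs.values())
    return random.choices(words, weights=weights, k=1)[0]

context = "The customer asked the bank about a"
print(context, sample_next_word(next_word_probs))
```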

Attempts are being made to minimise these and other undesirable effects by means of extensive manual training of the model.

In addition to OpenAI, in which Microsoft has invested, Google and Meta have developed comparable systems based on large language models, namely “Bard” and “LLaMA”.

In Europe, too, there are initiatives, particularly on the German side, to develop large language models so as not to fall behind internationally.

How and where can banks and their customers benefit? 

AI-based language processing (natural language processing) is already being used in a wide range of applications in banking, for example in customer service support, the evaluation of public reports in real time (news analytics) or in document and contract analyses as part of due diligence processes.

These capabilities could be significantly enhanced by large language models. In addition, other fields of application that require a deeper understanding of texts and creative writing skills become conceivable, such as producing internal analyses and summaries or drafting content and copy for product descriptions or advertising campaigns in marketing and sales.

In the future, a wide variety of work assignments, such as research tasks or drafting a proposed structure, could be sent to the language AI in natural language, much like an e-mail to an employee.
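To make this more tangible, the Python sketch below shows what such a natural-language work assignment could look like technically, here using OpenAI’s Chat Completions REST API as one possible interface. The model name, prompt and endpoint are illustrative assumptions, not a recommendation for any particular provider or setup.

```python
import os
import requests

# Illustrative sketch: delegating a summarisation and structuring task to a
# hosted large language model, much like sending an e-mail to an employee.
# Assumes an API key in the OPENAI_API_KEY environment variable; model name
# and endpoint are examples and may differ by provider and version.
API_URL = "https://api.openai.com/v1/chat/completions"

task = (
    "Summarise the key risks in the following internal report in five bullet "
    "points and propose a structure for a management briefing:\n\n"
    "<report text would be inserted here>"
)

response = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    json={
        "model": "gpt-3.5-turbo",
        "messages": [
            {"role": "system", "content": "You are an assistant for internal bank analyses."},
            {"role": "user", "content": task},
        ],
        "temperature": 0.2,  # keep the output relatively factual and terse
    },
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```

In practice, sending internal documents to an external service would of course have to comply with the bank’s data protection, confidentiality and outsourcing requirements.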

The language capabilities of generative AI models could also be used for programming tasks, possibly even for maintaining or migrating legacy systems written in dying programming languages such as COBOL, which few IT professionals still master today.

Even if the last example does not yet seem practicable, it illustrates the broad application potential of this technology.

Experts estimate that generative AI models can deliver efficiency gains by a factor of 2 to 10 for routine office work.

In direct contact with the customer, however, the possible uses of ChatGPT or comparable text robots are likely to remain limited in the short term.

In direct customer communication, it must be ensured that statements, let alone recommendations, are factually correct, comprehensible and ethically unassailable. Anything else would not only harm the quality of banking services and thus customer satisfaction.

It could also expose banks to significant liability and reputational risks if generative AI acts in the customer environment without additional filters or human controls.
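As a very simple sketch of what such an additional filter in front of the customer might look like, the Python snippet below only releases an AI-generated reply if it passes a few checks; the keyword patterns are invented placeholders and would in practice be replaced by proper compliance rules, trained classifiers and human review.

```python
import re

# Illustrative control layer: an AI-generated reply is only released to the
# customer if it passes simple checks; otherwise it is escalated to a human
# agent. The patterns below are invented placeholders, not real compliance rules.
BLOCKED_PATTERNS = [
    r"\bguaranteed return\b",   # no promises of investment returns
    r"\byou should invest\b",   # no unvetted personal recommendations
]

def release_or_escalate(ai_reply: str) -> str:
    """Return the AI reply if it passes all checks, otherwise escalate."""
    for pattern in BLOCKED_PATTERNS:
        if re.search(pattern, ai_reply, flags=re.IGNORECASE):
            return "ESCALATE_TO_HUMAN_AGENT"
    return ai_reply

print(release_or_escalate("Our savings account currently pays a variable interest rate."))
print(release_or_escalate("You should invest everything here, guaranteed return!"))
```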

What challenges still need to be overcome? 

The developers of ChatGPT have learned from the weaknesses of earlier chatbots and have trained the AI to respect ethical values and to rule out misuse as far as possible.

Today, this is essential for success and acceptance in both private and business contexts. However, misuse and undesirable outputs cannot be entirely ruled out.

With ChatGPT, as with the use of AI in general, banks also face the as yet unanswered question of who is ultimately responsible if damage occurs or rights are violated.

This applies in particular to so-called General Purpose AI, i.e. AI systems that were not developed for a specific purpose and can be used for a wide range of tasks.

Legally, this is still a grey area that has yet to be clarified by legislation and case law.

With the AI Act and the proposed directive on AI liability, the European legislator is working on legal regulations that are intended to clarify open liability issues, especially for high-risk AI applications, and to create more trust among users and those affected.

This is to be achieved by fulfilling and demonstrating certain minimum requirements, including those relating to transparency, cyber security and risk management.

With the public hype surrounding ChatGPT, general-purpose AI systems have now also come more into focus.

It is to be hoped that this will create more legal certainty for banks and other users of AI, even if it is currently still unclear what exactly a responsible distribution of risk can look like.

Irrespective of this, it is common practice in the financial sector to identify all risks for the bank and to monitor and control them on an ongoing basis.

This also applies today to the use of AI systems in banking operations.

Risk management must follow the requirements of banking regulation and be subject to supervision by the banking supervisory authority.

In this respect, it is already the responsibility of the banking sector to use AI applications responsibly, whether in direct customer contact or in internal risk management.

To conclude

In summary, so-called large language models, such as the one behind ChatGPT, promise productivity gains, especially in the banking sector.

Given the current level of maturity and the uncertain legal environment, widespread use in customer-facing processes and in areas with particular risks should still be viewed critically.

However, the rapid pace of technological development means that significant progress in reliability and explainability (keyword: “Explainable AI”) can be expected in the not too distant future. As a result, generative AI applications will continue to gain in relevance and practicality.

 
