Generative AI Use and the Cloud in 2024

By Patrick Bangert

Patrick Bangert, SVP for data, analytics, and AI at Searce, discusses the nature and status of generative artificial intelligence (GenAI), addressing current challenges: quality concerns, legal disputes, and uncertain business value. He offers a forecast for 2024, with hopes for the realisation of real business outcomes, legal clarity, and industry consolidation.

Write a poem for my wife, draw a picture of a silly goose, play the national anthem in the style of the Beatles – these are just some of the requests you can put to Generative AI (GenAI). This type of artificial intelligence (AI) creates artifacts from prompts written in plain prose; the output can be text, images, videos, or audio. Such technologies have existed for several years but emerged into the public consciousness about a year ago with the advent of ChatGPT. Its major innovation was to offer its model free of charge through an easy, texting-like interface.

One year on, the AI industry has produced many other models, particularly focused on the creation of text: so-called Large Language Models (LLMs). Millions of people have interacted with these tools and found that they produce remarkably realistic-sounding prose. At second glance, however, many have observed glaring flaws such as factual errors (also called hallucinations) and lapses in causal, logical, or common-sense reasoning that no human being would ever make. This has led to a significant debate on the quality and usability of these models.

Unravelling AI

To build these models, AI companies assemble training data and improve a model’s abilities through a ‘fill-in-the-blanks’ learning approach. The more blanks it fills in correctly, the more accurate the model becomes, never mind whether the blank occurs in an inconsequential social media post or in a Nobel Prize-winning patent application.
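
To make the mechanism concrete, here is a minimal sketch of that fill-in-the-blanks objective in Python, assuming the open-source Hugging Face transformers library and the publicly available bert-base-uncased model; it illustrates the learning signal in general, not how any particular commercial model is trained.

# A minimal sketch of the "fill-in-the-blanks" objective, using a small,
# publicly available masked language model (an illustrative choice).
from transformers import pipeline

# Load the model (weights are downloaded on first use).
fill_in_the_blank = pipeline("fill-mask", model="bert-base-uncased")

# Ask the model to fill in the blanked-out word.
predictions = fill_in_the_blank("The capital of France is [MASK].")

# Print the model's best guesses and its confidence in each.
for p in predictions:
    print(f"{p['token_str']!r} with probability {p['score']:.2f}")

The better a model becomes at this guessing game across billions of such blanks, the more fluent and knowledgeable its generated text appears.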

Large language models require extensive training data, often obtained by downloading internet content. This includes a great deal of text of questionable quality and origin: incorrect claims, fake news, and racist language. It also includes text that is copyrighted, prompting significant debates on the legality of training AI models on copyrighted material. Governmental regulations such as the executive order in the USA and the AI Act in the European Union form an initial offering from lawmakers.

The year 2023 was spent in a collective “wow, and now?” debate. Entertaining and frivolous uses of GenAI continue to dominate social media, whereas effective business uses remain mostly topics of conversation and planning rather than pragmatic deployment. As we enter 2024, these themes will continue to dominate.

Navigating the GenAI landscape 

Presently, the public is not convinced that AI models are ready to be used in applications where mistakes carry real-life consequences. Simply put, recommending a boring movie has no consequences, inadvertently insulting someone normally has some negative effects, whereas misdiagnosing a disease could have catastrophic outcomes. The primary problem of GenAI is trust. Ultimately, in the eyes of the public and business decision makers, the field is brand new. They need time to adjust and to carefully evaluate its capabilities to determine real-life business benefit.

Contrary to common belief in the tech industry, business outcomes are only partly dependent on technology, and primarily dependent on people accepting it and using it properly. Contracting, pricing, and various other non-technological factors play a role, which is a major reason why established businesses hesitate to buy GenAI technologies from new companies that cannot guarantee they will be in business next year. The GenAI landscape is seeing many new entrants and will experience consolidation in the medium term.

Tightening the reins

The legal status of GenAI is highly uncertain, with several major lawsuits and a host of regulations by various jurisdictions and governments in the works. For the unwitting producers of training data, the AI companies, and the users of GenAI alike, the future of the field will be determined by what will be allowed, who will be liable for the inevitable mistakes, and what is considered ethical versus discriminatory. The biggest next step for GenAI will not come from technology makers, but lawmakers.

Popular debate continues regarding the danger that GenAI will leave us in a Terminator-like dystopia. While philosophically interesting, current AI capabilities do not match these concerns. Despite years of research and significant investment, creating a basic personal assistant or house-helper robot remains a challenge. Few researchers have ever seriously pursued an AI with general intelligence – even though popular discourse is dominated by the idea. While GenAI shows promise, the industry is working to turn inventions into innovations. Though it’s not yet a lucrative business, many think it will be a goldmine.

GenAI’s 2024 outlook

GenAI’s primary use lies in office settings. Integrating GenAI intentionally and holistically into company processes can boost productivity by as much as 30-40%. Change management and education are important components of this goal. Rather than replacing individuals, GenAI will augment their working capabilities, making them more efficient over a range of practical tasks. Despite potential benefits, widespread adoption remains limited, with even leading companies still in the pilot phase. 

With that in mind, in 2024 I anticipate a rise in the implementation of GenAI tools, driving productivity and yielding tangible business advantages. My optimism extends to seeing increased adoption of GenAI applications by brick-and-mortar businesses, generating real value. I also hope for greater clarity from lawmakers, and foresee consolidation in the AI industry that absorbs many of its new entrants. Scientifically, I expect the focus will shift towards building LLMs into existing software tools and other processes, rather than pioneering significant standalone AI breakthroughs. In essence, 2024 marks a year of metabolising the new innovations of 2023.

About the Author

Patrick Bangert is the senior vice-president for data, analytics, and AI at Searce, which provides professional services for cloud applications. He heads the profit centre responsible for all projects of a data-scientific character globally. From 2020 to 2023, Patrick led the AI Division of Samsung SDS, bringing AI tools and services into Samsung Cloud for computer vision, natural language processing, and machine learning, with a particular focus on medical imaging. He holds a PhD in AI and has over 20 years of business experience.
