AI VUCA

By Nathan Bennett, G. James Lemoine, and Péter Molnár

AI-enabled volatility, uncertainty, complexity, and ambiguity (VUCA) are downsides to the numerous benefits of AI technology. Understanding them will help business leaders adopt governance methods that enable better decision-making while remaining aware of AI’s limitations.

Executive Summary
In this article, we describe how AI both addresses decision-making challenges related to volatility, uncertainty, complexity, and ambiguity (VUCA) and introduces new risks. Examples include AI-induced market volatility, uncertainty stemming from deepfake technologies, and the challenges of integrating AI with business operations. We emphasize the importance of AI governance and model controls, as AI tools will undoubtedly create new VUCA challenges if not carefully managed. We caution leaders to be aware of AI’s potential to create false confidence or chaos, and we stress the need for deliberate implementation and vigilant oversight to prevent AI from exacerbating VUCA challenges instead of solving them.

Volatility, uncertainty, complexity, and ambiguity (VUCA) are terms used to describe vexing decision-making environments for leaders. AI’s ability to quickly process and learn from vast amounts of data shows promise for improving decision outcomes in the face of VUCA challenges. Of course, there are cautionary tales illustrating how AI has led decision-makers astray. Amazon’s AI recruiting tool showed bias toward male candidates, and Tesla’s AI-driven systems have faced scrutiny over automobile accidents. AI-powered facial recognition technology has been found to misidentify people of color at higher rates than whites. Recently, the misuse of AI for deepfake identity theft has highlighted a new area for abuse.


Because such examples are stunningly worrisome, produce obvious errors, or both, we expect leaders to be especially attentive to preventing similar issues from emerging in future AI deployments. However, decision-makers must also understand the problems inherent in less obvious features of AI tools’ development and deployment. While an AI implementation may address the specific elements of VUCA it was programmed to consider, it also brings new and unanticipated VUCA-related challenges. In other words, the design and implementation of AI tools often come with VUCA “baked in.” Managers must be mindful of these potential issues as they integrate AI into their decision-making processes. As one AI executive told us, “From what I have seen, companies invest in AI before they invest in AI governance and model controls. The former without the latter is dangerous.” Because AI can create VUCA, and because the rush to adopt it often outpaces the guardrails meant to contain it, leaders must be aware of AI’s potential to create false comfort or chaos.
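To make the governance point concrete, here is a minimal sketch, in Python, of what a “model control” can look like in practice. All names and the confidence threshold are hypothetical; the point is simply that every model output is logged and low-confidence recommendations are routed to a person rather than acted on automatically.

```python
import logging
from dataclasses import dataclass

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("model_control")

@dataclass
class Recommendation:
    action: str        # what the model suggests doing
    confidence: float  # the model's self-reported confidence, 0.0 to 1.0

CONFIDENCE_FLOOR = 0.8  # illustrative threshold; set per use case and risk appetite

def governed_decision(rec: Recommendation) -> str:
    """Log every model output and gate low-confidence ones behind human review."""
    logger.info("model suggested %r with confidence %.2f", rec.action, rec.confidence)
    if rec.confidence < CONFIDENCE_FLOOR:
        # Do not act automatically; escalate to a human reviewer instead.
        return "ESCALATE_TO_HUMAN_REVIEW"
    return rec.action
```

Even a control this simple creates an audit trail and a human checkpoint, which is precisely what the executive quoted above found missing in many AI investments.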

AI Creates Volatility

While an AI implementation may address the specific elements of VUCA it was programmed to consider, it also brings new and unanticipated VUCA-related challenges.

AI has introduced volatility into decision-making in several demonstrable ways. Most notably, AI is used extensively in high-frequency trading (HFT), where algorithms make rapid trades based on complex data analysis. Some critics argue that HFT algorithms cause sudden price swings, making markets more volatile. Because AI systems constantly learn and evolve, unexpected changes in their behavior can lead to unintended consequences. For instance, an AI used for dynamic pricing might misinterpret a sudden surge in demand and set an excessively high price point, potentially leading to a sales slump (or, as the Wendy’s hamburger chain recently learned, sparking a volatile customer reaction by its mere existence). Biased training data can also lead AI to make unfair or discriminatory recommendations. AI used in the hiring process, for instance, might favor candidates with specific resume keywords, unintentionally filtering out qualified applicants from diverse backgrounds. A decision-maker relying on such a system could make biased hiring choices, with potentially volatile consequences such as lawsuits or reputational damage.
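As an illustration of the kind of guardrail that can dampen pricing volatility, the Python sketch below (entirely hypothetical) clamps an AI-suggested price to a band around a reference price, so that a misread demand spike cannot produce an extreme price on its own.

```python
def clamp_price(suggested: float, reference: float, max_swing: float = 0.15) -> float:
    """Keep an AI-suggested price within +/- max_swing of a reference price.

    A demand-misreading model can still nudge prices, but it can never set
    an extreme one unilaterally. The 15% band is illustrative, not prescriptive.
    """
    floor = reference * (1 - max_swing)
    ceiling = reference * (1 + max_swing)
    return min(max(suggested, floor), ceiling)

# Example: the model overreacts to a demand spike and suggests $30.00
# against a $10.00 reference; the guardrail caps the increase at 15%.
print(clamp_price(suggested=30.00, reference=10.00))  # prints 11.5
```

Guardrails like this do not remove the need to fix the underlying model; they simply bound the damage a volatile output can do while the model is investigated.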

AI Creates Uncertainty

Several examples of AI’s capacity to create uncertainty have recently made the news. First and perhaps most dramatically, as Warren Buffett noted at the 2024 Berkshire Hathaway shareholder meeting, the ease with which a bad actor can create deepfakes of audio and video content quickly destroys trust in the integrity of anything we hear or see. A recent Washington Post article made it clear that much of the content on social media sites like 4chan results from nefarious efforts to use AI to create false content. Less dramatic, but just as vexing, is the uncertainty AI introduces through evolving regulations. The EU’s AI Act, passed in May 2024, is the world’s first comprehensive AI law, and more regulations are expected to follow. This shifting regulatory landscape creates uncertainty about AI’s design, implementation, and liability.

Next, intellectual property issues in AI-generated content add to this uncertainty. This issue was at the core of the Writers Guild strike that shut down Hollywood in the summer of 2023. As AI systems become more capable of composing music, crafting visual art, and writing stories and scripts, questions will arise about who owns the rights. Issues should also be expected to arise over AI’s infringement of existing intellectual property rights. Establishing clear legal frameworks to address ownership and rights related to AI-generated content is crucial but challenging, given the unique nature of AI’s creative processes. Leaders must navigate these uncertainties while considering AI’s implications for their industries and workforce.

As AI systems become more capable of composing music, crafting visual art, and writing stories and scripts, questions will arise about who owns the rights.

Finally, AI is seeding uncertainty among workforces around the world. At M&T Bank, Chief Data Officer Andrew Foster recognized the rise of generative AI as “a headline-grabbing beast” and understood that employees were unclear about the impact AI might have on their jobs and career security. To address this, the company hosted an “AI Week,” in which nearly one thousand employees (a majority of whom did not work in technology-related jobs) came together to discuss AI’s implications for the bank, along with AI ethics and data privacy. During AI Week, company executives worked to address this uncertainty by describing how they tie AI to the three ‘pillars’ of work at M&T: Policy, covering expectations and ethical guardrails; Education, covering how AI will be used to create upskilling opportunities; and Opportunity Seeking, explaining how AI can make a positive difference for the organization.


AI Creates Complexity

The inherent complexity of emerging AI technologies poses significant challenges. Practically, integrating AI solutions with existing business operations can be daunting. For example, even an accurate demand forecasting model might be difficult to incorporate into an organization’s ERP system, and obtaining the historical data needed to train algorithms may present ethical, financial, and regulatory challenges. Regulators face complexity of their own in balancing industry pressures, developer concerns, and public safety. Effective implementation of AI requires overcoming these hurdles. The legal and regulatory uncertainties mentioned above, and the work required to establish compliance, will undoubtedly be an ongoing and complex challenge for leaders. Ethical issues are complex, too. For example, how AI systems learn to prioritize certain patients over others in the medical context raises questions about fairness and justice. Training AI tools to share the values espoused by an organization’s mission and vision is delicate and complex.
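To suggest what the integration hurdle looks like in practice, here is a hedged sketch in Python. The ERP import format and validation rule are hypothetical; the point is that the translation and validation layer between a forecasting model and an ERP system, rather than the model itself, is often where the complexity lives.

```python
import csv
from datetime import date

def forecast_to_erp_rows(forecast: dict[str, float], run_date: date) -> list[dict]:
    """Translate {sku: predicted_units} into rows a hypothetical ERP import accepts."""
    rows = []
    for sku, units in forecast.items():
        if units < 0:  # validation: a negative demand forecast signals a model error
            raise ValueError(f"negative forecast for {sku}: {units}")
        rows.append({"sku": sku, "forecast_units": round(units),
                     "run_date": run_date.isoformat()})
    return rows

# Write the forecast to the flat file the (hypothetical) ERP import job picks up.
rows = forecast_to_erp_rows({"SKU-001": 412.7, "SKU-002": 88.1}, date.today())
with open("erp_demand_import.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["sku", "forecast_units", "run_date"])
    writer.writeheader()
    writer.writerows(rows)
```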

AI Creates Ambiguity

Decision-based AI models often output solutions without explanations, leaving managers to guess why a seemingly obscure or obtuse suggestion might be best.

AI algorithms often operate as ‘black boxes,’ making it difficult to understand and communicate the assumptions or rationale behind their results. Decision-based AI models often output solutions without explanations, leaving managers to guess why a seemingly obscure or obtuse suggestion might be best. This ambiguity can lead decision-makers to fill in the gaps with their own interpretations, which vary based on individual experiences and biases. Without a shared understanding of AI-generated results, leaders may hesitate to act on them. It is troubling when executives like Delta CEO Ed Bastian are on record noting that data integrity issues might lead AI to hallucinate; a hallucination is, quite vividly, a foundation for ambiguity. As another AI veteran opined, “AI is such a good liar that even though we know it produces hallucinations, we still think we can trust it.”
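One partial remedy for this ambiguity is to attach even a coarse explanation to every model in use. The sketch below, assuming a scikit-learn-style model, uses permutation importance to show which inputs most drive a black-box model’s predictions. It does not explain individual decisions, but it gives managers a shared, inspectable starting point rather than a blank box.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Toy stand-in for a black-box decision model.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature in turn and measure how much accuracy drops:
# large drops mark the inputs the model actually relies on.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature_{i}: importance {importance:.3f}")
```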

Conclusion

iStock-2158976133 [Converted]

AI does offer significant potential to address the challenges faced by decision-makers confronting VUCA. However, it also introduces new and often subtle layers to these challenges. Leaders must recognize AI’s potential to enhance decision-making while remaining vigilant about its limitations and the new unintended risks inherent in its adoption. AI tools can create VUCA as surely as they can address it. Absent deliberate consideration, careful implementation, and vigilant oversight, there is a risk that what was intended as a salve for decision-makers confronting VUCA instead becomes its catalyst. Several executives shared this caution with us, including one who succinctly said, “I am not hearing enough conversation about these threats to trusting AI-generated recommendations.”

About the Authors

Nate Bennett is a professor at Georgia State’s Robinson College of Business in Atlanta, Georgia. He has published in Harvard Business Review and the Wall Street Journal, as well as in several top academic journals. He is co-author of Riding Shotgun: The Role of the COO, published by Stanford University Press.

James Lemoine is an associate professor of organizations and human resources at the University at Buffalo (SUNY) School of Management. His research on the intersection of leadership, ethics, and creativity has been published in top academic journals and covered by top business news outlets such as Harvard Business Review.

Péter Molnár is an associate professor at Georgia State University’s J. Mack Robinson College of Business, where he teaches artificial intelligence, focusing on both technical implementation and executive decision-making. His career bridges academia and industry, including work as a data scientist at Amazon Web Services on innovative machine learning applications.
