By Laetitia Cailleteau, Professor Thierry Rayna, and Philippe Roussiere
With AI’s game-changing explosion into the commercial world, business leaders have, predictably, focused on the technology’s potential to advance their organizations’ business objectives. However, as in any sphere of activity, with great power comes great responsibility. Enter “responsible AI.”
In the realm of artificial intelligence, generative AI (“gen AI”) opens up a range of previously unimaginable possibilities, guiding organizations toward a future filled with innovative solutions to complex problems. However, the unprecedented pace of adoption and fears of unintended harm have intensified the discourse on responsible AI. With regulators stepping up efforts to govern the development and use of generative AI through legislation such as the EU AI Act, organizations, large or small, need to focus on responsible AI now.[i]
Businesses, however, have much catching up to do. Asked, “Do you know where AI is being used within your organization?”, “Do you understand the risks you are exposed to and how to manage them?”, and “Who is the accountable leader?”, many CEOs today answer “No” to the first question, “In process” to the second, and “It varies” to the third. Only 2 per cent of CEOs self-identify with the leading practices of responsible AI. The good news is that 31 per cent have committed to getting there by the end of 2025, propelled by the EU AI Act, which became law in August 2024.[ii]
But they are not alone in grappling with the responsible use of gen AI. Academia, too, has long participated in and informed discussions around the theoretical foundations, ethical considerations, and long-term impact of technology; it’s no different with gen AI. This article aims to present a holistic perspective, juxtaposing solutions that can be implemented today with a vision for more equitable and ethical use of gen AI tomorrow, across four key issues: reliability and fairness of results, user experience, workforce impact, and sustainability.
Improving the reliability and fairness of results
When deciding to use gen AI, enterprises need to consider the suitability of the technology, associated risks, and the cost to deliver responsibly. Gen AI algorithms do not retrieve data; they generate it based on existing data. This is why they are better suited for narratives than for facts; they get the general story right, but not details such as dates or prices.
When using gen AI, some of the techniques below can mitigate the risk of generating irrelevant outputs:
- Involving a human in the validation or decision process of the AI system. For example, a recent field experiment conducted by Accenture and MIT revealed that introducing cognitive speed bumps, which prompt users to pause and engage in critical thinking, resulted in improved accuracy. However, depending on the frequency and complexity of the task, this can be laborious and not always precise.[iii] Human oversight is also a requirement of the EU AI Act for high-risk AI systems. (A minimal sketch of such a validation gate follows this list.)
- Reinforcement learning from human feedback (RLHF). This approach enables AI to learn from the feedback of human trainers or users, online or offline.
- Customization of pretrained models for specific tasks with specialized data. This minimizes hallucinations and errors.
- Red-teaming. Simulating attacks to test defenses can uncover vulnerabilities that call for preventive measures to avoid the generation of harmful content and data leaks.
- Responsible AI user interfaces (UI). This could include a “validity score” that gives a higher rating to prompts aiming to generate a (well-known) narrative compared to those that aim to dig into details. A UI that graphically illustrates the random nature of the algorithmic process, such as rolling dice, could help users grasp the probabilistic underpinnings of gen AI outputs.
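To make the first and last of these concrete, here is a minimal, illustrative sketch of a human-in-the-loop validation gate that combines a “validity score” with routing to a reviewer. Everything in it is an assumption for illustration: the keyword heuristic, the 0.6 threshold, and the names (validity_score, generate_with_oversight) come from no specific product or standard.

```python
from dataclasses import dataclass

@dataclass
class Draft:
    text: str
    validity: float  # 0.0-1.0; higher for narrative prompts, lower for factual detail

def validity_score(prompt: str) -> float:
    """Toy heuristic: prompts asking for precise facts (dates, prices)
    score lower than open-ended narrative prompts, reflecting where
    gen AI is weakest."""
    factual_markers = ("when", "date", "price", "how much", "exact")
    penalty = sum(marker in prompt.lower() for marker in factual_markers)
    return max(0.2, 1.0 - 0.2 * penalty)

def generate_with_oversight(prompt: str, generate, review_queue: list) -> Draft:
    """Generate a draft; low-validity outputs are queued for human review
    (a 'cognitive speed bump') instead of being released directly."""
    draft = Draft(text=generate(prompt), validity=validity_score(prompt))
    if draft.validity < 0.6:  # threshold is an arbitrary assumption
        review_queue.append(draft)
    return draft
```

In practice, the score would come from a calibrated classifier rather than keyword matching, but the routing pattern, generate, score, escalate, is the essence of human oversight.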
Apart from reliability, fairness of outputs is also of the essence when using gen AI. Large language models (LLMs), trained on widely available content (such as news and social media), can reinforce existing biases and create digital echo chambers that eliminate diverse perspectives.
To improve the fairness of gen AI output, users can employ several strategies, including advanced prompting, retrieval augmented generation (RAG), and fine-tuning. Companies must involve skilled individuals to ensure quality and use representative, high-quality datasets to supplement LLM technology. Bias can also be reduced through data lineage, structured curation, and, if needed, synthetic data that provides a trusted context for LLM prompting or training.
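As a rough illustration of the RAG pattern, the sketch below grounds a prompt in a curated corpus. A naive keyword-overlap retriever stands in for the vector store a production system would use; all names are illustrative.

```python
def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank curated documents by keyword overlap with the query.
    A real system would use vector embeddings and a proper index."""
    q_words = set(query.lower().split())
    return sorted(
        corpus,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )[:k]

def build_rag_prompt(query: str, corpus: list[str]) -> str:
    """Constrain the model to trusted, representative sources instead of
    relying solely on whatever its training data happened to contain."""
    context = "\n".join(f"- {doc}" for doc in retrieve(query, corpus))
    return (
        "Answer using ONLY the sources below. If they are insufficient, say so.\n"
        f"Sources:\n{context}\n\nQuestion: {query}"
    )
```

The quality of the answer then depends on the curation of the corpus, which is exactly where data lineage and skilled human curators come in.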
However, since today’s major gen AI algorithms are subject to randomness, bias and hallucinations cannot be removed completely, and attempts to correct them artificially can backfire. For example, when bias mitigation techniques were applied to text-to-image models, the signal-to-noise ratio and accuracy of the output declined. Unless we have an intelligent system (symbolic AI) that understands the concepts in a prompt rather than relying solely on statistics and probabilities, it is impossible to eliminate bias entirely while maintaining the quality of the output. Being responsible means recognizing that this is not a bug but an integral feature of the algorithm, one that cannot be resolved, only mitigated. It is therefore important to set up alert systems along with human oversight.
Delivering a safer, better user experience
Gen AI is reinventing the user experience by predicting and adapting content to users’ needs in real time. This can help provide a hyper-personalized experience, which is critical for inclusive design. Executives recognize the opportunity to forge more natural human-product interactions, but good intentions do not always translate into good outcomes, as consumers are sometimes frustrated with technology that fails to understand their intentions accurately.[iv]
Organizations will need to rethink their customer experience to be more human-centered and reevaluate their processes and information systems. While gen AI improves, streamlines, and accelerates technical implementations, it does not address outdated systems or process-driven experiences that must integrate with rigid back-end systems to trigger transactions.
“Gen AI + human” is the ideal when it comes to delivering a hyper-personalized experience. The technology is good at producing customized but unexceptional results; it cannot understand intent, symbols, or concepts, or read between the lines. Humans, who can provide symbolic knowledge, need to be in the loop.
From a regulatory point of view, the EU AI Act offers safety protection through a risk-based approach. It bans AI systems that carry unacceptable risks, such as the use of subliminal or manipulative techniques, and mandates heightened requirements for high-risk use cases, such as having a quality and risk management system in place. It also imposes transparency obligations on AI systems intended to directly interact with human beings. For instance, chatbots need to declare to users that they are interacting with a machine, rather than a real person.
Preparing the workforce for a gen-AI-enabled workplace
Gen AI can reduce drudgery, but workers are worried. In one study, 58 per cent of workers cited job displacement as a concern, despite 95 per cent of executives agreeing that gen AI will create net new jobs. The reality is that roles will change. Industry will need to help workers acquire the skills required to make the best use of this technology.[v]
A bigger concern is deskilling, where heavy dependence on machines erodes human intellectual abilities. Even when used only for routine tasks, AI can deskill experts whose expertise comes partly from performing those routine tasks. Responsible AI means taking this into account and requiring humans to run tasks manually at times, much as pilots are required to occasionally fly without autopilot to maintain their skills.
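As a sketch of what forced manual runs could look like in a workflow system, the snippet below routes a configurable share of routine tasks to a human. The 10 per cent rate and all names are assumptions for illustration.

```python
import random

MANUAL_RATE = 0.1  # share of tasks deliberately handled by humans (assumption)

def assign_task(task, auto_handler, manual_handler):
    """Route a random sample of routine tasks to a human so that experts
    keep practicing the work their expertise is built on, even though
    the AI could handle every case."""
    if random.random() < MANUAL_RATE:
        return manual_handler(task)  # human performs the task end to end
    return auto_handler(task)        # AI handles it; humans may spot-check
```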
We found that the most advanced organizations, which we call “Reinventors”, are almost twice as likely as others to prioritize the development of soft skills around gen AI (problem solving, creativity, and critical thinking) alongside technical AI skills.[vi]
Training programs focused on data ethics and prompt engineering are necessary to prepare the workforce for their future jobs with gen AI. Organizations are off to a good start: although only 5 per cent have trained their entire workforce on gen AI, 63 per cent have trained at least half of it.[vii] Providing responsible AI training will also help organizations fulfill the AI literacy requirement of the EU AI Act, which requires that people involved in the operation and use of AI systems be properly trained.
Deploying gen AI for a sustainable future
We need to consider “AI for green” and “green AI” holistically. Picture two lines on a chart: the first plots AI’s role in deploying sustainable solutions, such as low-carbon cement; the second plots data centers’ consumption of resources such as electricity and water. As AI use becomes more ubiquitous, that resource consumption could grow exponentially. Industry must aim for a scenario in which the first line grows much faster than the second.
On the one hand, gen AI can play a role in the collection, analysis, and reporting of ESG data that informs decisions on sustainable business models. Use cases include accelerating and refining Scope 3 emission calculations and creating comprehensive sustainability reports and consumer information with a focus on net zero targets and ESG metrics. As businesses in Europe grapple with new reporting requirements coming from the Corporate Sustainability Reporting Directive (CSRD), AI can help organize data and fulfill regulatory expectations.
On the other hand, gen AI might pose threats to sustainability, and that is why AI regulations such as the EU AI Act also put sustainability center stage. The Act requires LLM developers to meet a basket of obligations by August 2025, including sustainability-related requirements such as technical documentation that discloses energy consumption, copyright policies, and detailed training-data summaries. This will help users select their gen AI model provider and mitigate downstream risks. With the advent of open source LLMs, such as Llama, it may no longer be necessary to train LLMs from scratch, cutting out the most energy-intensive aspect of gen AI. However, inference could become the key issue if everyone starts using LLMs in place of search engines. Responsible solutions include designing gen AI to redirect such requests to a search engine and saving and resurfacing typical responses to frequent queries. Local AI, such as device-bound gen AI running on neural chips, could also help; while less energy-hungry, it still requires energy to run. Training people on responsible and appropriate usage is critical.
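The redirect-and-cache idea can be sketched in a few lines. Everything below is illustrative: the keyword heuristic, the cache size, and the two stub functions stand in for a real query classifier, a search API, and a model call.

```python
from functools import lru_cache

SEARCH_MARKERS = ("who is", "what is", "when did", "latest", "price of")

def looks_like_search(query: str) -> bool:
    """Toy heuristic for fact-lookup queries better served by a search engine."""
    return any(marker in query.lower() for marker in SEARCH_MARKERS)

def search_engine(query: str) -> str:  # stand-in for a search API call
    return f"[search results for: {query}]"

def llm_call(query: str) -> str:  # stand-in for an energy-intensive model call
    return f"[generated answer to: {query}]"

@lru_cache(maxsize=1024)
def cached_generate(query: str) -> str:
    """Memoize responses so frequent queries do not trigger repeated inference."""
    return llm_call(query)

def route(query: str) -> str:
    """Send fact lookups to cheap search; cache everything the model generates."""
    if looks_like_search(query):
        return search_engine(query)
    return cached_generate(query)
```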
What should you do now?
The risks and issues outlined above may seem to cast a shadow over the outlook for gen AI. The reality is that gen AI has much to offer, and the future could be promising if we address the technology’s potential drawbacks. Regulators around the world already recognize this and are stepping in with a number of regulations in the space. But the road map for executives does not begin with compliance; it begins with gaining a thorough understanding of what gen AI is and how to use it. The first step is to create an inventory of the organization’s AI systems and how they are used.
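What might such an inventory capture? The sketch below is one hypothetical shape for an inventory record, keyed to the EU AI Act’s risk-based categories; the field names are assumptions, not a prescribed schema.

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    """Risk-based categories of the EU AI Act."""
    UNACCEPTABLE = "prohibited"
    HIGH = "high-risk"
    LIMITED = "transparency obligations"
    MINIMAL = "minimal risk"

@dataclass
class AISystemRecord:
    name: str
    use_case: str
    accountable_leader: str   # answers "who is the accountable leader?"
    model_provider: str
    risk_tier: RiskTier
    human_oversight: bool     # mandatory for high-risk systems
    last_reviewed: str        # ISO date of the last compliance review

inventory: list[AISystemRecord] = []  # populated through an organization-wide survey
```

Each record then gives the governance, risk-assessment, and monitoring steps below something concrete to attach to.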
With this in place, the next step is to:
- Establish clear governance principles.
- Assess risks according to the EU AI Act (or another applicable framework, such as the NIST AI Risk Management Framework or ISO/IEC 42001).
- Systematically test prototypes in critical areas such as fairness, explainability, transparency, accuracy, and safety.
- Continuously supervise AI compliance while in production.
- Embrace the full impact on both the workforce and the environment.
In doing so, organizations not only align with emerging global regulatory trends but also contribute to an ecosystem where gen AI can thrive responsibly and sustainably, creating value for all stakeholders.
About the Authors
Laetitia Cailleteau is Responsible AI & Generative AI Studios Europe lead, Accenture.
Professor Thierry Rayna is Professor of Innovation Management at Ecole Polytechnique, fellow of CNRS i3-CRG (Management Research Centre, Innovation Interdisciplinary Institute) and leads the Chair Technology for Change of the Institut Polytechnique de Paris.
Philippe Roussiere is Innovation and AI Global Lead, Accenture Research.
References
[i] Jean-Marc Ollagnier, “AI Meets Regulation – Driving Innovation Within the EU AI Act”, Forbes, 25 March 2024; see also the LinkedIn blog by Arnab Chakraborty, March 2023.
[ii] Jack Azagury et al., Reinvention in the Age of Generative AI, Accenture, 12 January 2024.
[iii] “Nudge Users to Catch Generative AI Errors”, MIT Sloan Management Review, Summer 2024 issue.
[iv] Paul Daugherty, Adam Burden, and Michael Blitz, Technology Vision 2024: Human by Design, Accenture, 9 January 2024.
[v] Ellyn Shook and Paul Daugherty, Work, Workforce, Workers: Reinvented in the Age of Generative AI, Accenture.
[vi] Jack Azagury et al., Reinvention in the Age of Generative AI, Accenture, 12 January 2024.
[vii] Idem.