As humans, it is our responsibility to shape the trajectory of artificial intelligence and to use the knowledge of machine learning to enhance human intelligence in a way that allows for diversity and inclusivity.
Throughout human history, we have excelled at creating products that meet the specific needs of certain individuals while excluding others. We have continuously honed this skill, striving to differentiate ourselves and design products that cater to targeted markets and specific audiences.
This mindset, shaped by our mental, moral, and ethical models, influences how we perceive and interact with the world for most of our lives. Undoubtedly, this approach conflicts with inclusivity and diversity. The better we become at designing and delivering products and services that perfectly suit a specific targeted audience, the more adept we become at discriminating against other non-targeted audiences, purposefully leaving them behind.
Artificial intelligence (AI), built upon our mental, moral, and ethical models, follows this same pattern—it is exclusive by design, not inclusive. And paradoxically, it is already omnipresent.
The global artificial intelligence market was valued at $87 billion in 2021 and is projected to reach $1,597.1 billion by 2030. Its continuous and widespread adoption places it at the core of numerous organisations worldwide:
- In an increasing number of hardware and software components.
- In various industries such as automotive, healthcare, retail, finance, banking, insurance, telecommunications, manufacturing, agriculture, aviation, education, media, and security, to name a few.
- In expanding roles and professions, including human resources, marketing, sales, advertising, legal, supply chain, and many more.
We are just scratching the surface.
One of the key questions arising from the development of artificial intelligence, and one of the most significant challenges it poses, is how to ensure that biases or segmentation models in the data powering AI do not lead to discriminatory behaviours based on characteristics such as gender, race, religion, disability, sexual orientation, or political views.
Artificial intelligence is not so… artificial
With the exponential and rapid development of artificial intelligence, the temptation to use it for unprecedented differentiation and unparalleled targeting approaches to achieve economic growth and competitiveness is strong and will continue to grow. There is a tension between the need for organisations and individuals to embrace diversity and inclusivity to foster greater equality in society, and a global economic system that encourages and exacerbates behaviours in which differentiation, and therefore discrimination, becomes a rule of the game leading to success. This tension is on the verge of intensifying due to AI’s potential to systemically codify these competition-oriented behaviours in our digital society, presenting one of the greatest challenges of our time.
Artificial intelligence is already permeating every facet of society:
- Personal assistants have now become virtual, enabling the execution of tasks with a human-like level of conversational ability.
- Market analyses are conducted by machines that produce studies such as competitor comparisons and generate detailed reports.
- Customer behaviour, purchasing processes, and preferences are scrutinised by increasingly intelligent customer relationship management (CRM) systems capable of predicting customer needs.
- Customer service is also provided by chatbots that can answer frequently asked questions on a website.
And this is just the beginning compared to the potential applications that are already emerging or rapidly approaching, including:
- Autonomous vehicles (bicycles, cars, trains, planes, boats, etc.)
- Robots assisting surgeons in operating rooms.
- Content creation (videos, music, articles, etc.) entirely produced by machines.
- Public policies whose measures would be prescribed and performance predicted through the analysis of large volumes of data.
- And much more.
Considering the societal implications for the future of humanity, artificial intelligence is far from being as artificial as it may seem.
We must decide whether we plan to use AI to eliminate visible and invisible inequalities to an unprecedented extent or whether we unconsciously or consciously intend to amplify them on the same scale. As we enter the era of artificial intelligence, there will be fewer and fewer grey areas.
Artificial intelligence opens a new era for human learning
The responsibility for shaping the trajectory of artificial intelligence rests squarely on our shoulders as humans. At the heart of the challenges faced in 21st-century learning lies the way we teach machines what they need to learn and how they learn it. It necessitates not only an ongoing pursuit of developing our own intelligence but also a deep understanding of how machines acquire their own.
Both human and machine learning face similar challenges:
- Supervised learning versus unsupervised learning (a contrast sketched in code after this list)
- Structured learning versus unstructured learning
- “Few-shot” learning versus “Blink” learning (as Malcolm Gladwell puts it)
- Long-term versus short-term learning with a trade-off between forgetting and retention
- “Zero-shot” learning versus learning through “dreaming”
- Visuomotor learning versus multisensory learning (AVK: auditory, visual, kinaesthetic)
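To make the first of these parallels concrete, here is a minimal sketch in Python, using scikit-learn and the classic Iris dataset (illustrative choices, not drawn from the text): the supervised model learns from labelled examples, while the unsupervised one must discover structure in unlabelled data on its own.

```python
# Illustrative sketch only: contrasts supervised and unsupervised learning
# on a toy dataset. The library and dataset are assumptions for the example.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

X, y = load_iris(return_X_y=True)

# Supervised learning: the machine is shown labelled examples (X, y)
# and learns to predict the label of samples it has never seen.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
classifier = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("Supervised accuracy:", classifier.score(X_test, y_test))

# Unsupervised learning: the machine sees only X, with no labels,
# and must uncover structure (here, clusters) on its own.
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print("Unsupervised cluster sizes:", [int((clusters == k).sum()) for k in range(3)])
```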
By unravelling the mysteries of how machines learn, we not only discover new avenues for learning that were previously unexplored and unimaginable but also revolutionise the standards by which we understand our learning process, ultimately enhancing human intelligence. However, let us not be mistaken. Intelligence and knowledge are not synonymous, and increasing our knowledge is a necessary yet insufficient condition for augmenting our intelligence.
Enhancing human intelligence primarily involves expanding our capacity for questioning, challenging the status quo, nurturing curiosity, and fostering the emergence of new questions in our minds, leading to the discovery and rediscovery of what we think we know and who we are.
Artificial intelligence is far less intelligent than commonly imagined
Even without contemplating an artificial intelligence capable of replicating human emotions, there is an inherent distinction that sets AI apart: the comprehension of context.
Context comprises numerous parameters: some are evident to the naked eye, while others are more discreet and nuanced, constituted by subtle signals and details that play a pivotal role in characterising it. Given the ever-evolving nature of any context, it will take time before artificial intelligence can truly appreciate its emotional complexity.
Building the kind of AI that benefits society necessitates a visionary approach. It involves comprehending which tasks are and will be best executed by machine intelligence in contrast to those that are and will be better handled by human intelligence. It also requires recognising tasks that must and will continue to be carried out by humans, regardless of technological advancements.
The responses our societies develop to establish a framework in which artificial intelligence aligns with human intelligence will shape the future of humanity as a whole. This goes beyond numerous innovations and new forms of competitive advantages that will redefine market dynamics as we know them today. More importantly, it holds sociological implications and affects the world we leave for future generations.
Most often, when we contemplate “Machine Learning”, our mental model leads us to think of a strictly one-way approach in which we teach the machine and provide it with the means to learn autonomously in various domains.
Artificial intelligence heralds a profound transformation in the relationship between humans and machines. This evolving dynamic, which is already becoming increasingly critical and fascinating, is more bidirectional than ever before. Consequently, the question arises: what can machine intelligence teach us to enhance our human capabilities?
We must embrace new ways of thinking to enable machines to perform tasks that would be challenging, if not impossible, for us to accomplish in the same manner. Simultaneously, we have the opportunity to seize new avenues for learning and self-improvement in numerous domains that currently demand extensive effort and years of expertise, with true mastery often only attainable through human execution.
Artificial intelligence is seeping into the decision-making process
While artificial intelligence and the recommendations it produces offer unsuspected opportunities to enhance not only our own intelligence but also the nature of the relationships and emotional attachments we may develop with machines in the future, they also raise delicate questions of environmental and social responsibility.
At what point does the decision support provided by AI exert such a degree of influence that it silently decides on behalf of humans? This complex question is already upon us. The answer, particularly given the vulnerability that society may recognise in each of us at any given moment and in particular circumstances of life, can be as nuanced as the individuals themselves. That is why applications, devices, and any technological equipment embedding any form of artificial intelligence need to be explicitly transparent about the limits of the parameters the algorithm considers or disregards, especially where the implications may pose a danger to oneself or others. This will help foster the responsible use of AI and prevent the risks of inappropriate or even prohibited use.
Artificial intelligence challenges us to make it explicitly explainable to all, in terms of the causality behind its results, in order to guide decisions that increasingly impact our lives and society as a whole. Paradoxically, as humans, we cannot explain everything about the reasons behind many of our decisions in a manner that the majority would understand and deem fair.
Artificial intelligence will profoundly change the value of work
Some fear that artificial intelligence will replace humans. While the science-fiction notion of AI surpassing humanity, Terminator-style, remains fictional, there is a paradigm the digital society must reckon with: AI can be better than humans at certain tasks, yet it will not be better than humans at all tasks.
With the development of AI, we are experiencing and will continue to experience a transformation from the knowledge economy to the trust economy. This shift is motivated by the need for increased predictability, precision, and efficiency on one hand, and the need for more fairness, transparency, and sustainability on the other.
For the future of “knowledge workers”, digital technology, particularly AI, will bring about five types of changes that will disrupt society to varying degrees, depending on the predominant nature and value of work on each continent:
First, some jobs will disappear. This is not new; similar phenomena have occurred during previous industrial revolutions.
Then, some jobs will be enhanced by AI. Again, this is not new; analogous situations have existed during previous industrial revolutions.
Next, some jobs will evolve to become tech jobs.
There will also be jobs that are difficult to imagine today because their utility will answer societal needs about which we currently know little or nothing.
However, we must not be naive: the development of AI is already creating, and will continue to create, precarious stopgap jobs that compensate for the lack of intelligence in AI. For example, shadow workers label vast amounts of data in a frenzy of repetitive tasks to help AI learn and to ensure that abhorrent, intolerable content that violates the law cannot be accessed through the platforms we use. The long-term impact of viewing such content on the mental health of these “workers” must be considered.
Which of these types of changes brought about by AI will have the greatest impact on the evolution of work in our societies? It is difficult to predict. Nonetheless, while AI is not the sole force driving the kinetic transformations that characterise our century, it will ultimately be our responsibility to decide.
Regardless, artificial intelligence has no ethics of its own.
Artificial intelligence lacks ethics; the ethics at stake are ours and ours alone. Our ethical principles are, ultimately, an integral part of the functional requirements and thus end up digitally encoding the biases we carry. In a way, artificial intelligence inherits the ethical genes of its creator.
Making the invisible codes of our societies visible is perhaps one of the most transformative advancements that artificial intelligence will enable humanity to achieve. Such a level of transparency regarding the unspoken and unwritten will contribute to greater equality and profoundly redefine the citizens’ demand for justice in our societies. It is also an opportunity to ensure that the artificial intelligences that interact with ours and coexist with us are, as much as possible, the product of collective intelligence or, better still, the receptacle of the wealth produced by the synergies of human diversity in all its forms of intelligence.
The augmentation of our intelligence through that of machines will always, and even more so in the future, be confronted with the existential question of the human cause we assign this intelligence to serve.
Therefore, we should strive to make “artificial intelligence” an intelligence inspired by the quintessence of the best in our humanity, excluding all the dark aspects of human nature. How to do so is arguably the most dizzying yet most crucial question for the future of humanity.
It is an ethical question to which only our humanity has the power and responsibility to provide an ever-renewed answer, in order to build the future in which we wish to live.
About the Author
Hamilton Mann is the Group VP of Digital Marketing and Digital Transformation at Thales. He is also the President of the Digital Transformation Club of INSEAD Alumni Association France (IAAF), a mentor at the MIT Priscilla King Gray (PKG) Center, and Senior Lecturer at INSEAD, HEC and EDHEC Business School.