
By Hamilton Mann, Cornelia Walther and Michael Platt

This paper introduces the concept of “brain-machine synchrony,” exploring the potential alignment of human brain waves and physiological responses with AI systems to build trust and enhance collaboration. Heightened physiological synchrony within human teams fosters improved cooperation, communication, and trust. Extending this phenomenon to human-AI interactions invites the hypothesis that aligning neural and machine processes could similarly enhance symbiotic partnerships. To support this thesis, we draw parallels between established examples of “collective cognition” in AI and well-documented neural synchrony facilitating human coordination. The burgeoning field of brain-computer interfaces offers a promising platform for more direct and nuanced communication between the human brain and AI systems.  

Introduction   

Drawing inspiration from human brain function, we advocate for implementing rhythmic information transmission protocols within AI systems. This means designing AI to process and communicate information in rhythmic patterns that align with natural human brain activity. By harnessing the innate advantages of human physiological rhythms, we propose the use of verbal synchrony as a foundational principle in AI training. This approach encourages AI to adopt language patterns congruent with human communication norms, thereby fostering trust and rapport between AI and humans. Research suggests that such congruence can significantly enhance user trust and engagement. Furthermore, insights from team synchrony research emphasize the importance of synchronized behaviors in fostering cooperation and collaboration among individuals. This highlights the potential benefits of integrating synchrony into AI-human interactions. We contend that aligning AI behavior with human expectations and social cues will enable more intuitive and seamless human-machine interactions, thus opening opportunities for an advanced human-centered AI experience, aligned with the Artificial Integrity concept coined by Hamilton Mann. 

We further explore how AI systems embodied in robots can emulate human synchrony through external physiological cues such as movement and facial expressions. Systematic synchronization of autonomous systems and their human counterparts through mirroring is offered as a potential mechanism to build trust and cooperation, similar to how it occurs between humans. Despite these tantalizing opportunities, we highlight the imperative for an ethical approach, emphasizing consideration of privacy, security, and human agency. The proposed frontier of AI resonating with human brains holds remarkable potential benefits if approached with prudence and careful consideration, highlighting the pivotal role of human guidance in steering these aspirations responsibly. 

Mirroring human nature 

The remarkable evolution of Artificial Intelligence (AI) systems represents a paradigm shift in the relationship between humans and machines. This transformation is evident in the seamless interactions facilitated by these advanced systems, where adaptability emerges as a defining characteristic. This adaptability resonates with the fundamental human capacity to learn from experience and predict human behavior. 

One facet of AI that aligns closely with human cognitive processes is Reinforcement Learning (RL). RL mimics the human learning paradigm by allowing AI systems to learn through interaction with an environment, receiving feedback in the form of rewards or penalties. This process closely mirrors how humans learn from experiences and adjust behavior based on outcomes. Notably, RL contributes to the adaptability of AI systems, enabling them to navigate complex and dynamic environments with a capacity for continuous improvement.  


By contrast, Large Language Models (LLMs) play a crucial role in pattern recognition, capturing the intricate nuances of human language and behavior. These models, such as ChatGPT and BERT, excel in understanding contextual information, grasping the subtleties of language, and predicting user intent. Leveraging vast datasets, LLMs acquire a comprehensive understanding of linguistic patterns, enabling them to generate human-like responses and adapt to aspects of user behavior, sometimes with remarkable accuracy.  

The synergy between RL and LLMs creates a powerful predictor of human behavior. RL contributes the ability to learn from interactions and adapt, while LLMs enhance the prediction capabilities through pattern recognition. This combination reflects the holistic nature of human cognition, where learning from experience and recognizing patterns are intertwined processes.  

AI systems based on RL can thus display a form of behavioral synchrony. At its core, RL enables AI systems to learn a policy, a mapping from situations to optimal sequences of actions, through repeated interaction with an environment. Analogous to a child touching a hot surface and learning to avoid it, these AI systems adapt based on the positive or negative feedback they receive.  
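To make the reward-and-penalty feedback loop concrete, here is a minimal tabular Q-learning sketch. The corridor environment, state count, and learning parameters are all invented for illustration; they do not come from any system discussed in this article.

```python
import random

# Toy 5-state corridor: the agent starts at state 0; only reaching
# state 4 yields a reward. States, actions, and parameters are illustrative.
N_STATES = 5
ACTIONS = [-1, +1]          # step left or right
MAX_STEPS = 100             # safety cap per episode

def step(state, action):
    """Deterministic environment: returns (next_state, reward, done)."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

def train(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        state, done, steps = 0, False, 0
        while not done and steps < MAX_STEPS:
            # epsilon-greedy: mostly exploit, occasionally explore
            if rng.random() < epsilon:
                action = rng.choice(ACTIONS)
            else:  # break ties randomly so early episodes still explore
                action = max(ACTIONS, key=lambda a: (q[(state, a)], rng.random()))
            nxt, reward, done = step(state, action)
            best_next = max(q[(nxt, a)] for a in ACTIONS)
            # reward/penalty feedback nudges the value estimate
            q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
            state = nxt
            steps += 1
    return q

q = train()
# greedy policy after training: move right (+1) in every non-terminal state
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)}
```

After training, the learned values make the rewarding direction strictly preferred in every state, which is the "child avoids the hot surface" adaptation in miniature.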

Take the game of Go, for instance. Google DeepMind’s AlphaGo, through millions of iterations and feedback loops, bested world champions by evolving its strategies over time. In a business context, consider a customer service chatbot that, after each interaction, refines its responses to align more closely with user expectations. Over time, this iterative learning could lead the bot to ‘synchronize’ its behavior with the collective sentiment and preferences of its user base. In this context, it would not mean perfect alignment but rather a closer approximation to user expectations.  

Building on the concept of deep reinforcement learning in AI, there’s an interesting parallel with the phenomenon of brain synchrony in human interactions. In AI, agents using deep reinforcement learning, such as Google DeepMind’s AlphaZero, learn and improve by playing millions of games against themselves, thereby refining their strategies over time. This self-improvement process in AI involves an agent iteratively learning from its own actions and outcomes. Similarly, in human interactions, brain synchrony occurs when individuals engage in cooperative tasks, leading to aligned patterns of brain activity that facilitate shared understanding and collaboration. Unlike AI, humans achieve this synchrony through interaction with others rather than themselves.  

The parallel extends when considering that AI systems can also learn from interactions with humans. Just as human brain synchrony enhances cooperation and understanding, AI systems can improve and align their responses through extensive iterative learning from human interactions. While AI systems do not literally share knowledge as human brains do, they become repositories of data inherited from these interactions, which corresponds to a form of knowledge. This process of learning from vast datasets, including human interactions, can be seen as a form of ‘collective memory’. This analogy highlights the potential for AI systems to evolve while being influenced by humans, while also influencing humans through their use, indicating a form of ‘computational synchrony’ that could be seen as an analog to human brain synchrony.  

In addition, AI systems enabled with social cue recognition are being designed to detect and respond to human emotions. These ‘Affective Computing’ systems, as coined by Rosalind Picard in 1995, can interpret human facial expressions, voice modulations, and even text to gauge emotions and then respond accordingly. An AI assistant that can detect user frustration in real-time and adjust its responses or assistance strategy is a rudimentary form of behavioral synchronization based on immediate feedback. 

For instance, affective computing encompasses technologies like emotion recognition software that analyzes facial expressions and voice tone to determine a person’s emotional state. Real-time sentiment analysis in text and voice allows AI to adjust its interactions to be more empathetic and effective. This capability is increasingly used in customer service chatbots and virtual assistants to improve user experience by making interactions feel more natural and responsive. 

Just as humans adjust their behavior in response to social cues, adaptive AI systems modify their actions based on user input, potentially leading to a form of ‘synchronization’ over time. Assessing the social competence of such an AI system could be done by adapting tools like the Social Responsiveness Scale (SRS), a well-validated psychiatric instrument that measures how adept an individual is at modifying their behavior to fit the behavior and disposition of a social partner. The SRS serves as a proxy for ‘theory of mind’: the ability to attribute mental states, such as beliefs, intents, desires, emotions, and knowledge, to oneself and to others. 

Learning from mother nature 


Nature offers numerous examples of synchronization and collective intelligence. Drawing inspiration from nature, where organisms like birds and fish move in synchronized patterns, AI systems can be designed to work in harmony. Such ‘swarms’ of AI agents can collectively process data, make decisions, and execute tasks in a synchronized manner. Analogously, if human brains can synchronize with each other and potentially – albeit implemented in different hardware and software – with AI, might AI systems develop synchrony with each other? 
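One classical way to formalize synchrony among many agents, whether neurons, flashing fireflies, or hypothetical AI agents, is the Kuramoto model, in which coupled oscillators with different natural frequencies pull one another into phase. The sketch below (all parameters are invented for illustration) shows a population's coherence rising toward full phase-locking:

```python
import math
import random

def order_parameter(phases):
    """Kuramoto order parameter r in [0, 1]: 0 = incoherent, 1 = phase-locked."""
    n = len(phases)
    re = sum(math.cos(p) for p in phases) / n
    im = sum(math.sin(p) for p in phases) / n
    return math.hypot(re, im)

def simulate(n=20, coupling=2.0, dt=0.05, steps=2000, seed=1):
    rng = random.Random(seed)
    phases = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(n)]
    freqs = [rng.gauss(1.0, 0.1) for _ in range(n)]       # natural frequencies
    for _ in range(steps):
        re = sum(math.cos(p) for p in phases) / n
        im = sum(math.sin(p) for p in phases) / n
        r, psi = math.hypot(re, im), math.atan2(im, re)   # mean-field form
        # each oscillator drifts at its own frequency but is nudged
        # toward the population's mean phase, scaled by coherence r
        phases = [p + dt * (w + coupling * r * math.sin(psi - p))
                  for p, w in zip(phases, freqs)]
    return phases

r_final = order_parameter(simulate())   # approaches 1 under strong coupling
```

With coupling well above the critical threshold, nearly all oscillators lock to the mean phase, the same qualitative behavior invoked for schooling fish and for shared-gaze brain synchrony.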

The concept is less implausible than it might seem at first blush. Just as human brains synchronize under specific conditions—a phenomenon observed in situations like shared gaze or joint musical performances—it is conceivable that as AI becomes more advanced, it might display similar collective behaviors when interacting with other AI systems.  

Brain-Computer Interfaces (BCIs) have ushered in a transformative era in which thoughts can be translated into digital commands and restored channels of human communication. Companies like Neuralink are making strides in developing interfaces that enable paralyzed individuals to control devices directly with their thoughts. Connecting direct recordings of brain activity with AI systems, researchers enabled an individual to speak at normal conversational speed after being mute for more than a decade following a stroke. AI systems can also be used to decode not only what an individual is reading but what they are thinking based on non-invasive measures of brain activity using functional MRI. Based on these advances, it’s not far-fetched to imagine a future scenario in which a professional uses a noninvasive BCI (e.g., wearable brainwave monitors such as Cogwear, Emotiv, or Muse) to communicate with AI design software. The software, recognizing the designer’s neural patterns associated with creativity or dissatisfaction, could instantaneously adjust its design proposals, achieving a level of synchrony previously thought to be the realm of science fiction. This technological frontier holds the promise of a distinctive form of synchrony, where the interplay between the human brain and AI transcends mere command interpretation, opening up a future in which AI resonates with human thoughts and emotions.  

For BCIs and AI, the term ‘resonate’ assumes a multifaceted significance. At a behavioral level, resonance implies a harmonious and synchronized exchange, wherein AI not only comprehends the instructions from the human brain but also aligns its responses with the cognitive and emotional states of the individual. This alignment involves AI systems interpreting neural signals and responding in a manner that reflects the user’s mental and emotional context, thereby facilitating a more intuitive and effective interaction.  

This alignment extends beyond a transactional interpretation of commands, reaching a profound connection where AI mirrors and adapts to the subtleties of human cognition. 

Crucially, the resonance envisioned here transcends the behavioral domain to encompass communication as well. As BCIs evolve, the potential for outward expressions becomes pivotal. Beyond mere command execution, the integration of facial cues, tone of voice, and other non-verbal cues into AI’s responses amplifies the channels for resonance. This expansion into multimodal communication may enrich synchrony by capturing elements from the holistic nature of human expression, creating a more immersive and natural interaction. 

However, the concept of resonance also presents the challenge of navigating the uncanny valley, a phenomenon where humanoid entities that closely resemble humans provoke discomfort. Striking the right balance is paramount, ensuring the AI’s responsiveness aligns authentically with human expressions, without entering the discomfiting realm of the uncanny valley.  

The potential of BCIs to foster synchrony between the human brain and AI introduces promising yet challenging prospects for human-computer collaboration. As this field progresses, considerations of ethical, psychological, and design principles will be integral to realizing a seamless and resonant integration that enhances human experiences with AI. 

The intriguing parallel between the collective diving behavior of fish and neuronal activity in the brain offers insights into the nature of collective systems in both biological and artificial intelligence. This comparison underscores the commonality in the fundamental principles governing systems that rely on discriminating environmental stimuli and ensuring effective communication over extended ranges (Gomez et al., 2023a). Drawing inspiration from the swarming movements displayed by groups of animals, there arises the concept of a “collective mind,” wherein a decentralized swarm of AI agents may give rise to real-time collective intelligence. Current studies illustrate a potential bridge between observable behaviors in nature and the emergence of collective intelligence in AI systems (Gomez et al., 2023b).  

Exploration of ‘swarm intelligence’ has gained prominence in current AI research. This approach, influenced by the collective behaviors observed in natural swarms, involves multiple AI agents collaborating to make decisions collectively. Much like ants following simple individual rules collectively exhibit sophisticated behaviors like pathfinding, swarm intelligence in AI leverages the combined efforts of multiple agents. This collaborative approach is exemplified in the coordination of ‘swarm drones,’ which showcase collective decision-making capabilities. In multi-agent AI systems, algorithms can be specifically designed to facilitate sharing and synchronization of learning among agents, a form of collective cognition.  
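The ant pathfinding example can be sketched as a toy ant-colony optimization loop: ants choose between a short and a long route in proportion to pheromone levels, cheaper routes receive stronger deposits, and evaporation forgets stale choices, so the colony converges on the short path. The routes, costs, and rates below are all invented for illustration:

```python
import random

def run_colony(n_ants=20, n_rounds=50, evaporation=0.5, seed=3):
    rng = random.Random(seed)
    costs = {"short": 1.0, "long": 3.0}       # route lengths (arbitrary units)
    pheromone = {"short": 1.0, "long": 1.0}   # start with no preference
    for _ in range(n_rounds):
        deposits = {"short": 0.0, "long": 0.0}
        for _ in range(n_ants):
            # each ant picks a route with probability proportional to pheromone
            total = pheromone["short"] + pheromone["long"]
            route = "short" if rng.random() < pheromone["short"] / total else "long"
            deposits[route] += 1.0 / costs[route]   # cheaper route -> stronger deposit
        for route in pheromone:
            # evaporation forgets old trails; deposits reinforce recent ones
            pheromone[route] = (1 - evaporation) * pheromone[route] + deposits[route]
    return pheromone

pher = run_colony()
```

No single ant compares the two routes; the preference for the short path emerges from simple individual rules plus shared state, which is the essence of the swarm-intelligence argument above.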

Some research suggests behavioral patterns in natural systems, such as murmurations of starlings, arise from simple rules without necessarily involving neural synchrony (Couzin, 2009). Yet, the intriguing possibility remains that underlying synchrony may exist, particularly in emotional or social processing, even if shared thoughts are not evident. This nuanced consideration of synchrony adds a layer of complexity to our understanding of collective intelligence, both in natural systems and emerging AI platforms. 

Exploring the concept of collective cognition in both AI and biology invites a thought-provoking comparison with evolutionary principles. Millions of years of biological evolution produced individual adaptations that sometimes benefited from collective or coordinated behaviors. The bedrock principle of evolution through natural selection emphasizes individual survival and reproduction, provoking the question of whether AI may also evolve in a way that parallels the processes observed in nature.  


Neuroscience not only illuminates the basis of biological intelligence but may also guide the development of artificial intelligence (Achterberg et al., 2023). Evolutionary constraints such as limited space and the need for communication efficiency have shaped the emergence of efficient systems in nature. Embedding similar constraints in AI systems suggests organically evolving artificial architectures optimized for efficiency and environmental sustainability, the focus of research in so-called “neuromorphic computing.”    

For example, oscillatory neural activity appears to boost communication between distant brain areas. The brain employs a theta-gamma rhythm to package and transmit information, similar to a postal service, thereby enhancing efficient data transmission and retrieval (Lisman & Idiart, 1995). This interplay has been likened to an advanced data transmission system, where low-frequency alpha and beta brain waves suppress neural activity associated with predictable stimuli, allowing neurons in sensory regions to highlight unexpected stimuli via higher-frequency gamma waves. Bastos et al. (2020) found that inhibitory predictions carried by alpha/beta waves typically flow backward through deeper cortical layers, while excitatory gamma waves conveying information about novel stimuli propagate forward through superficial layers. Drawing parallels to artificial intelligence, this rhythmic data packaging and transmission model suggests that AI systems could benefit from emulating this neural dance during data processing and sharing. Advocating for AI systems to communicate with a unified rhythm, akin to brain functions, may lead to swifter and more synchronized data processing, especially in deep learning models requiring seamless inter-neural layer communication. Engineering algorithms to mirror the theta-gamma oscillatory dynamic holds promise for AI models to process and retrieve data more efficiently and methodically.  

In the mammalian brain, sharp wave ripples (SPW‐Rs) exert widespread excitatory influence throughout the cortex and multiple subcortical nuclei (Buzsaki, 2015; Girardeau & Zugaro, 2011). Within these SPW‐Rs, neuronal spiking is meticulously orchestrated both temporally and spatially by interneurons, facilitating the condensed reactivation of segments from waking neuronal sequences (O’Neill, Boccara, Stella, Schoenenberger, & Csicsvari, 2008). This orchestrated activity aids in the transmission of compressed hippocampal representations to distributed circuits, thereby reinforcing the process of memory consolidation (Ego-Stengel & Wilson, 2010). 

One might envision rhythmic data packaging and transmission protocols for AI too. Just as a fluid dance can captivate and optimize movement, guiding AI systems to mirror the theta-gamma rhythm might enhance collaborative efficiency. In theory, AI neural networks could adopt an oscillatory transmission method inspired by the theta-gamma code.  
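As a loose software analogy (not a claim about real neural coding or about any existing networking protocol), theta-gamma packaging can be sketched as a framing scheme: a slow "theta" cycle frames each packet, and items ride in a handful of fast "gamma" slots nested within it. The slot count of seven echoes the commonly cited number of gamma cycles per theta cycle, but the code itself is purely illustrative:

```python
# Illustrative theta-gamma-style transmission sketch: slow "theta" frames
# carry a few fast "gamma" slots each. Names and numbers are invented.
GAMMA_SLOTS_PER_THETA = 7   # roughly seven gamma cycles nest in one theta cycle

def package(items):
    """Split a message into theta 'cycles' of at most GAMMA_SLOTS_PER_THETA items."""
    return [items[i:i + GAMMA_SLOTS_PER_THETA]
            for i in range(0, len(items), GAMMA_SLOTS_PER_THETA)]

def transmit(cycles):
    """Receiver side: flatten the cycles back into the original item sequence."""
    return [item for cycle in cycles for item in cycle]

items = list(range(17))          # a 17-item message
cycles = package(items)          # three theta frames: 7 + 7 + 3 items
received = transmit(cycles)      # order is preserved end to end
```

The point of the analogy is the discipline it imposes: information is chunked into fixed rhythmic frames whose internal slot order encodes sequence, rather than being streamed as an undifferentiated whole.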

Recent AI experiments, particularly those involving OpenAI’s GPT-4, unveil intriguing parallels with evolutionary learning. Unlike traditional task-oriented training, GPT-4 learns from extensive datasets, refining its responses based on accumulated ‘experiences’; furthermore, pattern recognition by GPTs parallels pattern recognition by layers of neurons in the brain. This approach mirrors the adaptability observed in natural evolution, where organisms refine their behaviors over time to better resonate with their environment. 

The application of evolutionary algorithms within AI systems, exemplified by NEAT (NeuroEvolution of Augmenting Topologies), introduces a dynamic dimension to the evolution of neural network architectures. NEAT undergoes a process akin to biological evolution, in which neural network structures evolve across generations in response to challenges present in diverse datasets. This evolutionary mechanism reflects a continuous process of adaptation, shaping the architecture and function of the neural networks. Both biological evolution and RL in AI systems can be viewed as performing a process loosely analogous to “gradient descent,” optimizing policies through incremental improvements. The underlying principle of gradient descent involves the iterative refinement of parameters toward an optimal solution. In the context of both biological evolution and AI, this implies a continuous pursuit of improved functionality or fitness. 

In the case of NEAT and similar AI systems, the adaptive changes in neural network architectures implement a form of optimization driven by the specific objectives and challenges encountered. The iterative nature of these adaptations within a defined optimization framework supports the overarching objective of efficiency and functionality.
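To show the evolutionary loop in miniature (without NEAT's topology mutations, which would lengthen the sketch), here is a simple (1+λ) evolution strategy: a "genome" of two weights is mutated each generation, and the fittest variant survives. The target function, data, and all parameters are invented for illustration:

```python
import random

def fitness(genome, data):
    """Negative squared error of a linear 'network' y = w*x + b (higher is fitter)."""
    w, b = genome
    return -sum((w * x + b - y) ** 2 for x, y in data)

def evolve(data, generations=300, offspring=10, sigma=0.3, seed=7):
    rng = random.Random(seed)
    parent = (rng.gauss(0, 1), rng.gauss(0, 1))   # random initial genome
    for _ in range(generations):
        # mutation: each child is the parent plus Gaussian noise on both weights
        children = [(parent[0] + rng.gauss(0, sigma),
                     parent[1] + rng.gauss(0, sigma))
                    for _ in range(offspring)]
        # selection: the parent survives only if no child is fitter (elitism)
        parent = max(children + [parent], key=lambda g: fitness(g, data))
    return parent

data = [(x, 2.0 * x + 1.0) for x in range(-3, 4)]   # target: y = 2x + 1
w, b = evolve(data)
```

No gradient is ever computed; blind mutation plus selection alone drives the genome toward the target, which is the sense in which evolution performs an incremental, descent-like refinement of parameters.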

From Brain Waves to AI Frequencies 

Human and nonhuman brains resonate along a continuum of frequencies, classically defined as spanning delta (slow waves) to gamma (fast waves). Resonant brain waves arise from coordinated activity across widely distributed circuits and have been linked to perception, attention, learning, memory, empathy, and conscious state. Whether brain waves associated with these processes actively influence them remains hotly debated.  
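As an illustration of what those bands mean computationally, a signal can be classified by its spectral band power. The synthetic "EEG" below and its sampling rate are invented; the band edges follow textbook conventions, which vary slightly across sources:

```python
import math

FS = 200                       # samples per second (illustrative)
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 80)}   # Hz, conventional edges

def dft_power(signal, freq_hz):
    """Power at one DFT bin (here frequencies map exactly onto bins)."""
    n = len(signal)
    k = round(freq_hz * n / FS)
    re = sum(x * math.cos(2 * math.pi * k * i / n) for i, x in enumerate(signal))
    im = sum(x * math.sin(2 * math.pi * k * i / n) for i, x in enumerate(signal))
    return (re * re + im * im) / n

def band_powers(signal):
    """Total spectral power falling in each classical frequency band."""
    return {name: sum(dft_power(signal, f) for f in range(lo, hi))
            for name, (lo, hi) in BANDS.items()}

# one second of a 10 Hz "alpha" rhythm plus a weaker 40 Hz "gamma" rhythm
t = [i / FS for i in range(FS)]
signal = [math.sin(2 * math.pi * 10 * s) + 0.3 * math.sin(2 * math.pi * 40 * s)
          for s in t]
powers = band_powers(signal)
dominant = max(powers, key=powers.get)   # the alpha band dominates here
```

The same band-power decomposition underlies wearable brainwave monitors of the kind mentioned earlier, though real devices use far more sophisticated filtering and artifact rejection.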

By analogy, one might consider whether AI systems might develop analogous ‘frequencies.’ Could there be distinctive patterns or states within AI systems that, when synchronized, lead to enhanced performance or expanded capabilities? Ultimately, AI will be designed to optimize functionality rather than replicate human brain wave dynamics, so potential parallels between human brain waves and ‘frequencies’ in AI systems remain metaphorical. 

Drawing inspiration from the architecture of the brain, neural networks in AI are constructed with nodes organized in layers that respond to inputs and then generate outputs. Activation patterns within these layers show intriguing similarities to the activation patterns of individual neurons in the brain (see references for specific studies).   

Exploring the ‘AI neural symphony’ offers potential avenues for achieving genuine AI-human synchrony and fostering deeper AI-AI collaborations. This could involve delving into the specific patterns of activations, understanding how different models contribute to the overall symphony, and identifying ways to enhance coordination for improved performance. Achieving synchrony in this context might refer to aligning AI processes with human cognition or ensuring seamless collaboration between multiple AI models, leading to more effective and sophisticated outcomes.  

In the realm of human neural synchrony research, investigating the role of oscillations has proven to be a pivotal area of interest. High-frequency oscillatory neural activity stands out as a crucial element, demonstrating its ability to facilitate communication between distant brain areas. A particularly intriguing phenomenon in this context is the theta-gamma neural code: the brain ‘packages’ and ‘transmits’ information in specific nested rhythms, reminiscent of a postal service meticulously wrapping packages for efficient delivery, ensuring the streamlined transmission of information. 

This perspective aligns with the concept of “neuromorphic computing,” where AI architecture is based on neural circuitry. The key advantage of neuromorphic computing lies in its computational efficiency, addressing the significant energy consumption challenges faced by traditional AI models. The training of large AI models, such as those used in natural language processing or image recognition, can consume an exorbitant amount of energy. For instance, training a single AI model can emit as much carbon dioxide as five cars over their entire lifespans (Strubell et al., 2019). Moreover, researchers at the University of Massachusetts, Amherst, found that the carbon footprint of training deep learning models has been doubling approximately every 3.5 months, far outpacing improvements in computational efficiency (Schwartz et al., 2019).   

Neuromorphic computing offers a promising alternative. By mimicking the architecture of the human brain, neuromorphic systems aim to achieve higher computational efficiency and lower energy consumption compared to conventional AI architectures (Furber et al., 2014). For example, IBM’s TrueNorth neuromorphic chip has demonstrated orders-of-magnitude gains in energy efficiency compared to traditional CPUs and GPUs (Merolla et al., 2014). Additionally, neuromorphic computing architectures are inherently suited for low-power, real-time processing tasks, making them ideal for applications like edge computing and autonomous systems, further contributing to energy savings and environmental sustainability. 

Implications for society  

Harnessing the potential of brain-to-AI and AI-to-AI forms of ‘synchrony’ offers promising avenues for society. With the seamless integration of human intuition and AI’s computational prowess, complex challenges can be approached with a hybrid analytical-creative-empathic strategy. This synergy could lead to solutions neither could achieve on their own. 

As synchrony also implies real-time responsiveness, such a new paradigm of ‘synchrony’ might revolutionize our capacity to adapt in the face of increased uncertainty and societal challenges.  


In the realm of training and skill development, synchronized AI has the potential to personalize learning experiences based on an employee’s unique learning curve, facilitating faster and more effective skill acquisition. This approach could significantly enhance onboarding and development processes, especially when considering neurodiversity, by tailoring training modules to meet individual needs and preferences in an unprecedented way.  

From a customer engagement standpoint, synchronized AI interfaces might more precisely understand and, in some cases, anticipate user expectations based on advanced behavioral patterns. This capability enables businesses to refine marketing strategies, product recommendations, and customer support, thereby resonating more deeply with consumers in an unprecedentedly inclusive way. 

For operational efficiency, especially in sectors like manufacturing or logistics, AI systems working in coordination with each other can optimize processes, reduce waste, and strengthen the supply chain. Although true AI-to-AI synchrony isn’t fully realized yet, current capabilities that mimic or represent a primitive form of intelligent machine-to-machine synchrony allow for significant advancements in supply chain fluidity and resiliency. 

This could increase profitability while integrating sustainability considerations, such as waste reduction or routes and processes that lower carbon emissions, into process design from the outset. In risk management, synchronized AI systems analyzing vast datasets collaboratively might better predict potential risks or market downturns, equipping businesses and other organizations to prepare or pivot before a crisis emerges and to limit the related social and societal impacts. Likewise, synchronized AI systems could provide insights for more efficient urban planning and environmental protection strategies. This could lead to better traffic management, energy conservation, and pollution control, enhancing the quality of life in urban areas.  

In various domains beyond business, deployment of AI with a prosocial orientation holds immense potential for the well-being of humanity and the planet. Particularly in healthcare, synchronization between the human brain and AI systems could usher in a revolutionary era for patient care and medical research. Recent studies highlight the positive impact of clinicians synchronizing their movements with patients, thereby increasing trust and reducing pain (Wager et al., 2004). Extending this concept to AI chatbots or AI-enabled robotic caregivers that are synchronized with those under their ‘care’ holds the promise of enhancing patient experience and improving outcomes, as evidenced by recent research indicating that LLMs outperformed physicians in diagnosing illnesses, and patients preferred their interaction (Mollick, 2023).  

In the educational domain, the integration of AI systems with a focus on synchrony is equally promising. Research demonstrated that synchronized brain waves in high school classrooms were predictive of higher performance and happiness among students (Dikker et al., 2017). This study underscores the significance of neural synchrony in the learning environment. By leveraging AI tutoring systems capable of detecting and responding to students’ cognitive states in real-time, education technology can potentially replicate the positive outcomes observed in synchronized classroom settings. Incorporation of AI systems that resonate with students’ brain states has the potential to create a more conducive and effective learning atmosphere, optimizing engagement and fostering positive learning outcomes.  

Perspectives and Potential  

The excitement surrounding the prospects of brain-to-machine and machine-to-machine synchrony brings with it a set of paramount concerns that necessitate scrutiny and that are anything but merely technical. Data privacy emerges as a critical apprehension, given the intimate nature of neural information being processed by these systems. The ethical dimensions of such synchronization, particularly in the realm of AI decision-making, present complex challenges that require careful consideration (Dignum, 2018; Floridi et al., 2018). 

Expanding on these concerns, two overarching issues demand heightened attention. Firstly, the preservation of human autonomy stands as a foundational principle. As we delve into the era of brain-machine synchrony, it becomes imperative to ensure that individuals retain their ability to make informed choices. Avoiding scenarios where individuals feel coerced or manipulated by technology is crucial in upholding ethical standards (Russell, 2018). 

Secondly, the question of equity in access to these technologies emerges as a pressing matter. Currently, such advanced technologies are often costly and may not be accessible to all segments of society. This raises concerns about exacerbating existing inequalities (Diakopoulos, 2016). A scenario where only certain privileged groups can harness the benefits of brain-machine synchrony might deepen societal divides. Moreover, the lack of awareness about these technologies further compounds issues of equitable access (Kostkova et al., 2016). 

Addressing these concerns is not only an ethical imperative but also crucial for the long-term sustainability and acceptance of brain-machine synchrony technologies. Failing to consider issues of human autonomy and equitable access could lead to unintended consequences, potentially widening societal gaps and fostering discontent. A comprehensive and responsible approach to these challenges is essential to ensure the positive impact of these technologies on society at large.  

The integration of AI with human cognition marks the threshold of an unprecedented era, where machines not only replicate human intelligence but also mirror intricate behavioral patterns and emotions. The potential synchronization of AI with human intent and emotion holds the promise of redefining the nature of human-machine collaboration and, perhaps, even the essence of the human condition.  

This interconnectedness could span micro (individual), meso (community/organization), macro (country), and meta (planet) arenas, creating a dynamic continuum of mutual influence (Walther, 2021). 

The outcome of harmonizing humans and machines will significantly impact humanity and the planet, contingent upon the guiding human aspirations in this pursuit. This raises a timeless question, reverberating through the course of human history: What do we value, and why?  

A crucial point to emphasize is that the implications of synchronizing humans and machines extend far beyond the realm of AI experts; it encompasses every individual. This underscores the necessity to raise awareness and engage the public at every stage of this transformative journey. As the development of AI progresses, it is essential to ensure that the ethical, societal, and existential dimensions are shaped by collective values and reflections, avoiding unilateral decisions by Big Tech that may not align with the broader interests of humanity. What happens next shapes our individual and collective future. Getting it right is our shared responsibility.

About the Authors

Hamilton Mann is a Tech Executive, Digital for Good Pioneer, keynote speaker, and the originator of the concept of artificial integrity. Mann serves as Group Vice President of Digital Marketing and Digital Transformation at Thales. He is also a Senior Lecturer at INSEAD, HEC Paris, and EDHEC Business School, and mentors at the MIT Priscilla King Gray (PKG) Center. He writes regularly for Forbes and Les Echos, and has published articles on AI and its societal implications in prominent academic, business, and policy outlets such as Stanford Social Innovation Review (SSIR), Knowledge@Wharton, Dialogue Duke Corporate Education, INSEAD Knowledge, INSEAD TECH TALK X, I by IMD, and Harvard Business Review France. He hosts The Hamilton Mann Conversation, a “Masterclass” podcast on Digital for Good. Mann was inducted into the Thinkers50 Radar as one of the 30 most prominent rising business thinkers globally for pioneering “Digital for Good”. He is the author of the book Artificial Integrity (Wiley, 2024).

Cornelia C. Walther, PhD, is Director of POZE@ezop, a global alliance for systemic change that benefits people and the planet. As a humanitarian practitioner, she worked for two decades with the United Nations in emergencies in West Africa, Asia, and Latin America, with a focus on advocacy and social and behavior change. As a lecturer, frontline coach, and researcher, she has collaborated over the past decade with universities across the Americas and Europe. She is presently a Senior Visiting Fellow at the Wharton Initiative for Neuroscience (WiN)/Wharton AI and Analytics and the Center for Integrated Oral Health (CIGOH), and is affiliated with MindCORE and the Center for Social Norms and Behavioral Dynamics at the University of Pennsylvania. Since 2021, her research has focused on AI4IA (Artificial Intelligence for Inspired Action). 

Michael Platt, PhD, is Director of the Wharton Neuroscience Initiative and a Professor of Marketing, Neuroscience, and Psychology at the University of Pennsylvania. He is the author of “The Leader’s Brain” (Wharton Press) and over 200 scientific papers, which have been cited over 23,000 times. He is the former Director of the Duke Institute for Brain Sciences and the Center for Cognitive Neuroscience at Duke, and founding Co-Director of the Duke Center for Neuroeconomic Studies. His work has been featured in the New York Times, Washington Post, Wall Street Journal, Newsweek, the Guardian, and National Geographic, as well as on ABC’s Good Morning America, NPR, CBC, BBC, MTV, and HBO Vice. He co-founded the brain health and performance company Cogwear Inc. He currently serves on multiple advisory boards, including the Yang-Tan Autism Centers at MIT and Harvard, and as President of the Society for Neuroeconomics. 

References
  • Achterberg, J., Akarca, D., Strouse, D. J., et al. (2023). Spatially embedded recurrent neural networks reveal widespread links between structural and functional neuroscience findings. Nature Machine Intelligence, 5, 1369–1381. https://doi.org/10.1038/s42256-023-00748-9 
  • Brown, T. B., Mann, B., Ryder, N., Subbiah, M., Kaplan, J., Dhariwal, P., … & Amodei, D. (2020). Language models are few-shot learners. arXiv preprint arXiv:2005.14165. 
  • Başar, E. (2013). Brain oscillations in neuropsychiatric disease. Dialogues in clinical neuroscience, 15(3), 291–300. 
  • Bastos, A. M., Lundqvist, M., Waite, A. S., & Miller, E. K. (2020). Layer and rhythm specificity for predictive routing. Proceedings of the National Academy of Sciences, 117(49), 31459-31469. https://doi.org/10.1073/pnas.2014868117 
  • Burle, D., Spieser, L., Roger, C., Casini, L., Hasbroucq, T., & Vidal, F. (2015). Spatial and temporal resolutions of EEG: Is it really black and white? A scalp current density view. International Journal of Psychophysiology, 97(3), 210-220.  
  • Buzsáki, G. (2015). Hippocampal sharp wave-ripple: A cognitive biomarker for episodic memory and planning. Hippocampus, 25(10), 1073–1188. https://doi.org/10.1002/hipo.22488 
  • Buzsáki, G. (2006). Rhythms of the brain. Oxford University Press. 
  • Couzin, I. D. (2009). Collective cognition in animal groups. Trends in cognitive sciences, 13(1), 36-43. 
  • Diakopoulos, N. (2016). Accountability in Algorithmic Decision Making. Communications of the ACM, 59(2), 56–62. https://doi.org/10.1145/2844148  
  • Dignum, V. (2018). Responsible Artificial Intelligence: How to Develop and Use AI in a Responsible Way. AI & Society, 33(3), 475–476. https://doi.org/10.1007/s00146-018-0812-0 
  • Dikker, S., Wan, L., Davidesco, I., Kaggen, L., Oostrik, M., McClintock, J., … & Poeppel, D. (2017). Brain-to-brain synchrony tracks real-world dynamic group interactions in the classroom. Current Biology, 27(9), 1375-1380. 
  • Ego-Stengel, V., & Wilson, M. A. (2010). Disruption of ripple-associated hippocampal activity during rest impairs spatial learning in the rat. Hippocampus, 20(1), 1-10. 
  • Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., Luetge, C., Madelin, R., Pagallo, U., Rossi, F., Schafer, B., Valcke, P., & Vayena, E. (2018). AI4People—An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations. Minds and Machines, 28(4), 689–707. https://doi.org/10.1007/s11023-018-9482-5 
  • Furber, S. B., Galluppi, F., Temple, S., & Plana, L. A. (2014). The SpiNNaker Project. Proceedings of the IEEE, 102(5), 652–665. https://doi.org/10.1109/JPROC.2014.2304638 
  • Girardeau, G., & Zugaro, M. (2011). Hippocampal ripples and memory consolidation. Current Opinion in Neurobiology, 21(3), 452-459. 
  • Gómez-Nava, L., Lange, R. T., Klamser, P. P., et al. (2023a). Fish shoals resemble a stochastic excitable system driven by environmental perturbations. Nature Physics. https://doi.org/10.1038/s41567-022-01916-1 
  • Gomez, N., Eguiluz, V. M., Garrido, L., Hernandez-Garcia, E., & Aldana, M. (2023b). Neural correlations in collective fish behavior: the role of environmental stimuli. arXiv preprint arXiv:2303.00001. 
  • Johnson, A., & Smith, B. (2018). Bridging the Gap: Human-AI Interaction through Verbal Synchrony. Journal of Artificial Intelligence Research, 45, 789-804. 
  • Kostkova, P., Brewer, H., de Lusignan, S., Fottrell, E., Goldacre, B., Hart, G., Koczan, P., Knight, P., Marsolier, C., McKendry, R. A., Ross, E., Sasse, A., Sullivan, R., Chaytor, S., Stevenson, O., Velho, R., & Tooke, J. (2016). Who Owns the Data? Open Data for Healthcare. Frontiers in Public Health, 4. https://doi.org/10.3389/fpubh.2016.00107 
  • Lebedev, M. A., & Nicolelis, M. A. L. (2006). Brain–machine interfaces: past, present and future. Trends in Neurosciences, 29(9), 536-546. 
  • Lisman, J. E., & Idiart, M. A. (1995). Storage of 7 +/- 2 short-term memories in oscillatory subcycles. Science, 267(5203), 1512–1515. https://doi.org/10.1126/science.7878473 
  • Mann, H. (2024). Introducing the Concept of Artificial Integrity: The Path for the Future of AI. The European Business Review. 
  • Merolla, P. A., Arthur, J. V., Alvarez-Icaza, R., Cassidy, A. S., Sawada, J., Akopyan, F., Modha, D. S. (2014). A million spiking-neuron integrated circuit with a scalable communication network and interface. Science, 345(6197), 668–673. https://doi.org/10.1126/science.1254642 
  • Nicolelis, M. A. L., & Lebedev, M. A. (2009). Principles of neural ensemble physiology underlying the operation of brain–machine interfaces. Nature Reviews Neuroscience, 10(7), 530-540. 
  • O’Neill, J., Boccara, C. N., Stella, F., Schoenenberger, P., & Csicsvari, J. (2008). Superficial layers of the medial entorhinal cortex replay independently of the hippocampus. Science, 320(5879), 129-133. 
  • Picard, R. W. (1995). “Affective Computing.” MIT Media Laboratory Perceptual Computing Section.  
  • Schwartz, R., Dodge, J., Smith, N. A., Overton, J., & Varshney, L. R. (2019). Green AI. Proceedings of the AAAI Conference on Artificial Intelligence, 33, 9342–9350. https://doi.org/10.1609/aaai.v33i01.33019342 
  • Strubell, E., Ganesh, A., & McCallum, A. (2019). Energy and policy considerations for deep learning in NLP. Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, 3645–3650. https://doi.org/10.18653/v1/P19-1356  
  • Sutton, R. S., & Barto, A. G. (2018). Reinforcement Learning: An Introduction. MIT Press. 
  • Walther, C. (2021). Technology, social change and human behavior. New York: Palgrave Macmillan. 
