By Nada Sanders, Koen Pauwels, Christoph Riedl, Rick J. Arrowood, and David De Cremer
Should the deployment and facilitation of AI be left to tech experts? What role does leadership play? Here are insights from the Beyond Boundaries conference, hosted by the D’Amore-McKim School of Business, on how business leadership shapes AI adoption and success.
AI is bringing massive changes to society, businesses, and academia. For business schools, it is important to know how organizations, leaders, and educators can address this rapidly changing environment. At the D’Amore-McKim School of Business (DMSB; Northeastern University), we take this question seriously and have made it central to our school’s mission to educate socially responsible business leaders capable of working, navigating, and creating in an AI-enabled environment. To embed this mission in our research, teaching, and corporate outreach, DMSB has launched a series of conferences called Beyond Boundaries. The first conference in this series explored the relationship between business leadership and AI. It brought together educators, students, and AI thought leaders, including chess grandmaster Garry Kasparov, to engage in deep discussions and collectively identify how to navigate this new landscape. Below, we discuss the key insights that emerged from the conference.
The overall conclusion from the first Beyond Boundaries conference is that leadership is the crucial catalyst and facilitator for leveraging AI successfully, in business and academia alike. The discussion revealed that leaders, not tech experts, are ultimately responsible for setting the rules on how AI is deployed, defining expectations and performance metrics, and offering direction on how to use AI in an ethical and useful manner. They carry this responsibility because their primary job in any AI adoption project is to align AI with organizational goals and purpose; only through such alignment can they create value for all involved. Within this context, the most important skills leaders must have relate to knowing and anticipating human psychology at every level of the organization, and knowing when humans should intervene with AI. Today, massive amounts of data are available and promise rich information, but that abundance also produces disinformation and makes decisions harder. It is up to humans to know how to interact with AI to make adjustments, and it is up to leaders to provide a guiding environment.
How should businesses use AI?
AI systems approach human-level performance across a variety of tasks, especially routine and data-management tasks that take place in closed systems. However, humans need to know how and when to intervene. In other words, when do we stop looking at the data and use the inferred insights to make actual decisions? This requires a judgment call, and who better than humans to make it? Indeed, the advantage goes to the person who knows when to make a decision, and this requires relying on gut instinct and intuition. That is the difference between a good decision-maker and a great one. As Garry Kasparov said during the conference: “Little tweak here and there has the highest return. We don’t have to challenge machine superiority in 95 percent of the cases.” This means developing the skills to know when to intervene and the humility to allow the algorithm to work autonomously the rest of the time.
David De Cremer, Dean of the D’Amore-McKim School of Business, noted that “engineering is easy, humans are tricky.” The key is knowing when to bring human qualities into play. In the past, machines made us fast; today, machines will make us smarter by providing information. But how can leaders ensure that this new wisdom gets translated into real value? “Asking the right questions” is what makes this translation happen and what separates leaders who are ready for the AI era from those who are not. To do so, leaders need to know what their organizations stand for and what they want to achieve for society and their stakeholders.
Consequently, business schools must train future leaders to use AI as a tool that generates information, which students then shape with their own authentic perspective and goals to transform content into applicable knowledge that benefits multiple stakeholders. How will all of this influence the adoption of AI in business, the classroom, and society? Below, we elaborate on what such adoption may look like.
How do businesses have to adapt to create value with this new technology?
1. Large companies
There is a need for leadership to be actively engaged and to change the operating model. If leaders do not engage, it is quite likely their organizations will not survive. Right now, in many organizations the head of analytics is under siege: so many new ideas are percolating upwards and downwards, and leaders have to make tough decisions. Because AI is an expensive tool, it is costly for large companies to stay too long in the experimental stage of AI adoption. Large companies therefore have to anticipate, as much as possible, where AI can create the greatest value, and deploy it in those areas.
Specifically, leaders need to make a deliberate and coherent decision about where AI will deliver the greatest value, whether in return on equity, revenue, operational efficiency, financial leverage, or elsewhere, and those decisions should drive leadership focus. Then, leaders need a game plan for leveraging AI to achieve those goals. This is a different way of thinking about technology from what leaders have done in the past: whereas technology decisions used to be made separately, they now need to be part of the leadership portfolio to achieve the greatest value and return for the business.
2. Small companies
Small companies are in a different position: they have less leeway to make mistakes and face tighter resource constraints than large companies. They must be lean and nimble, and nimbleness is what differentiates small companies from their competitors. Engaging humans in the loop is also harder, as there simply aren’t as many of them involved. It is also difficult for small companies to evaluate potential value when one small mistake can blow up the entire company. So, an important decision for small companies deploying AI is when and where to take the risk. A simple analogy is what happened with blockchain about five to six years ago: small companies that stretched their business models to the limit to accommodate the technology failed, whereas those that identified where and how blockchain would create key opportunities for their business models succeeded.
3. Business challenges
Silos continue to be a dominant challenge within organizations, making it difficult to find a good strategy for communicating across boundaries. The answer likely lies in changing the organizational structure and breaking down silos through shared data and technology. One thing is clear: organizations will not survive if they do not break down silos, and using technology to connect the organization in this way requires AI. Indeed, AI can help integrate data across silos and make business processes more transparent. Specifically, sharing data between departments and identifying the common questions that need to be addressed can make clear that collaborative effort is required to answer crucial business questions. Data can also help the organization as a whole see what its exact business challenges are, and leaders need to use those insights to drive operational, customer-oriented, and financial decision-making. Key business questions in this approach are: What is the opportunity, and how long is the path to value? What is the opportunity cost of making the wrong decision?
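To make the idea of shared data concrete, here is a minimal, purely hypothetical sketch in Python; the departments, records, and the customer_id key are invented for illustration and were not part of the conference discussion.

```python
# Purely illustrative: two departments hold data separately; joining it on a
# shared key answers a question neither dataset can answer alone.
# All names and figures are invented for this example.

sales = [  # held by the sales team
    {"customer_id": 1, "annual_revenue": 120_000},
    {"customer_id": 2, "annual_revenue": 45_000},
    {"customer_id": 3, "annual_revenue": 310_000},
]

support = [  # held by the customer-support team
    {"customer_id": 1, "open_tickets": 2},
    {"customer_id": 2, "open_tickets": 9},
    {"customer_id": 3, "open_tickets": 1},
]

# Build one shared view keyed on the common identifier.
tickets = {row["customer_id"]: row["open_tickets"] for row in support}
shared_view = [
    {**row, "open_tickets": tickets.get(row["customer_id"], 0)} for row in sales
]

# A cross-silo question: how much revenue sits with customers who have
# many unresolved support tickets?
revenue_at_risk = sum(
    row["annual_revenue"] for row in shared_view if row["open_tickets"] >= 5
)
print(f"Revenue at risk: ${revenue_at_risk:,}")
```

The point is not the code itself but the organizational agreement it presupposes: departments must share a common identifier and a common question before any AI layer can add value.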
4. Risks and liability
One of the biggest issues for businesses concerns risks and liability, such as not being able to document ROI or to show that your decisions create value for the business. This matters especially when AI and data management are driving your decisions, in particular short-term ones. Figuring out how to calculate value and how to attribute it back to the business has to go hand in hand with breaking down silos and getting everyone involved. Who takes credit, and how do we measure return? As people work with one another, how do we measure value accurately? And who assumes liability when things go wrong? One thing is clear: with the arrival of AI, the bar has been raised significantly.
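As a hedged, back-of-the-envelope illustration of what documenting ROI and attributing value can involve (all figures and the attribution split below are invented, and any real attribution model would have to be agreed across the departments involved):

```python
# Hypothetical sketch: first-year ROI for an AI project, with the net benefit
# credited across contributing departments. Numbers are invented for
# illustration only.

project_cost = 400_000       # licences, integration, training
annual_benefit = 650_000     # measured cost savings plus incremental revenue

net_benefit = annual_benefit - project_cost
roi = net_benefit / project_cost
print(f"First-year ROI: {roi:.1%}")   # 62.5%

# Who gets credit (and shares liability) must be agreed up front.
attribution = {"operations": 0.5, "sales": 0.3, "analytics": 0.2}
assert abs(sum(attribution.values()) - 1.0) < 1e-9, "shares must sum to 1"

for department, share in attribution.items():
    print(f"{department}: credited with ${share * net_benefit:,.0f}")
```

Even a toy calculation like this forces the questions raised above: which benefits are actually measurable, over what horizon, and who signs off on the attribution shares.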
How do business schools educate the next generation of leaders to make those tough decisions?
1. Setting expectations
Educators need to lead with a clear vision of how to use AI in their teaching. They have to communicate this vision effectively in terms of AI applications, performance expectations, and ethical considerations. A critical question here is the extent to which an AI tool like ChatGPT can assist those who teach and those who attend class, and whether it matters if they have domain knowledge or are AI novices. It is essential to examine AI’s impact on performance and develop appropriate evaluation metrics. Classroom standards must adapt when AI becomes universally accessible, as performance expectations will inevitably shift.
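As one possible, deliberately simple illustration of what examining AI’s impact on performance could look like, the sketch below compares scores on the same assignment completed with and without AI assistance; the scores, group sizes, and the choice of a plain mean comparison are assumptions for this example, not a recommended evaluation design.

```python
# Illustrative only: comparing assignment scores with and without AI
# assistance. Scores are invented; a real evaluation would need a proper
# design (randomization, a validated rubric, larger samples, and so on).
from statistics import mean, stdev

scores_without_ai = [68, 72, 75, 70, 66, 74, 71]
scores_with_ai = [80, 78, 85, 79, 82, 77, 84]

gap = mean(scores_with_ai) - mean(scores_without_ai)
print(f"Mean score without AI: {mean(scores_without_ai):.1f}")
print(f"Mean score with AI:    {mean(scores_with_ai):.1f}")
print(f"Average lift from AI:  {gap:.1f} points")

# If AI lifts everyone, absolute scores say less about individual performance;
# the spread within the AI-assisted group becomes the more informative signal.
print(f"Spread with AI (std dev): {stdev(scores_with_ai):.1f}")
```

The shift described above is visible even in this toy case: once everyone has access to AI, the baseline moves, and the interesting variation lies in how well students use the tool.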
Another challenge involves detecting AI usage in the classroom and setting clear expectations for its use. This calls for new rules for essays and guidelines on using tools like ChatGPT. It will be essential to strike the right balance between AI-generated output and students’ own effort in their interactions with AI, so that the class experience becomes one where generated information is put to work in authentic and applicable ways. Such an approach prepares students to recognize AI’s role in turning content into knowledge that has value both to themselves as leaders and to their organizations.
2. Learning how to use AI
A significant challenge is that individuals often struggle to determine when and how to use AI appropriately. They may misuse it, seeking feedback when they have already performed well rather than when they need improvement. AI is a valuable tool for those with a growth mindset, and one opportunity in the classroom is to teach students how and when to use AI effectively as their co-pilot. This is a crucial role for educators as classroom leaders.
3. Reskilling
Reskilling is a popular notion within organizations, and it is imperative that we start with it as early as possible, which means in the classroom. But what real value does reskilling deliver? The research is not entirely clear. Some studies suggest that AI is particularly helpful in getting novices up to speed, for example by giving them access to tacit knowledge embedded in the work practices of more experienced colleagues. In this case, AI could help level the playing field. On the other hand, business research has also provided evidence that AI is most useful to human experts, who are better positioned to decide when and how to use AI and then to evaluate and leverage its output. Lower-skilled workers are not in that position and run the risk of using AI in the wrong situations or in the wrong way. For instance, studies show that novice psychiatrists who rewrite their profiles with AI raise their prices too much because they overestimate the improvement AI delivers. Overall, if the returns flow mainly to existing skills, AI is likely to amplify existing trends and thus also increase inequality. Business schools need to examine which processes are most likely at play and create classroom experiences in which all students benefit from reskilling.
How will AI in business impact society?
Finally, AI deployment in businesses will also have a significant impact on society. If deploying AI lowers costs for businesses through automation and makes jobs disappear, what will the costs be to society, and how do we deal with this as business leaders? How do we ensure that businesses leverage the power of AI in ways that benefit not only the organization but all its stakeholders, including society at large? The socially responsible leadership that is needed will have to ensure that lay-offs do not translate into societal disasters, that the ambition to create more leisure time for everyone by deploying AI is meaningful to people, and that people remain clear about how they will spend their time while still being able to earn a living and thus contribute to the functioning of society.
Research shows that humans need to be challenged, so the responsibility of business leaders does not stop when employees are replaced by AI and receive a final compensation package. Businesses need to work with governments to create opportunities for human capital that will benefit society directly. This means that business leaders also need to be trained to see that business responsibilities include providing solutions to the bigger societal challenges, which in this era means thinking about the best way to democratize AI.
The main conclusion of our first Beyond Boundaries conference is that, across business, academia, and society, business leaders cannot escape their responsibility to think about both the short-term and long-term consequences of AI adoption and to use those reflections to guide how their organizations leverage AI. Leaders and organizations that understand this responsibility and develop the skill to act accordingly will be the ones that thrive.
About the Authors
Nada Sanders is an internationally recognized thought leader and expert on forecasting, global supply chains, risk and resilience, and human-technology integration, and is Distinguished Professor at the D’Amore-McKim School of Business at Northeastern University. She is the author of seven books, including The Humachine: Humankind, Machines, and the Future of Enterprise, 2nd ed. (Routledge, 2024), is highly published in leading scholarly journals, and was ranked in the top 2% of scientists by the Stanford study. She is a Fellow of the Decision Sciences Institute, has served on the boards of directors of the International Institute of Forecasters (IIF) and the Decision Sciences Institute (DSI), and is a former president of the Production and Operations Management Society (POMS), an organization that in 2020 created an award in her name for her contributions. She is a frequent keynote speaker, has consulted with numerous Fortune 100 companies, and serves on the Board of Economic Advisors of the Association of Industries of Massachusetts (AIM).
Koen Pauwels is the Associate Dean of Research and Distinguished Professor at Northeastern University and founding General Director of its Digital, Analytics, Technology and Automation (DATA) Initiative. He was a Principal Research Scientist at Amazon Ads, where his brand-building and budget-allocation recommendations reached hundreds of thousands of advertisers. Koen received his Ph.D. from UCLA, where he was chosen as a Top 100 Inspirational Alumnus. After getting tenure at the Tuck School of Business at Dartmouth, he helped build the startup Ozyegin University in Istanbul. Named a worldwide top 2% scientist and ‘The Best Marketing Academic on the Planet’, Koen has published over 100 articles on marketing effectiveness, and this research has won awards from both managers and academics. He is the editor-in-chief of the International Journal of Research in Marketing. His books include Modeling Markets and Advanced Methods for Modeling Markets for analysts, and Break the Wall: Why and How to Democratize Digital in Your Business and It’s Not the Size of the Data – It’s How You Use It: Smarter Marketing with Analytics and Dashboards for managers.
Christoph Riedl ([email protected]) is professor of Information Systems at the D’Amore-McKim School of Business, Northeastern University. He obtained his PhD from the Technische Universität München, Germany. Dr. Riedl’s research focuses on collective intelligence, crowdsourcing, and collaboration in human-AI teams.
Rick J. Arrowood is a Lecturer at the D’Amore-McKim School of Business at Northeastern University and a Visiting Professor in the Executive MBA program at the University of Central Punjab in Lahore, Pakistan. He specializes in Management, Organizational Development, and Leadership, with a research focus on the behavioral aspects of technology use by faculty and students in both traditional and online classrooms. Rick also consults for healthcare technology firms in the US and Pakistan. He has authored numerous case studies, has published in leading journals, and serves on the editorial board of the Journal of Education and Education Policy Studies. His previous roles include serving on various nonprofit boards and teaching in Northeastern University’s dual master’s degree programs in Australia and Vietnam. Additionally, he founded The Leader Brew Podcast to share the stories of former students transitioning from the classroom to the real world.
David De Cremer is the Dunton Family Dean and professor of management and technology at the D’Amore-McKim School of Business, Northeastern University (Boston). He is the founder and former director of the Center on AI Technology for Humankind in Singapore and an advisory board member at EY (formerly Ernst & Young) for their global AI projects. Before moving to Boston, he was a Provost’s chair and professor in management and organizations at NUS Business School, National University of Singapore, and the KPMG-endowed professor in management studies at Cambridge University. He has been named one of the world’s top 30 management gurus and speakers by the organization GlobalGurus, included on the Thinkers50 list of 30 next-generation business thinkers, and is continuously ranked among the world’s top 2% of scientists. In addition to being highly published in the leading management and psychology journals, he is a best-selling author; his latest book, The AI-savvy Leader: 9 Ways to Take Back Control and Make AI Work (Harvard Business Review Press, 2024), was named book of the month for June 2024 by the Financial Times, selected as a must-read for summer 2024 by the Next Big Idea Club, and a #1 new release at amazon.com.
References
- Aksehirli, Z., Bart, Y., Chan, K., and Pauwels, K. (2022). Break the Wall: Why and How to Democratize Digital in Your Business. Emerald Publishing.
- De Cremer, D. (2024). The AI-savvy Leader: 9 Ways to Take Back Control and Make AI Work. Harvard Business Review Press.
- De Cremer, D., and Kasparov, G. (2021). AI Should Augment Human Intelligence, Not Replace It. Harvard Business Review. https://hbr.org/2021/03/ai-should-augment-human-intelligence-not-replace-it
- Jones, A., and Miller, B. (2023). AI in Education: Navigating Ethics and Performance. Journal of Educational Technology, 15(2), 103-115.
- Kiron, D., Altman, E. J., and Riedl, C. (2023). Workforce Ecosystems and AI. Brookings Institution. https://www.brookings.edu/research/workforce-ecosystems-and-ai/
- Pauwels, K. (2014). It’s Not the Size of the Data – It’s How You Use It: Smarter Marketing with Analytics and Dashboards. AMACOM.
- Sanders, N., and Wood, J. (2024). The Humachine: AI, Human Virtues, and the Superintelligent Enterprise. Routledge.
- Smith, C. (2022). The Role of AI in Modern Classrooms: Challenges and Opportunities. Educational Review, 29(4), 467-482.
- Williams, D., and Harris, E. (2021). Integrating AI into Teaching Practices: A Guide for Educators. Teaching Innovations, 18(1), 89-101.