As Stephan Kudyba explains, generative AI affects companies in different ways: it all depends on the data they rely on.
As the rollout of generative AI gains momentum, the debate about its effect on the labour force is intensifying. Automated document and code creation and the production of innovative images have industries of all kinds on edge, perplexed about how this technology can be used to enhance operational efficiency and work productivity by reducing time and, possibly, labour inputs.
Before we throw in the towel on the importance of humanity, we need a reality check on what using generative AI entails, and what it takes to operationalise the platform as part of everyday activities. The technology has introduced dramatic functionality: it processes user prompts against established data resources and creates content corresponding to user directives, all in a matter of seconds. Although this functionality seems cutting edge, versions of it have existed for years. The truly impressive elements of current large language models (LLMs), however, are the speed and processing capability they bring to extremely large data resources.
The Driver of Disruption
The key to determining how generative AI will affect the viability of a company's strategic focus, and the workforce underlying it, is the data the company utilises to provide value to its customers. Remember, AI gains its value from the data it can access and analyse to produce output. The platforms released to the marketplace so far access data resources that exist in the open digital space, as well as those created by data-gathering entities.
Organisations that largely rely on these data can gain greater value from generative AI, since it can quickly process and produce custom output that addresses worker and consumer needs. Unfortunately, this also increases a company's exposure to disruption from competitive forces. The logic follows that LLMs can produce creative output in a fraction of the time humans require. Jobs grounded in leveraging open data resources to produce content for consumers are at risk, given that general users can access generative AI and create that content themselves.
Disruption…the Driver of Innovation
This may be the reason behind Chegg Inc.'s recent claim that ChatGPT has disrupted its business, as the company leverages available online data along with expert input to produce content that aids students' educational activities.1 Students may now turn to generative AI themselves to create the desired information. The response to this disruption, however, has been a pulse of innovation, as the company seeks to produce its own generative AI-based system, adding proprietary data resources and more advanced interaction with customers.
The interplay of disruption and innovation is also illustrated by CarMax, which took the proactive step of working with generative AI providers to optimise available data resources, supplemented with additional data, to produce valuable web content for its customer base in a fraction of the time it previously took.2 This proactive approach helps mitigate the disruptive force that arises from relying solely on open data resources (data on used cars) accessible to general users of generative AI.
Both examples describe the exposure to disruption of organisations that depend on open data resources. These disruptions can reduce the labour inputs needed to produce custom content for consumers. However, the innovative reactions can create new work in the form of:
- generative AI prompters who create custom output
- editors of output created by generative AI
- SME work groups that seek to optimise data inputs to enhance information output.
The Net Creation of Work if Internal Data Is Essential
A major question at this juncture, regarding job displacement, is the operationalisation of LLMs within organisations. LLMs have shown their value in creating limited text- and code-based content, where users then need to edit the generated output.
This can no doubt replace more routine job functions (e.g., writing blogs, preparing marketing blurbs, producing code for a particular application, creating images). However, operationalising generative AI as an integral part of an organisation's technical infrastructure may actually introduce more work and human input. Two tactics that have evolved to address this include fine-tuning and in-context learning.
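Of the two tactics, in-context learning is the lighter-weight one: rather than retraining the model, an organisation injects its own material directly into the prompt so the model can ground its answer in it. The sketch below is a minimal, hypothetical illustration of that prompt-assembly step; the document snippets and the prompt template are illustrative assumptions, not any vendor's actual API.

```python
# A minimal sketch of in-context learning: internal documents are placed
# directly into the prompt so the model answers from that supplied context.
# The template wording and sample documents are illustrative assumptions.

def build_prompt(question: str, internal_docs: list[str]) -> str:
    """Assemble a prompt that supplies internal context ahead of the question."""
    context = "\n\n".join(
        f"[Document {i + 1}]\n{doc}" for i, doc in enumerate(internal_docs)
    )
    return (
        "Answer the question using only the context below.\n\n"
        f"{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

docs = [
    "Q2 refund policy: purchases may be returned within 30 days.",
    "Premium members receive an extended 60-day return window.",
]
prompt = build_prompt("How long do premium members have to return an item?", docs)
```

The assembled string would then be sent to an LLM; the model never sees the internal documents except through this prompt, which is why curating what goes into the context becomes new human work.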
The true balance of work creation versus disruption becomes evident when organisations require not only open/public data resources to produce relevant content, but also data that is internal to the organisation. Existing generative AIs have been designed to leverage data that is open to the marketplace, not the more sensitive internal resources of particular companies. This introduces a significant hurdle to operationalising LLMs, that is, putting generative AI to everyday use to produce value for the company and the market it serves. These companies need to devote significant work resources to organising existing and real-time data that is integral to creating credible output.
Consider the financial/investment industry (e.g., Morgan Stanley), which is looking to leverage LLMs to create custom information on wealth management issues.3 To create credible content, generative AI must access and process a multitude of market data, including data residing in the open market and data produced internally through the work of systems and employees, where the currency, or timeliness, of data is essential. Will generative AI significantly reduce jobs in this industry by replacing the ability to produce credible wealth management content? Or will the result be new work, and corresponding labour inputs, to accomplish the new and augmented tasks of operationalising generative AI?
Organisations that intend to use generative AI as an operational platform requiring access to both internal and open/public data may face increased work reorganising internal data resources (e.g., categorisation, storage, naming, codification) so they can be accessed by generative AIs. This involves data professionals and SMEs: individuals with technical skills and domain knowledge. Other new work entails prompt engineers, individuals with the skills to optimise generative AI prompts to create the desired output. Additionally, that output has to be verified for relevance, accuracy, and timeliness, which again requires SME domain knowledge of the content produced; this is the job of content editors.
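The reorganisation work described above (categorisation, naming, codification, timeliness) can be pictured as giving every internal record enough metadata for the system to select only relevant, current material before it ever reaches the model. The sketch below is a hedged illustration under assumed field names; it is not a real schema, just one way the curation step could look in code.

```python
# A hedged sketch of organising internal data for generative AI access:
# each record carries a codified name, a category, and an update date,
# so only relevant, current material is selected as model context.
# Field names and sample records are illustrative assumptions.
from dataclasses import dataclass
from datetime import date

@dataclass
class InternalRecord:
    name: str        # codified identifier assigned by data professionals
    category: str    # e.g. "pricing", "policy", "research"
    updated: date    # supports the timeliness checks content editors perform
    text: str

def select_context(records, category, not_older_than):
    """Return record texts in a category that are recent enough to be credible."""
    return [
        r.text
        for r in records
        if r.category == category and r.updated >= not_older_than
    ]

records = [
    InternalRecord("mkt-2023-01", "pricing", date(2023, 1, 10), "Old price list."),
    InternalRecord("mkt-2023-05", "pricing", date(2023, 5, 1), "Current price list."),
]
context = select_context(records, "pricing", date(2023, 3, 1))
```

Designing and maintaining this metadata layer, and deciding what counts as "recent enough", is precisely the kind of new SME and data-professional work the paragraph describes.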
The Degrees of Disruption and the New Effect on Work
The key message is that generative AI can no doubt disrupt and reduce the human element in basic tasks. This may render certain jobs obsolete or may elevate previous job roles to higher-value-creating activities.
Organisations that heavily rely on open digital-market data resources, re-bundled by generative AI to provide value to their consumer base, are at risk of disruption, as generative AI can automate much of this work. The response is to innovate and enhance the value of the content they produce for consumers. Again, this is disruption balanced against innovation, or the loss of some jobs and the creation of others.
The last example involves companies that require the input of custom internal data and must operationalise LLMs in their workflows. The balance for this sector seems to favour an increase in work and human input: optimising internal data resources, prompt engineers, and content editors maintaining a quality workflow measured by the production of relevant, accurate, and timely information.
Will the world experience dramatic reductions in the need for the human element? Don’t count out the knowledge and innovation of individuals.
About the Author
Stephan Kudyba is a professor of analytics and information systems at the Martin Tuchman School of Business, New Jersey Institute of Technology. He has held senior management positions at prominent organisations and has been a researcher, professor, and practitioner of AI applications in business for over 20 years. Dr Kudyba has written articles for Harvard Business Review, MIT Sloan Management Review, InformationWeek and The Wall Street Journal to name a few, and has published eight books addressing technology, analytics, and organisational performance. Email: [email protected]
References
- 1. Singh, M. "Edtech Chegg tumbles as ChatGPT threat prompts revenue warning", Reuters, May 2, 2023.
- 2. Rooney, P. "CarMax drives business value with GPT-3.5", CIO Magazine, May 5, 2023.
- 3. Davenport, T. "How Morgan Stanley Is Training GPT To Help Financial Advisors", Forbes Magazine, May 20, 2023.