By Marcelina Horrillo Husillos, Journalist and Correspondent at The European Business Review
Is the Chatbot Adding Knowledge or Leading Us to Stupidity? AI and the Neo-colonisation Issue
Large language models (LLMs) depend on enormous amounts of social data extracted from unwitting subjects in online information environments all around the world. They search for patterns in that data and become increasingly proficient at generating statistically probable outputs, including seemingly human-like language and thought.
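To make the phrase “statistically probable outputs” concrete, here is a minimal, hypothetical sketch in Python of the underlying idea: a toy bigram model that counts which words follow which in a tiny corpus, then samples continuations in proportion to those counts. Real LLMs replace the counting with neural networks over billions of parameters, but the principle – predicting the next token from observed patterns, with no understanding involved – is the same. The corpus and all names here are purely illustrative.

```python
import random
from collections import defaultdict, Counter

# A tiny stand-in for the web-scale text an LLM is trained on.
corpus = "the cat sat on the mat and the cat ate the fish".split()

# Count how often each word follows each other word (bigram statistics).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(prev: str) -> str:
    """Sample a next word in proportion to how often it followed `prev`."""
    counts = follows[prev]
    if not counts:  # dead end: `prev` never appeared mid-corpus
        return random.choice(corpus)
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights, k=1)[0]

# Generate a "statistically probable" continuation of a prompt:
# pattern frequency, not comprehension.
word = "the"
output = [word]
for _ in range(6):
    word = next_word(word)
    output.append(word)
print(" ".join(output))
```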
Language AIs also draw on a great deal of human labour to decide what is and isn’t a good answer. They require human feedback to determine what counts as toxic content and to address challenges such as bias, fabrication, contradiction, and inaccuracy, resulting in more accurate and reliable language generation.
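As a rough illustration of how that human feedback enters the pipeline, the hypothetical Python sketch below stores human labels on model outputs and uses them to screen future answers. Production systems are far more sophisticated – reinforcement learning from human feedback (RLHF) trains a reward model on such labels rather than keeping a block list – but the dependence on human judgement is the same. All data, names, and functions here are assumptions for illustration only.

```python
# Hypothetical sketch: human annotations steer what the system may say.
# Real pipelines (e.g. RLHF) train a reward model on labels like these
# instead of consulting a simple block list.

human_labels = [
    ("Here is a balanced summary of the article ...", "good"),
    ("You people are worthless and ...",              "toxic"),
    ("The capital of Kenya is Nairobi.",              "good"),
    ("<graphic description of violence>",             "toxic"),
]

# A crude "toxicity memory" built from the human annotations.
blocked = {text for text, label in human_labels if label == "toxic"}

def moderate(candidate: str) -> str:
    """Return the candidate answer unless human reviewers flagged it."""
    if candidate in blocked:
        return "[answer withheld: flagged as toxic by human reviewers]"
    return candidate

print(moderate("The capital of Kenya is Nairobi."))
print(moderate("<graphic description of violence>"))
```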
The Chat Generative Pre-trained Transformer, better known as ChatGPT and developed by the company OpenAI, is trained on immense quantities of junk public text from the internet; it cannot compare, analyze, or evaluate arguments or information on its own, and it is incapable of moral reasoning. The rise of cheap, easy-to-use generative AI tools with no boundaries on their deployment, combined with their growing influence over public opinion, is creating the conditions for a perfect misinformation storm.
AI Colonialism
A recent investigation published in Time magazine revealed that hundreds of Kenyan workers spent thousands of hours reading and labeling racist, sexist, and disturbing writing from the internet, including graphic descriptions of sexual violence, to teach ChatGPT not to copy such content. They were paid no more than US$2 an hour, and many unsurprisingly reported experiencing psychological distress due to this work.
AI-based algorithms are mainly developed in the United States – which has access to abundant data – and then scaled globally, reinforcing the supremacy of the English-speaking world. This creates a kind of colonial hazard, reshaping people’s perceptions and deepening their dependency on the technology.
Algorithms have been described as “opinions embedded in code”. AI has already been accused of underestimating the health needs of Black patients and of making it less likely that people of color will be approved for a mortgage.
Indeed, machine learning research is overwhelmingly male and white, a demographic a world away from the diverse communities it purports to help. And Big Tech firms don’t just offer online diversions; they hold enormous power to shape events in the real world.
Decolonizing AI is essential if it is to achieve its potential for public good in other areas as well. The colonial erasure of communities has led to the same sorts of under-representation in contemporary national statistics, raising similar challenges for the development of AI in the public sector, as Dr. Mahlet Zimeta has argued.
Mimicking the Human
Despite the name “artificial intelligence,” large language models are actually really dumb, stuck in a “pre-human” or “nonhuman” phase of cognitive evolution. They are completely dependent on human knowledge and labour: many, many human workers are hidden behind the screen, and they will always be needed if the model is to keep improving or to expand its content coverage. LLMs cannot reliably generate new knowledge, and so they lack the most critical capacity of intelligence; they are also incapable of moral reasoning.
By contrast, the human mind is a surprisingly efficient, creative, and sophisticated system that operates with small amounts of information. It seeks not to infer brute correlations among data points but to create explanations. It also brings emotion to its reasoning as it works to resolve the moral and ethical questions intrinsically connected to our societies’ core values.
We know from the science of linguistics and the philosophy of knowledge that LLMs differ profoundly from how humans reason and use language. By incorporating a fundamentally flawed conception of language and knowledge into our technology, they degrade our knowledge and debase our ethics.
Chomsky sees the use of ChatGPT as “basically high-tech plagiarism” and “a way of avoiding learning”. It exhibits something like the banality of evil: plagiarism, apathy, and obviation. It either over-generates (producing both truths and falsehoods, endorsing ethical and unethical decisions alike) or under-generates (showing no commitment to any decision and indifference to consequences).
Given the amorality, faux science, and linguistic incompetence of these systems, we can only laugh or cry at their popularity.
Ethics
If you ask ChatGPT a question about its own leading role in a technological revolution, the first part of its answer is fairly unimpressive: “It’s underway and expected to have a significant impact on many sectors.” However, it goes on to say, “Its potential is enormous, but also raises ethical concerns.” That is a feeling most of us share.
ChatGPT tends to answer requests based on its text sources – datasets of internet content, including webpages, books, essays, and other publicly available texts – without crediting or referencing the sources of that information. This raises serious copyright concerns, particularly in light of ChatGPT’s widespread use in academia and content creation.
The critical question, then, is who owns the intellectual property in this AI-generated language content? Are LLMs guilty of copyright infringement? Can one sue ChatGPT? Are users culpable for content crafted by machine learning?
Conclusion
AI language tools like ChatGPT, built on large language models fed vast amounts of data taken from the internet, have become increasingly popular in recent years, with applications ranging from chatbots to content generation. However, many of these products were barely tested before their release into society, and they remain largely unregulated.
Several legal and ethical implications, along with other philosophical questions, will arise as time passes. Setting up AI consumer-protection laws and AI regulators to act as watchdogs responsible for oversight is therefore crucial to protect users and societies from potential AI-driven breaches involving algorithms and data. For now, it is too early to identify all the legal and ethical implications of these tools; more issues will emerge with use and time.