Outlook

ChatGPT – Challenges and Social Implications

by M. Demurtas

It would be almost unthinkable to present a series of articles dedicated to AI and ChatGPT without discussing, to some extent, the social implications, dangers, and new challenges that the recent ease of public access to AI could present to society. Many thought leaders have weighed in on this sensitive topic.

To keep a pragmatic approach, we can address this topic on two fronts: the social implications and challenges that result from the current level of AI coupled with its ease of access, and the possible challenges we are likely to face with the highly anticipated evolution towards Artificial General Intelligence (AGI). AI as we currently know it, which can be referred to as Narrow AI, generates exceptional results in the (narrow) field for which it has been trained. ChatGPT is exceptional at composing text but cannot drive a car, whilst an autopilot can (almost) drive a car but cannot write. A hypothetical AGI, on the other hand, would excel in every field, equaling (and possibly surpassing) human capabilities.

Narrow AI has already demonstrated impressive results in many fields, and it is impossible to deny that it will continue to have an important impact on society. Those impacts, however, come with their fair share of short-term organizational challenges and long-term social implications that require us to adapt and to rethink societal policy and regulation.

Among the short-term organizational challenges, we can easily mention issues related to privacy, confidential information, copyright, and even the protection of children and minorities. These issues are particularly important when it comes to AI simply because of the way AI works: as we’ve seen in the previous articles of this series, AI is trained on a very large volume of data. This in itself presents a first issue: where do these large data sets come from? Can one be so carefree as to simply collect data from the public internet? Once the data has been crunched and the algorithm trained, the AI output is somewhat of a weighted average of the collected data, with no easy way to control the final result. An important part of recent AI advancements is based on proprietary ways to train and tune these neural networks. This creates a series of problems: who owns the AI output, if it leverages the work of others? Who is accountable for that output, if it is deemed offensive or dangerous to others? And how can someone withdraw their data or their work, previously available on the internet, once it has already been used as training data (the European right to erasure, for example)?
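To make the “weighted average” intuition concrete, consider the following toy sketch in Python. It is a deliberately simple kernel-weighted average, not how an LLM actually works internally, and all data values are invented for illustration; still, it shows why an individual contribution is hard to remove once training has happened.

    import math

    # Toy illustration of the "weighted average of the collected data" intuition.
    # NOT how an LLM works internally; all values are invented for illustration.
    training_data = [(1.0, 2.0), (2.0, 3.5), (3.0, 6.0), (4.0, 8.5)]  # (input, output) pairs

    def predict(x, bandwidth=1.0):
        # Each training pair contributes with a weight that decays with its distance from x
        weights = [math.exp(-((x - xi) ** 2) / (2 * bandwidth ** 2)) for xi, _ in training_data]
        # The prediction blends ALL training outputs, weighted by proximity
        return sum(w * yi for w, (_, yi) in zip(weights, training_data)) / sum(weights)

    print(predict(2.5))  # every training point contributes to this single output

Every prediction mixes in every training example to some degree, which is why “forgetting” a single piece of training data after the fact, as the right to erasure would require, is far from trivial.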

The short-term challenges are probably the easiest to overcome: as often happens in the technology field, regulators have stepped in a little late, but we can assume that, in time, the regulatory issues will be addressed (until the technology evolves further and new issues arise). In the meantime, some countries have decided to limit public access to certain types of AI tools (such as ChatGPT).

The long-term challenges, however, will be more difficult to solve, and these are likely to require a certain level of social adaptation too. As it stands today, Narrow AIs excel in several tasks that were once considered achievable only by humans, and we can expect them to continue to improve with time. Generative AI is good at writing articles and essays; at producing images and video from a given textual description; at producing research summaries on a given topic; or at writing (or correcting) computer code, for example. Sure, we could argue that in any given field we humans are better than any Narrow AI, but this is probably just temporary. Above all, we need to be honest with ourselves: even if we are theoretically better at a task than any Narrow AI, for all practical purposes, given the lack of time, resources, and interest, we often don’t produce our best work, and our ‘average’ results are often not too different from AI results.

It is not uncommon to hear that most current jobs will be affected by AI; it is more difficult, however, to predict how each job, and each person, will be affected. Extremists predict a dire future for humanity, with AI taking over all jobs and all compensation from the common worker, and ultimately with a few corporations (the AI owners) collecting all the profits and ruling the world. The outlook need not be so bleak: let’s not forget that the term “computer” initially referred to a person whose key job function was to compute, that is, to perform calculations for others. Big banks and insurance companies used to have entire floors filled with “human computers” who would perform calculations all day long. The adoption of digital computers didn’t result in the widespread unemployment one might have expected; instead, human computers evolved in their job functions.

Will the massive adoption of AI result in something similar? Looking back at the major innovations of human history, we can probably assume so. All of them initially caused fear, but the long-term result has usually been an improvement in the general quality of life. Surely certain jobs will disappear, most jobs will have to change, and many more will be created; it is up to us to manage the transition and the adaptation as best we can.

Different considerations must be made regarding Artificial General Intelligence (AGI), sometimes also referred to as ASI (Artificial Super Intelligence). Contrary to Narrow AI, AGI is a single hypothetical artificial intelligence capable of matching and surpassing human intelligence in every field and in every task. In the most extreme scenario, an AGI that also controls its means of production can improve itself at every iteration, thus creating a better AGI at every step. It is easy to see that in this scenario the improvements become faster and faster: a more intelligent machine will more easily discover how to create an even more intelligent machine, in an endless loop. Some experts have coined the term ‘singularity’ to describe the end point of this scenario, where the only being left is this super (artificial) intelligence, with no room remaining for (inferior) human intelligence. Some will argue that this is a step forward in the evolution of intelligence, one in which human intelligence may in fact no longer be required.

To keep the discussion at a more practical level, let’s consider the following: there is currently no consensus as to whether the evolution we see in Narrow AI will ever lead to AGI. Will ChatGPT 10 ever evolve into a general intelligence? If we may speculate, it appears that it won’t. The most recent discoveries in neurology show that the brain is far more complex than the perceptron model, which is still the foundation of current AI. Furthermore, let’s be mindful that current LLMs have no real awareness capabilities (i.e., they don’t really understand the meaning of their output).
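For reference, the perceptron mentioned above is a very simple model of a neuron: a weighted sum of inputs followed by a threshold. A minimal sketch in Python (with hand-picked, illustrative weights rather than learned ones) could look like this:

    # A minimal perceptron: the historical building block of today's neural networks.
    # The weights and bias below are hand-picked for illustration, not learned.

    def perceptron(inputs, weights, bias):
        # Weighted sum of the inputs, plus a bias term
        activation = sum(x * w for x, w in zip(inputs, weights)) + bias
        # Step activation: the artificial neuron either fires (1) or stays silent (0)
        return 1 if activation > 0 else 0

    # Example: with these values, the perceptron implements a logical AND
    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "->", perceptron([a, b], weights=[0.6, 0.6], bias=-1.0))

A biological neuron, by contrast, involves complex temporal and chemical dynamics that this simple weighted sum does not capture, which is precisely the point made above.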

If we were ever to evolve to AGI levels, many already argue that we simply shouldn’t allow any AGI to control its means of production, so that it could never replicate itself. To that extent, some also argue that we will never be able to confine a super intelligence: it will ultimately take on a life of its own, in the same sense that it is an illusion to believe one has coded a bug-free application. If one believes an application is bug-free, it is only because one hasn’t analyzed it sufficiently.

Exciting times lie ahead of us, and that is one of the few certainties we have! Our next and final article in this series will take us back to the present, and to the main topic of Narrow AIs. There we’ll discuss current AI applications and tools that just about anyone can leverage to make their jobs more efficient (and hopefully more fun!).

References

https://en.wikipedia.org/wiki/Open_Letter_on_Artificial_Intelligence

https://www.gatesnotes.com/The-Age-of-AI-Has-Begun

https://www.economist.com/by-invitation/2023/04/28/yuval-noah-harari-argues-that-ai-has-hacked-the-operating-system-of-human-civilisation

https://en.wikipedia.org/wiki/The_Singularity_Is_Near

https://en.wikipedia.org/wiki/Instrumental_convergence#Paperclip_maximizer
