Dumb and Dumber with AI

Without you knowing, AI will make you dumber.

In 2023, yours truly wrote that 'AI will separate faster the smart from the stupid'.

The same year, in ‘LinkedIn on its way to Artificial Stupidity’ I wrote about this post from LinkedIn's product manager:

'When it comes to posting on LinkedIn, we’ve heard that you generally know what you want to say, but going from a great idea to a full fledged post can be challenging and time consuming. So, we’re starting to test a way for members to use generative AI directly within the LinkedIn share box.'

Earlier this month, an MIT study quantified the negative impact of AI on our cognitive ability.

What was the study about?

Researchers recruited participants and divided them into three groups: the LLM group, the Search Engine group, and the Brain-only group. The task was to write essays on topics including Art, Choices, Courage, Forethought, Loyalty, Perfect, and Philanthropy. The objective was to find out 'the cognitive cost of using an LLM in the educational context of writing an essay.'

As you might have guessed, the first group was required to use an LLM (e.g. ChatGPT), the second group used Google to find supporting documents, and the third group could use only their brains.

The results? I'd say even worse than expected.

The researchers used several techniques to quantify the outcomes. They 'used electroencephalography (EEG) to record participants' brain activity in order to assess their cognitive engagement and cognitive load, and to gain a deeper understanding of neural activations during the essay writing task.' The EEG analysis demonstrated that 'Brain connectivity systematically scaled down with the amount of external support …'

The conclusion - 'The use of LLM had a measurable impact on participants, and while the benefits were initially apparent, as we demonstrated over the course of 4 months, the LLM group's participants performed worse than their counterparts in the Brain-only group at all levels: neural, linguistic, scoring.'

As you can imagine, participants using an LLM generated their essays much, much faster than those who had to write on their own. However, the LLM users were unable to recall any detail from the essays they had just written.

The other result? The essays generated with the LLM were recognizably similar and, most importantly, devoid of any new, original, deep insight. An LLM returns the most probable answer based on its training data. Also notable: the essays written with the help of Google search reflected the set of results the search query returned.

The implications are significant.

Researching a topic, reading articles, and debating it can lead to unique insights. In contrast, using ChatGPT makes you measurably dumber and boringly average.

Additionally, any content produced this way and published online becomes new training data for LLMs, which will only reinforce this unoriginal content in subsequent queries; thanks to probability, anything genuinely new and original will never surface in the answer.

It also makes any prediction that LLMs can be the basis for Artificial General Intelligence (AGI) or Artificial Superintelligence (ASI), well - delusional.

Furthermore, the call to introduce AI into the education system will only create the illusion that learning is happening. Worse, it will create total dependency on a technology whose inner workings nobody really understands and whose accuracy has repeatedly been proven impossible to verify.

The recurrent pattern? Two years later and things are getting worse. That's why I research and write these articles - it forces me to think differently.
