The case of the forgetful AI
Every day, AI is discussed in countless articles. My recent posts about conscious AI and self-driving cars are just two examples, my contribution to a much bigger conversation about the vision of inventors and the dream of scientists to mimic the human brain and its functionality. It is all about building the ultimate system: one that can learn anything.
But there is another aspect of the human brain that we, as humans, understand even less: the magic of forgetting.
We are training algorithms to recognize images of people, animals, plants, and objects. We have algorithms that can absorb large amounts of text and repeat it back to us. Once trained, an algorithm should never forget.
But what happens when the information it remembers becomes obsolete or irrelevant? What happens when we realize that the information is inaccurate?
One type of AI forgetting happens while you are training the algorithm on new information: the new training overwrites what the system learned before, making it forget everything it knew. It is called catastrophic interference, or catastrophic forgetting.
How does this happen? Imagine you are training an algorithm to recognize dogs. Then you feed it an image of a cat, and it can no longer recognize dogs at all. To cat lovers that may sound like a reasonable outcome. But if the same algorithm were used in a self-driving car, your Tesla could suddenly become an off-road truck.
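The dog-then-cat story can be sketched with a toy example (everything here is made up for illustration, not how any real system is built): a tiny logistic regression is trained on one labeling of the data, then training continues on a conflicting labeling, and its accuracy on the first task collapses.

```python
import math

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

def train(data, w=0.0, b=0.0, lr=0.5, epochs=200):
    """Plain stochastic gradient descent on the logistic loss."""
    for _ in range(epochs):
        for x, y in data:
            p = sigmoid(w * x + b)
            w -= lr * (p - y) * x
            b -= lr * (p - y)
    return w, b

def accuracy(data, w, b):
    return sum((sigmoid(w * x + b) > 0.5) == bool(y) for x, y in data) / len(data)

# Task A ("dogs"): positive inputs belong to class 1
task_a = [(x / 10, 1) for x in range(1, 11)] + [(-x / 10, 0) for x in range(1, 11)]
# Task B ("cats"): the very same inputs, but with conflicting labels
task_b = [(x / 10, 0) for x in range(1, 11)] + [(-x / 10, 1) for x in range(1, 11)]

w, b = train(task_a)
print(accuracy(task_a, w, b))   # close to 1.0: task A is learned

w, b = train(task_b, w, b)      # keep training the SAME weights on task B only
print(accuracy(task_a, w, b))   # close to 0.0: task A is "forgotten"
```

The model never sees task A again while learning task B, so the gradient updates freely overwrite the weights that encoded task A. That interference, not any explicit "delete" operation, is what makes the forgetting catastrophic.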
The other type of forgetting is intentional: you tell the algorithm which data to forget and when to forget it. That is a major challenge for researchers, because by the time they finish the experiment, they can't remember why they started the project in the first place. One of these projects is running at Meta (coincidentally, a company that is trying to get you to forget its original name), which is designing a system meant to forget at scale.
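Nothing public describes how such a system would actually work, but the naive version of intentional forgetting can be sketched as "unlearning by retraining": keep the training examples around, and when told to forget some of them, delete them and let the model be recomputed from what remains. A toy nearest-centroid classifier (all names hypothetical):

```python
class UnlearnableClassifier:
    """Toy 1-D nearest-centroid classifier that supports naive 'exact
    unlearning': forgetting an example removes it from the stored data,
    and predictions are recomputed from the remaining examples."""

    def __init__(self):
        self.data = []  # list of (x, label) training examples

    def add(self, x, label):
        self.data.append((x, label))

    def forget(self, x, label):
        # Intentional forgetting: drop the example entirely, so it can
        # no longer influence any future prediction.
        self.data.remove((x, label))

    def predict(self, x):
        # Classify by the closest per-label centroid (mean of examples).
        centroids = {}
        for label in {l for _, l in self.data}:
            points = [v for v, l in self.data if l == label]
            centroids[label] = sum(points) / len(points)
        return min(centroids, key=lambda l: abs(centroids[l] - x))

clf = UnlearnableClassifier()
clf.add(0.0, "cat"); clf.add(1.0, "cat")
clf.add(9.0, "dog"); clf.add(10.0, "dog")
print(clf.predict(9.5))          # "dog" — the dog examples are still there
clf.forget(9.0, "dog"); clf.forget(10.0, "dog")
print(clf.predict(9.5))          # "cat" — with the dog data gone, dogs no longer exist
```

This works only because the "model" is trivially recomputable from stored data. Doing the same for a large trained neural network, where individual examples are smeared across millions of weights, is exactly the hard part the post is poking fun at.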
It is a futile undertaking.
Forgetting is a highly personal experience. It is a curse and a blessing at the same time. Your brain's forgetting mechanism works in mysterious ways that are not well understood. Trying to develop an algorithm to mimic it doesn't have much prospect of success. In fact, it's unclear how we could even measure success.
Will scientists proclaim that their algorithm can forget better than the competition's? I can see the newspaper headline already: 'After years of training, our AI is stupid again'. This will become the next trend: designing AS, Artificial Stupidity. You can't have one without the other.
I'll leave you with an article a colleague of mine sent me recently about Terry Winograd (with a title I'm sure you can appreciate: Siri, Who Is Terry Winograd?). It does an excellent job of describing the conflict of '... the schism between the devotees of artificial intelligence, such as Minsky, who quipped that robots would “keep us as pets,” and the community of intelligence augmentation, IA, whose members believed that computers should aid humanity, not supplant it...'
Yes, we can become robots' pets if we surrender our decision making and forget who we are. That would then become the only pattern.