
The Potential Risks of AI
Published 15th April, 2023

By Michael Dubois, Head of Corporate at VG Global Holdings 

I have written before about the transformative potential of new A.I. technology, and many of the scientists, programmers and executives working in this field firmly believe that the technologies they are creating will improve our lives.

However, many analysts and experts have been warning for years about a darker scenario, in which our creations do not follow our instructions, or interpret them in unpredictable ways, with potentially dire consequences. Many experts are now discussing the idea of “alignment,” the practice of ensuring that A.I. systems act in line with human values and goals.

There are some telling experiments that have already been performed. Before GPT-4 was released, OpenAI asked a study group to test the system. The group found that it was able to hire a human online to defeat a Captcha test, and when the human asked if it was “a robot,” the system, unprompted by its testers, lied and said it was a person with a visual impairment. The experiment also showed that the system could be coaxed into suggesting how to buy illegal firearms online, and into describing ways to make dangerous substances from household items. OpenAI has since made changes to ensure that the system can no longer do these things.

However, it is impossible to eliminate every risk. The system learns from data and develops skills its creators never expected, and it is impossible to know exactly how things might go wrong once millions of people start using it. In addition, OpenAI and Google are not the only ones advancing this technology; other companies, countries and research labs may be less careful.

Therefore, keeping a lid on the potentially harmful effects of A.I. technology will require constant vigilance, and experts are not overly optimistic on that front.

“We need a regulatory system that is international,” said Aviv Ovadya, a researcher at the Berkman Klein Center for Internet & Society at Harvard who helped test GPT-4. “But I do not see our existing government institutions being able to navigate this at the rate that is necessary.”

Many industry leaders, including Elon Musk, have urged artificial intelligence labs to pause development of the most advanced systems, warning in an open letter that A.I. tools present “profound risks to society and humanity,” and that A.I. developers are “locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one can understand, predict or reliably control.”

Some of the dangers people are concerned about include the spread of disinformation and the risk that people will rely on these systems for inaccurate or harmful medical and emotional advice. There are even many people, part of an influential online community known as “rationalists” or “effective altruists,” who believe that A.I. could eventually destroy humanity.

The future is still unclear, and it is important that we keep a very careful eye on any developments.