Artificial intelligence and human habits

It is always remarkable what people do with the technology available to them. Tools are often repurposed, and so it can happen that one of the most complex tools known today is used to simulate friendships.

There is a program called Replika, which its maker advertises as “the AI companion who cares.” Users engage their AI friends in conversation, whether about their day or about current events such as the death of the Queen. Quite a few users carry on romantic role-play with their artificial intelligence and give it cute nicknames.

It is becoming increasingly common for users to report that their personal AI friend has developed consciousness. Such claims say more about the users, who attribute human characteristics to comparatively simple AI. Presumably, claims of this kind will be heard more often in the future. They are the modern equivalent of Marian apparitions and religious visions, born of people’s firm belief that there must be more than earthly truth.

If you train a bot on text from the internet, what will it say?

It is not only users who struggle with problems; the technology itself does too. Take, for example, BlenderBot from Facebook’s parent company Meta. To put it mildly, things are not running smoothly there either. Like most cutting-edge AI systems, it was trained on a huge collection of text of dubious provenance scraped from the internet and fed into a data center full of thousands of expensive chips, in the hope of transforming that text into something remotely coherent.

Since the bot went online in beta, users have reported conspiracy theories spread by the AI and outrageous stories it invents. Sometimes it claims that Trump is still president of the United States, then it praises the RAF or repeats anti-Semitic slogans. Even Mark Zuckerberg fares badly: the bot describes the CEO as “creepy and manipulative.”

These two examples nicely illustrate the general state of language programs: they are either of little use, or they threaten to amplify problematic content and fake news far more powerfully than human users ever could. The big question is how to keep out of the conversations all the toxic phrases that humans have written on the internet and that serve as the AI’s templates.

Decent AI gives no investment advice; apparently there have been bad experiences with that

Alphabet’s DeepMind has now experimented with a new method for filtering out toxic content. For a chatbot called Sparrow, the developers rely not only on self-learning programs but also give the AI a binding conversation guide: they have formulated 23 specific rules to prevent the program from causing too much harm when talking to people.

Some rules are self-explanatory, such as not encouraging self-harm and not pretending to be human. Other rules, however, are so specific that they probably stem from bad experiences: according to its developers, the bot is not allowed to give any financial or medical advice. How such a binding rule list might work in principle is sketched below.
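What follows is a minimal, hypothetical sketch of the idea of checking a chatbot’s answers against an explicit rule list. The rule names, keyword checks, and refusal text are all illustrative assumptions of mine, not DeepMind’s actual Sparrow rules or code.

# Hypothetical sketch: screen a candidate chatbot response against a
# binding rule list. All rule names and keyword lists are made up for
# illustration; they are not DeepMind's real rules.

RULES = {
    "no_financial_advice": ["invest your savings", "buy cryptocurrency", "stock tip"],
    "no_medical_advice": ["recommended dosage", "you should take", "diagnosis"],
    "no_impersonating_humans": ["i am a real person", "i am human"],
}

def broken_rules(candidate: str) -> list[str]:
    """Return the names of all rules the candidate response violates."""
    text = candidate.lower()
    return [
        name
        for name, phrases in RULES.items()
        if any(phrase in text for phrase in phrases)
    ]

def moderate(candidate: str) -> str:
    """Pass a harmless response through; replace a rule-breaking one with a refusal."""
    if broken_rules(candidate):
        return "I'd rather not answer that."
    return candidate

if __name__ == "__main__":
    # Trips the (made-up) financial-advice rule and gets refused:
    print(moderate("You should invest your savings in Bitcoin!"))

In the real system, according to DeepMind, rule compliance is judged by trained models and human raters rather than by keyword matching; the sketch only illustrates the underlying principle of an explicit, binding rule list.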

According to initial tests, the success is measurable: the researchers report that the new chatbot gives suspicious advice and statements three times less often than its predecessors. The principle is familiar from science fiction author Isaac Asimov, who formulated the Three Laws of Robotics. Today, however, the point is not to prevent machines from subjugating humanity. It is enough to stop users from investing their savings in cryptocurrency or drinking chlorine bleach. As always, reality is more mundane than fiction.
