Can AI really tell the truth?

The first two parts of this mini-series were about establishing where GPT-3, a program that can write texts largely on its own, stands technically today. If the program can write almost by itself and even find its own sources, the next question is obvious: can the program also write the "truth"? Are its interpretations and analyses actually correct?

This problem becomes most apparent when GPT-3 is used as a question-and-answer system. Yes, it often finds the correct answer to the question asked; but often it does not, and unless you already know the answer beforehand, you cannot tell which of the two cases you are looking at: true or false.
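To make that setup concrete: at the time, such a question-and-answer system was typically little more than a thin wrapper around OpenAI's Completion endpoint. The following Python sketch is my own minimal illustration, not code from OpenAI or from this article; the engine name, prompt wording, and parameters are assumptions.

```python
# Minimal sketch of GPT-3 as a question-and-answer system, using the
# OpenAI Python library as it existed in the GPT-3 era (Completion endpoint).
# Engine name, prompt wording and parameters are illustrative assumptions.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

def answer_question(question: str) -> str:
    prompt = (
        "Answer the following question as accurately as possible.\n"
        f"Q: {question}\n"
        "A:"
    )
    response = openai.Completion.create(
        engine="davinci",    # the largest GPT-3 engine at launch
        prompt=prompt,
        max_tokens=64,
        temperature=0.0,     # low temperature for more deterministic answers
        stop=["\n"],
    )
    return response.choices[0].text.strip()

# The call returns an answer either way -- correct or confidently wrong --
# and gives the caller no signal which of the two it is.
print(answer_question("What is the capital of Australia?"))
```

That is precisely the problem described above: the call always returns something that looks like an answer, with no indication of whether it is true.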

Kelsey Piper of the excellent knowledge portal Vox (which has nothing to do with the German TV channel of the same name!) confirms the potential for errors: "GPT-3 can even answer medical questions correctly and explain its answers … though not all of its answers should be trusted." Sam Altman, CEO of OpenAI, emphasized the limitations himself when he tweeted: "The GPT-3 hype is far too great … it still has serious weaknesses and sometimes makes very silly mistakes." Even this statement goes a step too far, because to say the program "makes mistakes" already presupposes an intent that GPT-3 does not and cannot have. Not yet.

GPT-3 is the jump from a Volkswagen Golf to a luxury SUV

The bottom line is that GPT-3 is, to borrow the literary critic's term, an "unreliable narrator": its credibility is undermined by the fact that the program is not bound to the truth. How could it be? This does not mean that GPT-3 has no practical use, quite the contrary. But it does mean that some use cases are appropriate and defensible and some are not. Producing creative novels, as long as they are clearly labelled as such, is of course perfectly fine; the same applies to poems. And we will see many more fantasy games in which GPT-3 contributes to the dialogue as a co-author.

But: will OpenAI say "no" when companies want a bot to answer critical questions in the healthcare sector? Is the question "Can the corona vaccine kill me?" really something that should be left to a machine to answer? GPT-3 might treat the news portals of the "Querdenker" (lateral thinker) scene as a source of equal standing and draw the conclusion: "Maybe, there is evidence for it." Which evidence? Unverified. But here, too, companies such as Google are already working on assessing the credibility of each source for future bot journalists.

What can you actually do with GPT-3?

Applications with a human in the loop, on the other hand, are far safer, and there are already quite a few of them. Almost all of them are advanced writing tools that take a user's text input and return an alternative version of it, longer or shorter depending on the application (see the sketch after the list). Some examples:

  • OtherSideAI generates full-length email replies in your own writing style once you decide on the main points you want to cover.
  • Compose.ai, Footnote and Magic Email also offer tools for drafting emails; in a similar way, Kriya.ai creates personalized intros, which can be used to grow a LinkedIn network, for example.
  • Dover.ai turns a brief job description into a longer, fully drafted version.
  • Entourage writes advertising copy based on a product description; you can sign up for a free trial.
  • Taglines is another tool for creating ad copy; it generates taglines based on product or service descriptions.

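All of these tools share the same basic pattern: a short user input plus an instruction goes in, a longer draft comes out, and the human edits or discards it. Here is a hedged sketch of such an "email expander" built on the GPT-3 Completion API of that era; the prompt wording, engine, parameters, and function name are my own illustrative assumptions, not any vendor's actual implementation.

```python
# Hedged sketch of the pattern behind these writing tools: a few bullet
# points go in, a full email draft comes out, and the human edits or
# discards it. Prompt wording, engine and parameters are my own
# assumptions, not the implementation of any product named above.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

def expand_to_email(points: list, tone: str = "friendly") -> str:
    prompt = (
        f"Write a {tone} email that covers the following points:\n"
        + "\n".join(f"- {p}" for p in points)
        + "\n\nEmail:\n"
    )
    response = openai.Completion.create(
        engine="davinci",
        prompt=prompt,
        max_tokens=200,
        temperature=0.7,   # some variety, since this is only a draft
    )
    return response.choices[0].text.strip()

draft = expand_to_email([
    "thank them for yesterday's meeting",
    "confirm the deadline of 15 March",
    "ask for the updated budget figures",
])
print(draft)  # the user reviews, edits or discards the draft
```

The design choice that makes these applications defensible is exactly this last line: the output is a suggestion, and the human remains the final author.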
The following applies to all of these applications: the generated text can be discarded or edited as much as you like. That these tools currently only work really well in English is merely a matter of time: AI-based translation software gets better every month, and within 24 months at most it will be possible to move even very complex and demanding texts into another language with a single click.

The decisive factor is that this will no longer be "translation" in the old sense: irony, sarcasm, and allusions that are only understood regionally will be replaced by fitting examples, idioms, or puns, without omitting or changing anything in the core content of the original. The text generated by the chips will only be truly usable once these stylistic devices are handled as aptly as in the original text.

In a week we continue with Part 4!

Photo: @Depositphotos.com/PixelsHunter

Wolfgang Blossom
