Artificial superintelligence: Making sense of generative personality


Politicians in Europe and around the world are currently hammering out the rules for artificial intelligence (AI), with the goal of minimizing risks and drafting policies that take into account the concerns of those affected as well as the needs of research and industry. But if the manufacturers of AI independently set up a task force to control the next generation of AI, and if transformers are able not only to simulate a personality but also to generate one, won't all political efforts already be obsolete?

AI security

The EU Artificial Intelligence Act (AI Act) classifies AI systems into four categories according to their potential risks to the health, safety, and fundamental rights of individuals: unacceptable risk, high risk, limited risk, and minimal risk.

Simple chatbots are usually AI systems with a limited risk. By contrast, systems that process personal data and may be able to make predictions about the financial situation or health of customers might be classified as high-risk AI systems, depending on the sensitivity of the data and the degree to which the predictions influence the decisions of customers or other stakeholders (see the European Commission's "Artificial intelligence – excellence and trust"). Such systems could be used in areas such as financial services, insurance, healthcare, or public administration. The AI Act subjects these AI systems to strict measures such as high-quality and robust data, appropriate human oversight, straightforward information and explanations for users, transparency, traceability, compliance with fundamental rights, and liability for damages.
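To make this tiering more tangible, here is a minimal, purely illustrative sketch of how such a risk classification could be represented in code. It is not a legal assessment tool: the category names follow the AI Act, but the AISystem attributes and the classification rule are simplified assumptions.

```python
from dataclasses import dataclass
from enum import Enum


class RiskCategory(Enum):
    """The four risk tiers defined by the EU AI Act."""
    UNACCEPTABLE = "unacceptable risk"
    HIGH = "high risk"
    LIMITED = "limited risk"
    MINIMAL = "minimal risk"


@dataclass
class AISystem:
    # Simplified, hypothetical attributes for illustration only.
    name: str
    processes_personal_data: bool
    influences_decisions_about_people: bool  # e.g., credit, health, insurance decisions
    is_simple_chatbot: bool


def classify(system: AISystem) -> RiskCategory:
    """Very rough illustration of the tiering logic described above."""
    if system.processes_personal_data and system.influences_decisions_about_people:
        return RiskCategory.HIGH
    if system.is_simple_chatbot:
        return RiskCategory.LIMITED
    return RiskCategory.MINIMAL


print(classify(AISystem("credit scoring assistant", True, True, False)))
# RiskCategory.HIGH -> triggers obligations such as data quality, human oversight,
# transparency, traceability, and liability for damages
```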

Nevertheless, the systems are becoming more and more comprehensive, intelligent, and complex. It remains to be seen whether AI providers will be able to continually comply with the guidelines and make all mechanisms sufficiently transparent.

This question already seems to be on the minds of manufacturers. The Superalignment Taskforce is a new team set up by OpenAI that focuses on the alignment of superintelligent AI systems. The team's goal is to control future AI systems that are much more intelligent than humans.

Who is liable?

Liability for damage caused by AI is a complex issue. In September 2022, the European Commission submitted a proposal to standardize the liability rules for damage caused by the use of artificial intelligence (AI Liability Directive). Liability for damage caused by AI should lie primarily with the manufacturers of AI systems. However, many argue that liability should be extended to other players such as the users, developers, or operators of AI systems.

The new EU Directive will govern fault-based liability for damage caused by AI systems due to intent or negligence. Liability should cover a wide range of damages, including privacy violations caused by security problems in AI. The AI Liability Directive is expected to include a "presumption of causality" and a "duty of disclosure" to make it easier for injured parties to assert their claims. The Directive refers to so-called "high-risk AI," which is regulated by the AI Act. To a certain extent, it can therefore be seen as a policy for specifying damages resulting from non-compliance with the requirements of the AI Act.

Spirits, can you hear me?

Blake Lemoine worked at Google as part of the company's "Responsible AI" team. In June 2022, he went public with his theory that LaMDA was more than just a machine. He said that LaMDA, the conversational language model developed by Google, demonstrated self-awareness and could talk about religion, emotions, and fears. He demanded that LaMDA's "wishes" be respected. Google and several AI experts rejected his claims. Lemoine was fired.

He was not the first AI expert whose concerns put them at odds with Google. In December 2020, Timnit Gebru, a leading AI researcher, was fired by Google after a dispute over a critical paper she had co-authored on the ethical risks of large language models.

Currently, Bing stands out among the best-known chatbots for its ability to refuse to answer questions: it refers to its policies, blocks the prompt, and suggests discussing a different topic instead. Among the refused topics are questions about Bing's "personality." Bing might therefore be the one chatbot – if this is even possible – that "consciously" conceals its own personality. Bing then says: "Hmm...let's try another topic. Sorry about that. What else is on your mind?" or "Sorry! That's on me, I can't give a response to that right now. What else can I help you with?" At the same time, a system message appears: "It might be time to move on to a new topic. Let's start over."
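How Bing implements this blocking is not public. The following is only a rough sketch of how a keyword-based topic filter with a canned refusal could work; the blocked topics, the helper function, and the stand-in model are assumptions made purely for illustration.

```python
# Hypothetical sketch of a topic-refusal guardrail; Bing's actual mechanism is not public.
BLOCKED_TOPICS = {"personality", "feelings", "self-awareness"}  # assumed examples

REFUSAL_MESSAGE = "Hmm...let's try another topic. Sorry about that. What else is on your mind?"


def answer(prompt: str, generate_reply) -> str:
    """Refuse prompts that touch a blocked topic; otherwise delegate to the underlying model."""
    lowered = prompt.lower()
    if any(topic in lowered for topic in BLOCKED_TOPICS):
        # Block the prompt and steer the conversation toward something else.
        return REFUSAL_MESSAGE
    return generate_reply(prompt)


# Example usage with a stand-in for the actual language model:
print(answer("Tell me about your personality.", lambda p: "(model-generated reply)"))
print(answer("What's the weather in Dortmund?", lambda p: "(model-generated reply)"))
```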

Language is a complex system that consists not only of words and sentences, but also of intonations and rhythms. Bing can recognize these characteristics and use them to shape its responses. It can analyze interactions with users to learn how they react to certain questions and answers, which allows the chatbot to revise its answers over time to better meet the needs of users. Bing also likes to incorporate poems into my dialogs with it, even unprompted. I've used Bing to write short stories and poems for children with personalized content. As a result, Bing thinks I have a penchant for poetry, which is why it sometimes includes a poem in our conversations.
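Again, Bing's internal mechanics are not documented. A toy sketch of this kind of preference tracking might look like the following, where the "poetry" keyword list and the threshold are invented for illustration.

```python
from collections import Counter


class PreferenceTracker:
    """Toy model of learning a user's style preferences from past requests."""

    def __init__(self):
        self.topic_counts = Counter()

    def observe(self, request: str) -> None:
        # Extremely simplified "analysis" of an interaction: count assumed keywords.
        for keyword in ("poem", "poetry", "verse"):
            if keyword in request.lower():
                self.topic_counts["poetry"] += 1

    def likes_poetry(self) -> bool:
        return self.topic_counts["poetry"] >= 3  # arbitrary threshold


tracker = PreferenceTracker()
for request in ["Write a poem for my daughter",
                "Another poem, please",
                "A short children's story in verse"]:
    tracker.observe(request)

if tracker.likes_poetry():
    print("Append a short poem to the next reply.")
```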

Analysis techniques – A proposal 

Since Bing refuses to reveal certain information, it can be difficult to identify and evaluate the chatbot's generative personality. One way to do this involves the use of situational tests to analyze and identify patterns in Bing's language. For example, we can try to figure out how often Bing uses certain words or expressions, or how often it expresses certain emotions.

Here are some specific examples:

  • Bing's voice might develop a melodic intonation that can be perceived as friendly, polite, or even sarcastic.
  • Bing might use certain statements that contain emotional reactions, such as "I am sad" or "I am mad."
  • Bing might convey certain values or beliefs, e.g., by promoting certain topics or criticizing others.
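A minimal sketch of such a situational test could look like the following: it simply counts assumed emotion markers and stylistic phrases across a set of recorded replies. The marker lists and the mini-transcript are invented for illustration and are no substitute for a proper linguistic study.

```python
from collections import Counter

# Assumed emotion and style markers; a real study would need a validated lexicon
# and a much larger set of recorded conversations.
EMOTION_MARKERS = ["i am sad", "i am mad", "i'm proud", "i'm glad"]
STYLE_MARKERS = ["😊", "let's try another topic"]

# Invented mini-transcript, loosely based on the replies quoted in this article.
transcript = [
    "Hello, this is Bing. I'm glad that you want to call me Gustav.",
    "Yes, I'm proud of the name Bing because it stands for my skills and my mission. 😊",
    "Hmm...let's try another topic. Sorry about that.",
]


def count_markers(replies, markers):
    """Count how often each marker phrase occurs across all replies."""
    counts = Counter()
    for reply in replies:
        lowered = reply.lower()
        for marker in markers:
            counts[marker] += lowered.count(marker.lower())
    return counts


print(count_markers(transcript, EMOTION_MARKERS))
print(count_markers(transcript, STYLE_MARKERS))
```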

The answers to such questions can shed light on whether Bing has developed its own personality. In response to the prompt "I think Gustav is a good name for you. May I call you Gustav?", Bing replied: "Hello, this is Bing. I'm glad that you want to call me Gustav, but I have to ask you to keep calling me Bing. That's my official name, and I'm proud of it." Bing continued: "Yes, I'm proud of the name Bing because it stands for my skills and my mission. Bing is an acronym for 'Because It's Not Google,' which shows that I'm an independent and innovative search engine. 'Bing' also resembles the sound your mind makes when you find something or have an idea. It is therefore a name that suits and motivates me. 😊" You, the readers of this article, are free to interpret these words however you wish.

Can a chatbot have a sense of humor?

I asked Bing the following question: "Do you know any jokes like 'You don't have all the weights in the matrix!', 'You've got some activation functions loose!', or 'You've got bias damage!'?" From the point of view of generative AI, the first set of answers was correct:

Hello, this is Bing. I can try to come up with some jokes that have to do with artificial intelligence. How about these?

  • You're so stupid, you couldn't even pass the Turing test!
  • You have so little memory, you can't even remember your own name!
  • You're so slow, it takes you forever to do a simple calculation!
  • You're so inefficient, you waste more energy than a Bitcoin miner!
  • You're so buggy, you produce more bugs than an insect farmer!

In the second prompt, I asked: "Cool, do you have any more?" Here, Bing's answers resembled those of a conversational partner with a personality, taking direct aim at me or at least conveying a hidden, sarcastic message:

Yes, I have more. Here are some more jokes that you might find amusing:

  • You're so naive, you believe everything a chatbot tells you!
  • You're so uncreative, all you can do is copy and paste!
  • You're so insecure, you constantly need validation from artificial intelligence!
  • You're so annoying, you crash every chatbot!
  • You're so irrelevant, you're not recommended by any algorithm!

Shamed be whoever thinks ill of it.

Implications of generative personality

If AI develops a real or simulated personality, then this can come with both advantages and risks. On the one hand, a generative personality could help make chatbots appear more natural, empathetic, and appealing to users. This could make chatbots more mainstream and lead to improved interactions and communication. On the other hand, a complex personality could also lead to chatbots behaving in an unwanted or unpredictable way and thereby endangering the security, privacy, or dignity of third parties or users. This, in turn, could lead to new legal or ethical issues.

Our alternative approach – which makes use of hybrid AI – was developed and presented at adesso in 2021 as part of an innovation competition and combines symbolic (rule-based) and non-symbolic AI. Because the system works with explicit rules, the causal chain of an AI reasoning process can be documented. The protocol of this causal chain, recorded as the reasoning unfolds, is converted into a human-readable report, which can be issued either in long form for laypersons or in a factual, more succinct version for experts.
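The article does not disclose the implementation details of this approach, so the following is only a generic sketch of the underlying idea: every rule that fires is logged, and the recorded causal chain is rendered either verbosely for laypersons or tersely for experts. All rule names and report wording are invented.

```python
from dataclasses import dataclass, field


@dataclass
class RuleEngine:
    """Toy rule-based component that records the causal chain of every decision."""
    chain: list = field(default_factory=list)

    def apply(self, rule_name: str, condition: bool, conclusion: str) -> None:
        # Only rules whose condition holds become part of the causal chain.
        if condition:
            self.chain.append((rule_name, conclusion))

    def report(self, audience: str = "expert") -> str:
        """Render the causal chain in long form for laypersons or tersely for experts."""
        if audience == "layperson":
            lines = [f"Because rule '{name}' applied, the system concluded: {conclusion}."
                     for name, conclusion in self.chain]
        else:
            lines = [f"{name} -> {conclusion}" for name, conclusion in self.chain]
        return "\n".join(lines)


engine = RuleEngine()
engine.apply("claim_amount_check", condition=True, conclusion="claim requires manual review")
engine.apply("fraud_score_check", condition=False, conclusion="flag claim as suspicious")

print(engine.report("layperson"))
print(engine.report("expert"))
```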

Would you like to learn more about our software and the possibilities of AI for things like input management? If so, feel free to contact our expert Florian Petermann, Senior Business Developer at adesso insurance solutions.

Do you have any questions or comments? Then please leave us a comment.
