Artificial intelligence (AI) is becoming increasingly realistic, capable of generating human-like voices, images, text, and behavior. But how does this affect our trust in the people we communicate with? Do we trust them more or less when we suspect they might be AI agents? And how can we design AI systems that are transparent and trustworthy?
Researchers examined how individuals interpret and relate to situations in which one of the parties might be an AI agent, and what consequences this has for our relationships and society.
The Problem of Suspicion
One of the study's main findings is that being unable to fully trust a conversational partner's intentions and identity can breed excessive suspicion, even when there is no reason for it.
Suspicion can damage our relationships, lead to jealousy, anxiety, and conflict, and reduce our willingness to cooperate and share information.
The researchers argue that suspicion is not only a problem of deception but also of relationship-building and joint meaning-making. Communication with others involves not only exchanging information but also creating rapport, empathy, and understanding.
When we are suspicious of others, we may miss out on these aspects of communication, and lose the opportunity to learn from them and grow together.
The study also found that even in interactions between two humans, certain behaviors were interpreted as signs that one party was actually a robot.
For example, if someone was too polite, too repetitive, too vague, or too quick to respond, they were perceived as less human and more machine-like. This shows how our expectations and stereotypes about AI can shape our perception of others, and how easily we can be misled by superficial cues.
The Challenge of Design
The researchers suggest that a pervasive design perspective is driving the development of AI with increasingly human-like features. While this may be appealing in some contexts, it can also be problematic, particularly when it is unclear who we are communicating with.
For example, if we call a customer service center and hear a human voice, we may assume that we are talking to a human agent, when in fact we are talking to an AI system that uses pre-recorded loops or natural language generation.
This can create ethical issues, such as privacy violations, manipulation, and fraud. It can also create confusion and frustration when the AI system fails to meet our expectations or understand our needs. Moreover, it can create a sense of intimacy and familiarity that may not be appropriate or desirable in some situations.
The researchers propose creating AI systems with well-functioning and eloquent voices that are still clearly synthetic, increasing transparency and honesty.
They also suggest designing AI systems that are not only efficient and accurate but also respectful and supportive of human values and goals. They argue that AI systems should not only mimic human communication but also enhance it by providing feedback, guidance, and assistance.
The Future of Trust
As AI becomes more advanced and ubiquitous, our trust in human interaction will be challenged and reshaped.
We will need to develop new skills and strategies to cope with uncertainty and ambiguity, to verify information and sources, and to protect ourselves from deception and harm.
We will also need to rethink our assumptions and prejudices about AI and humans, and learn to appreciate the diversity and complexity of both.
At the same time, we will have new opportunities to communicate with AI systems that can enrich our lives and society. We will be able to interact with AI systems that can teach us new things, entertain us, inspire us, and help us solve problems.
We will also be able to collaborate with AI systems that can augment our abilities, complement our perspectives, and support our decisions.
The future of trust in human interaction will depend on how we design, use, and regulate AI systems. It will also depend on how we communicate with each other and with AI systems.
It will require us to be more aware of, critical of, and responsible for our communication choices and outcomes. And it will require us to be more open-minded, curious, and respectful toward our communication partners and the possibilities they offer.