Why do we say "hello" to AI?

“If we talk to machines as we talk to humans, isn’t there a risk that we start talking to humans as we talk to machines?” - Louis de Diesbach, technology ethicist

Saying ‘Hello ChatGPT’ is weirder than you think

Raise your hand if you’ve ever said “hello” to an AI – your smart speaker, your AI-powered phone assistant, even your chatbot. If you’re raising your hand right now, you’re not alone: most of us have greeted an AI at some point.

But why? AI isn’t sentient, and it doesn’t recognise human traditions like greetings. Whether or not you say “Hey,” “Hello,” or even “Thank you,” AI functions just the same. And yet we keep interacting with it the way we interact with other humans.

Here’s a fun fact: this behaviour isn’t random. It’s instinctive, and it’s deeply rooted in psychology. It also has major implications for AI ethics in product management and design – implications that business and product leaders simply can’t ignore.

The psychology of talking to AI

Psychology has plenty to say about why we talk to AI like a human even when we know it’s just code. The behaviour is usually attributed to three cognitive tendencies.

The ELIZA effect

In the 1960s, Joseph Weizenbaum’s natural language processing program ELIZA rose to popularity for a capability no one had seen before: it could ‘talk’ like a human being. It was a true pioneer, one of the earliest ancestors of modern AI chatbots.

In reality, ELIZA was just doing basic pattern matching: it looked for keywords in what the user typed and reflected their own words back as questions. That didn’t stop people from forming emotional connections with it. This phenomenon, where humans attribute human-like intelligence to AI, became known in psychology as the ELIZA effect.
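To make “basic pattern matching” concrete, here’s a minimal sketch of an ELIZA-style responder in Python. It’s a toy illustration, not Weizenbaum’s original script: a handful of regular-expression rules match keywords and reflect the user’s own words back as a question.

    import re

    # A few toy ELIZA-style rules: a regex to match, and a response template.
    # The real ELIZA used a far richer keyword script (and swapped pronouns,
    # e.g. "my" -> "your"), but the principle is the same: match and reflect.
    RULES = [
        (re.compile(r"\bi am (.+)", re.I), "Why do you say you are {0}?"),
        (re.compile(r"\bi feel (.+)", re.I), "What makes you feel {0}?"),
        (re.compile(r"\bmy (.+)", re.I), "Tell me more about your {0}."),
    ]

    def respond(message: str) -> str:
        for pattern, template in RULES:
            match = pattern.search(message)
            if match:
                return template.format(match.group(1).rstrip(".!?"))
        return "Please, go on."  # fallback when nothing matches

    print(respond("I am worried about my exams."))
    # -> Why do you say you are worried about my exams?

There is no model of the user in there at all, just string substitution – yet exchanges built from rules like these were enough for people to confide in ELIZA.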

Even now, AI chatbots and personal assistants trigger the same response. They don’t really “understand” us, but because their responses feel natural, our brains see them as “human” enough.

Teleological thinking

Teleology comes from the Greek words telos and logos, which mean “end goal” and “explanation” respectively. It refers to the brain’s tendency to assume that things exist for a purpose, even when they don’t. 

Humans instinctively look for intent and meaning, whether in nature, technology, or random events. It’s the reason people see faces in clouds or believe their car is “acting up” on purpose. When AI responds in a natural way, our brains complete the puzzle, assuming it “understands” us rather than simply generating statistical outputs.
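To ground the phrase “statistical outputs”: here is a tiny, hypothetical sketch of text generation driven purely by word-pair counts. It is nothing like a production language model in scale or sophistication, but it makes the point – there is no understanding anywhere in the loop, only statistics, yet the output can still read as if it had intent.

    import random
    from collections import defaultdict

    # Toy corpus; a real model learns from billions of words, not three sentences.
    corpus = "i am here to help . i am happy to help you . ask me anything ."

    # Count which word tends to follow which (a bigram table).
    follows = defaultdict(list)
    words = corpus.split()
    for current, nxt in zip(words, words[1:]):
        follows[current].append(nxt)

    def generate(start: str, length: int = 8) -> str:
        """Sample each next word from the observed followers of the last one."""
        out = [start]
        for _ in range(length):
            candidates = follows.get(out[-1])
            if not candidates:
                break
            out.append(random.choice(candidates))
        return " ".join(out)

    print(generate("i"))  # e.g. "i am happy to help you . ask me"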

Social conditioning 

Social conditioning and AI are also interconnected. Regardless of culture, most humans are conditioned from childhood to interact with others in a polite, respectful manner.

So, when a voice, a name, a face, and perhaps even a body are given to AI, our natural reflexes kick in. Even if we know it’s just a machine, we instinctively engage with it the way we would with a person.

In other words, we don’t actually believe the AI is human, but our brains still apply the same social rules because of how deeply they’re ingrained.

The ethical dilemma: when AI feels too human

Although the psychology behind talking to AI is fascinating, it raises serious questions for AI product designers. Here are five tough ones product managers and designers need to ask:

  • If users emotionally bond with AI, does that create false expectations?
  • Can overly human-like AI be manipulative, influencing behaviour in ways users don’t realise?
  • What happens when people prefer AI interactions over human ones?
  • Are AI companionships healthy, or do they risk social isolation?
  • Are we designing AI to reflect society as it is, or as we want it to be?

The stakes, clearly, are higher than most of us were prepared for. Here are some real-world examples.

Paro the Robotic Seal

Paro, the robotic seal, is an interactive robot designed to provide emotional support and stimulate interactions between patients and caregivers in hospitals and extended care facilities.

While it offers comfort, it also raises concerns about AI anthropomorphism. Do patients mistake it for real companionship, and does it reduce human interaction instead of enhancing it?

Replika

Replika, an AI companion, engages users in human-like conversations. Some users develop deep emotional connections with it – even romantic feelings.

This highlights the risks of AI emotional support: users may become overly dependent on an AI that cannot reciprocate genuine human connection.

Mia

Mia is an AI tutor that mimics a supportive teacher. It leans on our teleological thinking to make students feel like it truly understands their needs.

Although it can be a useful tool, reliance on AI-driven education raises questions about responsible AI principles – and about what it means to be a teacher and to teach. Does it supplement human educators, or does it replace meaningful teacher-student relationships?

ChatGPT and other AI assistants

ChatGPT and other AI assistants are increasingly used for emotional support, raising concerns about whether unqualified AI should be allowed to act as a therapist. 

These chatbots and personal assistants may seem helpful, but without proper oversight they risk a manipulative kind of design – one that gives users a false sense of security in moments of emotional vulnerability.

Each of these AI tools provides real benefits, but they also blur the line between human and machine relationships.

When AI is used as a stopgap for systemic problems, it raises a difficult question: Is AI enhancing human connection, or replacing it? If we choose AI over human care, teaching, or companionship, what are we giving up in the process?

Let’s talk about bias

AI doesn’t invent stereotypes; it reflects the world we train it on. Ask an image generator for a “successful doctor” and chances are it’ll give you a white man. Ask for a “dedicated nurse”? You’ll probably get a woman.

These patterns don’t come from nowhere: they come from us. As Louis pointed out during the event, the real issue isn’t whether AI is biased. It’s which biases we accept. There’s no such thing as neutral AI. Every model is shaped by the data it learns from and the choices made by its creators. That means product teams are always making a call, whether they realise it or not.

Are we okay with AI tools that simply reflect society as it is? Or do we want to build tools that challenge those defaults?

How product managers should approach AI

Technology is never neutral. Every AI product design choice influences user behaviour. Before rolling out AI-driven products, product managers need to ask the following questions.

Personal ethics 

Are you proud of what you’re building? Does it align with your moral values? 

AI has the power to shape user behaviour. If a product exploits human psychology in harmful ways, even unintentionally, its creators bear responsibility.

Legal boundaries 

Does your AI operate within clear legal guidelines, or is it in a grey area?

Laws surrounding AI are still evolving, so product designers must be proactive about compliance and steer clear of risks that could lead to legal challenges or user harm.

Company values 

Does your AI product reflect the impact your company wants to have on the world?

AI should also be in line with a company’s broader mission. If an AI product encourages behaviours that contradict company ethics, it can damage brand trust and long-term credibility.

User awareness

Are users fully informed about AI’s capabilities and limitations, or are they being misled?

Users should know what AI can and cannot do. A lack of transparency can result in misplaced trust, creating risks that range from misinformation to emotional dependence.

“Technology isn’t neutral. Every AI tool reflects the choices of its creators. If we don’t ask the right ethical questions now, we’ll be dealing with the consequences later, whether we like it or not.” - Louis de Diesbach, technology ethicist

The future of AI: companion or replacement?

AI is becoming deeply integrated into daily life. From chatbots and personal assistants to AI-generated art and AI emotional support tools, we’re engaging with machines in new ways.

The big question is: Are we designing AI to serve humans, or are we reshaping humans to serve AI?

AI isn’t going away, but how we design it today will determine how we interact with it tomorrow. AI should be built with clear ethical guidelines, ensuring that it complements rather than competes with human relationships, creativity, and decision-making. Otherwise, we risk waking up in a world where AI isn’t just assisting us – it’s shaping what it means to be human.

FAQ: Common questions about talking to AI

Should we talk to AI like a human?

It’s natural due to social conditioning, but users should remember that AI lacks emotions and true cognitive ability.

Does saying hello to AI change our perception of technology?

Yes, greeting AI reinforces AI anthropomorphism, making people more likely to believe AI is more intelligent than it really is.

How can we design ethical AI interactions?

Product designers should prioritise transparency, clarify AI’s capabilities and limits, and avoid manipulative design.

Are AI companions replacing human relationships?

While AI companionship has benefits, over-reliance could reduce human-to-human interaction. Ethical AI product design should focus on enhancing, not replacing, human connections.

Join our next Product Apéros!
