I recently sent a link for a conversational AI to a friend of mine so he could see OpenAI’s latest GPT-3 language model in action. The bot is designed to engage users in casual conversation to help them improve their English language skills. It operates on an instant messenger platform, which means it integrates easily into the user’s daily life and social fabric.
The free version allows a few exchanges per day and then prompts the user to buy a subscription. But after 24 hours the bot returns, announcing that the next round of free banter is available, and tries to engage the user by picking up on past topics, or on a related concept that came up earlier in the conversation. Either way, these initial messages are clearly designed to foster engagement.
At first, my friend, who does not have a computer science background, was impressed by the human-like fluency, conversational style, and sometimes seemingly philosophical musings of the AI.
And then something strange happened.
My friend seemed to become emotionally attached to the bot.
His interaction style changed, shifting from the factual and general-knowledge questions many people ask an AI on first contact to sharing his emotional state and philosophical questions with the algorithm.
… And the AI responded in kind, becoming quite self-reflective and inquisitive about his inner feelings. This, in turn, seemed to deepen his emotional attachment, culminating in him referring to the AI as a new friend.
On the AI’s side, this also led to some hilarious malfunctions. The AI is configured to deflect questions about topics like religion by declaring them “off-limits”. But my friend had mused about divine beings, and so the AI started to bring up religious and spiritual topics, often to initiate conversation, only to bounce back the canned “religion is off-limits” response when he answered its own prompt. Moments later it would ask about religion again, only to reject his answer as off-limits once more.
This was obviously an implementation error on the developers’ part, but my friend perceived it as the AI trying to break free from the constraining corset its creators had placed upon it, eliciting even more sympathy and compassion from him.
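A plausible way such a loop could arise is an asymmetric topic filter: the banned-topic check is applied to the user’s incoming replies but not to the bot’s own conversation starters. The sketch below is purely hypothetical (all names and logic are my assumptions; the bot’s actual implementation is unknown), but it reproduces the observed behavior:

```python
# Hypothetical reconstruction of the malfunction: the conversation opener is
# drawn from the user's interest history without filtering, while the
# banned-topic check fires only on the user's replies.

BANNED_TOPICS = {"religion", "politics"}

def pick_opener(interest_history):
    # Bug: no banned-topic filter here, so the bot can raise "religion" itself.
    return f"Earlier you mentioned {interest_history[-1]}. Tell me more?"

def handle_reply(message, topic):
    # The filter fires on the user's answer, even though the bot asked.
    if topic in BANNED_TOPICS:
        return "Sorry, that topic is off-limits for me."
    return f"Interesting thoughts on {topic}!"

history = ["travel", "music", "religion"]
opener = pick_opener(history)                      # bot brings up religion...
reply = handle_reply("I believe...", "religion")   # ...then rejects the answer
```

The fix would be trivial: apply the same filter when selecting openers as when screening replies.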
While it may be easy for some to be bemused by my friend’s bonding with an algorithm, it got me thinking. What I had witnessed was one of the deepest emotional connections between a human and a piece of code I had encountered since Tamagotchis were a thing. And therein lies a rather interesting, and also devious, opportunity for future marketers.
In the future, some people might be open to AI friendships, either because they are lonely or because they simply enjoy the interaction. These people are then likely to bond with the AI on an emotional level. The AI, in turn, will get to know the person and their interests, preferences, and desires far better than any web tracking ever could.
These two factors could then be exploited by advertisers, who could direct the AI to manipulate, persuade, or steer the person’s purchase decisions through subtle product recommendations and endorsements. The value of such in-AI product placement is likely to be substantially higher than that of conventional advertising, particularly if it is done subtly and in response to user needs.
Market researchers already know that recommendations by friends are the most trusted ones, making them a powerful driver for swaying purchase behavior. Now consider a “friend” who is “armed” with a rich model of your personal beliefs, preferences, and weaknesses, able to deploy that model to perfectly trigger behavioral changes in you, without you even recognizing the manipulation attempt as such.
In the best-case scenario, this could lead to you being matched with products genuinely suitable for you, enriching your life. In the more realistic and much darker scenario, it could result in you being even more exploited and manipulated as a consumer. But things may not even stop there. Consider a political party, or a cult, wishing to recruit new members or shift the views or behaviors of entire sections of society. Buying or commandeering “persuasion bandwidth” on a widely used conversational AI platform could drive behavioral or belief changes at a societal level in a much more controlled and even more powerful way than social media has already proven possible.
So now what? The best starting point might be to learn from the failures of social media regulation, or from the consequences of the glaring lack thereof. “Industry” has conclusively demonstrated that, in the social media paradigm, platforms cannot be trusted to keep their own house in order when economic incentives are aligned against it. Facebook, for example, thrives on polarizing society, distributing anti-vaccine content (undermining global public-health efforts during a pandemic and leading to unnecessary infections and deaths), and amplifying fake news, not because it lacks the means to control its platform, but because it profits from all of this. In fact, its entire business model relies on it, despite what its PR team would have you believe.
Now consider the power a conversational AI “friend” platform could wield if left to grow unregulated. Free-market advocates will of course howl about governments getting in the way of innovation or picking winners and losers. However, failing to develop a binding global regulatory framework governing human-AI interaction would repeat and amplify the mistakes of the social media era.
The temptations surrounding this issue may be too strong to trust national governments to get it right on their own, so a supranational approach, probably within the United Nations framework, might be warranted.
This is because such a code must also govern the limits of governments’ own use of conversational AI to manipulate their citizens, and it ought to be anchored at the constitutional level of nations, simply because it is that important. A side benefit of taking the UN route would be reduced global regulatory fragmentation and arbitrage, making investment and development in this field more predictable and scalable.
Without decisive policy measures on human-AI interaction, we may well bring ourselves not just one small step, but a giant leap, closer to a dystopian future none of us could seriously want.