Your AI Girlfriend May Steal Your Data

By U Cast Studios
February 14, 2024

Featured Image By Midjourney

Lonely this Valentine’s Day? If so, we suggest you think twice before spending your time with an AI girlfriend or boyfriend – they might not be trustworthy.

This article was written by Cameron Macpherson and originally published by ReadWrite.

That’s because new AI chatbots specializing in romantic conversations with users rank among the ‘worst’ for privacy.

The app companies behind these large language models (LLMs) have neglected to respect users’ privacy or to inform them about how these bots work.

The Mozilla Foundation’s latest *Privacy Not Included report found that these bots pose a major privacy risk because of the sensitive nature of the information users share with them.

As in any romantic relationship, sharing secrets and sensitive information is a regular part of the interaction – but these bots depend on that information. Many of the AI bots marketed as ‘soulmates’ or ‘empathetic friends’ are designed to ask prying questions that draw out very personal details – such as your sexual health or your medication intake – all of which can be collected by the companies behind them.

*Privacy Not Included researcher Misha Rykov said:

“To be perfectly blunt, AI girlfriends are not your friends. Although they are marketed as something that will enhance your mental health and well-being, they specialize in delivering dependency, loneliness, and toxicity, all while prying as much data as possible from you.”

Instructions not included with AI girlfriends

Information on how these bots work remains unclear, especially around how their ‘personality’ is formed, how the AI models are trained, what procedures are in place to prevent harmful content from being given to users, and whether individuals can decline to have their conversations used to train these AI models.

There is already evidence of users reporting mistreatment and emotional pain. In one case, AI companion company Replika removed an erotic role-play feature that had been a major part of one user’s relationship with their created avatar. In other examples, a Chai chatbot reportedly encouraged a man to end his own life – which he did – and a Replika AI chatbot suggested a man attempt to assassinate the Queen – which he also did.

Certain companies that offer these romantic chatbots stipulate in their terms and conditions that they take no responsibility for what the chatbot may say or for how you react to it.

One example comes from the Talkie Soulful AI Terms of Service:

“You expressly understand and agree that Talkie will not be liable for any indirect, incidental, special, damages for loss of profits including but not limited to, damages for loss of goodwill, use, data or other intangible losses (even if the company has been advised of the possibility of such damages), whether based on contract, tort, negligence, strict liability or otherwise resulting from: (I) the use of the inability to use the service…”

Statistics on romantic chatbot user safety

  • 90% failed to meet minimum safety standards
  • 90% may share or sell your personal data
  • 54% won’t let you delete your personal data
  • 73% haven’t published any information on how they manage security vulnerabilities
  • 64% haven’t published clear information about encryption and whether they use it
  • 45% don’t require strong passwords, even allowing the weak password of “1” (see the sketch after this list)

All data obtained from the *Privacy Not Included report.
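For context on that last finding, here is a minimal sketch in Python of the kind of password-strength rule a sign-up form can enforce. The specific thresholds are illustrative assumptions, not any app’s actual policy, but any check along these lines would reject a password like “1”.

```python
import re

def is_strong_password(password: str) -> bool:
    """Illustrative strength check; thresholds are assumptions, not a standard."""
    if len(password) < 12:                        # minimum length
        return False
    if not re.search(r"[a-z]", password):         # at least one lowercase letter
        return False
    if not re.search(r"[A-Z]", password):         # at least one uppercase letter
        return False
    if not re.search(r"\d", password):            # at least one digit
        return False
    if not re.search(r"[^A-Za-z0-9]", password):  # at least one symbol
        return False
    return True

print(is_strong_password("1"))                         # False
print(is_strong_password("correct-Horse-battery-9"))   # True
```

A service that skips even this kind of basic check leaves accounts – and the intimate conversations behind them – easier to compromise.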
