This year, Android and iPhone updates will be dominated by AI, much of it coming from Google. But it comes with a serious new warning for each user, and it should change the way we use our phones.
The roller-coaster ride of the AI smartphone is now well underway. It was always predicted that the integration of generative AI into the smartphone apps we use most would eclipse even last year's breakneck adoption of ChatGPT. And here we are.
But not so fast: all this carries a huge risk for your security and privacy.
It seems we all have a blind spot when it comes to generative AI chatbots. We can be careful about the apps we install, the permissions we grant, the browsers we use, and the data we share with Facebook, Google, and others. But put us in front of an AI chatbot and we forget all that: suddenly we find ourselves in what feels like a private conversation with a helpful new friend. And we want to share.
But it is clearly not a friend; it is a front for a multi-billion-dollar IT ecosystem, one ultimately funded by advertising and data brokerage.
I’ve warned about this before, with the introduction of AI chat into our private messaging apps. Now, as Bard becomes Gemini and a slew of new apps begin to make their way onto our phones, Google itself has warned all Android and iPhone users to be very careful with these new technologies.
“Please do not enter confidential information in your conversations, or any data you would not want a reviewer to see or Google to use to improve our products, services, and machine-learning technologies,” Google warns. “Google collects your Gemini Apps conversations, product usage information, location information, and feedback.” It says the information is used to “improve and develop Google products and services and machine learning technologies.”
Fortunately, Google assures that “your Gemini Apps conversations are not used to show you advertisements”, although this could change, in which case “we will communicate this clearly to you”.
The risk here is that a massive privacy nightmare will befall those who share too much with the various AI chatbots that help us write our business plans and sales presentations, or cheat on our school assignments. When you leave the chat, the questions you asked and the answers you were given become part of a stored record, one that can be retrieved and reviewed, and that can also potentially leak.
The risks of generative AI are only just beginning to be understood. When it comes to messaging, for example, I warned that Google Gemini (née Bard) would apparently ask to review all of your past private messages to shape the context and content of its suggestions, which would also break the end-to-end encryption of Google Messages.
This open, off-device storage is the real problem here. Google says the data will be stored “by default” for up to 18 months, “which you can change to 3 or 36 months in your Gemini Apps activity settings.” Information about your location, including your device’s general area, IP address, or home or work addresses in your Google account, is also stored with your Gemini Apps activity.
Of course, this isn’t limited to Google. This level of data collection and use is fairly typical across the emerging generative AI industry. It remains to be seen how users will weigh the security and privacy of a Google against that of an OpenAI, for example.
But, as ESET’s Jake Moore warns, “all the data we share online, even on private channels, has the potential to be stored, analyzed and even shared with third parties.” When information is valuable, or even considered a currency in itself, AI models can be designed to further analyze users who disclose large amounts of personal information. Sharing data may ultimately create security and privacy issues in the future and many users are simply unaware of the risks.
Google says you can turn off long-term data collection in Gemini’s settings. “Future conversations will not be sent for human review or used to improve our generative machine learning models,” it says. You can also remove a conversation from your pinned and recent conversations; doing so also deletes the associated activity from your Gemini Apps activity. But even then, “your conversations will be saved with your account for up to 72 hours to allow us to provide the service and process feedback. This activity will not appear in your Gemini Apps activity.”
As I’ve said before, the on-device/off-device nature of the AI analysis that will drive this next generation of smartphone features will become the new great divide. Apple will likely do as much as it can on-device with its own apps and services, where that works, and we know it is now experimenting with balancing on-device and off-device performance. Google will do much more in its cloud, given its very different setup and focus.
For the millions of Android and iPhone users who now have Gemini apps on their devices, tough choices must be made. The rest of us won’t be far behind.
We can’t have it both ways. Either we value our privacy and the enormous progress made in recent years on private browsing, tracking protection, and location sharing, or AI in consumer apps is just too compelling to do without, and we will use it for everything.
If it’s the latter, then it could become the ultimate “be careful what you wish for.”