Google and Gemini: Conversations for Eternity
Customize Your Gemini Data Saving Options with Google
You can prevent future Gemini chats from being saved to your Google Account for review by going to the My Activity dashboard and turning off Gemini Apps Activity (which is enabled by default). With that setting off, the three-year retention window does not apply. The Gemini Apps Activity screen also lets you delete individual prompts and conversations.
According to Google, human annotators regularly read, classify, and process Gemini conversations to improve the service. This applies even to conversations that are “disconnected” from Google Accounts. (Despite the implications for data security, Google does not say whether these annotators are employed in-house or contracted out.) Not only are these conversations kept for up to three years, but so is “related data” such as the user’s location, the languages they used, and the devices they used. Google also says it will save Gemini conversations to an Account for up to 72 hours even when Gemini Apps Activity is disabled, in order to “maintain the safety and security of Gemini apps and improve Gemini apps.”
The Wild West of Data Retention
Google’s policy, however, illustrates how difficult it is to build GenAI models that improve themselves using user data while also protecting users’ privacy. Last July, the Federal Trade Commission (FTC) asked OpenAI for specifics on how it verifies the accuracy of the data used to train its models, including customer information, and the safeguards it uses to prevent unauthorized parties from accessing that data. Internationally, the Italian Data Protection Authority found no “legal basis” for OpenAI’s massive data gathering and storage practices related to GenAI model training.
Concerns about potential privacy breaches are making enterprises cautious about the proliferation of GenAI products. According to a recent Cisco survey, 63% of businesses have restricted the kinds of data that can be entered into GenAI tools, while 27% have banned GenAI outright. In the same survey, 45% of workers admitted to entering “problematic” data, including sensitive company information and personal details, into GenAI platforms. Organizations that do not want their data retained for any purpose, including model training, can choose from a range of GenAI offerings from vendors such as OpenAI, Microsoft, Amazon, Google, and others. Consumers, however, typically get the short end of the stick.