Google has upgraded its generative AI chatbot, Gemini (formerly Bard), and released an Android app. Users are advised not to share private information with the chatbot, since human reviewers may read some conversations to improve its quality.
Google’s generative AI chatbot, Bard, was rebranded as Gemini following a round of updates that also introduced a new Android app and the Ultra 1.0 language model. After the announcement, Google warned all Gemini users not to share private information with the chatbot.
Google’s support documents state that humans review some Gemini conversations to improve the chatbot. Google promises to review only a subset of conversations and to erase personal information, such as email addresses and phone numbers, before review.
Google advises against discussing sensitive topics with Gemini. It says reviewers can see pooled conversation data, which Google may use to improve its products and services.
For privacy, users can turn off Gemini Apps Activity. The Gemini Apps Activity page at myactivity.google.com/product/gemini lets users review their prompts and delete past conversations.
When users interact with Gemini, Google collects their conversations, location, feedback, and usage statistics. When Gemini is set up as a mobile assistant, it may collect additional data to better understand and respond to user requests.
Google says user data helps improve its products, services, and machine-learning capabilities, including Gemini Apps.
Conclusion
Google’s Gemini generative AI chatbot now comes with a new Android app and the Ultra 1.0 language model. The company warns users not to reveal private information to the chatbot. Google promises that only a portion of conversations are reviewed by humans, and notes that submitted data may be used to improve its products and services. Users can protect their privacy by turning off Gemini Apps Activity and by reviewing or deleting their past prompts and conversations.