Google has admitted it removed a search feature that used AI to surface medical tips from online strangers, weeks after the tool’s existence was publicly confirmed. Called “What People Suggest,” the feature drew on community health discussions and used AI to organize them for users. Sources close to the decision confirmed that the feature is no longer operational, even though Google has offered no clear public messaging about the change.
Unveiled with considerable optimism at Google’s New York health event last spring, the feature was pitched as a meaningful development in how users discover health information. Then-chief health officer Karen DeSalvo highlighted its potential to connect people with relevant lived experiences, particularly those managing chronic health conditions. The tool rolled out first to mobile users in the United States.
Google’s explanation for the removal has been challenged on transparency grounds. The company describes it as a routine simplification of the search interface, unrelated to safety concerns, and points to a blog post that never mentions the feature. That disconnect has drawn criticism from digital rights advocates, who argue the company should have made a clear public statement about pulling a health-related AI product.
This episode is part of a larger story about Google’s troubled relationship with AI-generated health content. An investigation this year documented false and misleading medical information in AI Overviews on Google Search, a feature that reaches billions of users every month. Critics described the subsequent, limited removal of health-related AI Overviews as damage control rather than a genuine systemic fix.
Google’s upcoming health event offers another opportunity to reshape the narrative around its AI health strategy. Executives are expected to outline new research efforts and partnerships aimed at improving health outcomes through technology. For that message to resonate, however, Google will need to show that it has learned from failures like “What People Suggest” and that accountability is now built into how it develops and deploys AI health products.