Quickpost | London
Google’s new AI Overviews feature is transforming how people search for health information. Instead of clicking through multiple websites, users are now shown an AI-generated summary at the top of search results.
But a recent study of German-language searches has raised serious concerns: when answering health-related questions, Google's AI appears to cite YouTube more often than hospitals, medical institutions, or government health websites.
With millions of users viewing these AI summaries every day, the reliability of such information is now under intense scrutiny.
What the Research Found
A large-scale analysis conducted by SE Ranking, a digital marketing and SEO research firm, examined 50,807 health-related search queries in the German language.
The findings reveal a clear pattern:
- More than 82% of health searches displayed Google’s AI Overviews
- YouTube emerged as the single most frequently cited source, accounting for 4.43% of all references
- Combined references from hospitals, clinics, and government health sites were lower than YouTube alone
- Academic journals and official public health agencies together accounted for only about 1%
- Only 34% of cited sources were classified as reliable medical websites
- The remaining 66% lacked expert medical review or strict editorial oversight
Researchers warn that this heavy reliance on non-clinical sources significantly increases the risk of misinformation.
Is Popularity Replacing Quality?
Another striking discovery was how Google’s AI selects its sources.
Only 36% of the webpages cited in AI Overviews appear in the top 10 traditional Google search results. This suggests that AI Overviews are not following standard ranking signals.
Experts believe the AI prioritizes:
- Video engagement
- Simpler explanations
- High view counts and interaction
Because YouTube videos often use accessible language and visual demonstrations, the AI may treat them as more “understandable” — even when medical accuracy is not guaranteed.
The Real Risk of Misinformation
An investigation by The Guardian identified several misleading medical recommendations appearing in Google’s AI Overviews, including:
- Incorrect dietary advice for pancreatic cancer patients
- Misinterpretation of liver function test results
Medical experts say these errors often arise from oversimplified explanations and non-medical content sources, directly conflicting with Google’s own YMYL (Your Money or Your Life) policy, which requires the highest standards for health-related information.
What This Means for Users and the Health Sector
Health-related searches often influence life-changing decisions. Doctors and researchers warn that users may begin treating AI summaries as final medical advice rather than a starting point.
Experts recommend:
- Treating AI Overviews only as preliminary information, not a diagnosis
- Consulting qualified medical professionals for diagnosis and treatment
- That health institutions produce more video-based, accessible, and digitally optimized content so reliable information reaches the public
Final Thoughts
Google’s AI has made search faster and more convenient, but in the health sector, speed without accuracy can be dangerous. Without a stronger balance between credible medical sources and popular content, AI-driven search risks doing harm alongside good.
Sources
- SE Ranking https://seranking.com/blog/google-ai-overviews-study/
- The Guardian https://www.theguardian.com/technology/google-ai-health-search
- Google AI Documentation https://developers.google.com/search/docs/appearance/ai-overviews