AI Content Moderation with Google's Ninny Wan
Show Notes
Google's Ninny Wan, Product Lead for AI Content Safety, joins the show to discuss the evolution of AI content moderation in the age of GenAI. The conversation covers Google's approach to semantic understanding, multilingual moderation across 140+ languages, synthetic data generation for training, and the balance between user freedom and safety. Ninny shares insights on transformer models, human-in-the-loop processes, cross-functional safety reviews, and Google's on-device, privacy-compliant features like sensitive content warnings.
Key Topics Covered
- Evolution of AI content moderation from UGC to GenAI
- Semantic understanding and transformer models for contextual content analysis
- Multilingual moderation coverage across 140+ languages
- Synthetic data generation for privacy-compliant model training
- Human-in-the-loop processes and continuous learning pipelines
- Cross-functional safety review boards and product launch evaluations
- On-device privacy features like sensitive content warnings
- Balancing user freedom with content safety and cultural considerations
Episode Chapters & Transcript
Intro: AI Content Safety at Google
Hermes welcomes Ninny Wan, Product Lead at Google, to discuss the evolution of AI content safety and moderation in the age of GenAI.
Shifting Focus: GenAI and Nuanced Safety
Ninny explains how content moderation at Google evolved with the rise of GenAI, requiring specialized approaches for different platforms and customers.
Initial Challenges and Synthetic Data
Ninny reflects on the early chaos of GenAI moderation, the importance of clear abuse definitions, and how synthetic data became crucial for scaling safely.
Semantic Moderation Lessons Learned
Discover how generalized classifiers help Google scale moderation efficiently across diverse product teams while handling ever-changing abuse vectors.
Transformer Models and Contextual Awareness
Explore the power of transformer models and self-attention in enabling nuanced understanding of content, context, and long-term dependencies in abuse detection.
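To make the self-attention idea concrete, here is a minimal, dependency-free sketch of scaled dot-product self-attention, the core mechanism the chapter refers to. In a real transformer, queries, keys, and values come from learned linear projections and there are multiple attention heads; this toy version uses the raw token vectors directly, purely to illustrate how each token's output becomes a context-weighted mix of the whole sequence.

```python
import math

def self_attention(x):
    # x: list of token vectors (each a list of floats).
    # Minimal sketch of scaled dot-product self-attention; a real
    # transformer first applies learned query/key/value projections
    # and uses multiple heads.
    d = len(x[0])
    out = []
    for q in x:
        # similarity of this token to every token in the sequence
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in x]
        m = max(scores)
        exps = [math.exp(s - m) for s in scores]
        total = sum(exps)
        weights = [e / total for e in exps]  # softmax: attention weights
        # each output vector is a context-weighted mix of all tokens,
        # which is what lets the model capture long-range context
        out.append([sum(w * v[j] for w, v in zip(weights, x)) for j in range(d)])
    return out

tokens = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
ctx = self_attention(tokens)
print(len(ctx), len(ctx[0]))  # 3 2
```

Because every token attends to every other token, the representation of a word depends on its full context rather than only its neighbors, which is why these models can distinguish, say, a slur from a reclaimed in-group usage.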
Multilingual Moderation at Scale
Ninny explains how Google's models are trained across 140+ languages using synthetic data, ensuring equitable safety coverage worldwide.
Continuous Learning and Emerging Threats
Learn how Google’s continuous learning pipelines improve model velocity and help respond to new abuse trends in real time.
Safety Thresholds and Product Integration
All GenAI products at Google must pass a formal safety review. Ninny shares how a centralized review board helps products ship responsibly.
Sensitive Content Warnings and On-Device Privacy
Ninny highlights a flagship on-device feature—nudity detection with opt-in user control—built to preserve user privacy while improving safety.
Human-in-the-Loop and Living Policy
Why human reviewers remain essential to content moderation. Ninny describes how labelers help shape evolving policy and reinforce model accuracy.
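One common way to structure the human-in-the-loop pattern described here is confidence-based triage: the model auto-decides clear-cut cases and routes uncertain ones to human reviewers, whose verdicts then feed back into training. This is a hypothetical sketch of that routing logic; the function and label names are illustrative, not Google's actual pipeline.

```python
# Hypothetical sketch of human-in-the-loop triage: confident model
# predictions are acted on automatically, while borderline cases are
# routed to human labelers whose verdicts become new training data.
# All names and thresholds here are illustrative assumptions.

def triage(item_id, abuse_score, confident=0.9):
    """Auto-decide confident cases; route the rest to human review."""
    if abuse_score >= confident:
        return ("auto_remove", None)
    if abuse_score <= 1 - confident:
        return ("auto_allow", None)
    # a labeler's verdict on this item would be queued as a fresh
    # training example, closing the continuous-learning loop
    return ("needs_review", item_id)

print(triage("msg-1", 0.97))  # ('auto_remove', None)
print(triage("msg-2", 0.50))  # ('needs_review', 'msg-2')
```

The reviewed cases are exactly the ones the model finds hardest, so each human verdict is a high-value label for the next training round.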
The Role of Google's Review Board
Inside Google’s formalized, cross-functional review process that includes policy experts, red teams, and product advisors to secure GenAI launches.
Making Safety APIs Available to Developers
Ninny discusses Google’s efforts to offer moderation capabilities to third-party (3P) developers, including the challenge of defining abuse consistently across different platforms.
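The "abuse differs by platform" challenge is often handled by letting each integrator apply its own policy thresholds to a shared model's scores. The sketch below shows that idea; the label names, scores, and thresholds are illustrative assumptions, not a real Google API response.

```python
# Hypothetical sketch: a third-party integrator applies its own policy
# thresholds to scores returned by a shared safety classifier. Label
# names, scores, and thresholds are illustrative, not a real API.

def apply_policy(scores, thresholds):
    """Flag any label whose model score meets the platform's threshold."""
    return sorted(label for label, s in scores.items()
                  if s >= thresholds.get(label, 1.0))

# One model, two platforms: a kids' app sets strict thresholds while an
# adult forum tolerates more, so the same content is flagged differently.
scores = {"toxicity": 0.91, "harassment": 0.35, "spam": 0.70}
kids_app = {"toxicity": 0.5, "harassment": 0.3, "spam": 0.6}
adult_forum = {"toxicity": 0.95, "harassment": 0.8, "spam": 0.9}
print(apply_policy(scores, kids_app))     # ['harassment', 'spam', 'toxicity']
print(apply_policy(scores, adult_forum))  # []
```

Shipping scores plus configurable thresholds, rather than hard verdicts, is one way a single moderation service can serve customers with very different definitions of abuse.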
Partner Feedback and Model Updates
A look at how 3P partner feedback—via account managers or policy discussions—informs Google’s safety models, UX improvements, and configuration options.
Balancing Freedom and Safety
Moderation isn’t one-size-fits-all. Ninny reflects on the daily challenge of preserving freedom of expression while fostering healthier online spaces.
Navigating Cultural Expectations
How Google accounts for cultural differences in defining abuse, with special attention to cross-cultural sensitivities and expectations.
Wild Card: Human-in-the-Loop Passion
Ninny answers the final wildcard question, revealing her strong interest in the human-in-the-loop side of AI as the front line of moderation and insight.
Closing Thoughts
Hermes thanks Ninny and wraps the episode with a reminder to subscribe and follow along for more insights from leaders in Conversational AI.