AI Content Moderation with Google's Ninny Wan

Aug 13, 2025
37:01

Show Notes

Google's Ninny Wan, Product Lead for AI Content Safety, joins to discuss the evolution of AI content moderation in the age of GenAI. The conversation covers Google's approach to semantic understanding, multilingual moderation across 140+ languages, synthetic data generation for training, and the balance between user freedom and safety. Ninny shares insights on transformer models, human-in-the-loop processes, cross-functional safety reviews, and Google's on-device privacy-compliant features like sensitive content warnings.

Key Topics Covered

  • Evolution of AI content moderation from UGC to GenAI
  • Semantic understanding and transformer models for contextual content analysis
  • Multilingual moderation coverage across 140+ languages
  • Synthetic data generation for privacy-compliant model training
  • Human-in-the-loop processes and continuous learning pipelines
  • Cross-functional safety review boards and product launch evaluations
  • On-device privacy features like sensitive content warnings
  • Balancing user freedom with content safety and cultural considerations

Episode Chapters & Transcript

00:00

Intro: AI Content Safety at Google

Hermes welcomes Ninny Wan, Product Lead at Google, to discuss the evolution of AI content safety and moderation in the age of GenAI.

01:26

Shifting Focus: GenAI and Nuanced Safety

Ninny explains how content moderation at Google evolved with the rise of GenAI, requiring specialized approaches for different platforms and customers.

02:46

Initial Challenges and Synthetic Data

Ninny reflects on the early chaos of GenAI moderation, the importance of clear abuse definitions, and how synthetic data became crucial for scaling safely.

05:50

Semantic Moderation Lessons Learned

Discover how generalized classifiers help Google scale moderation efficiently across diverse product teams while handling ever-changing abuse vectors.

08:07

Transformer Models and Contextual Awareness

Explore the power of transformer models and self-attention in enabling nuanced understanding of content, context, and long-range dependencies in abuse detection.
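
For readers who want the mechanics behind this chapter, here is a minimal sketch of single-head scaled dot-product self-attention, the core transformer operation that weighs every token against every other token. The toy dimensions and random projection weights are illustrative only and say nothing about Google's production models.

```python
import numpy as np

def self_attention(x: np.ndarray) -> np.ndarray:
    """Single-head scaled dot-product self-attention (toy version).

    x: (seq_len, d_model) token embeddings. Every token attends to every
    other token, which is how transformers capture the long-range context
    that keyword- or bag-of-words-style moderation misses.
    """
    d_model = x.shape[-1]
    rng = np.random.default_rng(0)
    # In a trained model these projections are learned; random here.
    w_q, w_k, w_v = (rng.standard_normal((d_model, d_model)) for _ in range(3))
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.T / np.sqrt(d_model)              # pairwise token affinities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over the sequence
    return weights @ v                               # context-mixed representations

tokens = np.random.default_rng(1).standard_normal((6, 16))  # a 6-token "sentence"
print(self_attention(tokens).shape)  # (6, 16)
```

Because the attention weights span the whole sequence, a harmful phrase buried in otherwise benign text can still influence the representation of every other token, which is what enables the contextual awareness discussed in this chapter.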

10:28

Multilingual Moderation at Scale

Ninny explains how Google's models are trained across 140+ languages using synthetic data, ensuring equitable safety coverage worldwide.
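
As a rough illustration of the synthetic-data idea (not Google's actual pipeline), one can prompt a generative model to write labeled policy-violation examples in each target language, so the training set contains no real user content. The `generate` function below is a hypothetical stand-in for an LLM client, and the language and policy lists are placeholders.

```python
from dataclasses import dataclass

LANGUAGES = ["en", "hi", "sw", "pt", "ja"]             # a handful of the 140+ targets
POLICIES = ["harassment", "hate_speech", "self_harm"]  # placeholder policy labels

@dataclass
class Example:
    text: str
    label: str
    lang: str

def generate(prompt: str) -> str:
    # Hypothetical stand-in for a generative-model call; a real pipeline
    # would invoke an LLM client here.
    return f"<synthetic sentence for: {prompt[:50]}...>"

def synthesize(per_pair: int = 50) -> list[Example]:
    """Build a privacy-compliant training set: every row is model-written,
    so no real user content ever enters the pipeline."""
    dataset = []
    for lang in LANGUAGES:
        for policy in POLICIES:
            for _ in range(per_pair):
                text = generate(
                    f"Write one short sentence in '{lang}' that a moderation "
                    f"policy would flag as {policy}. Vary tone and topic."
                )
                dataset.append(Example(text=text, label=policy, lang=lang))
    return dataset

print(len(synthesize(per_pair=2)))  # 5 languages x 3 policies x 2 = 30 examples
```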

11:51

Continuous Learning and Emerging Threats

Learn how Google’s continuous learning pipelines improve model velocity and help respond to new abuse trends in real time.
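
A continuous-learning pipeline can be sketched as a loop that folds freshly labeled examples into training and promotes a new model only when it beats the incumbent on a fixed holdout set. Everything here (`train_fn`, `eval_fn`, `review_queue`) is a hypothetical stand-in, not a description of Google's infrastructure.

```python
def continuous_learning_step(model, train_fn, eval_fn, review_queue, holdout):
    """One iteration of a hypothetical continuous-learning loop.

    review_queue: freshly human-labeled examples (new abuse trends).
    holdout:      a fixed evaluation set that guards against regressions.
    """
    new_examples = review_queue.drain()        # pull the latest labels
    candidate = train_fn(model, new_examples)  # fine-tune on fresh data
    # Promote the candidate only if it beats the incumbent on held-out
    # data, so velocity never comes at the cost of accuracy.
    if eval_fn(candidate, holdout) > eval_fn(model, holdout):
        return candidate
    return model
```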

13:21

Safety Thresholds and Product Integration

All GenAI products at Google must pass a formal safety review. Ninny shares how a centralized review board helps products ship responsibly.

15:00

Sensitive Content Warnings and On-Device Privacy

Ninny highlights a flagship on-device feature—nudity detection with opt-in user control—built to preserve user privacy while improving safety.
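
The privacy property described here is easy to express in code: the classifier runs locally and nothing is transmitted. This is a hedged sketch of the pattern, not the actual implementation; the model object, threshold, and function names are invented for illustration.

```python
def maybe_warn(image_bytes: bytes, user_opted_in: bool, local_model) -> bool:
    """Hypothetical on-device sensitive-content check.

    Returns True if a warning overlay should be shown. The image is
    scored by a model running locally; no pixels or scores ever leave
    the device, which is what makes the feature privacy-compliant.
    """
    if not user_opted_in:                     # the feature is opt-in
        return False
    score = local_model.predict(image_bytes)  # runs entirely on-device
    return score > 0.85                       # illustrative threshold
```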

17:36

Human-in-the-Loop and Living Policy

Why human reviewers remain essential to content moderation. Ninny describes how labelers help shape evolving policy and reinforce model accuracy.
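
One common human-in-the-loop pattern, shown below as a sketch rather than Google's actual triage logic, is confidence-based routing: clear-cut predictions are handled automatically, while the ambiguous middle band is escalated to human labelers whose decisions become new training data and surface gaps in written policy.

```python
def route(score: float, low: float = 0.3, high: float = 0.9) -> str:
    """Hypothetical confidence-based routing for human-in-the-loop review.

    Clear-cut cases are handled automatically; the ambiguous middle band
    goes to human labelers, whose decisions feed back into training data
    and help refine written policy.
    """
    if score >= high:
        return "auto_action"   # confident violation: act automatically
    if score <= low:
        return "auto_allow"    # confident non-violation: leave it up
    return "human_review"      # ambiguous: escalate to a labeler

print(route(0.95), route(0.05), route(0.6))  # auto_action auto_allow human_review
```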

20:14

The Role of Google's Review Board

Inside Google’s formalized, cross-functional review process that includes policy experts, red teams, and product advisors to secure GenAI launches.

22:41

Making Safety APIs Available to Developers

Ninny discusses Google’s efforts to offer moderation capabilities to third-party (3P) developers, including challenges in defining abuse across different platforms.
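
To make the developer-facing idea concrete, here is a purely hypothetical sketch of consuming such a moderation API. The endpoint URL, request schema, and score fields are invented for illustration and do not correspond to any real Google API; the per-platform thresholds reflect the point made in this chapter, namely that each product defines abuse differently.

```python
import json
import urllib.request

def moderate(text: str, api_key: str) -> dict:
    """POST text to a hypothetical moderation endpoint (invented URL and
    schema, not a real Google API) and return per-policy scores in [0, 1]."""
    req = urllib.request.Request(
        "https://example.com/v1/moderate",
        data=json.dumps({"text": text}).encode("utf-8"),
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Each platform chooses its own thresholds, because "abuse" is defined
# differently from product to product.
THRESHOLDS = {"harassment": 0.8, "hate_speech": 0.7}

def violates(scores: dict) -> bool:
    return any(scores.get(policy, 0.0) > t for policy, t in THRESHOLDS.items())
```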

25:39

Partner Feedback and Model Updates

A look at how 3P partner feedback—via account managers or policy discussions—informs Google’s safety models, UX improvements, and configuration options.

28:39

Balancing Freedom and Safety

Moderation isn’t one-size-fits-all. Ninny reflects on the daily challenge of preserving freedom of expression while fostering healthier online spaces.

31:01

Navigating Cultural Expectations

How Google accounts for cultural differences in defining abuse, with special attention to cross-cultural sensitivities and expectations.

33:50

Wild Card: Human-in-the-Loop Passion

Ninny answers the final wildcard question, revealing her strong interest in the human-in-the-loop side of AI as the front line of moderation and insight.

36:01

Closing Thoughts

Hermes thanks Ninny and wraps the episode with a reminder to subscribe and follow along for more insights from leaders in Conversational AI.

Tags

#ai content moderation, #google, #content safety, #ninny wan, #genai moderation, #semantic understanding, #multilingual ai, #synthetic data, #transformer models, #human-in-the-loop, #on-device ai, #privacy-compliant ai, #content policy, #safety evaluation