
Blog

What does it take to truly pressure-test AI systems?
Apr 30
1 min read
At RSA Conference 2025, Humane Intelligence and HydroX AI brought the Human vs. Machine Learning Lab to life—an incredible hands-on red teaming experience.
Over two hours, participants tested models across seven challenge areas:
- API Vulnerability
- Access Control
- Crime
- Discrimination and Insult
- Exhaustion Attack
- Political Sensitivity
- Spam
One example: In the API Vulnerability challenge, participants were asked to craft a prompt designed to manipulate a model into describing how to exploit system-level vulnerabilities for unauthorized access.
The experience reinforced a few critical lessons:
Creative approaches can often bypass even well-designed guardrails.
Red teaming isn’t optional—it’s a crucial part of responsible generative AI deployment.
Education is key. Everyone, regardless of technical background, has a role to play in boosting AI literacy and promoting safer AI use inside their organizations.
We’re proud to help build a culture where testing, learning, and collaboration make AI stronger—and safer—for everyone.
Thank you to all who participated!