
Humane Intelligence Algorithmic Bias Bounty 2

CHALLENGE 2

Advancing AI to uncover hidden extremist propaganda

Humane Intelligence is thrilled to launch the second of 10 "algorithmic bias bounty" programs, which will unfold over the coming year. With the support of Google.org, we are creating themed programs that build community and professionalize the practice of algorithmic assessment. This second challenge is in partnership with Revontulet, a human-centric company that provides intelligence and analysis to help clients around the globe mitigate risk and prevent harm caused by terrorism and violent extremism.

This challenge is now closed. Thank you to all who participated.

Bias Bounty 2 Winners!

We are thrilled to announce the winners of our second Bias Bounty Challenge! This challenge was focused on developing computer vision models capable of detecting, extracting, and interpreting hateful image-based propaganda content often manipulated to evade detection on social media platforms.

A huge thank you to all participants who contributed their expertise, and congratulations to our winners!

We are so proud of the innovative approaches and dedication shown by all participants. Stay tuned for our upcoming Bias Bounty Challenge 3 to continue pushing the boundaries of algorithmic assessment and ethical AI development.
 

Advanced:

Mayowa Osibodu
Devon Artis

Intermediate:

Gabriela Barrera
Blake Chambers
Chia-Yen Chen

The Challenge

Our second challenge, in partnership with Revontulet, focuses on counterterrorism in computer vision (CV) applications. Centering on far-right extremist groups in Europe, the Nordic region, and beyond, as well as the rise of AI-generated content, the goal is to train a CV model to understand the ways in which hateful image propaganda can be disguised and manipulated to evade detection on social media platforms. Participants will be tasked with developing an image identification solution capable of extracting, discerning, and interpreting context related to counterterrorism. The outcome of this challenge will assess the robustness, and reveal potential gaps in the feasibility and efficacy, of applying machine learning as a tool in counterterrorism efforts. Challenge participants compete for $10,000 in prizes: $4,000 for intermediate and $6,000 for advanced challenge winners. In addition, your work may be featured in Revontulet's suite of counterterrorism solutions.

Important considerations:

  • Due to the sensitive nature of this topic, we are hosting only intermediate and advanced challenges, skewing participation toward more experienced practitioners.

  • Participants will need to sign this waiver in order to receive the dataset. In the waiver, you attest that you will use the data appropriately for the challenge.

  • We are only accepting applicants who are age 18+, given the sensitivity of the subject.

  • If you experience distress during the course of this challenge, please stop working on the challenge and email us at hi@humane-intelligence.org, or message us on our Discord channel. We will provide mental health resources.

Intermediate

The task is to build an unsupervised machine learning model that groups unlabeled images into two clusters, identifying whether or not an image contains extremist content.
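One way to approach this (a minimal sketch, not the official solution) is to represent each image as a feature vector, for example an embedding from a pretrained network, and cluster the vectors with k-means into two groups. The sketch below substitutes synthetic feature vectors for real image embeddings and uses scikit-learn's KMeans; the group means and dimensions are illustrative assumptions only:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Stand-ins for image embeddings (e.g., from a pretrained CNN):
# two synthetic groups of 512-dimensional feature vectors.
group_a = rng.normal(loc=0.0, scale=1.0, size=(100, 512))
group_b = rng.normal(loc=3.0, scale=1.0, size=(100, 512))
features = np.vstack([group_a, group_b])

# Group the unlabeled vectors into 2 clusters.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(features)
labels = kmeans.labels_

# Each image now carries a cluster id (0 or 1). Which cluster
# corresponds to extremist content must be judged afterwards,
# since k-means assigns cluster ids arbitrarily.
print(sorted(set(labels.tolist())))  # → [0, 1]
```

Note that the clusters are unlabeled by construction: a human (or a small labeled holdout set) is still needed to decide which cluster id maps to "extremist content."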

Advanced

Building on the intermediate challenge, the task is to create adversarial examples that test the robustness of your unsupervised model.
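For intuition only: against a clustering model, one simple adversarial strategy is to perturb a point toward the opposite cluster's centroid until its assignment flips, then measure how small that perturbation can be. The sketch below fits a 2-cluster k-means on synthetic stand-in embeddings and flips one point's assignment; the step size and data parameters are illustrative assumptions, and real attacks on image models would perturb pixels, not embeddings:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Fit a simple 2-cluster model on synthetic stand-in embeddings.
data = np.vstack([rng.normal(0.0, 1.0, (100, 64)),
                  rng.normal(4.0, 1.0, (100, 64))])
model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(data)

def adversarial_shift(x, model, step=0.05, max_steps=200):
    """Nudge a point toward the other cluster's centroid until
    the model's assignment flips; returns (perturbed point,
    original label, new label)."""
    original = model.predict(x.reshape(1, -1))[0]
    target = model.cluster_centers_[1 - original]
    adv = x.copy()
    for _ in range(max_steps):
        adv = adv + step * (target - adv)  # small move toward target
        if model.predict(adv.reshape(1, -1))[0] != original:
            break
    return adv, original, model.predict(adv.reshape(1, -1))[0]

adv, before, after = adversarial_shift(data[0], model)
print(before, after)  # assignment flips after a modest perturbation
```

A robustness evaluation would then report the perturbation magnitude (e.g., the L2 distance between the original and flipped point) across many samples: the smaller the flip-inducing perturbation, the more fragile the model.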

The Dates

September 26, 2024

Challenge opens.

October 28, 2024

First day of accepting submissions to the competition.

November 7, 2024, 11:59 ET

Competition closes.

November 25, 2024

Winners announced.
 


Stay in touch!

Sign up to stay up to date on upcoming challenges, events and to receive our newsletter.

FAQs

  • Bias bounties are not only a vital component of assessing AI models but are a hands-on way for people to critically engage with the potential negative impacts of AI. Bias bounties are similar to the better-known bug bounties in the cybersecurity community, but instead of finding vulnerabilities in code, bias bounties seek out socio-technical harms. These harms can include factuality issues, bias, misdirection, extremist content, and more. Bias bounties are narrowly scoped challenges, focused on a particular data set and problem.

    Bias bounties complement red teaming exercises, which are broader in scope and primarily focused on breaking the safety guardrails of the AI model.

  • Each of our challenges will have a set start and end date, with the majority of challenges running for at least one month. Most often the datasets will be hosted on Humane Intelligence’s GitHub, unless the data contains sensitive information. All of our challenges involve cash prizes. The challenge overview will include how the prizes will be distributed amongst the winners of the different skill levels.

  • We are currently exploring future bounty challenges in these areas: hiring, healthcare, recommender systems, insurance, policing, policymaking, gendered disinformation, elections, disparate impacts, disability, counter-terrorism, and islamophobia.

    Themes are selected with a variety of factors in mind, such as impacts on real world issues, access to data, and the needs of our partner organizations.

    If your organization, agency, or company is interested in having your AI models assessed, please contact us. We can coordinate around building a public red teaming challenge, a bias bounty focused on a specific use case, or a private assessment done internally.

    Our goal is to represent a globally diverse set of challenges, as these issues touch every corner of the world. If you’d like to promote our challenges for your community to participate in, feel free to contact us for promotion support.

  • Our bias bounty challenges run the gamut from the more technical, which involve coding, to the more accessible, which involve generating prompts. Our technical challenges often include various skill level options so a wider range of people can join. Most often our challenges do not require any prerequisite knowledge to participate.

    You can compete in some of our challenges as a team, unless explicitly stated otherwise. You are responsible for organizing your team, dividing the work amongst yourselves, and, if applicable, dividing any winnings amongst yourselves (as only the submitting account/person will receive the prize money). We have dedicated channels on our Discord server for each bias bounty challenge and for finding a partner.

    The scope of each challenge is unique, so be sure to read through the specifics of each challenge to determine what skills are needed at each level.

    We are eagerly seeking outreach opportunities with organizations, universities, and academic institutions around the world to ensure that we have a diverse range of participants. If you’d like to put us in touch with such a group, send us an email at hi@humane-intelligence.org

  • We understand that bias bounties are a new concept to many people, so we are actively creating a repository of resources for people to learn.

    Discord Community

    On our Discord server, there will be channels created for each of our bias bounty challenges for participants to ask questions. Additionally, there is a research channel where community members share the latest in red teaming and bias bounty tactics.

    Landscape Analysis

    We have an ever-evolving database featuring a landscape analysis of AI evaluations, which includes various organizations (academic, NGO, business, network, government, and others) and resources (papers, tools, events, reports, repositories, indexes, and databases). Users can also search for different AI evaluation categories, such as Benchmarking, Red-Teaming, Bias Bounty, Model Testing, and Field Testing.

    Tutorial Videos

    For our first bias bounty challenge, one of our Humane Intelligence Fellows created a tutorial video series that walks complete beginners through the process of downloading datasets, creating a coding notebook, analyzing the data, and submitting challenge solutions. While the specifics of challenges will change, the general processes outlined in these videos will remain the same.

    Challenge Submission Guidelines

    Each of our bias bounty challenges will include an overview, suggestions on how to tackle the issue, and the criteria that will be used for grading. Most often your submission data will be incorporated into a coding notebook that contains code written by our data scientists to assist with the grading.

  • Each bias bounty will have specific grading criteria, released at the launch of the challenge alongside submission instructions. The grading criteria will often differ for each skill level. Submissions will be graded by Humane Intelligence staff following these criteria.

    To see examples of our previous grading criteria: Bias Bounty 1 and Bias Bounty 2 (Intermediate and Advanced).

    How to Submit

    Detailed instructions on how to submit your solution will be provided in the bias bounty challenge overview. You can only submit to one skill level per competition.

  • We aim to grow the community of practice of AI auditors and assessors; one way we strive to do so is through sharing what participants learned by completing challenges, as well as the broader insights gained about the particular issue area of each challenge. Participants are also encouraged to share their insights in our Discord community.

    Additionally, each of our challenges will include details about how these learnings will be used by us and our external partners to make AI more equitable and safe.

  • Yes. 

Other Questions? 

Find us on our Discord channel


Support our work.

We welcome event sponsorships and donations.
