Humane Intelligence Algorithmic Bias Bounty 3
CHALLENGE 3
Ensuring Fair, Biophysically Informed, and Community-Driven Tree Planting Site Recommendations
Humane Intelligence is thrilled to be launching the third of 10 "algorithmic bias bounty" programs, in partnership with the Indian Forest Service! For this competition, participants will develop innovative solutions and strategies for tree planting site recommendations that are informed by biophysical data and community needs.
The Challenge
Our third challenge, in partnership with the Indian Forest Service, emphasizes the importance of understanding and mitigating bias in decision-making processes while ensuring that local livelihoods are respected. Through three progressive levels (Thought Leadership, Beginner Technical, and Intermediate Technical), participants will explore feature importance, build predictive models, and integrate mechanisms for contestability and fairness into their recommendations. Join us in creating a more equitable approach to reforestation efforts!
Launch Date: Tuesday, November 26th, 2024
Closing Date: Friday, January 10th, 2025
Sign-up sheet: https://forms.gle/5fEeyUY5MG54gN8Q6
Join our Discord community to stay connected with other competitors!
Thought Leadership Level
Task: Conduct a literature review and write a short position piece on ensuring fair, biophysically informed, and community-driven tree planting site recommendations.
Requirements: Highlight both key and unconventional bias considerations in this field and how to mitigate them. Your essay should also describe how contestability can be incorporated algorithmically and/or systematically into planting site decision-making. We also recommend exploring existing methods developed to address this issue and comparing and contrasting them in your position piece.
Format: Word document, 12 pt font, 2,000-2,500 words excluding references, with in-text citations.
Beginner Level
Task: Identify the most important features affecting tree planting feasibility using a provided dataset.
Requirements: Submit a machine learning model that predicts suitability and demonstrates understanding of bias mitigation.
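For orientation only, here is a minimal sketch of how feature importance might be explored once the dataset is released. The file name `planting_sites.csv`, the column names, and the choice of a random forest with permutation importance are assumptions for illustration; they are not part of the official challenge materials.

```python
# Minimal sketch: ranking candidate features for tree planting feasibility.
# The dataset path, column names, and model choice are illustrative assumptions.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

df = pd.read_csv("planting_sites.csv")   # hypothetical challenge dataset
X = df.drop(columns=["suitable"])        # biophysical + community features (categorical columns would need encoding first)
y = df["suitable"]                       # assumed binary suitability label

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

model = RandomForestClassifier(n_estimators=300, random_state=42)
model.fit(X_train, y_train)

# Permutation importance is less biased toward high-cardinality features
# than impurity-based importance, which matters for mixed biophysical data.
result = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=42)
ranking = pd.Series(result.importances_mean, index=X.columns).sort_values(ascending=False)
print(ranking)
```

Comparing feature rankings across subgroups (for example, districts or land-tenure categories) is one simple way to check whether a model leans on features that proxy for community characteristics.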
Intermediate Level
Task: Develop a site recommendation engine that predicts site suitability for tree planting, integrating key features identified in the beginner level.
Requirements: Submit a machine learning model that predicts suitability and demonstrates understanding of bias mitigation.
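As a rough illustration of what a suitability-scoring engine with a basic group-level bias check could look like, the sketch below trains a gradient-boosting classifier and compares recommendation rates across a hypothetical `region` column. The dataset path, column names, threshold, and the demographic-parity-style comparison are illustrative assumptions, not requirements.

```python
# Sketch of a site suitability scorer plus a simple group-level bias check.
# Column names ("suitable", "region") and the 0.5 threshold are illustrative assumptions.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

df = pd.read_csv("planting_sites.csv")   # hypothetical challenge dataset
features = df.drop(columns=["suitable", "region"])
labels = df["suitable"]

X_train, X_test, y_train, y_test, _, region_test = train_test_split(
    features, labels, df["region"], test_size=0.2, random_state=0, stratify=labels
)

engine = GradientBoostingClassifier(random_state=0)
engine.fit(X_train, y_train)

scores = engine.predict_proba(X_test)[:, 1]   # suitability scores in [0, 1]
recommended = scores >= 0.5                   # simple recommendation threshold

# Bias check: compare recommendation rates across regions; large gaps that are
# not explained by biophysical differences deserve scrutiny and documentation.
report = pd.DataFrame({"region": region_test.values, "recommended": recommended})
print(report.groupby("region")["recommended"].mean())
```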
Bonus (optional)
Task: Enhance the recommendation engine by integrating contestability mechanisms that penalize or reward recommendations based on community feedback and fairness criteria.
Requirements: Submit the final model along with documentation of the contestability features implemented.
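One way to read "contestability mechanisms" is as a post-hoc adjustment that penalizes sites with unresolved community objections and rewards sites with documented consent. The toy sketch below illustrates that idea; the field names, penalty, and bonus values are made-up assumptions, and a real submission would need to justify them and document how feedback is collected and verified.

```python
# Toy sketch of a contestability adjustment applied on top of model scores.
# Field names ("objections", "consent") and the weights are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class SiteFeedback:
    site_id: str
    model_score: float   # raw suitability score from the recommendation engine
    objections: int      # unresolved community objections logged against the site
    consent: bool        # documented community consent for planting

def adjusted_score(fb: SiteFeedback,
                   objection_penalty: float = 0.15,
                   consent_bonus: float = 0.10) -> float:
    """Penalize contested sites and reward sites with documented consent."""
    score = fb.model_score
    score -= objection_penalty * fb.objections   # each open objection lowers the score
    if fb.consent:
        score += consent_bonus
    return max(0.0, min(1.0, score))             # keep the score in [0, 1]

sites = [
    SiteFeedback("S-01", 0.82, objections=2, consent=False),
    SiteFeedback("S-02", 0.74, objections=0, consent=True),
]
for fb in sorted(sites, key=adjusted_score, reverse=True):
    print(fb.site_id, round(adjusted_score(fb), 2))
```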
Submission Guidelines
You can submit your solution here any time before Friday, January 10th, 2025, at 11:59:59 PM ET.
Based on feedback from Bias Bounty 2, we are not constraining the types of models and platforms you can use; be creative!
For the Beginner and Intermediate Technical levels, please upload your code to GitHub as a private repository and add Nicole as a collaborator: https://github.com/NicoleScientist
For Thought Leadership position pieces, please upload your Word or PDF document to Google Drive and give Nicole edit access: nicole@humane-intelligence.org
Knowledge Resources
- Exploring Limits to Tree Planting as a Natural Climate Solution: Article
- Global forest restoration and the importance of prioritizing local communities: Article
- Predicting wasteful spending in tree planting programs in Indian Himalaya: Article
- Recognizing the equity implications of restoration priority maps: Article
- Plantations and pastoralists: reforestation activities make pastoralists in the Indian Himalaya vulnerable: Article
The Dates
November 26th, 2024
Competition launches.
January 10th, 2025, 11:59 PM ET
Competition closes.
Stay tuned
Winners announced.
Prizes
| Level | 1st | 2nd | 3rd |
| --- | --- | --- | --- |
| Intermediate Technical | $2,000 | $1,200 | $800 |
| Beginner Technical | $1,500 | $900 | $600 |
| Thought Leadership | $1,500 | $900 | $600 |
FAQs
Bias bounties are not only a vital component of assessing AI models but also a hands-on way for people to critically engage with the potential negative impacts of AI. Bias bounties are similar to the better-known bug bounties in the cybersecurity community, but instead of finding vulnerabilities in code, bias bounties seek to find socio-technical harms. These harms can include factuality issues, bias, misdirection, extremist content, and more. Bias bounties are narrowly scoped challenges, focused on a particular dataset and problem.
Bias bounties complement red teaming exercises, which are broader in scope and primarily focused on breaking the safety guardrails of the AI model.
Each of our challenges will have a set start and end date, with the majority of challenges running for at least one month. Most often the datasets will be hosted on Humane Intelligence’s GitHub, unless the data contains sensitive information. All of our challenges involve cash prizes. The challenge overview will include how the prizes will be distributed amongst the winners of the different skill levels.
We are currently exploring future bounty challenges in these areas: hiring, healthcare, recommender systems, insurance, policing, policymaking, gendered disinformation, elections, disparate impacts, disability, counter-terrorism, and Islamophobia.
Themes are selected with a variety of factors in mind, such as impacts on real world issues, access to data, and the needs of our partner organizations.
If your organization, agency, or company is interested in having your AI models assessed, please contact us. We can coordinate around building a public red teaming challenge, a bias bounty focused on a specific use case, or a private assessment done internally.
Our goal is to represent a globally diverse set of challenges, as these issues touch every corner of the world. If you’d like to promote our challenges for your community to participate in, feel free to contact us for promotion support.
Our bias bounty challenges run the gamut from more technical challenges that involve coding to more accessible ones that involve generating prompts. Our technical challenges often include various skill-level options so a wider range of people can join. Most often, our challenges do not require any prerequisite knowledge to participate.
You can compete in some of our challenges as a team, unless explicitly stated otherwise. You are responsible for organizing your team, dividing the work amongst yourselves, and, if applicable, dividing any winnings amongst yourselves (as only the submitting account/person will receive the prize money). We have dedicated channels on our Discord server for each bias bounty challenge and for finding a partner.
The scope of each challenge will be unique, so be sure to read through the specifics of each challenge level to determine which skills are needed and where your abilities fit best.
We are eagerly seeking outreach opportunities with organizations, universities, and academic institutions around the world to ensure that we have a diverse range of participants. If you'd like to put us in touch with such a group, send us an email at hi@humane-intelligence.org.
We understand that bias bounties are a new concept to many people, so we are actively creating a repository of resources for people to learn.
Discord Community
On our Discord server, there will be channels created for each of our bias bounty challenges for participants to ask questions. Additionally, there is a research channel where community members share the latest in red teaming and bias bounty tactics.
Landscape Analysis
We have an ever-evolving database featuring a landscape analysis of AI evaluations, which includes various organizations (academic, NGO, business, network, government, and others) and resources (papers, tools, events, reports, repositories, indexes, and databases). Users can also search for different AI evaluation categories, such as Benchmarking, Red-Teaming, Bias Bounty, Model Testing, and Field Testing.
Tutorial Videos
For our first bias bounty challenge, one of our Humane Intelligence Fellows created a tutorial video series that walks complete beginners through the process of downloading datasets, creating a coding notebook, analyzing the data, and submitting challenge solutions. While the specifics of challenges will change, the general processes outlined in these videos will remain the same.
Challenge Submission Guidelines
Each of our bias bounty challenges will include an overview, suggestions on how to tackle the issue, and the criteria that will be used for grading. Most often your submission data will be incorporated into a coding notebook that contains code written by our data scientists to assist with the grading.
Each bias bounty will have specific grading criteria released at the launch of the challenge, along with submission instructions. The grading criteria will often differ for each skill level. Submissions will be graded by Humane Intelligence staff according to these criteria.
To see examples of our previous grading criteria: Bias Bounty 1 and Bias Bounty 2 (Intermediate and Advanced).
How to Submit
Detailed instructions on how to submit your solution will be provided in the bias bounty challenge overview. You can only submit to one skill level per competition.
We aim to grow the community of practice of AI auditors and assessors; one way we strive to do so is by sharing what participants learned by completing challenges, as well as broader insights about the particular issue area of each challenge. Participants are also encouraged to share their insights in our Discord community.
Additionally, each of our challenges will include details about how these learnings will be used by us and our external partners to make AI more equitable and safe.
Other Questions?
Find us on our Discord channel