Building Ethical AI for Recruiting: Learning and Unlearning
We're proud to lead on ethical AI in hiring
How do we create AI without bias? This is a touchy and intricate subject across all business functions, but especially in HR and recruiting. This article starts the conversation and provides resources from industry leaders that can help reduce bias in AI.
The fact of the matter is, AI isn't inherently biased. When AI exhibits bias, it's because of human error: we build AI to mirror ourselves, so machine learning can take on the bias of its builders. AI is also only as accurate as the data you feed it, and bad data can encode bias based on gender, age, or race. That kind of data has already led to a few notable cases of discrimination at companies like Amazon and Facebook.
So how do we create AI without bias?
Identifying accurate data, drawing on multiple sources of data, and understanding the limitations of your data are the building blocks of responsible AI (see the short data-audit sketch after the list below). We need to take an approach grounded in trust and responsibility. Like Uncle Ben said, “With great power comes great responsibility,” and AI certainly has those superhuman-like strengths. There are also many helpful resources from industry leaders that can help reduce bias in AI and see that it's used fairly:
- The AI Now Institute publishes annual research reports about bias
- Google AI created recommended practices for fairness around AI
- The European Union High-Level Expert Group on AI created guidelines around removing bias
- Groups like IBM have created new methodologies to reduce discrimination in AI
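To make the data-quality point concrete, here is a minimal sketch of the kind of audit we mean: counting group representation overall and per data source in a hypothetical candidate dataset. The field names and records are illustrative assumptions, not a real pipeline.

```python
# A minimal sketch of a training-data audit, assuming a hypothetical
# candidate dataset with a self-reported demographic column. Checking
# representation per source is one way to surface "bad data" before training.
from collections import Counter

def audit_representation(records, group_key="gender", source_key="source"):
    """Report group counts overall and per data source."""
    overall = Counter(r[group_key] for r in records)
    by_source = {}
    for r in records:
        by_source.setdefault(r[source_key], Counter())[r[group_key]] += 1
    return overall, by_source

records = [
    {"source": "job_board", "gender": "female"},
    {"source": "job_board", "gender": "male"},
    {"source": "referrals", "gender": "male"},
    {"source": "referrals", "gender": "male"},
]
overall, by_source = audit_representation(records)
print(overall)    # Counter({'male': 3, 'female': 1}) - a skew worth investigating
print(by_source)  # referrals skew male here; a single source can carry the bias
```

A skew like this doesn't prove bias on its own, but it flags where a closer look at your sources is warranted.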
At the end of the day, it's up to us to keep improving to ensure AI is fair for all. When we do, the benefits work in our favor, because a just and fair AI is exceptionally beneficial. In many cases, AI can help reduce our subjective interpretation of data: machine learning algorithms learn to examine only the variables that improve their predictive accuracy on the data they're given. Evidence suggests that algorithms can improve decision making, and that AI can become fairer in the process of learning.
Especially in the world of recruiting and hiring, AI can create a more ethical process for screening and hiring candidates. Bias can surface just from reading what candidates put on their resumes or from interviewing candidates face-to-face. AI can perform basic-level filtering on objective criteria: Are you old enough to apply? Are you authorized to work in the country where the job is located? Can you work the hours needed?
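To illustrate, here is a minimal sketch of that kind of objective, basic-level filter. The fields and thresholds are hypothetical assumptions for the example, not Humanly's actual screening logic.

```python
# A minimal sketch of "basic level filtering" on objective requirements
# (minimum age, work authorization, availability). All field names and
# thresholds are illustrative assumptions.
def meets_basic_requirements(candidate, min_age=18, required_hours=40):
    return (
        candidate["age"] >= min_age
        and candidate["work_authorized"]            # eligible to work in the country
        and candidate["available_hours"] >= required_hours
    )

applicant = {"age": 24, "work_authorized": True, "available_hours": 40}
print(meets_basic_requirements(applicant))  # True
```

Because every candidate is checked against the same explicit criteria, this step can't quietly weigh a name, a photo, or a hunch.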
In the realm of the job search, a resume is the golden ticket that helps candidates score an interview, yet even the skills highlighted on a resume can be judged. When AI does the screening, words, phrases, and tone aren't weighted with human bias, and an AI resume-builder can also help translate skills from one field into another. One of the most critical questions for a business to consider is how it can use AI to help eliminate bias from hiring.
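One common technique here is blind screening: stripping out fields that carry demographic signal before any scoring happens. The sketch below is a simplified illustration with hypothetical field names; note that redaction alone doesn't remove proxies like zip codes or school names.

```python
# A minimal sketch of blind screening: redacting fields that commonly
# leak demographic signal before a model ever sees the resume. Field
# names are hypothetical; real resume parsing is far messier.
BIAS_PRONE_FIELDS = {"name", "photo_url", "date_of_birth", "address"}

def redact_resume(parsed_resume):
    """Keep only job-relevant fields for downstream scoring."""
    return {k: v for k, v in parsed_resume.items() if k not in BIAS_PRONE_FIELDS}

resume = {
    "name": "Jordan Smith",
    "date_of_birth": "1990-04-01",
    "skills": ["sql", "project management"],
    "years_experience": 6,
}
print(redact_resume(resume))  # {'skills': [...], 'years_experience': 6}
```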
Building AI to unlearn bias in recruiting and being transparent about the process is top of mind for us at Humanly, and other companies agree.
Addressing issues like bias and privacy is an emerging priority. 52% of leaders in AI say that companies need to make data and analytics transparent and comprehensible to consumers and non-scientists.
Being transparent and easy to understand about how we use AI makes issues like bias less intimidating, and it's critical to the success of an AI product. If companies don't address the adverse impacts of AI themselves, data privacy and ethical issues could become very heavily regulated, hopefully without major limitations on the people who use it. Along with becoming more ethical, AI is also becoming safer for people to use thanks to more regulation.
AI isn’t operating in a completely lawless space anymore. While AI with machine learning and natural language processing may be a newer product category, there’s now emerging governance around AI that helps make sure it’s safe to use and in the consumer’s best interest. This is happening in the United States and around the world.
Autonomous vehicles are currently the most talked-about AI to be regulated, but another area of AI where rules have been put in place is data privacy. Take our AI platform: it's built to ensure the personal data of potential and existing customers is protected. We incorporate the latest security safeguards to minimize customer risk associated with GDPR compliance, malicious cyberattacks, and information theft.
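As one illustration of what such a safeguard can look like in practice, here is a minimal sketch of pseudonymizing a direct identifier with a keyed hash before it reaches analytics. The key handling is an assumption for the example; a real deployment would use a managed secret store plus documented retention and erasure procedures.

```python
# A minimal sketch of one common GDPR-style safeguard: pseudonymizing
# direct identifiers with a keyed hash so analytics never touch raw PII.
import hashlib
import hmac

SECRET_KEY = b"rotate-me-and-store-in-a-secret-manager"  # assumption, not real config

def pseudonymize(value: str) -> str:
    return hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()[:16]

candidate = {"email": "candidate@example.com", "score": 0.82}
safe_record = {"candidate_id": pseudonymize(candidate["email"]), "score": candidate["score"]}
print(safe_record)  # the raw email never leaves the ingestion boundary
```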
There are also self-governance approaches to AI. The fire warden approach to AI governance gives teams a way to escalate issues that need immediate attention. This approach supports the environment of innovation and agility that businesses need to remain competitive, and being agile means you can evolve alongside ever-changing AI.
As more laws are passed around the use of AI, we can take these steps ourselves, side-by-side with efforts to create unbiased AI. Staying accountable with up-to-date information ensures an accurate baseline for AI. At Humanly.io, we value preserving and protecting data while serving society as a whole with a strong ethical compass. The truth is, once we've established fair and safe data, we can move on to another important aspect of AI: making it more human. Let's push what is humanly possible. Start your next conversation with Humanly and book time with me.
[Original full article, published on 10/31/2020]