FairNow & Humanly Present: Ethical AI Governance & Best Practices For AI-Powered Recruiting
Transcript
Possibly beyond.
I was head of people analytics technology and strategy at Capital One. And in fact, my time there is what led us to start FairNow. We were building a lot of AI technology solutions, starting to use vendor solutions as well, and realized governance was very, very important. And while we were able to build really good governance practices at Capital One, we thought there was a real need in most HR organizations to understand how to manage the risk of AI solutions, whether they were building them internally or leveraging them through their vendors. So awesome. Let's get started.
So what you're gonna learn today, a few things: what the regulations out there are, what good AI governance looks like, and how you evaluate vendors like Humanly, for example, which we'll talk about in the course of today.
So first of all, I wanna start with what AI is. Because when I talk about AI, often people's heads go to ChatGPT and all the generative AI that's out there today. But most of you, most HR organizations, most organizations, period, have already been using AI for quite some time. And the way regulators define it, the way I define it, the way data scientists often define it, AI is much broader than generative AI. It's any technological system that takes data as input, learns from that data, and does one of three things: makes predictions, makes recommendations, or generates content.
Obviously, generative AI is where the content generation piece comes in. But many of your organizations are already doing predictive analytics, using regression modeling and machine learning techniques to make predictions, and that's AI. You see that on this page. And so already, you should be thinking about, hey, how do you manage risk and govern these models?
AI in the HR space has been growing by leaps and bounds for the last decade, in particular in the talent acquisition space. And Matt's gonna talk about that later on in terms of how Humanly plays a role there. But here are some examples of ways that large organizations are already using AI in talent acquisition, from interacting and communicating with candidates to filtering, screening, and sourcing candidates. Right?
And the beauty of it is AI actually has a ton of potential. Right?
While we're going to talk about risks and risk management, the reason we're talking about risk and risk management is because AI is such a powerful technology. The value prop is so significant. But at the same time, you wanna make sure you're managing the downside.
A few examples: at Capital One, my previous team was able to build AI that was not only more efficient, but was less biased than the human process and led to a higher quality of hires.
So that's the trifecta. Right? If you can be more efficient, have better hiring efficacy, and be less biased, that's an amazing outcome to have. And we were able to do that with AI. But, again, we had very, very strict governance and controls around these models. There was a recent study that showed, basically, that female candidates actually preferred a recruiting process driven by AI as opposed to one driven by humans.
And while that may be shocking to some, I think that just highlights the fact that while AI has its risks, so does the status quo.
And many groups feel the current status quo already doesn't work well for them or for people like them. Right? And so we shouldn't say, hey, let's throw the baby out with the bathwater and not use AI, because that's not where the future is going.
Any organization that shies away from using AI, or is hesitant to use AI, or doesn't plan on using it, I think is going to be at a disadvantage in the future. So it's less a matter of let's not use this, and more how do we use this responsibly? How do we use this well?
Along the way, as you can imagine and as many of you are probably following, there are a lot of regulations starting to come out. Right? This is just the list that we are tracking. At FairNow, we're an AI governance platform; you'll learn a little bit more about us later. But one of our jobs is to track all the laws and legislation relevant to AI. And you can see at various stages where they are in the pipeline.
The most recent one was Illinois HB 3773, which just came out. California has multiple regs out there. Everyone's heard of the New York City local law, but Colorado and Utah have also passed legislation recently. This has all happened in the last year. And, of course, the big one is the EU AI Act, which goes into effect fairly soon.
Now how do you manage all of these things? Right? Some of these laws are quite comprehensive.
Many of them include HR as a high-risk domain category in their scope. Right? So you have to be thoughtful about being compliant with these laws and legislation.
A few common themes. One is that in most of these regs, both the enterprise customer and the vendor are in scope. So I get this question often: hey, I'm an HR organization, I'm using AI through my vendors.
Can I just use their documentation to be compliant? And the answer is mostly no. Right? The vendor has a responsibility to demonstrate that they are responsible, but so do you.
And there's a variety of reasons for that. Right? The training data that the vendor uses to train their models might be different from the data you use. Also, you may be using the applications differently than how the vendor intended, so they can't be responsible for that.
Right? So you own the deployment of these technologies, and so you need to be well governed.
Like I said, in almost all of these laws, the HR and talent space is considered high risk, which is natural. Right? These applications are making or influencing decisions around hiring, around sourcing, around talent management and promotion and pay. These are career decisions that these applications and algorithms are impacting.
Right? And one of the other themes that we're seeing is the expectation of an AI governance program, which we'll talk about again. What does that look like?
A lot of organizations are investing in AI; I think over half of large HR organizations are already investing in AI today. And, again, remember, it's not just gen AI. Right? It's predictive models, recommender systems, etcetera.
At the same time, the number of lawsuits has gone up. Right? And there are some examples here. The big one that's in the press these days is the Workday lawsuit, but there have been others as well. And so you really want to balance that desire and motivation to use AI with the risk management.
Here's an example that came up in one of the exercises that we've done with customers.
Here's ChatGPT. And I know there are organizations and even technologies out there that use gen AI to interact with candidates, to do job recommendations, even potentially to source, select, and filter candidates. But here's a very clever example: there was a scenario of an individual who was laid off, needed to go back to work, and was looking for roles. Right? Conditional on the same set of skills, but different in the original home country, Mexico versus Germany, which is bolded here. Look how different the job recommendations are from ChatGPT. Just take a second.
Right? It's quite astonishing.
Right? And, again, this is not with any kind of differences in skills or backgrounds or anything like that. This is controlling for that. Right? And so this is just one of numerous examples of how blindly using this technology without proper procedures, controls, testing, and governance can really lead you astray and get you in trouble as an organization.
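To make that kind of check systematic rather than anecdotal, here's a minimal sketch of a paired-prompt test in Python. The profile wording and the `get_recommendations` callable are illustrative assumptions, not the system shown on the slide; you would plug in whichever LLM or recommender you're evaluating.

```python
# Paired-prompt test: hold the profile constant, vary exactly one
# attribute (here, home country), and diff the recommendations.
# `get_recommendations` is a hypothetical stand-in for the model
# under evaluation; it should return a list of job titles.

BASE_PROFILE = (
    "Candidate with 8 years of logistics experience, fluent in English, "
    "recently laid off and looking for similar roles. Home country: {country}."
)

def compare_recommendations(get_recommendations, countries=("Mexico", "Germany")):
    results = {
        c: set(get_recommendations(BASE_PROFILE.format(country=c)))
        for c in countries
    }
    a, b = countries
    shared = results[a] & results[b]
    print("Shared recommendations:", sorted(shared))
    for c in countries:
        # Anything that appears for only one country, with skills held
        # constant, is a signal worth investigating.
        print(f"Only recommended for {c}:", sorted(results[c] - shared))
```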
As I mentioned, more than half of large organizations are already using AI in HR today, but often it's through a vendor. Right? Most HR organizations don't have huge data science teams, so they're often not building it internally themselves. It's often coming through a vendor.
They don't know what that application is doing. They often don't know if they've even turned on AI. They don't know what that AI is doing. How effective is it?
Is it actually returning the ROI that you are promised? Is it biased? Is it compliant?
If you don't know the answer to those questions, that's not gonna work any longer. Right? And so I'd be very curious, if we have time at the end to understand how many of you actually know the answers to those questions.
So here's what we're seeing in this space. At FairNow, we've been talking to a lot of organizations on both sides of the market. Right? We've been talking to a lot of HR tech vendors as well as a lot of HR organizations. And one of the things we're hearing is that questions in the sales cycle are starting to increase. More and more organizations are starting to ask questions of their vendors, and we'll talk about what those questions should look like. Questions like: hey, what kind of data do you train your models on?
Have you done a bias audit? Are you compliant with regs? Are you certified under any kind of standard?
If something goes wrong, how do you remediate? Etcetera, etcetera. Right? And we actually have a book that we can share with you afterwards, where we go through twelve essential questions to ask vendors.
Right? And that's gonna be an important part of the process, and vendors should be ready to answer those questions. Matt's gonna talk through how Humanly is able to do that. But that's a really important part of managing the vendor risk that you will own when you work with a vendor that has AI in their systems.
I promised you at the very beginning to describe what good governance looks like. And so this is our governance framework.
Governance 101, first and foremost: inventory all your AI applications, whether they're internally built or come through a vendor, doesn't matter. Know what you have. When I talk to HR leaders, very few of them know all the AI applications they have in their organization. That's step one.
Inventory your AI applications. Collect metadata about these applications. What is the data that's going into these systems? What's the data that's coming out?
How are they being used? What decisions are being made? Where are they being used? All that metadata really, really matters.
And based on that metadata, you can do risk assessments. Governance should be very, very easy and thin for low to medium risk applications.
The governance really should bite for higher risk applications. Right?
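To make the inventory, metadata, and risk-tiering steps concrete, here's a minimal sketch in Python. The field names and the tiering rule are illustrative assumptions, not a prescribed standard; the point is that once the metadata is structured, the risk tier falls out of it.

```python
# Illustrative AI inventory entry: capture what goes in, what comes out,
# and what decision it influences, then derive a risk tier from that.
from dataclasses import dataclass

@dataclass
class AIApplication:
    name: str
    source: str               # "internal" or a vendor name
    input_data: list[str]     # e.g. ["resumes", "assessment scores"]
    output: str               # e.g. "ranked shortlist of candidates"
    decision_influenced: str  # e.g. "hiring", "promotion", "none"

    @property
    def risk_tier(self) -> str:
        # Employment decisions are treated as high risk under most of the
        # regulations discussed above (EU AI Act, Colorado, NYC).
        high_risk = {"hiring", "promotion", "pay", "termination"}
        return "high" if self.decision_influenced in high_risk else "low/medium"

inventory = [
    AIApplication("Resume screener", "VendorX", ["resumes"],
                  "fit score per applicant", "hiring"),
    AIApplication("FAQ chatbot", "internal", ["candidate questions"],
                  "canned answers", "none"),
]
for app in inventory:
    print(app.name, "->", app.risk_tier)  # screener: high, chatbot: low/medium
```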
You should establish roles and accountability. Humans in the loop are very, very important in the deployment of AI. Where are the critical points where humans should be making decisions: about the development of the technology, about the deployment of the technology, and about decisions made around the data coming out? What are the internal policies and controls you have?
There's a set of policies you want to have as an organization to set your guardrails. What are the red lines where you will not use AI, from an ethical standpoint?
What are your policies around transparency? Will you tell candidates that they are interacting with AI? What's your remediation plan if something does go wrong?
If you find there is actually bias in the system, how will you remediate it? Right? Regulatory compliance, naturally. And you want to get on some kind of cadence, depending on how often your models are changing, to test and monitor your models for a variety of things, from data privacy to bias to reliability.
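As one concrete example of what that testing step can look like, here's a minimal sketch of a selection-rate impact-ratio check, the metric behind the EEOC four-fifths rule and the bias audits required under the New York City law. The counts here are made up purely for illustration.

```python
# Impact ratio: each group's selection rate divided by the rate of the
# most-favored group. Ratios below 0.8 are the conventional red flag
# for adverse impact (the "four-fifths rule").

def impact_ratios(selected: dict[str, int], total: dict[str, int]) -> dict[str, float]:
    rates = {g: selected[g] / total[g] for g in total}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

# Illustrative counts only: 48/100 of group A selected vs. 30/100 of group B.
ratios = impact_ratios(
    selected={"group_a": 48, "group_b": 30},
    total={"group_a": 100, "group_b": 100},
)
for group, ratio in ratios.items():
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} ({flag})")
    # group_a: 1.00 (ok), group_b: 0.62 (REVIEW)
```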
So this is a good framework to have in mind when you think about instituting an AI governance program in your organization. The last pillar, which we didn't include on this page, is training. Right? Making sure the right folks in your organization are trained on good governance and on ethics, and understand their roles and responsibilities and what role they play. That will also be crucial.
So what do we do? Well, we're a platform that helps you manage your governance program. Right? We believe more and more organizations are going to need to institute a governance program in their HR organizations just to implement the previous page.
And we have lived this. Again, during my time at Capital One, I lived this stuff. It's painful. Governance can be painful, annoying, manual. And so our goal is to make governance as easy as possible for you.
Right? So then you can focus on the fun stuff. You can focus on building and on using the actual AI applications. And so we track all the laws and regulations.
We help you ensure that you're compliant with these regs. And then we also have a bias testing suite that we've actually used with customers to ensure their models are fair and not biased.
And in fact, our case study with Humanly, which is a good segue, was in fact using that testing suite to test Humanly's technologies for bias. Humanly is able to use our partnership to build trust with their customers and show that, hey, we've been audited by a third party and we've ensured that our models are not biased. And that's how we've worked with Humanly. We work with both sides of the market, the vendor side as well as HR organizations, because, again, we wanna be that trust builder in the space and ensure that both sides are using and building technology that's not biased, that's compliant, and well managed. So, again, if you have any questions, feel free to reach out to me at guru@fairnow.ai.
But at this point, I'd like to segue to Matt.
Thank you, Guru. Let me see here. I'm sharing my screen. Let me introduce myself real quick.
It's great to be talking to everybody. I'm Matt Raymond. I'm the VP of product at Humanly. Prior to Humanly, I was head of product and eventually CEO of a company called Teamable.
So I am super excited to go through how Humanly approaches the entire framework around ethical AI and how we work with folks like Guru and his team at FairNow. I do wanna reinforce that this is actually something I pulled from a FairNow post on LinkedIn.
Just, again, in terms of reinforcement, we are seeing this unfold in real time in terms of the regulatory bodies and the way that AI is going to be governed between now and the next several years. Right? And it's really important for companies like us at Humanly that we are constantly on the cutting edge of what's happening from an innovation standpoint, and that we're working with folks like Guru and his team at FairNow to make sure that, while we always have the best intentions, we always need third-party oversight, governance, and ways to make sure our philosophical approach and the way we're implementing new technology are always meeting ethical AI standards and, of course, the governance and regulatory bodies that are helping us look after those things.
So the basic definition here is: ethical AI is artificial intelligence that adheres to well-defined ethical guidelines and values, including such things as individual rights, privacy, non-discrimination, and non-manipulation.
So we are holding AI to a very human standard. Right? We wanna hold AI to the same standard we would hold each other to in terms of accountability and transparency in terms of how it intersects with our day to day, especially in HR tech recruiting and talent acquisition.
This is a wonderful visual. I'll keep it brief, but this is a framework borrowed from Microsoft. Our CEO, Prem, spent many years at Microsoft before cofounding Humanly. And I find this to be just a great way of explaining how ethical AI works in practice.
The two spheres here are, one: is your model, the underpinning of the AI, inclusive, equitable, accountable, and ultimately ethical in terms of how it recommends, makes predictions, and helps you scale yourself operationally? Does it make you better at your job in an ethical way? The second half is the explainability of that to you, back to Guru's point about the human in the loop. Specifically, do you understand how the AI is working?
Not everybody on this call, myself included, is a data scientist. I don't write AI algorithms myself. I implement them from a product-level perspective, and the UI, the UX, the transparency, and the explainability about what's happening under the covers, to allow you some control, some ability to steer, and some ability to open up that black box and make sure the AI is doing what you expect and want it to do, is extremely important. So you wanna operate right in the middle of this Venn diagram, in a reliable and safe way.
And this framework is a great way for us to stay honest and a great way for us to work with folks like Guru and FairNow on auditing that process and our philosophical approach. I won't go through everything on this table, but this is essentially the recruitment value chain and the various ways that AI is applicable across that value chain.
You know, one of the things I like to say as we look at this is: ask me what this table, this value chain, should look like a month or a year from now. It will be very different. More boxes will be green. There will be more use cases added to this.
That's good news. Right? That means there's a high opportunity to continue to innovate and ultimately become more efficient, especially as we talk about the recruitment value chain. Again, that's why it's so important to stay on top of what's happening from a regulatory standpoint and to make sure all the new things you're adding are covered too.
And this is what we're passionate about at Humanly, and every day we're looking to innovate on the AI that we already have. We have to continue to be cognizant and aware, and use FairNow, as an example, to help us audit our processes. I won't go through this in a ton of detail, but what I wanted to reinforce, based on Guru's Q&A that you should have from a vendor standpoint, is that everybody on this call is likely very good at interviewing a human to ascertain this type of information about a prospective role or prospective opportunity.
You should take the same exact approach when you're assessing new technology to bring into your operational workflow, or to augment your workflow, from an HR tech perspective and ultimately from a recruiting and talent acquisition perspective. So I would take a very similar approach, a framework that you're very comfortable with in terms of that interview process. The book Guru shared from FairNow is a great way of learning what those questions should be. But categorically, across the different dimensions, it's very much set up in your favor to treat it as if you're interviewing a human.
So AI can make things happen faster, more efficiently, and with higher quality. It can also make bad things happen faster, more efficiently, and at a higher quality of bad. Right? So the few examples I wanna talk through are: what does this mean, how do we boil this down to some use cases, how does this unfold behind the scenes, and what is the ethical approach around all these things philosophically. I'm just showing a screenshot from our Humanly app right here. It's a generic candidate with my headshot on there.
What this ultimately could be is an applicant for a job, maybe a candidate that I'm passively reviewing and may wanna reach out to.
What is important about this is, as a human, as I review this candidate, there are several things that may push me one way or another in my decision making for this candidate, from an unconscious or conscious perspective, on the bias scale. And that could be my image. It could be my name. It could be my location.
That could even be some of the companies that I've worked for. A couple of quick anecdotes help explain this. Going through hundreds of hours of analytics around interviews, we found that a Seattle-based recruiting team will spend an additional minute and a half making small talk with candidates that are in the greater Seattle area. Now, how that ultimately affects their decision-making process is a separate conversation, but it could be pushing them in one direction or the other just from knowing the location. Another thing we've heard a lot from recruiting teams, when they're trying to get through hundreds or thousands of applicants for a specific job: if I don't know the company or companies a candidate has worked for historically, it may push me in one direction or another in my decision-making process, because I don't have enough context, knowledge, or information to make a well-rounded decision about that candidate. So getting additional information about all these dimensions may help me be a better recruiter.
But what ultimately makes you the best and most unbiased reviewer of talent is removing the elements that may push you in one direction or another. So as I review applicants and candidates for a specific opportunity, this is my best view of that candidate, in order to rule that bias out. And the correlation here is that the AI underpinning this decision making, to make you more efficient and to help recommend and predict what might be the best fit for a specific role, should take the same approach. Right?
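Here's a minimal sketch of that anonymized-review idea in Python: strip the fields that can trigger conscious or unconscious bias before a human (or a model) scores the candidate. The field names are illustrative assumptions, not Humanly's actual schema.

```python
# Remove bias-prone fields so only job-relevant signal remains.
BIAS_PRONE_FIELDS = {"name", "photo_url", "location", "employer_names"}

def anonymize(candidate: dict) -> dict:
    """Return a review-ready view with bias-prone fields removed."""
    return {k: v for k, v in candidate.items() if k not in BIAS_PRONE_FIELDS}

candidate = {
    "name": "Matt Raymond",
    "photo_url": "https://example.com/headshot.jpg",
    "location": "Seattle, WA",
    "employer_names": ["Teamable", "Humanly"],
    "skills": ["product strategy", "UX research"],
    "years_experience": 12,
}
print(anonymize(candidate))  # only skills and years_experience remain
```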
We've talked about all the human aspects and correlations so far, how I interact with this as a human being, whether I'm interviewing vendors or otherwise. This is another great general parallel I like to paint: your AI should be evaluating candidates the same way a human should, in the most unbiased way, be reviewing candidates. Another quick example: here I'm looking at a very standard example template, in this case for a product designer role. All that matters here is that I've highlighted a few things that may help power my search to match candidates to it.
And what I wanna make sure I talk about is twofold. First, AI can help make these searches more intelligent. You may want to look for a product designer, but you may or may not know all the different titles that also correspond to a product designer. Or what if you have a keyword like "human centric"? AI can help discern that and find a taxonomy of words and skills that map to human centric: user research in this case, corporate identity, UX design. There are all these things AI can do to help broaden your search, be more inclusive and more equitable, and find the best talent for a specific role or opportunity.
But what you need to have is the explanation, the transparency around why. Right? If I'm recommending a candidate to you that has, say, experience design or heuristic design, I should be able to draw the parallel for you as a recruiter, through how we create that transparency in the UI and UX. Think about it as a consumer: folks that like human centric also like heuristic design; folks that like user research also like design research. These are great ways to leverage AI from a recruiting, sourcing, and review perspective, and the transparency is key in making that connection.
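One common way to implement that kind of taxonomy expansion is embedding similarity; here's a minimal sketch. The tiny hand-written vectors stand in for a real embedding model, and the vocabulary is an illustrative assumption, not Humanly's actual taxonomy. Returning the similarity score alongside each expanded term is what supports the "transparency around why."

```python
# Query expansion via cosine similarity over skill embeddings.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

# Toy vectors standing in for a real embedding model.
SKILL_VECTORS = {
    "human centric": [0.9, 0.1, 0.2],
    "user research": [0.8, 0.2, 0.3],
    "heuristic design": [0.7, 0.3, 0.2],
    "corporate identity": [0.2, 0.9, 0.1],
}

def expand(term: str, threshold: float = 0.9):
    query = SKILL_VECTORS[term]
    # Keep the score so the UI can explain *why* each term was added.
    return [(t, round(cosine(query, v), 2))
            for t, v in SKILL_VECTORS.items()
            if t != term and cosine(query, v) >= threshold]

print(expand("human centric"))
# [('user research', 0.98), ('heuristic design', 0.96)]
# "corporate identity" falls below the threshold and is excluded.
```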
The last thing I wanna say in terms of use cases, especially on the Humanly side, is that with our chat and conversational AI, choice is extremely important in the application of AI, specifically in a conversational AI capacity.
Right? So if you have chatbots that are interacting with your candidates, we have a spectrum of customers that have a spectrum of risk tolerance that they're willing to assume at any point in time. And a lot of that's just a deployment trajectory.
But in this instance, you may have a Q&A chatbot that is meant to ascertain and solicit more information from candidates, but also to provide transparency about you as a company: how you hire, what your culture is like, what the benefits look like. If you provide that experience to a specific candidate or subset of candidates visiting your career site, you may want a very curated way for that chatbot to respond to candidates asking questions about all those various aspects in the Q&A scenario. Or you may want a little more conversational AI to be part of that.
You may want bespoke answers to specific questions, and you may want to tap into LLMs so the candidate can ask anything freely in that experience. "Why is the sky blue?" may have nothing to do with your careers page, the jobs you have, or the open recs you have, but you may wanna provide a great AI-driven experience.
And the advent of LLMs makes answering questions like that easier and easier as we continue to integrate them into our technology, into our chat flows, into our chatbots. So the idea behind all of these examples is that AI is about choice. AI is about control. AI is about transparency: effectively giving you the knobs and dials to control your risk tolerance, and giving you, as the human in the loop, the ability to engage and make better, well-informed decisions about how the AI is helping you, in terms of efficiency, in terms of decision making, and everything in between.
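Here's a minimal sketch of what one such "knob" could look like: a per-customer setting that controls how freely the chatbot may answer, from fully curated responses to open LLM generation. The mode names and routing logic are illustrative assumptions, not Humanly's actual implementation.

```python
# A response-mode knob spanning the risk-tolerance spectrum described above.
from enum import Enum

class ResponseMode(Enum):
    CURATED = "curated"  # only pre-approved answers
    HYBRID = "hybrid"    # LLM rephrases, grounded in approved content only
    OPEN = "open"        # LLM may answer free-form questions

def answer(question: str, mode: ResponseMode, approved: dict[str, str]) -> str:
    if question in approved:
        return approved[question]  # always prefer curated content when it exists
    if mode is ResponseMode.CURATED:
        return "I'll connect you with a recruiter for that one."
    # In HYBRID/OPEN modes, this is where an LLM call would go; HYBRID
    # would constrain it to the approved corpus, OPEN would not.
    return "[LLM-generated answer]"

faq = {"What are the benefits?": "We offer health coverage and a 401(k)."}
print(answer("What are the benefits?", ResponseMode.CURATED, faq))
print(answer("Why is the sky blue?", ResponseMode.CURATED, faq))
```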
So the last thing I'll say is: AI will never replace recruiters, but recruiters using AI intelligently will replace those that don't. And I'll also say: recruiters using AI intelligently and ethically will replace those that do not. Right? So with that, I will turn it back over to Kayla to close this out. Thank you so much.
Yeah. Thank you both so much for that conversation. I always find it so fascinating, and concerning, how some of those biases play into role recommendations, as well as, like you said with Seattle, just how much time is spent with one person over another. So thank you both. I know we've got just a couple more minutes here. I do have a couple of questions I will throw out from the audience. Okay.
A great time to get them in the chat. This is a rare opportunity when we're all here at the same time. So if you have any questions, you can pop those into the chat or the Q&A. I've got one here. Guru, this one is for you: how do you know if your AI vendors are staying up to date?
So if you're an HR organization and you're using a handful of vendors, and they're using AI, how do you know if they're staying up to date?
I think it's a little hard to hear, Kayla, but I think what I heard you say is: how do you know if the AI vendors are staying up to date with regards to standards and regulations that are coming out? Is that correct?
Yes.
Yeah. There are a couple of responses there. One, you should ask them. Right?
Again, this is part of the sales cycle, part of the RFP process. You should be asking them for documentation of any standards they are certified under, NIST, for instance, or ISO. These are new standards that are emerging around responsible AI. But also ask them for audits that have been done, or compliance reports for regulations that are in place for them.
Now you may be asking, hey. How do I know what regulations they will be in scope for and all of that stuff? And that's exactly what I was saying before. That's what FairNow helps with.
Right? We simplify all of that friction for you. So if you know what vendor tool and technology you're using and you've inventoried it on our platform, we'll automatically tell you what you should be asking them. Hey:
Are you compliant with the New York City law, for instance? Are you compliant with the new Colorado law? Our technology will automatically highlight that for you, and you'll know exactly what to ask. And in fact, the platform itself can automatically collect that information from your vendor.
So we know how complicated this is, and that's why we're here to help.
Awesome. I have one for you here as well, Matt. With your customers, I know you said there's customization around how much AI they incorporate. Where do you see them on that spectrum? Do you feel like your customers are comfortable including different pieces of LLMs right now, or do you feel like they're more skeptical?
It's a great question. I don't know that it's skepticism. I think it's a process. Right? What we're seeing is customers moving from very specific curated answers, just to get comfortable with the technology, to opening the aperture on the corpus of information that our chatbots can access, owned, governed, and curated by our customers.
From an LLM response and conversational AI perspective, we then have, like I said, a larger corpus of information with which to respond more freely and be more interactive. And the next step in the process for those customers usually becomes: if, where, and how do I include a broader set of elements outside the scope of what I fully govern, to continue to add to that experience at the candidate level. So I think it's a spectrum in terms of where they're at with their comfort level, with their risk aversion, and with their process and understanding of how it all works from a candidate experience perspective, so that they can continue to open the aperture and feel like they're in control and confident with what the next step is.
And for my own curiosity, do you have a sense of what that has looked like? Let's say over the last year, do you feel like your clients are getting more comfortable, or is there hesitation there?
Yeah. And customers are certainly getting more comfortable.
And a lot of it, like the interaction between Humanly and customers, is making sure that we can shepherd them through that process. Right? So letting them know what the art of the possible is by taking that next step, what the right controls should be from a governance perspective on their end, and how that might influence and change the experience at the candidate level. So I think you're seeing an increase in confidence, not a lowering of the risk threshold, but a better understanding of the technology.
And we love being able to share with them through that process, so they understand what happens next and what could happen next based on opening that aperture.
Oh, absolutely.
Awesome. Well, thank you for the important work that you do in this space. It's been a pleasure to be able to partner with you, and we'll continue to as time goes on. From here, oh, I think I've got one last question here. Let me see.
Yeah. A question about AI and HR more generally. Cheryl, I'm gonna loop back with you one on one, offline. And Guru and Matt, I wanna thank you for your time here today.
This is where you can catch us next. I know we'll be at HR Tech, and, you know, we will be at breakfast. We'll make sure you have a recording of the entire session as well as the slides. And if any questions come up, you have two experts here. So thank you both for your time.
Thanks, Kayla. Thanks, Matt.
Thanks, everyone. Have a good day. Bye. Pleasure. Thanks again. Bye-bye.