With the world turning its attention to the rapid growth of Artificial Intelligence (AI), questions are also being asked about how this growth affects priorities such as Diversity, Equity and Inclusion (DEI). Listening to the ‘More than a Glitch’ episode of The Good Robot podcast makes us aware of the different challenges, and perhaps opportunities, in aligning AI and DEI. First, the hosts acknowledge that AI is neither a good nor a bad thing in itself; rather, technology defies these binary structures, so we should consider the context in which we use AI so that it impacts DEI efforts and initiatives positively.
Three things stood out for me in this podcast:
1. Bias still exists in AI systems. These systems are still framed by people, each with their own perceptions, understanding and knowledge. That is why we need more diverse individuals within the AI space, to overcome power imbalances and collective unconscious bias.
2. Technology / AI systems are unable to address underlying systemic social and cultural issues. AI is not a magical tool that automatically makes life easier; instead, AI learns from historical data, using mathematical patterns in that data to make predictions and suggestions. Therefore, if one group has historically been more privileged than another, AI systems will reinforce that privilege (a minimal sketch of this mechanism follows this list).
3. When we “design for accessibility, everyone benefits”. Although it is complex and costly to restructure legacy systems to keep pace with advancements in the DEI space, when we carefully consider how we build, and continue to build, AI systems, addressing a problem can benefit the targeted community or group and still have a domino effect that benefits everyone.
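To make point 2 concrete, here is a minimal, self-contained Python sketch of how a system that learns to imitate historical hiring decisions ends up reproducing a historical disparity. The groups, scores and thresholds are entirely invented, and real screening models are far more complex, but the mechanism is the same.

```python
import random

random.seed(0)

# Invented "historical" hiring records. Both groups have the same
# qualification distribution, but in this fictional history group B
# effectively faced a higher bar (75) than group A (60).
def make_history(n=10_000):
    records = []
    for _ in range(n):
        group = random.choice(["A", "B"])
        score = random.uniform(0, 100)
        hired = score >= (60 if group == "A" else 75)
        records.append((group, score, hired))
    return records

history = make_history()

# "Training" stands in for fitting a real classifier: for each group,
# learn the lowest score that was ever hired and treat it as the bar.
def learned_bar(records, group):
    return min(s for g, s, hired in records if g == group and hired)

model = {g: learned_bar(history, g) for g in ("A", "B")}
print("Learned bars:", model)  # roughly {'A': 60.0, 'B': 75.0}

# Two new applicants with identical scores now get different outcomes,
# because the model faithfully reproduces the old, biased decisions.
print("A at 70:", 70 >= model["A"])  # True
print("B at 70:", 70 >= model["B"])  # False
```

The model never needs to be told to discriminate; faithfully imitating a biased history is enough.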
So, what does this mean for the Talent industry?
One of the major takeaways from this podcast was the advice to approach new and changing technologies incrementally, with a sense of friction, so as to prevent discriminatory measures along the way. In the HR world, when we look at platforms, or even the sourcing tools our teams will engage with regularly, it’s worth asking ourselves what the high risks of using them are. We should carefully consider whether adopting a new technology will increase exclusionary behaviours.
We have more candidates on the market than we have jobs, so driving efficiency is key to productivity. But as we review these AI structures, we need to ask the hard questions. What is the historical data reinforcing? I would be intrigued to know what coding and legacy structures are scanning job descriptions and resumes, as sketched below. Are you scanning for a particular skillset or education, and is that equitable and supportive of your inclusion strategies? We will start to see more research in the coming years.
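As an illustration of the kind of legacy structure I mean, here is a minimal keyword screener in Python. The keywords and resume snippets are hypothetical, and this is not any particular vendor’s algorithm; it simply shows how a seemingly neutral education filter can screen out an experienced, self-taught candidate.

```python
import re

# Hypothetical screening rule: every keyword must appear in the resume.
REQUIRED_KEYWORDS = {"python", "bachelor"}

def passes_screen(resume_text: str) -> bool:
    """Return True only if the resume contains every required keyword."""
    words = set(re.findall(r"[a-z]+", resume_text.lower()))
    return REQUIRED_KEYWORDS <= words

resumes = [
    "Senior engineer, ten years of Python, self-taught",
    "Bachelor of Science, completed a Python bootcamp",
]
for r in resumes:
    print(passes_screen(r), "-", r)
# False - the experienced self-taught candidate is filtered out
# True  - the candidate with the 'right' education keyword passes
```

A filter like this looks objective, yet it quietly encodes an assumption about which career paths count, which is exactly the equity question above.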
Ultimately, the message is this: it’s ok to be inquisitive about hiring with the use of technology. The hard questions to ask: are we ensuring that recruiters really take a good look at the shortlists these platforms produce? Are we conscious of the algorithmic labelling that goes on behind closed doors? Where are these technology companies getting their data from? What biases are present in that data?
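One concrete way to start answering these questions is an adverse-impact check on a platform’s shortlists, for example using the well-known four-fifths rule of thumb. The counts below are hypothetical; in practice they would come from your own applicant-tracking data.

```python
# Hypothetical applicant and shortlist counts per group; real numbers
# would come from your applicant-tracking system.
applicants  = {"group_a": 400, "group_b": 300}
shortlisted = {"group_a": 120, "group_b": 45}

rates = {g: shortlisted[g] / applicants[g] for g in applicants}
impact_ratio = min(rates.values()) / max(rates.values())

print("Selection rates:", rates)            # {'group_a': 0.3, 'group_b': 0.15}
print(f"Impact ratio: {impact_ratio:.2f}")  # 0.50

# Four-fifths rule of thumb: a ratio below 0.8 is a red flag that the
# tool's shortlist may have an adverse impact and deserves a closer look.
if impact_ratio < 0.8:
    print("Possible adverse impact - review the tool before relying on it.")
```

A check like this does not prove bias on its own, but it tells you where to start asking the vendor harder questions.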
Is there enough focus on the social impact of technology in hiring? With profitability a priority for many, can we shift to a more balanced approach between profitability and fairness?
I presume our answers to these questions will come in due course, as our engagement, and ‘friction’, with AI increases.