
Generative AI, Talent Acquisition and Hallucinations

According to Forbes Magazine in February 2023, ChatGPT is the fastest-growing technology in history, reaching 100 million users in the first two months after its release. The potential uses seem almost infinite, and many individuals have started using ChatGPT with abandon.

One of the places that has been impacted by ChatGPT is the talent acquisition function - most organizations have at least experimented with writing job descriptions or job postings using generative AI.  

But what about the candidate side?

Candidates can now use generative AI to write their resumes to fit a specific job description. What a sweet scenario this is. Candidates can have as many resumes as they want, tailored to any job that interests them.

What impacts can this have on TA?

A person's resume (and cover letter, if still requested) is generally our first view into several characteristics of a candidate. Do they have strong written communication skills? Do they organize thoughts and details well? Do they have the key experiences we're looking for? If generative AI crafts these key documents on behalf of an individual, are we even evaluating the person's skills? Or are we evaluating the AI's skills?

One of the things that fascinates me about generative AI is a concept called “hallucinating.” This is where generative AI has gaps or doesn't know the answer, so it extrapolates or makes assumptions and fills in those gaps with potentially unreal or inaccurate information. Amanda Augustine, career expert at TopResume, was recently quoted in a CNBC article saying that ChatGPT is “the ultimate people pleaser. … Whatever prompt or task you give it, it will always give you some sort of response.”

This could mean that, as it crafts a resume to match a job description, it may actually make up experiences the candidate has not had.

Think about that for a minute.

Even if we assume that candidates have carefully reviewed their freshly created resumes and edited out any hallucinations, those resumes are now written in language that mirrors the language we use in our specific business (even if that might not naturally be the case). If we then switch hats to the recruiter who is using some form of AI or machine learning for sourcing and ranking candidates, my hypothesis is that the people who used ChatGPT or similar generative AI may actually score and rank higher than those whose “authentic” resumes weren't quite so perfect, even if they are not, in fact, a better fit.

Will recruiters be able to spot generative-AI-enhanced resumes? With 20, 30, 40 or more requisitions and thousands of candidates to review, it's highly unlikely.

I started my career with a typewriter and a file cabinet for my “ATS.” In my 30-year career, I've been able to see technology advance the TA function in very cool and amazing ways (it makes me a tad jealous that I didn't have all this “stuff” at my fingertips when I worked a full desk). Simply put, I love everything that technology can do for the profession.

But I still ask myself, is there an ethical dilemma here? Should candidates disclose if their resume was generated by generative AI? Does it adversely impact underrepresented candidates who may not have access to the computers and technology needed to avail themselves of this “benefit”? How might we tighten up our interviewing and selection process beyond “the resume” to ensure we flush out hallucinations before making a final selection?

I don't know the answers to these questions, but I have a feeling technology is going to keep exciting and challenging us for many years to come.

