Omg! Why are companies still adopting AI recruitment technology that has been shown to make clunky, biased judgments?
Sarah Knapton's article in The Telegraph describes recruitment AI that makes judgments based on home décor, clothing, lighting, background art and books, headscarves, head tilts, and tone of voice.
The article (based on research by Cambridge University) raises concerns that some AI recruitment technology represents little more than “automated pseudoscientific software” making “spurious correlations” between facial expressions/visuals and personality.
These concerns are not new. As Daniel Henry highlighted in his fabulous BBC3 show (“Computer Says No”), most AI recruitment solutions that incorporate facial recognition tech also drive racial bias. And Hilke Schellmann's marvelous podcast (“In Machines We Trust”) raised many worrying questions about black-box algorithms producing suspicious results.
None of these concerns apply to VireUp.
VireUp does not assess facial features, expressions, postures, backgrounds, tone of voice, or accents. And it doesn’t look for keywords or phrases.
VireUp AI assesses interview answers down to the concept level and gives clients a dashboard of results that back up the scores. This is "Glass Box" (explainable) AI, and it is fully auditable.
VireUp AI does everything recruiters want and need (dramatically reducing cycle time, cost, complexity, and candidate drop-out) without building in bias.
Contact: firstname.lastname@example.org to find out more.