Employers around the world are using online assessments for faster hiring and deeper candidate insights.
Innovations in assessment technology, such as video assessment, social media scraping and game-based assessment, have created opportunities for even richer insight. However, these new technologies are not without risk. They all generate vast amounts of data on candidates, which can make it difficult for assessment designers and users to see ‘under the hood’ and understand how their assessments actually work to identify the best candidates.
The key to understanding how these assessments work lies in the difference between black box and glass box algorithms.
Black box algorithms use forms of AI and machine learning that have free rein to combine and recombine the data in ever more complex ways to improve the prediction of an outcome, such as turnover or hiring success. We call these approaches “black box” because the algorithms they produce are so complex that even the assessment designers cannot explain how they work.
While this approach can be successful at predicting the outcome, it also carries significant risk. Because the algorithms cannot be explained, the outcomes they produce cannot be defended. This has serious legal implications if your organisation’s hiring practices are ever challenged in court: how can you defend a hiring decision if you don’t know what led to it? It can also be difficult or impossible to know whether the algorithm has “inherited” biases from the data it was trained on.
As an example, if your current recruiters tend to hire more men than women, then an algorithm developed to predict hiring success will learn to associate any “maleness” in the data (e.g. names, interests, looks, writing styles) with success. This will lead the algorithm to systematically promote male applicants, even if the actual gender of the applicant is removed from the data. Finally, these algorithms can pick up on transitory flukes in the data to produce prediction results that are not sustainable or that decay over time. Without knowing how the algorithms work it’s not possible to know what the “shelf life” of the algorithm is.
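The proxy-bias problem described above can be sketched in a few lines of pure Python. The data, feature names and numbers here are all invented for illustration: the model is only ever shown a single “proxy” feature that happens to correlate with gender, plus historically biased hire/no-hire labels, yet its selection rates still split sharply along gender lines.

```python
import random

random.seed(0)

# Hypothetical synthetic data: each candidate has a gender, which the
# model never sees, and a "proxy" feature that merely correlates with
# gender -- e.g. a hobby or word-choice score extracted from a CV.
candidates = []
for _ in range(2000):
    gender = random.choice(["M", "F"])
    proxy = random.gauss(1.0 if gender == "M" else -1.0, 1.0)
    # Historical label: biased recruiters hired men more often.
    hired = random.random() < (0.7 if gender == "M" else 0.3)
    candidates.append((gender, proxy, hired))

# A deliberately simple "model": pick the proxy threshold that best
# reproduces the biased historical labels. Gender itself is never used.
def accuracy(t):
    return sum((proxy > t) == hired
               for _, proxy, hired in candidates) / len(candidates)

threshold = max((c[1] for c in candidates), key=accuracy)

def predict(proxy):
    return proxy > threshold

# Selection rate by gender, even though gender was never an input:
# the model "rediscovers" gender through the proxy.
for g in ("M", "F"):
    group = [c for c in candidates if c[0] == g]
    rate = sum(predict(c[1]) for c in group) / len(group)
    print(g, round(rate, 2))
```

Removing the gender column changes nothing here, because the bias enters through the correlated proxy and the biased training labels; this is exactly why an unexplainable model can discriminate without anyone noticing.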
All of these factors ultimately decrease trust in the algorithm and in the assessments built on it. For the employer, the implications of black box assessments can drastically undermine their efforts to build diverse and effective teams.
An alternative path, and one we use at Revelian, is to follow a glass box approach. This approach still harnesses the power of machine learning and algorithmic insight but retains the capacity for humans to understand and explain how outcomes are predicted. Primarily this is done by using methods which simplify, summarise and explain key features in the data, rather than amplifying the complexity in the data. Glass box approaches are focused on both predicting outcomes as well as providing reliable insight into key factors that describe or differentiate applicants. This means that glass box approaches can be used not just to predict an outcome, but to help your organisation understand why some applicants are better than others.
What ultimately defines a glass box approach, though, is that differences between candidates’ scores or outcomes can be explained by the people who developed or use the assessment. This means that if you’re ever called on to explain or defend a decision, you can do so. Rather than placing blind trust in a black box, you can place your trust in an algorithm that you understand and can explain.
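A minimal sketch of what “explainable by the people who use it” can look like in practice. The attribute names and weights below are illustrative assumptions, not any provider’s actual scoring model; the point is that because the weights are visible, every overall score decomposes into per-attribute contributions that a human can read out and defend.

```python
# Hypothetical glass-box scoring rule: a transparent weighted sum over
# named, job-relevant attributes. Weights are illustrative only.
WEIGHTS = {
    "problem_solving": 0.4,
    "work_values_fit": 0.3,
    "emotional_intelligence": 0.2,
    "integrity": 0.1,
}

def score(candidate):
    """Return the overall score plus exactly how each attribute contributed."""
    contributions = {k: WEIGHTS[k] * candidate[k] for k in WEIGHTS}
    return sum(contributions.values()), contributions

total, why = score({"problem_solving": 80, "work_values_fit": 70,
                    "emotional_intelligence": 60, "integrity": 90})
print(total)  # overall score
print(why)    # the explanation you can give if a decision is challenged
```

Real glass box models are richer than a fixed weighted sum, but they share this property: each score can be traced back to a small set of named, job-relevant factors rather than an opaque tangle of recombined features.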
This means you can be confident that you’re accurately assessing specific attributes, such as problem-solving ability, work-related values, emotional intelligence, integrity and more in a bias-free and inclusive manner. And just as importantly, candidates can see that you’ve opted to use fair, bias-free recruitment tools that give everyone an equal opportunity.
Ethical providers will equip you with the information you need to confidently harness psychometric insights. This includes details of validation studies undertaken to ensure the assessments measure what they should, without unintentional bias.