Employers around the world are using online assessments for faster hiring and deeper candidate insights.

Innovations in assessment technology such as video assessment, social media scraping and game-based assessment have created more opportunities for even richer insight. However, these new technologies are not without risk. They all generate vast amounts of data on candidates, which can make it difficult for assessment designers and users to see ‘under the hood’ and understand how their assessments are actually working to identify the best candidates:

  • How do you ensure that the information gathered about each candidate is relevant and meaningful?
  • How do you know the algorithms you’re using haven’t inherited biases from the data they were trained on, despite the best intentions of the assessment designers?
  • How do you know how the algorithms actually function?
  • In other words, how do you make sure you’re taking advantage of new and technologically advanced hiring methods, while still providing a fair, equitable and reliable recruitment process? 

Glass box versus black box

The key to answering these questions turns on the difference between black box and glass box algorithms.

Black box algorithms use forms of AI and machine learning that have free rein to combine and recombine the data in ever more complex ways to improve the prediction of an outcome, such as turnover or hiring success. We call these approaches “black box” because the algorithms they produce are so complex that even the assessment designers cannot explain how they work.  

While this approach can be successful in predicting the outcome, it also carries significant risk. Because the algorithms cannot be explained, the outcomes they produce cannot be defended. This can have significant legal implications if your organisation's hiring practices are ever challenged in court. How can you defend a hiring decision if you don't know what led to that decision being made? It can also be difficult or impossible to know whether the algorithm has "inherited" biases from the data it was trained on.

For example, if your current recruiters tend to hire more men than women, an algorithm developed to predict hiring success will learn to associate any "maleness" in the data (e.g. names, interests, appearance, writing style) with success. This will lead the algorithm to systematically favour male applicants, even if the applicant's actual gender is removed from the data. Finally, these algorithms can pick up on transitory flukes in the data, producing prediction results that are not sustainable or that decay over time. Without knowing how an algorithm works, it's not possible to know its "shelf life".
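This proxy effect is easy to demonstrate. The sketch below uses entirely synthetic, illustrative data (not any real assessment or Revelian model): a "proxy" feature, such as a hobby keyword on a CV, correlates with gender, and the historical hiring labels are biased toward men. Even with gender itself removed, anything trained on these labels would learn that the proxy predicts "success", reproducing the original bias.

```python
# Minimal sketch (hypothetical, synthetic data) of how a proxy feature can
# carry gender bias even after gender is removed from the training data.
import random

random.seed(42)

applicants = []
for _ in range(10_000):
    gender = random.choice(["male", "female"])
    # Proxy feature: correlates with gender but is not gender itself
    # (e.g. a hobby keyword that appears on 80% of male CVs, 20% of female).
    proxy = 1 if random.random() < (0.8 if gender == "male" else 0.2) else 0
    # Biased historical outcome: recruiters hired men more often.
    hired = random.random() < (0.6 if gender == "male" else 0.3)
    applicants.append((gender, proxy, hired))

def hire_rate(rows):
    """Fraction of a group that was hired."""
    return sum(h for _, _, h in rows) / len(rows)

# Drop gender entirely and split only on the proxy: the hiring-rate gap
# persists, so a model fit to these labels would favour the proxy.
with_proxy = [a for a in applicants if a[1] == 1]
without_proxy = [a for a in applicants if a[1] == 0]
print(f"hire rate, proxy present: {hire_rate(with_proxy):.2f}")
print(f"hire rate, proxy absent:  {hire_rate(without_proxy):.2f}")
```

Even though gender never appears in the split, candidates with the proxy feature show a markedly higher historical hire rate, which is exactly the signal a black box model would latch onto.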

All of these factors ultimately decrease trust in the algorithm and undermine the utility of the assessments built on it. For the employer, the implications of black box assessments can drastically undermine their efforts to build diverse and effective teams.

An alternative path, and one we use at Revelian, is to follow a glass box approach. This approach still harnesses the power of machine learning and algorithmic insight but retains the capacity for humans to understand and explain how outcomes are predicted. Primarily this is done by using methods which simplify, summarise and explain key features in the data, rather than amplifying the complexity in the data. Glass box approaches are focused on both predicting outcomes as well as providing reliable insight into key factors that describe or differentiate applicants. This means that glass box approaches can be used not just to predict an outcome, but to help your organisation understand why some applicants are better than others.  

The key thing though that defines a glass box approach is that differences between candidates in their scores or outcomes can be explained by the people who developed or use the assessment. This means that if you’re ever called on to explain or defend a decision, then you can do so. Rather than just placing your blind trust in a black box, you can place your trust in an algorithm that you can explain and understand.
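A simple way to picture this kind of explainability is a transparent scoring rule, where every candidate's result decomposes into named, job-relevant contributions. The attributes and weights below are purely illustrative assumptions for the sketch, not Revelian's actual model:

```python
# Minimal sketch of a "glass box" score: a transparent weighted sum over
# named attributes. Attribute names and weights are illustrative only.
WEIGHTS = {
    "problem_solving": 0.5,
    "integrity": 0.25,
    "emotional_intelligence": 0.25,
}

def score(candidate):
    """Return (total, contributions) so every score can be explained
    attribute by attribute."""
    contributions = {k: WEIGHTS[k] * candidate[k] for k in WEIGHTS}
    return sum(contributions.values()), contributions

total, parts = score(
    {"problem_solving": 80, "integrity": 90, "emotional_intelligence": 70}
)
print(total)  # 80.0
print(parts)  # each attribute's contribution to the total
```

Because the score is just a sum of visible contributions, anyone who developed or uses the assessment can say exactly why one candidate scored higher than another, which is the defensibility a black box cannot offer.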

This means you can be confident that you’re accurately assessing specific attributes, such as problem-solving ability, work-related values, emotional intelligence, integrity and more in a bias-free and inclusive manner. And just as importantly, candidates can see that you’ve opted to use fair, bias-free recruitment tools that give everyone an equal opportunity. 

Ethical providers will equip you with the information you need to confidently harness psychometric insights. This includes details of validation studies undertaken to ensure the assessments measure what they should, without unintentional bias. 

About the author


Matthew Neale – Chief Psychology Officer

PhD (Management), Master of Organisational Psychology, BA (Honours, Psychology)

As Chief Psychology Officer at Revelian, Matt leads Revelian's professional team of organisational psychologists to deliver innovative psychometric assessments that give client organisations genuine insight into their current and potential talent.

Matt's career spans leadership roles in HR, organisational development and recruitment in private and public sector organisations ranging from small start-ups to some of the largest organisations in Australia. Over his career he has specialised in recruitment and selection, organisational surveys, and team development.

In addition to his role as Revelian’s CPO, Matt is also Chair of the Queensland College of Organisational Psychologists and plays an active role in promoting the profession and science of organisational psychology in Australia.
