21 June 2019
Research shows that gender influences the outcome of innovation funding applications. How can we understand this better and test ways to guard against unconscious bias?
Anyone who’s sifted through applications can understand how easy it is to feel paralysed by the weight of responsibility that comes with awarding grants for public sector innovation.
No one wants to be responsible for getting between that idea and the people who really need it.
We know from research into academic grant funding that even people who believe they are making unbiased choices can be unconsciously influenced by irrelevant factors, such as an applicant’s gender. Harvard Kennedy School’s Professor Iris Bohnet has written about how the traditional interview process can easily weed out the best candidates rather than helping you find them.
Organisations such as Applied are trying to counter this in the hiring process by providing platforms that help employers devise more inclusive job descriptions, introduce bias-free scoring systems and eliminate gendered language from applications.
As we thought about it, we kept returning to the same question: is selecting a job candidate like selecting an innovation project?
We’re looking for exciting people with ideas we’ve never heard before – how do you write a person specification for that?
When it comes to hiring, there are six specific recommendations for designing interview processes that encourage systematic, deliberative decision-making. We considered these and tried to apply them to our own selection process.
Of the six recommended actions, we adopted four as part of our selection process (a rough sketch of how they fit together follows the list):
- Everyone is asked the same structured questions
- Each project is questioned directly by one panel member at a time
- Answers are scored before moving on to the next question, so scores don’t influence each other
- Questions focus on the skills teams will need as part of the programme
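To make the mechanics concrete, here is a minimal sketch, in Python, of how a score-as-you-go discipline like this could be encoded. It is not a tool we built or used: the questions, the 1-to-5 scale and all of the names (StructuredInterview, score_answer) are hypothetical, standing in for whatever rubric a panel agrees in advance.

```python
from dataclasses import dataclass, field

# Hypothetical questions: the actual interview questions are not published.
QUESTIONS = [
    "What problem does your project address, and for whom?",
    "What skills will your team need to deliver it on the programme?",
    "How will you know whether the project is working?",
]

@dataclass
class StructuredInterview:
    """One project's interview, scored question by question.

    A score is locked in before the panel moves on, so earlier
    scores can't be revised in light of later answers.
    """
    project: str
    scores: dict[str, int] = field(default_factory=dict)

    def score_answer(self, question: str, score: int) -> None:
        if question not in QUESTIONS:
            raise ValueError("Only the agreed structured questions may be asked.")
        if question in self.scores:
            raise ValueError("This question's score is already locked.")
        if not 1 <= score <= 5:
            raise ValueError("Scores run from 1 (weak) to 5 (strong).")
        self.scores[question] = score

    def total(self) -> int:
        missing = [q for q in QUESTIONS if q not in self.scores]
        if missing:
            raise ValueError(f"{len(missing)} question(s) not yet scored.")
        return sum(self.scores.values())

# Usage: each answer is scored before the next question is asked.
interview = StructuredInterview(project="Project A")
interview.score_answer(QUESTIONS[0], 4)
```

Because every project faces the same questions and a score is fixed before the next answer is heard, comparing totals at the end compares like with like, which is the point of the four actions above.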
What happened?
Both of our programme selection processes involved small numbers. Comparing two such small groups, where other factors also changed (panel membership, the interviewees, the projects themselves), means we cannot reasonably attribute any changes in grant-making to this bias mitigation project alone.
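To illustrate why, with entirely made-up figures rather than our actual funding data: even a seemingly large swing between two small cohorts is statistically indistinguishable from chance, as a quick Fisher’s exact test shows.

```python
from scipy.stats import fisher_exact

# Hypothetical 2x2 table, for illustration only (not our data):
# round 1: 2 of 7 majority-female teams funded; round 2: 4 of 7.
table = [[2, 5],   # round 1: funded, not funded
         [4, 3]]   # round 2: funded, not funded

odds_ratio, p_value = fisher_exact(table)
print(f"p = {p_value:.2f}")  # p ≈ 0.59: consistent with pure chance
```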
That said, the patterns did improve: the proportion of majority-female teams funded this time was much closer to the proportion of majority-male teams funded.
Transforming best-practice recommendations into real-world actions is always interesting: we found we had to make compromises in designing the process, and running the experimental process posed some interesting challenges for the selection panel compared with a more traditional interview approach.
Ensuring every project is asked the same questions is scrupulously fair, but in practice it meant the person asking could not probe for subtleties, and projects’ answers were perhaps not as rich as in previous rounds.
It was simply not practical to take notes on answers, score them and log behavioural characteristics in the time we had for each interview, so this recommendation was dropped from our process after a practice run ahead of interview day.
Making such fundamental changes to the design of our selection process increased the amount of administration and organising necessary to make the process run smoothly. Most projects told us that they enjoyed the experience, but one or two were much less comfortable with the situation.
Despite the challenges thrown up by the recommendations we implemented, we are very proud to have purposefully designed a selection process that seeks to be as inclusive as possible. We intend to continue taking positive steps in this area, and hope that our experiences might help others improve their own practice.