The evidence for structured interviews

The research on interview validity has been replicated across decades and contexts, and the findings are consistent. Unstructured interviews - where different candidates are asked different questions, in different orders, and are scored on general impression - have a predictive validity of approximately 0.20 (a correlation coefficient, where 1 would be perfect prediction). In practical terms, they are only weakly related to who will actually perform well in the role.

Structured interviews, where every candidate answers the same competency-based questions in the same order, scored against pre-agreed rubrics, have predictive validity of approximately 0.51. That is a meaningful and consequential difference. A process with higher predictive validity selects people who genuinely perform well in the role. A process with low predictive validity selects people who perform well in interviews - which tends to favour candidates with cultural familiarity, confidence in formal settings, rapport with the panel, and communication styles that map onto the preferences of the majority-group interviewers.

In short: unstructured interviews are a bias amplifier. They reward similarity and penalise difference. Structured interviews reduce - not eliminate, but materially reduce - the influence of personal chemistry, accent, appearance, and shared background on the hiring decision.

What makes an interview 'structured'

A structured interview has four defining features:

  1. Every candidate is asked the same questions in the same order. There is no improvisation, no follow-up questions prompted by personal curiosity, and no adjustment of the questions based on what the candidate has already said. The structure is identical for every person interviewed.
  2. Questions are competency-based. They ask candidates to describe specific past situations where they demonstrated a relevant skill or behaviour - not what they would hypothetically do, but what they actually did. "Tell me about a time when you had to deliver a piece of work under significant time pressure. What did you do, and what was the outcome?"
  3. Answers are scored against a pre-agreed rubric. Each question has a defined scoring scale - typically 1 to 5 - with explicit descriptions of what a strong, average, and weak answer looks like for that specific question. The rubric exists before any candidates are seen, and is applied consistently.
  4. Scoring happens during the interview, before panel discussion. Each assessor records their independent rating for each answer before the panel discusses. This prevents the most confident voice in the room from anchoring everyone else's scores.
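The independent-scoring step in point 4 can be tracked with something as simple as a spreadsheet or a short script. The sketch below is illustrative only - the question labels, assessor names, and the divergence threshold are assumptions, not part of any standard - but it shows the principle: scores are locked in per assessor before discussion, and the panel discusses where ratings diverge rather than converging on the loudest voice.

```python
from statistics import mean

# Illustrative only: each assessor records a 1-5 score per question
# independently, during the interview, before any panel discussion.
scores = {
    "Q1 time pressure": {"Assessor A": 4, "Assessor B": 3, "Assessor C": 4},
    "Q2 stakeholder conflict": {"Assessor A": 2, "Assessor B": 4, "Assessor C": 3},
}

for question, ratings in scores.items():
    avg = mean(ratings.values())
    spread = max(ratings.values()) - min(ratings.values())
    # A wide spread is the cue for panel discussion - after independent
    # scores are recorded, not before. Threshold of 2 is an assumption.
    flag = " (discuss: scores diverge)" if spread >= 2 else ""
    print(f"{question}: mean {avg:.1f}{flag}")
```

The point of the discussion flag is that disagreement becomes information to examine, rather than something smoothed over before it is ever written down.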

How to design structured interview questions

Start with the role requirements - not the person description, not a list of personality traits. What does someone actually need to be able to do to succeed in this role? Break those requirements into discrete competencies, and write one or two questions per competency.

Effective structured interview questions are:

  - Grounded in the role requirements, not personality traits or a picture of the 'ideal' person.
  - Competency-based: they ask for a specific past situation, what the candidate actually did, and the outcome.
  - Open enough for candidates to choose their own examples, but specific enough to score against a rubric.
  - Limited to one or two per competency.

Aim for four to six questions per interview. More than six and you risk fatigue for both candidates and assessors; fewer than four and you may not cover all the key competencies. Keep the list focused on the requirements that genuinely differentiate strong performers in the role.

Building the scoring rubric

A rubric gives assessors something concrete to score against. Without it, "I'd give that answer a 3" means different things to different people. With a well-designed rubric, a 3 has a specific meaning that all assessors have agreed on before the interview begins.

For each question, define what a strong answer (4–5), average answer (2–3), and weak answer (1) looks like. Be specific. For a question about managing competing priorities, a strong answer might include: clear identification of the priority conflict; a structured approach to decision-making that considered stakeholder impact; proactive communication with those affected; and a specific, measurable outcome. A weak answer is vague, describes someone else's decision, or doesn't result in a clear outcome.
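If rubrics are kept in a shared document or simple tooling, one question's rubric might be structured like the sketch below. This is a minimal illustration - the band descriptors are condensed from the example above, and the function name is hypothetical - but it makes the calibration point concrete: every score maps to a descriptor that was agreed before any candidate was seen.

```python
# Illustrative rubric for one question, agreed before any candidate
# is seen. Descriptors condensed from the 'competing priorities' example.
rubric = {
    "question": "Tell me about a time you managed competing priorities.",
    "bands": {
        (4, 5): "Identified the conflict, structured decision-making that "
                "considered stakeholder impact, proactive communication, "
                "and a specific, measurable outcome.",
        (2, 3): "Described the situation and a decision, but with limited "
                "structure or an unclear outcome.",
        (1, 1): "Vague, describes someone else's decision, or no clear outcome.",
    },
}

def band_for(score: int) -> str:
    """Return the pre-agreed descriptor for a 1-5 score."""
    for (low, high), descriptor in rubric["bands"].items():
        if low <= score <= high:
            return descriptor
    raise ValueError(f"score {score} is outside the 1-5 scale")
```

Writing the bands down in one shared place, in whatever format, is what turns "I'd give that a 3" from a private impression into a statement every assessor interprets the same way.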

Share the rubric with panel members before the interview begins. Give them time to read it, ask questions, and raise anything that seems unclear. The rubric is not a script - it doesn't tell assessors what to probe or how to follow up. It is a calibration tool: it ensures that when two assessors hear the same answer, they're evaluating it against the same criteria.

Briefing your panel

Panel members should be briefed before every interview, including those who have done it many times before. The briefing should cover:

  - The questions, their order, and why the structure must be identical for every candidate.
  - The scoring rubric, with time to read it and raise anything unclear.
  - Independent scoring: each assessor records their ratings during the interview, before any panel discussion.
  - Any adjustments agreed with individual candidates, and how these fit within the structured format.

Legal requirement: The duty to make reasonable adjustments for disabled applicants is anticipatory. At the point of inviting candidates to interview, proactively offer adjustments - don't wait for candidates to ask. Many won't, for fear of how the disclosure will be received.

Reasonable adjustments within the structured process

The duty to make reasonable adjustments for disabled candidates applies to the interview itself - and it is anticipatory, meaning you must offer adjustments rather than waiting for candidates to request them. At the point of sending interview invitations, include a clear, specific statement: "We want to make sure this interview is as accessible as possible. If you need any adjustments - for example, additional time to process questions, written questions in advance, a break during the interview, a different format, or anything else - please let us know and we'll make it happen."

Common adjustments that work within the structured format:

  - Providing the questions in writing, in advance or at the start of the interview.
  - Additional time to process and answer each question.
  - Breaks during the interview.
  - A different format, such as a remote interview, where appropriate.

Critically, none of these adjustments compromise the integrity of the structured process. Every candidate still answers the same questions, scored against the same rubric, by the same assessors. The adjustments remove barriers to demonstrating capability - they don't change what is being assessed.

Structured interviews predict job performance at roughly 2.5 times the rate of unstructured ones (0.51 versus 0.20 in the Schmidt & Hunter meta-analysis) - and reduce the influence of in-group bias on hiring decisions.

Getting started

Moving to structured interviews doesn't require a wholesale redesign of your hiring process overnight. Start with one role. Define the competencies. Write four questions. Build a simple rubric. Brief the panel for 15 minutes before the interview. Score independently. The effort is modest and the improvement in both fairness and decision quality is immediate and measurable.

If you're working across multiple hiring managers, consistency matters. A structured interview that is implemented differently by each team is not structured - it's semi-structured at best, and the inclusion benefits diminish accordingly. Build a shared template, train your hiring managers together, and review outputs to check that the process is being applied consistently.