
Last Updated on February 16, 2026
McKinsey & Company is quietly piloting a new interview component that reflects a deeper shift in how consulting work is done: the McKinsey AI Interview.
Based on recent feedback from my MBA coaching clients, candidates in select U.S. offices have, since January, been asked to collaborate live with McKinsey’s internal AI platform, Lilli, as part of the interview process.
While this has not yet been universally rolled out and remains a pilot (it is NOT used for evaluation yet), it offers a clear preview of where consulting recruitment is heading. Only candidates in select offices in the U.S. are participating in this trial and are provided with a McKinsey-issued laptop with access to Lilli.
Update: January 12, 2026 – Last week, the first of our MBA clients completed the McKinsey AI interview as part of their assessment process. It followed two traditional McKinsey interviews, each consisting of a case interview and a Personal Experience Interview.
The third interview, the AI interview, was conducted with an interviewer present to facilitate the session. Candidates were required to use an internal McKinsey AI tool to answer questions. For example, they were presented with a typical case scenario, received supporting exhibits, and were asked questions such as how they would validate the numbers in a chart using AI. Candidates were expected to work through these tasks using the McKinsey chatbot.
In essence, the interview retained elements of a classic case interview but introduced a distinct twist, taking the discussion in different directions than the standard interviewer-led McKinsey case format.
Overall, candidates reported that the session felt highly exploratory. It was perceived as less structured and less clearly guided than both the case interview and the Personal Experience Interview. At several points, candidates were unsure what was expected of them. It was also communicated that this interview currently serves as a test format and is not yet a formal part of the core assessment.
Update: January 26, 2026 – Trial runs are now also being conducted in a few select European offices.
This article explains what the McKinsey AI Interview is, where it fits in the interview process, how candidates are evaluated, how to prepare without overcomplicating your strategy, and what this change signals for consulting careers more broadly.
TL;DR – What You Need to Know
- The McKinsey AI Interview evaluates how candidates apply judgment, structure, and communication while collaborating with AI.
- It is currently being piloted in a select few offices in the U.S.
- Candidates are not tested on technical AI skills, but on consulting fundamentals applied in an AI-supported setting.
- Case interviews and the Personal Experience Interview (PEI) remain the primary hiring drivers for now.
- Over-reliance on AI output, weak framing, and poor explanation of reasoning are common pitfalls.
- The interview reflects a broader shift toward AI-augmented consulting work, not a replacement of core consulting skills.
Where the McKinsey AI Interview Fits in the Recruiting Process
Where piloted, the McKinsey AI Interview has appeared as an additional interview component, alongside the traditional case interview and PEI.
Importantly, this is not a global rollout. Current indications suggest limited use in parts of the United States and a few select European offices, with broader adoption expected later. In its current form, the AI interview complements existing assessments rather than replacing them.
Candidates are explicitly informed that the tool is not used for evaluation at this stage. McKinsey is testing it before any broader rollout or integration into formal assessment. This follows the firm’s standard practice when introducing new recruiting tools, as seen previously with the Solve Game and earlier changes to the PEI dimensions.
How the McKinsey AI Interview Works in Practice
In the AI interview, candidates are asked to complete realistic consulting tasks by interacting live with an internal AI tool. The process mirrors how consultants increasingly work on real engagements.
Candidates typically receive a business scenario or question and are expected to:
- Prompt the AI with clear, focused questions
- Review outputs for relevance, accuracy, and gaps
- Refine or re-prompt when responses are incomplete
- Synthesize insights into a structured, client-ready answer
Interviewers are not evaluating the AI’s output. They are evaluating how the candidate works with it.
Simply accepting responses at face value is not sufficient. Candidates are expected to demonstrate judgment, ownership, and structured reasoning, much like working with a junior team member.
The interaction is live and time-bound, comparable to other interviews.
What McKinsey Is Evaluating
Based on candidate feedback and early reporting, the McKinsey AI Interview assesses how candidates think, not what they know about AI.
Key capabilities include:
- Judgment: deciding what to trust, adapt, or discard
- Structure: framing questions and synthesizing outputs logically
- Iteration: refining prompts and improving results step by step
- Communication: clearly explaining reasoning and next steps
Strong candidates remain calm when AI outputs are imperfect and take ownership of the final answer. This mirrors real consulting work, where AI supports analysis but does not replace professional accountability.
Why McKinsey Added an AI Interview
McKinsey has been explicit about the role of AI in its future operating model. Internal platforms such as Lilli are designed to accelerate research, synthesis, and knowledge access across engagements.
As of 2025, Lilli reportedly processes hundreds of thousands of prompts per month, supporting consultants across geographies and practices.
From a recruiting perspective, this creates a new expectation. Consultants are no longer evaluated only on independent problem solving, but also on how they work with advanced tools responsibly.
The AI interview allows McKinsey to observe whether candidates can:
- Use AI to support, not bypass, structured thinking
- Apply judgment when outputs are incomplete or generic
- Maintain ownership of decisions rather than deferring to technology
- Communicate reasoning clearly in an AI-supported workflow
In short, the interview is a preview of the job itself, similar to how the traditional case interview has always functioned.
An Important Clarification: Early, Non-Evaluative Pilots
It is worth emphasizing that the AI interviews are currently NOT evaluative.
For current candidates, the AI component is explicitly positioned as non-evaluative, used for testing, calibration, and fine-tuning rather than scoring. Participating candidates are provided with a McKinsey-issued laptop with access to Lilli.
If you are interviewing in person, there is a high likelihood that you will be part of the trial. If your interview is conducted virtually, you are not part of it at this stage.
This mirrors McKinsey’s historical approach with other recruiting innovations, most notably the Solve Game, which was initially introduced as a pilot before later becoming a formal evaluation tool.
Candidates should therefore avoid over-interpreting early exposure to the AI interview. Its presence today does not imply immediate, universal impact on offer decisions.
How to Prepare for the McKinsey AI Interview
First, confirm with your recruiter whether the AI interview is part of your final round. It is not universal.
If it does appear, preparation should be light, focused, and proportional.
You do not need access to McKinsey’s internal tools. Practicing with publicly available AI platforms is sufficient to build familiarity.
A practical preparation approach includes:
- Asking goal-driven, specific questions
- Reviewing outputs critically rather than accepting them blindly
- Refining prompts when answers are vague or unfocused
- Summarizing insights into a clear structure
- Explaining your thinking out loud
Most importantly, do not shift your preparation away from cases and PEI. Those remain the primary determinants of success.
Common Mistakes Candidates Make
Most difficulties candidates encounter in the AI interview do not stem from a lack of technical proficiency with AI tools. Instead, they arise from using AI incorrectly. The interview is not a test of whether a candidate can operate an AI system, but of whether they can apply sound judgment and structured thinking while working with it.
A common mistake is treating AI as an answer engine rather than as a support tool. Candidates who expect polished, final answers from AI often ask vague or unfocused prompts, which predictably lead to generic outputs. When those outputs are weak, they fail to iterate, refine the question, or redirect the tool toward a more useful line of inquiry.
Another frequent issue is the uncritical presentation of AI-generated content. Candidates may relay AI output without synthesizing it, prioritizing what matters, or linking it back to the problem at hand. Many also struggle to clearly articulate why certain AI suggestions were adopted while others were discarded.
Taken together, these behaviors signal weak consulting fundamentals rather than weak AI capability. The interview ultimately evaluates judgment, problem structuring, and decision-making. AI is simply the medium through which those core skills are observed.
What This Signals for Consulting Careers
The McKinsey AI Interview reflects an evolution in consulting, not a reinvention.
Core skills such as structured thinking, clear communication, and sound judgment remain central. What is changing is the environment in which those skills are applied.
Looking ahead, AI will increasingly be used to support research, analysis, and synthesis in consulting work. Rather than replacing core problem-solving, it will augment how consultants gather information, test ideas, and structure insights more efficiently.
As a result, consultants will be expected to critically evaluate and refine AI output rather than accept it at face value. The ability to assess quality, identify gaps, and steer the tool toward more relevant or precise outputs will become a core professional skill.
Equally important, clear explanation of reasoning will matter as much as speed. Consultants will need to articulate not only what conclusions were reached, but how they were derived and why certain inputs or recommendations were prioritized over others.
Ultimately, accountability will remain with the consultant, not the tool. AI may assist the process, but responsibility for judgment, recommendations, and outcomes will continue to sit squarely with the human professional.
This shift is gradual and predictable. Learning to work productively with AI is becoming part of professional readiness, similar to learning how to build slides or structure analyses.
Final Takeaway
The McKinsey AI Interview is best understood as a forward-looking signal, not a reason to overhaul your preparation.
Prepare like a strong McKinsey candidate first. Then add a light layer of AI fluency so the tool feels familiar, not intimidating.
If you can think clearly, apply structure, communicate your reasoning, and treat AI as a junior teammate rather than a crutch, you are already prepared.


