McKinsey Solve Game (2026): How to Prepare and Ace the PSG


Last Updated on February 19, 2026

The McKinsey Solve Game has become one of the most decisive and least transparent stages of the McKinsey recruiting process. No interviewer. No feedback. Just complex, time-pressured problem solving in a digital environment that many candidates underestimate.

McKinsey’s Announcement of the Solve Game

This article is an overview of the Solve Game. If you want deep dives into the current games, jump straight to the Red Rock and Sea Wolf sections later in this article.

For everyone else, this guide explains what the Solve Game is, why McKinsey introduced it, what skills it actually assesses, and how strong candidates approach it. You will learn what to expect, where most applicants go wrong, and how to prepare efficiently without wasting time on irrelevant drills.

Consider this your orientation briefing before you step into the simulation.

History of the McKinsey Solve Game

The McKinsey Solve Game is a central element of McKinsey’s recruiting process, used alongside case interviews and Personal Experience Interviews (PEI). It is a digital problem-solving assessment where candidates face complex, unfamiliar scenarios under time pressure, without interviewer interaction or immediate feedback.

Developed in collaboration with Imbellus (acquired by Roblox) and behavioral scientists from UCLA CRESST, the Solve Game places candidates into interactive simulations designed to test how they explore information, structure problems, make decisions, and manage trade-offs. Rather than answering business questions on paper, participants operate in dynamic environments that require building sustainable systems, analyzing populations, or optimizing limited resources.

Earlier versions of the assessment included a tower-defense-style scenario where candidates protected plant species from invasive threats. Current versions focus on system-building and resource-allocation challenges across varied simulated environments.

When McKinsey introduced this game-based assessment, it marked a clear break from the traditional pen-and-paper Problem Solving Test. It also came with the message that the game could not be specifically prepared for. This left applicants feeling uncertain about how to best approach the assessment and was a notable change for candidates accustomed to preparing for weeks or sometimes even months to tackle their case interviews.

Collage of online forum posts from candidates discussing failing the McKinsey Solve Game.
A Challenge for Most

Despite being in use for several years, the Solve Game remains one of the least transparent parts of the McKinsey selection process. Official guidance is minimal, expectations are rarely spelled out, and candidates receive little to no feedback after completion. This lack of clarity leaves many applicants uncertain about what they will face, how performance is evaluated, and how to prepare effectively. As a result, otherwise well-prepared candidates often leave the assessment disappointed, not because they lack ability, but because they did not know what to expect or how to approach the challenge strategically.

Quick reality check…

It quickly became clear that the Solve Game was not immune to preparation or strategy. McKinsey’s early messaging to the contrary functioned more as positioning than reality. Our interviews with some of the first candidates who completed the Imbellus Test in London in November 2019 provided valuable early insight. This marked the first formal use of the Solve Game in live recruiting beyond beta testing. Many of these candidates reported that with a clearer understanding of the game format and evaluation criteria, they would have performed materially better. Several had prepared for the traditional Problem Solving Test instead, only learning about the switch to the Solve Game a week before their assessment.

We used this early feedback as a starting point and systematically collected insights from test-takers across multiple countries over the following years. In parallel, we collaborated with subject-matter experts to translate these observations into concrete preparation methods, gameplay strategies, and even recreated the actual games for our clients to play.

The conclusion was consistent.

Contrary to McKinsey’s initial positioning, effective preparation is not only possible but decisive. Candidates who understand what to expect and apply targeted strategies for each game segment develop relevant skills faster and perform significantly better than peers who approach the assessment blindly.

This article provides a structured overview of the Solve Game. It covers five key areas:

  • Why McKinsey transitioned from the traditional Problem Solving Test to a gamified assessment, and what this means for candidates
  • The games included in the assessment and the variations reported by test-takers
  • The skills actually being evaluated, beyond what McKinsey’s official communications state
  • Proven preparation methods, exercises, and tools to raise performance
  • Practical test-taking strategies to maximize results under time pressure

For candidates seeking comprehensive preparation, we offer a complete Solve Game Preparation Suite that includes fully playable simulations for Red Rock, Sea Wolf, and Ecosystem Creation, a video course explaining game mechanics and game-winning strategies, and a 129-page strategy guidebook.

StrategyCase.com pioneered in-depth analysis of this assessment format based on firsthand test-taker insights. This foundation has allowed us to continuously refine our methods using feedback from a large and diverse candidate base. The program has gone through 23 iterations, most recently updated in January 2026, and incorporates input from more than 600 test-takers and several game designers.

We launched the original program in November 2019 and have updated it consistently to maintain relevance and accuracy. Today, more than 9,000 applicants from over 70 countries have used our materials to prepare for their Solve Game.

Introduction of the McKinsey Solve Game

“Imagine yourself in a beautiful, serene forest populated by many kinds of wildlife. As you take in the flora and fauna, you learn about an urgent matter demanding your attention: the animals are quickly succumbing to an unknown illness. It’s up to you to figure out what to do—and then act quickly to protect what you can.”

McKinsey & Company

Sounds exciting? Well… you be the judge.

As a consultant with McKinsey or any other top-tier consulting firm, you often find yourself in situations where you must save the day. On an abstract level, the game simulates exactly this reality. While your consulting career will mostly involve strategy engagements with Fortune 500 companies, McKinsey chooses its environmental scenarios deliberately. More on that in a second.

Traditionally, the McKinsey way of hiring candidates was through the following funnel:

  1. Screening: Your consulting resume and cover letter are screened based on a number of filters
  2. Problem Solving Test: A 60-minute pen-and-paper test, covering 26 business-related questions
  3. Consulting Interview Round 1: 2 to 3 business case and personal experience interviews
  4. Consulting Interview Round 2: another 1 to 3 interviews depending on the region (Rounds 1 and 2 can be on the same day in some offices)

With the introduction of the Problem Solving Game (PSG), the Problem Solving Test (PST) was on its way out.

So, why would McKinsey replace a time-tested screening tool, one that has evaluated hundreds of thousands of applicants, with a computer game? The answer is, as so often in the McKinsey world, threefold:

  1. To attract new talent and new types of consultants.
  2. To have an assessment tool that is agnostic (in theory) of people’s backgrounds.
  3. To have a lower-cost program (in the long run) to assess more candidates.

McKinsey uses the Solve Game to reflect broader shifts in the consulting landscape. Client problems are changing, the firm’s service portfolio is expanding, and consulting careers now require a wider range of skill sets. Alongside generalist consultants, McKinsey increasingly hires data scientists, implementation specialists, product and digital designers, and software engineers. A digital, interactive assessment is a natural response to recruiting digital-native talent for this evolving workforce.

The Solve Game uses environmental and system-based simulations as neutral task settings. McKinsey emphasizes that no prior knowledge is required and that traditional preparation should not provide an advantage. The intention is to create an assessment that is accessible to candidates from any academic or professional background. This marks a shift from the former Problem Solving Test, which favored business and quantitatively trained applicants through a pen-and-paper format. The Solve Game aims to reduce background-driven bias by testing problem solving in unfamiliar, abstract environments. Whether this fully succeeds is debatable, as new forms of bias inevitably emerge. We discuss this further later in this article.

Scale is another key driver. McKinsey receives several hundred thousand applications each year. Manual screening is resource-intensive, and strong candidates are often filtered out early due to rigid resume-based criteria.

The Solve Game addresses this challenge in two ways.

First, administering the assessment to an additional candidate comes at negligible marginal cost. Most applicants can complete it remotely, without occupying local recruiting resources. This creates a highly automated and scalable screening step (sounds exactly like what a top-tier management consulting firm would do). By contrast, the former Problem Solving Test required on-site administration and significant staff involvement.

Second, low marginal testing costs allow McKinsey to evaluate a broader pool of applicants beyond those who pass initial resume screens. Candidates who may not meet traditional resume thresholds can still demonstrate their potential through actual problem-solving performance, increasing the likelihood that overlooked talent reaches interview rounds.

To build the Solve Game, McKinsey partnered with a specialist game-based assessment developer, later acquired by Roblox, with the ambition to redefine how human potential is measured. This raises an important question.

Does the Solve Game live up to this ambition and fulfill its role as an effective screening tool for consulting applicants?

If you want to learn more about McKinsey’s rationale for the Solve Game, Fortune spoke with Katy George, McKinsey & Company’s chief people officer, regarding the impact of prevailing labor market trends on the consulting firm’s talent strategy.

The Firm wanted to change its talent recruitment strategy to align with current labor market trends. Shifting its focus from prestigious educational backgrounds to the potential and diverse skill sets of candidates, McKinsey now recruits from a broader range of educational institutions, increasing its outreach from 700 to about 1,500 schools, with plans to expand to 5,000. This approach supports the “paper ceiling” movement, valuing talent over formal qualifications.

To support this move, McKinsey developed the video game ‘Solve’ to attract a wider pool of applicants, including tech talent. This evaluation has reached over 150,000 candidates in the first two years of the game’s introduction, highlighting the game’s role in identifying talent with varied backgrounds, particularly in technology.

The Format of the Solve Game

The current Solve Game format consists of two games completed within a total of 65 minutes. Based on consistent test-taker reports and our data, all candidates currently face the Red Rock Simulation at 35 minutes and the Sea Wolf Simulation at 30 minutes.

This structure places strong emphasis on time management. Candidates must balance exploration, analysis, and decision-making while ensuring both games are completed within the fixed time window.

In the following sections, we provide a detailed breakdown of each game, along with practical strategies to manage time effectively and maximize performance.

The Scoring of the Solve Game

At its core, the Solve Game mirrors the logic of consulting case interviews. Candidates must identify problems, gather and interpret data, make decisions under time pressure and incomplete information, and translate findings into actionable solutions. The difference lies in the format. Instead of an interviewer-led discussion, the process is captured digitally and evaluated through algorithmic scoring.

Internal and test-taker data indicate that performance in the Solve Game is a strong predictor of success in subsequent case interviews, outperforming the former Problem Solving Test in predictive accuracy. We discuss available evidence and implications in later sections.

Correlation of Solve Game performance with success in McKinsey case interviews
Source: Imbellus

The McKinsey Solve Game is designed to assess capabilities that cannot be reliably inferred from a resume or cover letter. It evaluates how candidates identify problems, explore information, develop solutions, and make decisions in unfamiliar situations. Rather than testing business knowledge, the assessment focuses on core cognitive and behavioral skills. Specifically, it measures:

  • Problem identification: The ability to recognize the underlying issue that requires resolution
  • Information analysis: The skill to source, filter, and interpret data from multiple inputs
  • Strategic solution development: The ability to form, test, and refine hypotheses
  • Decision making: The capacity to draw sound conclusions under time pressure
  • Adaptability: The agility to adjust approach when conditions change
  • Quantitative reasoning: The ability to interpret and apply numerical data

To evaluate these dimensions, the Solve Game uses a dual-scoring system.

The product score measures outcome quality. It assesses whether game objectives were achieved, such as building a sustainable system, reaching correct analytical conclusions, or optimizing constrained resources.

The process score evaluates how results were achieved. Every interaction is tracked and translated into behavioral data points. This captures whether candidates follow structured approaches, test assumptions, revisit earlier decisions appropriately, or proceed in an ad hoc manner.

This scoring logic has several important implications for candidates.

First, success is not defined solely by arriving at the correct answer. How you structure the problem, explore data, and sequence decisions is equally important. Candidates who apply disciplined problem-solving processes are rewarded even when outcomes are not perfect.

Second, the assessment captures behavior under pressure. It evaluates decision speed, response to incomplete information, and the ability to maintain structure in dynamic environments.

Finally, constant behavioral tracking can increase perceived pressure, as every action contributes to the evaluation.

At first glance, this process-focused design appears difficult to prepare for. However, our data shows that the range of viable solution paths in the current games is relatively narrow. Effective strategies and step-by-step approaches can be learned and practiced, leading to consistent performance improvements.

Overall, the Solve Game represents a shift from testing knowledge to testing problem-solving behavior. For candidates, this means that preparation should focus less on memorizing content and more on developing repeatable approaches to exploring, structuring, and solving unfamiliar problems.

Current Roll-out and Scope of the McKinsey Solve Game

It’s all fun and games until your score actually determines your future McKinsey career.

A common question from candidates is whether participation in the Solve Game is mandatory. In nearly all cases, the answer is yes.

The assessment was initially piloted between 2018 and 2019 with several thousand candidates across multiple countries, running in parallel with the former Problem Solving Test. This phase focused on beta testing, data collection, and calibration rather than formal evaluation. McKinsey consultants were also invited to complete trial versions to build internal benchmark data.

Today, the Solve Game is fully embedded in McKinsey’s global recruiting process. Based on our data and consistent candidate reports, it is now used in virtually every country with a McKinsey office. The worldwide rollout was completed during the 2020 recruiting cycle and has since become a standard screening step for the vast majority of applicant profiles.

Since 2022, the Solve Game has also been required for selected recruiting events and early-access programs, such as leadership and diversity initiatives, further extending its reach beyond standard office applications.

In terms of roles, the assessment applies broadly across consulting tracks, including generalist consulting, implementation, digital, research, and analytics roles.

Senior and experienced hires are often exempt from the Solve Game requirement, as their evaluation typically relies more heavily on professional track record and interviews.

Timing of the Solve Game in the McKinsey Recruiting Process

Once your resume and cover letter pass the initial screening, you will receive an email invitation to complete the Solve Game. In most cases, candidates can choose when to take the assessment, as long as it is completed within a defined window, typically between three and seven calendar days after receiving the link.

In some regions, candidates are informed of their assessment deadline earlier, sometimes several weeks in advance. In rare cases, offices may still require candidates to complete the Solve Game on-site, occasionally on the same day as case interviews.

Because the assessment evaluates core problem-solving behaviors rather than memorized knowledge, it is advisable to start preparing early. This allows sufficient time to build familiarity with the game format, refine structured thinking habits, and develop confidence in navigating unfamiliar scenarios under time pressure.

Post-Game Process: Waiting for Results

If you complete the Solve Game remotely, the time to receive feedback typically ranges from one to fourteen days, depending on office processes and candidate volume. Based on our data, most candidates receive an update within a week. Longer waiting periods are possible in specific regions, especially where recruiting decisions are made in scheduled batches. In rare outlier cases, candidates have reported waiting significantly longer. If you require a faster response due to a competing job offer, reaching out to the recruiting team can often accelerate the process.

In some offices, the Solve Game is completed in conjunction with first-round interviews. In these cases, game performance is evaluated alongside interview results rather than as a standalone screening step.

The weight placed on Solve Game performance varies by office. For some, it acts as an initial gateway to the interview stage. For others, it serves as an additional data point combined with resume screening and interview performance. In certain cases, a strong application or referral can partially offset an average Solve Game result.

Requirements to Pass the McKinsey Solve Game

The pass rate for the Solve Game is estimated to be similar to or slightly lower than that of the former Problem Solving Test. Based on consistent candidate reports and internal tracking, only around 20 to 25 percent of applicants pass the assessment without targeted preparation. With structured training, clear strategies, and deliberate practice, success rates can increase substantially.

McKinsey has conducted extensive calibration and beta testing with large pools of candidates and internal staff to refine the Solve Game’s scoring models and difficulty balance. As more applicants become familiar with the assessment format and preparation efforts intensify, average performance naturally rises. To counteract score inflation and preserve differentiation between candidates, the Solve Game is regularly updated and adjusted.

This continuous evolution is the reason our preparation program has already reached version 23 within a few years.

The Skills Assessed by the McKinsey Solve Game

The McKinsey Solve Game does not test specific business knowledge. Instead, it evaluates the same fundamental cognitive abilities and problem-solving skills that traditional case interviews and written assessments were designed to measure, but in a gamified and data-rich environment. To perform well, candidates must:

  • Understand what each game is actually testing
  • Apply effective preparation methods and repeatable problem-solving strategies

The Core Skills

The games are designed to build a multidimensional profile of each candidate’s cognitive and behavioral capabilities. Every interaction is recorded and analyzed, contributing to both a product score and a process score. The assessment does not only evaluate outcomes. It also measures how candidates think, adapt to new information, and correct course when facing uncertainty.

To score well, candidates must optimize both result quality and problem-solving approach, while understanding the drivers behind each game’s scoring logic.

While McKinsey does not publicly disclose its full evaluation framework, consistent test-taker data and expert analysis indicate that the Solve Game primarily assesses:

  • Critical thinking: Forming sound judgments from qualitative and quantitative information
  • Decision making: Selecting effective courses of action among competing options
  • Metacognition: Using structured strategies such as hypothesis testing and note-taking
  • Situational awareness: Recognizing interdependencies and anticipating scenario outcomes
  • Systems thinking: Understanding multi-layer cause-and-effect relationships and feedback loops
  • Cognitive processing: Absorbing, integrating, and recalling new information efficiently
  • Adaptability: Adjusting strategies as conditions change
  • Creativity: Developing novel and effective solution approaches
Skills evaluated by the McKinsey Solve Game
Source: Imbellus, Interviews

The Solve Game leverages advanced behavioral analytics to capture these dimensions at scale. This digital format allows McKinsey to observe candidates’ problem-solving processes with a level of granularity that exceeds traditional interviews, while maintaining consistency across thousands of applicants.

The result is a highly sophisticated screening tool that evaluates not only what candidates conclude, but how they think.

Demonstrating Key Skills

Maximizing your performance in the McKinsey Solve Game requires demonstrating a broad set of problem-solving behaviors through your actions and decisions during gameplay. The assessment evaluates not only what you achieve, but how you arrive there. The following behaviors signal strong performance across the core skill dimensions:

  • Critical thinking: Systematically filter large amounts of information, discard irrelevant data, analyze key inputs, and synthesize findings into coherent solutions across both qualitative and quantitative tasks.
  • Decision making: Form timely, evidence-based conclusions. The game tracks where you spend time, how you prioritize exploration versus action, and how you translate data into recommendations.
  • Metacognition: Apply structured problem-solving habits such as hypothesis-driven exploration, note-taking, and deliberate testing of assumptions. These strategies become visible through your navigation patterns and tool usage.
  • Situational awareness: Maintain a clear grasp of objectives, constraints, available options, and remaining time, ensuring that decisions align with the overall task.
  • Systems thinking: Recognize interdependencies between variables, such as matching ecosystem conditions to species requirements or anticipating downstream effects of resource allocations.
  • Adaptability: Adjust strategies when new information emerges or conditions change, particularly in dynamic scenarios such as the Sea Wolf Simulation.
  • Cognitive processing: Absorb, integrate, store, and retrieve information efficiently throughout the assessment.
  • Creativity: Develop effective and sometimes unconventional approaches when standard solution paths fall short.

To optimize both product and process scores, candidates must also understand the structure and logic of each game. Familiarity with game mechanics and scoring drivers allows you to apply the right strategies to each scenario, manage time effectively, and avoid costly trial-and-error during the actual assessment.

The Current Games of the McKinsey Solve Game

The McKinsey Solve Game currently allocates a total of 65 minutes for completion. Candidates spend 35 minutes on the Red Rock Simulation and 30 minutes on the Sea Wolf Game. Earlier versions of the assessment used different time limits and game combinations, with total durations ranging from 60 to 81 minutes depending on the version being tested. Today’s structure is standardized across offices.

Before each timed game, candidates complete an untimed tutorial. These walkthroughs introduce game mechanics, objectives, and available tools. Candidates can take as much time as needed to understand the setup before starting the clock. This design ensures that performance in the timed section reflects problem-solving ability rather than confusion about rules or interfaces.

Once a timed game begins, it cannot be paused. This increases the importance of preparation, focus, and time management. Candidates must balance exploration, analysis, and decision-making while staying aware of the remaining time, as interruptions or trial-and-error approaches quickly become costly.

Evolution of Game Scenarios

Since its introduction, the Solve Game has consistently used abstract environmental and system-based scenarios, while the specific games themselves have evolved over time. Earlier versions included Ecosystem Creation and Plant Defense scenarios, followed by temporary additions such as disease-identification and migration-planning simulations used primarily for calibration and internal benchmarking.

As with McKinsey’s broader assessment philosophy, new game variants are periodically introduced to test difficulty balance, scoring consistency, and resistance to overfitting through preparation. These calibration games are typically not used for formal candidate evaluation until fully validated.

At present, all candidates face two scenarios: the Red Rock Simulation and the Sea Wolf Simulation.

We continuously update this article and our preparation materials as new evolutions of the Solve Game are introduced.

In the next section, we take a closer look at each game in detail.

Red Rock Game

The Red Rock Simulation has been a fixed element of the McKinsey Solve Game since March 2023, replacing the former Plant Defense scenario for all candidates. It represents a deliberate shift toward a more classical analysis and problem-solving context. While the storyline places you in the role of a researcher, the underlying tasks closely mirror real consulting work: identifying relevant data, structuring analyses, performing calculations, interpreting exhibits, and translating findings into clear recommendations.

In other words, Red Rock is the Solve Game’s closest equivalent to a digital case interview. It tests whether you can run a structured analysis under time pressure without external guidance.

The simulation is completed within 35 minutes and is divided into two parts: the Study Section and the Case Section. Success depends less on raw math ability and more on disciplined data selection, clean calculation setup, and strict time control.

The Study Section

The Study Section follows a three-stage workflow: Investigation, Analysis, and Report. Together, they replicate the consulting process of gathering data, analyzing it, and communicating conclusions.

Investigation Stage
You receive a research objective alongside multiple data sources such as text passages, tables, and charts. Your task is to identify relevant information and transfer it into your on-screen Research Journal.

The key challenge is selectivity. Strong candidates filter aggressively, capture only decision-relevant data, and label it clearly. Weak candidates either collect everything or miss critical inputs. This stage directly feeds your process score, as the game tracks how you explore, prioritize, and document information.

Analysis Stage
You answer a series of quantitative questions based on the collected data. An on-screen calculator is available, but the real difficulty lies in setting up the right equations, choosing the correct inputs, and avoiding unnecessary recalculation. You can move between Investigation and Analysis, but excessive back-and-forth consumes time and signals an unstructured approach.

This stage tests quantitative reasoning, equation setup, and your ability to execute math reliably under time pressure.
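The kind of calculation the Analysis Stage rewards is less about difficult math than about setting up the equation once and executing it cleanly. A worked example in the spirit of the stage (the figures are invented, not taken from the actual game):

```python
# Worked example of a typical Analysis Stage calculation: growth over a
# study period. Set up the equation first, then execute. Figures are invented.

initial_population = 1_200   # e.g., a species count in the first study year
final_population = 1_740     # count in the last study year
years = 4

# Total growth over the whole period
total_growth = (final_population - initial_population) / initial_population

# Average annual (compound) growth rate:
# solve  initial * (1 + r)^years = final  for r
annual_growth = (final_population / initial_population) ** (1 / years) - 1

print(f"Total growth:  {total_growth:.1%}")   # 45.0%
print(f"Annual growth: {annual_growth:.1%}")
```

Writing the equation down before reaching for the on-screen calculator, as above, avoids exactly the recalculation loops that cost candidates time in this stage.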

Report Stage
You synthesize your findings into a short written report and select an appropriate visual representation of your results. This mirrors the consulting requirement to translate analysis into clear, decision-ready communication.

Candidates often underestimate this step. Rushing the report after overspending time upstream is a common failure pattern. Clean time allocation across all three stages is therefore essential.

The Case Section

In 2023, McKinsey introduced an additional mini-case component to Red Rock. This section presents a set of quantitative reasoning questions based on new data, distinct from the Study Section. You now complete both the Study and Case Sections within the same 35-minute time limit.

This update materially increased difficulty.

Candidates must manage two different analytical contexts back-to-back while maintaining accuracy and time discipline. Those who enter without a predefined pacing strategy often complete one part well and rush the other.

The Case Section primarily tests rapid data interpretation, quick equation setup, and error-free execution. It is less about complex math and more about speed, structure, and consistency.
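One way to enter with a predefined pacing strategy is to write down a per-stage time budget and checkpoint times before the clock starts. The split below is our suggested starting point, not an official allocation; adjust it to your own strengths.

```python
# Hypothetical pacing plan for the 35-minute Red Rock Simulation.
# The stage splits are a suggested starting point, not an official rule.

RED_ROCK_MINUTES = 35

budget = {
    "Study: Investigation": 8,   # filter sources, fill the Research Journal
    "Study: Analysis":      9,   # set up and execute calculations
    "Study: Report":        5,   # synthesize findings, pick the chart
    "Case Section":        11,   # new data, rapid-fire quantitative questions
    "Buffer":               2,   # slack for checks and slow screens
}

# The plan must use exactly the available time, no more and no less
assert sum(budget.values()) == RED_ROCK_MINUTES

elapsed = 0
for stage, minutes in budget.items():
    elapsed += minutes
    print(f"{stage:<22} {minutes:>2} min  (checkpoint at minute {elapsed})")
```

The checkpoints matter more than the exact split: knowing in advance that, say, the Study Section should be wrapped up by a fixed minute mark is what prevents completing one part well and rushing the other.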

Why Preparation Matters Even More Here

Red Rock rewards candidates who already know:

  • how to filter data efficiently
  • how to structure a research journal
  • how to set up equations quickly
  • how to allocate time across stages
  • how to avoid unnecessary backtracking

Each phase of the Red Rock Simulation is intentionally structured to replicate core consulting work: gathering relevant information, performing targeted analysis, and translating findings into clear recommendations. The challenge is not simply solving math problems, but doing so through a disciplined workflow that mirrors how consultants approach unfamiliar problems under time pressure.

Red Rock tests whether you can filter signal from noise, structure quantitative analysis, and communicate conclusions with clarity. These are foundational consulting skills. Candidates who approach the game as a simple puzzle often struggle. Those who treat it like a mini consulting engagement perform materially better.

The introduction of Red Rock marked a shift in McKinsey’s assessment philosophy. Earlier Solve Game scenarios focused on abstract system-building and dynamic simulations. Red Rock moves closer to a classical problem-solving test, resembling digital case formats used by other top-tier firms, while still capturing richer behavioral data through gameplay mechanics.

In that sense, Red Rock is less a game and more a timed, self-guided problem-solving interview embedded inside a digital environment.

Strategy for the Red Rock Study Section

The Red Rock Study Section is best approached as a self-guided consulting case. You are given an objective, a set of data sources, and limited time. Your task is to extract the right information, run a focused analysis, and deliver a clear conclusion. Success depends less on raw math ability and more on disciplined workflow and time control.

We recommend a four-step approach.

1. Understand the objective before touching any data (Investigation Stage)
Start by carefully reading the research objective. Do not open data sources immediately. First clarify:

  • What’s the overall context and objective?
  • Based on this objective, what decision or questions must be answered in later stages?
  • Which metrics or comparisons are likely to matter?

Strong candidates treat the objective like a client question. A clear mental problem statement prevents wasted exploration later. Don’t jump into the data selection before having absolute clarity about the objective and context.

Red Rock Study Investigation Stage screen showing research objective and data sources.
Investigation Stage Information (Source: Our Simulation)

2. Target data selection, not data collection (Investigation Stage)
In the Investigation Stage, your goal is not to gather everything. It is to gather only what is decision-relevant.

  • Select data sources with a hypothesis in mind
  • Drag only key figures (e.g., initial year and last year values), constraints, and important definitions into your Research Journal
  • Label notes clearly and highlight the most important data points so they are usable during analysis
  • Avoid exhaustive reading or random clicking

The game tracks how you explore and what you choose to record. Selectivity and structure directly support your process score and save critical time. Move logically through the text and provided exhibits.

Research Journal interface from Red Rock Study simulation with collected data fields.
Research Journal (Source: Our Simulation)

3. Execute analysis with pre-set time and calculation discipline (Analysis Stage)
In the Analysis Stage, the main risk is not math difficulty (you have a calculator for that) but inefficient setup.

  • Translate each question into a simple equation before calculating
  • Select the right data from your Research Journal
  • Become familiar with the drag-and-drop calculation functionality
  • Use the calculator only after the equation is clear
  • Avoid repeated recalculation or backtracking

Set a time expectation per question and move on once you have double-checked your approach and executed the calculations.

4. Communicate findings, do not re-analyze (Report Stage)
In the Report Stage, your job is synthesis, not discovery.

  • Fill report statements directly from completed analysis
  • Select the graph that best supports the conclusion
  • Populate the graph with the right values from your Research Journal (don’t drag the correct value into the wrong field)
Report Stage screen from Red Rock Study simulation showing chart type selection.
Report Chart Type Selection (Source: Our Simulation)

Many candidates lose points by entering the Report Stage with unclear expectations of the task.

Strategy for the Red Rock Case Section

The Red Rock Case Section functions as a rapid-fire sequence of 6 mini-cases, some with several questions tied to a shared context. The difficulty level is comparable to the Study Section, but the workflow is compressed. Investigation, analysis, and answering happen in a single continuous stage.

Red Rock Case Section question screen with chart, multiple-choice equations, and on-screen calculator.
Red Rock Case Question (Source: Our Simulation)

Success in this section depends on swift data targeting, clean equation setup, and decisive execution.

Identify what data is needed before opening sources
Each mini-case starts with a short prompt. Before exploring any data, clarify what the question is asking and which variables are required. This prevents unnecessary reading and random clicking.

Extract only decision-relevant data
Open charts or tables with intent. Pull only the figures required to solve the question and ignore contextual information that does not feed into the calculation.

Set up the equation first, calculate second
Translate the question into a simple formula before touching the calculator. This reduces errors and speeds up execution.

Execute using the interface efficiently
Answers are submitted through drag-and-drop fields or dropdown selections. Familiarity with these mechanics matters. Hesitation here costs time with no analytical benefit.

Treat each mini-case as independent
Do not carry assumptions from prior questions unless explicitly stated. Reset your thinking at the start of each prompt.

The Case Section rewards candidates who combine structured quantitative reasoning with interface fluency and time discipline. Those who have practiced both strategy and real-game execution typically find this part relatively straightforward rather than stressful.

Enhancing Quantitative Reasoning for Red Rock

Strong performance in Red Rock depends heavily on fast and reliable quantitative setup and execution. The math itself is not complex. The challenge lies in extracting the right data, setting up equations quickly, and delivering correct answers under time pressure (don’t underestimate using the drag-and-drop functionality under stress).

Practice with case-style math questions
Regular exposure to case interview math builds the exact skill set Red Rock requires. Data extraction from charts, tables, and exhibits, followed by rapid insight generation, is identical to what you will face in consulting case interviews and in the Red Rock Study and Case Sections.

Train equation setup, not just calculation
Most errors come from poor equation setup, not from arithmetic. Focus on quickly translating questions into simple formulas. Pay particular attention to percentages, growth rates, and averages, as these are the most frequent operations in the game.
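To make this concrete, here is a minimal sketch of the three most frequent operation types, with the equation set up before any calculation. All figures are invented for illustration and are not actual game data:

```python
# Hypothetical Red Rock-style figures, for illustration only.
visitors_2020 = 120_000      # initial-year value from the Research Journal
visitors_2024 = 180_000      # last-year value

# 1) Growth rate: set up the equation first, then compute.
#    total growth = (last - first) / first
total_growth = (visitors_2024 - visitors_2020) / visitors_2020   # 0.5, i.e. 50%

# 2) Percentage share: part / whole
hikers = 45_000
hiker_share = hikers / visitors_2024                             # 0.25, i.e. 25%

# 3) Simple average across values
daily_counts = [410, 520, 470]
avg_daily = sum(daily_counts) / len(daily_counts)                # ≈ 466.67

print(total_growth, hiker_share, round(avg_daily, 2))
```

The point is not the arithmetic itself but the order of operations: write the formula down first, then plug in the two or three figures you extracted, and only then reach for the calculator.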

Use quantitative reasoning drills if simulations are not available
If you do not yet have access to full Solve Game simulations, GMAT-style quantitative reasoning questions are an okay substitute. They train structured problem solving, data interpretation, and time-based execution under pressure.

Balance speed with accuracy
You must move fast, but not carelessly. Set time expectations per question and commit to decisions once you reach a defensible answer. If a question stalls, move on.

Master data extraction from visuals
Practice pulling key numbers and trends from charts and tables efficiently. Red Rock rewards candidates who can spot relevant data quickly without reading or copying everything.

Use tools deliberately
Ideally, become comfortable with the on-screen calculator before test day. The tool is simple, but hesitation or repeated recalculation wastes valuable time.

Simulate time pressure
Timed mock sessions build pacing discipline and reduce stress during the real assessment. The goal is to make structured quantitative execution feel routine.

By developing these capabilities, you approach both the Red Rock Study and Case Sections with confidence, speed, and control.

Additional tip: The skills needed in this game are much closer to those of an actual case interview, and we would recommend that you also take a look at our case interview articles.

The Sea Wolf Game

Sea Wolf is the final module in the McKinsey Solve Game and runs for 30 minutes. It is now a fully standardized production game encountered by all candidates in the current two-game Solve sequence.

The simulation places candidates in a microbial ocean-cleanup scenario. Behind the storyline, Sea Wolf is a structured multi-constraint optimization task. Candidates must design treatment solutions for three contaminated sites by filtering data, narrowing feasible options, and selecting optimal combinations under time pressure.

Each site follows the same mechanics. Only the input parameters change. This repetition tests whether candidates learn from earlier rounds, accelerate their decision process, and maintain time discipline across the full module.

What Sea Wolf Tests

The Sea Wolf Simulation is designed to assess how candidates translate complex requirements into structured decision logic. Each site presents a set of environmental constraints expressed through numerical ranges and desired or undesired traits. Strong candidates systematically convert these requirements into clear filtering and selection criteria rather than relying on intuition or trial-and-error exploration.

A second core dimension is the ability to filter viable options under multiple simultaneous constraints. Candidates must eliminate infeasible choices quickly, narrow the solution space efficiently, and maintain a clean shortlist of promising options. This tests whether you can manage complexity without becoming overwhelmed by data volume.

Sea Wolf also evaluates quantitative optimization skills. Final solutions depend on averaged attribute values across selected microbes, requiring candidates to anticipate how individual selections influence the overall outcome. The challenge is not advanced math, but setting up the right mental equations and making corrective selections when averages drift away from target ranges.
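As a rough illustration of this averaging logic, a final treatment passes when the average of each attribute across the selected microbes falls inside the site's target range. The attribute names, ranges, and values below are invented, not actual game data:

```python
# Hypothetical three-microbe selection and site target ranges.
microbes = [
    {"temperature": 40, "acidity": 6},
    {"temperature": 55, "acidity": 8},
    {"temperature": 50, "acidity": 7},
]
target_ranges = {"temperature": (45, 55), "acidity": (6, 8)}

def treatment_ok(selection, ranges):
    """Check whether every averaged attribute lands inside its target range."""
    for attr, (lo, hi) in ranges.items():
        avg = sum(m[attr] for m in selection) / len(selection)
        if not lo <= avg <= hi:
            return False
    return True

# Average temperature ≈ 48.33 and average acidity = 7.0, both in range.
print(treatment_ok(microbes, target_ranges))
```

Thinking in these terms explains why a single "imperfect" microbe is often fine: one low value can be offset by two higher ones, as long as the average stays inside the band.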

Because no option is ever perfect, the game deliberately forces decision-making under imperfect conditions. Candidates must recognize when further optimization yields diminishing returns and commit to a strong solution rather than chasing an unattainable ideal. This ability to balance analytical rigor with pragmatic decision-making closely mirrors real consulting work.

Finally, the repeated three-site structure tests time management and learning agility. Candidates are expected to refine their approach from site to site, increase speed without sacrificing accuracy, and maintain discipline across the full 30-minute module.

Sea Wolf Game Flow and Strategy Summary

The Sea Wolf Simulation follows a fixed four-phase workflow repeated across three contaminated sites. While the environmental storyline changes, the mechanics remain identical. Strong performance comes from executing a consistent decision logic rather than improvising.

Phase 1: Interpret site requirements and configure filters

Sea Wolf site information panel showing required attributes and desired and undesired traits.
Sea Wolf Location Characteristics (Source: Our Simulation)

What happens
Each site presents treatment requirements through numerical attribute ranges and desired or undesired traits. Before any microbes appear, you must select exactly two characteristics to define filtering criteria.

Winning strategy
Start by translating site requirements into filtering logic. Choose characteristics that best eliminate unfit microbes early. Do not jump to solution-building. The game evaluates whether you can convert requirements into structured selection rules.

Sea Wolf characteristics filter panel for selecting microbe attributes and traits.
Sea Wolf Filter (Source: Our Simulation)

Phase 2: Evaluate microbes and shortlist viable candidates

What happens
The microbial pool becomes visible, each with attribute values and traits. You must remove microbes that violate attribute ranges or contain undesired traits while ensuring at least one candidate carries the desired trait.

Winning strategy
Filter aggressively and systematically. Eliminate infeasible options first, then narrow to a small, realistic candidate set. Poor filtering at this stage guarantees weak final outcomes.
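The shortlisting step can be sketched as a single filtering pass, keeping only microbes that sit inside the attribute range and carry no undesired trait. The pool, attribute, and trait names below are invented for illustration:

```python
# Hypothetical microbe pool; names, attributes, and traits are made up.
pool = [
    {"name": "M1", "temperature": 50, "traits": {"oil-eating"}},
    {"name": "M2", "temperature": 70, "traits": {"oil-eating"}},       # out of range
    {"name": "M3", "temperature": 48, "traits": {"toxin-producing"}},  # undesired trait
    {"name": "M4", "temperature": 52, "traits": set()},
]
temp_range = (45, 55)
undesired = {"toxin-producing"}

# Keep only feasible candidates: in range AND free of undesired traits.
shortlist = [
    m for m in pool
    if temp_range[0] <= m["temperature"] <= temp_range[1]
    and not (m["traits"] & undesired)
]
print([m["name"] for m in shortlist])  # ['M1', 'M4']
```

Applying hard constraints first, before weighing softer preferences, is exactly the "eliminate infeasible options first" discipline the game rewards.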

Sea Wolf microbe categorization screen for assigning microbes to sites.
Sea Wolf Microbe Categorization (Source: Our Simulation)

Phase 3: Build the prospect pool through forced selections

What happens
You start with six microbes and repeatedly choose one out of three presented candidates until your prospect pool contains ten microbes.

Winning strategy
Think in portfolios, not individual picks. Evaluate how each choice shifts the future average attributes of your final three-microbe solution. Avoid undesired traits where possible, prioritize the desired trait if missing, and accept that no pick will be perfect. Optimize the pool, not the single microbe.

Sea Wolf prospect pool selection screen showing candidate microbes and current pool.
Sea Wolf Prospect Selection (Source: Our Simulation)

Phase 4: Finalize the treatment

What happens
From your prospect pool, you select three microbes whose averaged attributes fall within site target ranges and whose traits best match site preferences.

Winning strategy
Stop searching for a perfect solution. Once a high-quality configuration is available, commit. Over-optimization and late indecision are common causes of time failure.

Sea Wolf final treatment selection screen showing three chosen microbes and the Submit Treatment button.
Sea Wolf Final Selection (Source: Our Simulation)

Why Candidates Struggle

Unprepared candidates typically:

  • Filter inconsistently
  • Optimize individual picks instead of the final solution
  • Spend too long on the first site and rush the rest

Prepared candidates enter knowing the workflow, filtering logic, and decision sequence. They spend test time executing, not discovering mechanics.

The Former Games of the McKinsey Solve Game

The following games are no longer part of the Solve Game. If you are short on time, you can skip this section and move toward the end of the article for practical preparation tips. If you are genuinely curious about the earlier versions of the assessment, feel free to continue reading here.

Ecosystem Creation

the image shows a screenshot from the mckinsey ecosystem creation game

The Ecosystem game, often referred to as the Ecosystem Building or Ecosystem Creation game, was a cornerstone of the McKinsey Problem Solving Game for a long time before it was replaced by the Sea Wolf Game in 2024.

We’ll explore proven strategies to succeed in the Solve Game Ecosystem Simulation, highlighting how to effectively balance your ecosystem.

In this game, you are placed on an island (either in the reef, the jungle, or on a mountain ridge) and tasked with establishing a sustainable ecosystem in a chosen location. The primary objectives are twofold:

  1. Create a sustainable chain: You need to select 8 species out of 39 that together form a sustainable ecosystem.
  2. Find a suitable location: Determine the best location for this ecosystem on a map.

These tasks must be completed within a 35-minute timeframe.

The game begins with a tutorial that is untimed, providing an opportunity to understand the game mechanics.

At the core, the game is an optimization problem. You will be confronted with an overload of different data points (similar to the McKinsey Problem Solving Test, yet not business-related). You match the location to the species as well as the species with each other based on many different characteristics such as calorie need or provision and environmental requirements such as temperature, sun exposure, etc. All requirements need to be fulfilled at the same time to create a sustainable ecosystem and to successfully pass this game.

There are 2 parts:

First, you need to pick 8 species, either animal or plant, to inhabit the mountain, reef, or jungle location. Selecting a suitable, heterogeneous sample for the food chain relationship out of the numerous species is crucial. You need to account for the interaction effects between the species (e.g., coral, aquatic animals, algae, etc. in the reef) and several individual characteristics, such as the required environment, place in the food chain, how many calories they need to survive, and how many calories they provide when consumed.

Second, you need to decide on the location of the ecosystem to create good living conditions for several species. You need to consider several characteristics of the location such as altitude, cloud height, ph-level of the soil, wind speeds, precipitation, etc. for the mountain ridge or depth, temperature, salinity, etc. for the coral reef.

The catch in this game is that you are presented with information overload and need to show proper systems thinking. The food chain must not collapse, and the ecosystem must sustain itself. You will know if you have provided a good answer before submitting it since you can test your hypotheses to see if the ecosystem can actually sustain itself.

In the summer of 2020, McKinsey started to introduce new boundary conditions to make the game more challenging. For instance, you not only need to create the food chain with several levels and match it with a location but also adhere to certain new rules related to the hierarchy of the food chain. This twist adds another dimension you need to consider when drafting your solution.

There are several ways to approach this scenario, which we worked out with our candidates, and we created an Excel sheet that helps you solve the ecosystem puzzle. Below is a high-level approach you can use when going into the game.

What you need to know when approaching the species selection

  1. Selecting 8 species: From a set of 39 species, you must choose 8. These species include 9 producers (like corals and algae) and 30 animals (such as sharks, tuna, etc.). Producers consume natural resources and do not require calories, while animals consume other organisms and require calories for survival.
  2. Environmental conditions: Species are divided into three environmental ranges, each with specific environmental characteristics like depth and temperature. For instance, depth may be categorized into ranges such as 11-15m, 16-21m, and 22-27m.
  3. Distribution of species: In each environmental range, you’ll find 3 producers and 10 animals. Your final ecosystem should consist of species all from the same range.

Having this key insight into the food chain mechanics in the McKinsey Ecosystem Game can be a significant advantage. As this information isn’t explicitly communicated by McKinsey, most candidates would typically need to deduce these details during the game, consuming valuable time within the 35-minute limit. However, being aware of this beforehand allows you to approach the game with a more informed strategy.

  • Start with producers: Knowing the calorie dynamics, you can begin by selecting a set of producers that not only share the same location characteristics but also provide the right amount of calories for enough animals. This understanding narrows down your options significantly, reducing the initial choice of 39 species to a more manageable 10 animals.
  • Focus on the right producers: Identifying the correct set of producers is crucial, as they form the foundation of your food chain. Choosing the right producers simplifies the subsequent steps in creating a sustainable ecosystem.

The game intricately simulates a natural food chain, requiring you to strategically link species as either food sources or predators to create a sustainable ecosystem. Here’s a breakdown of how this works and how you can effectively create a sustainable chain:

1. Species interactions:

Each species in the game has relationships with others – as either a predator or a food source. For instance, a Blue Jay might be preyed upon by a Shark, while it feeds on Yellow Fish.

2. Caloric dynamics:

Every species is assigned specific caloric values: calories provided and calories needed. These caloric values are crucial in determining which species from the available 13 you should select to form your final ecosystem of 8. The moment you select your 3 producers, you are only left with choosing 5 animals out of 10. A much easier task than before.

3. Eating rules and algorithm to test your sustainability:

The game outlines essential rules about the feeding mechanism. The key rules include:

  • The species with the highest ‘calories provided’ value eats first.
  • It consumes the species offering the highest caloric value as a food source. In the case of ties, it splits its consumption 50/50.
  • Consumption reduces the ‘calories provided’ by the prey by the amount of ‘calories needed’ by the predator. A species needs non-zero ‘calories provided’ to survive, and all its ‘calories needed’ should be zero after feeding.
  • After the first species feeds, the next one with the highest ‘calories provided’ follows suit, and the process repeats.
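The eating rules above can be expressed as a small simulation. This is a simplified sketch (tie-splitting is omitted, and the feeding order is fixed up front from the starting values); the species names and calorie numbers are invented:

```python
# Hypothetical three-species chain: a producer, a mid-level animal, an apex animal.
species = {
    "Shark": {"provided": 4000, "needed": 1000, "eats": ["Tuna"]},
    "Tuna":  {"provided": 3000, "needed": 800,  "eats": ["Algae"]},
    "Algae": {"provided": 5000, "needed": 0,    "eats": []},  # producer
}

def chain_is_sustainable(species):
    """Simplified feeding pass following the rules described above."""
    # Feed in descending order of initial 'calories provided'.
    for name in sorted(species, key=lambda s: -species[s]["provided"]):
        sp = species[name]
        if sp["needed"] == 0:
            continue  # producers draw on natural resources, not prey
        # Eat the food source currently offering the most calories.
        prey = max(sp["eats"], key=lambda p: species[p]["provided"], default=None)
        if prey is None:
            return False  # an animal with nothing to eat breaks the chain
        species[prey]["provided"] -= sp["needed"]
        sp["needed"] = 0
    # Sustainable only if every species keeps positive calories and is fully fed.
    return all(sp["provided"] > 0 and sp["needed"] == 0 for sp in species.values())

print(chain_is_sustainable(species))  # True for this toy chain
```

Running this check mentally on your candidate chain, top eater first, is essentially what the game's sustainability test does for you when you test a hypothesis before submitting.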

4. Ensuring chain sustainability:

It’s crucial to ensure each animal receives adequate calories from its food source and that no species depletes its ‘calories provided’ to zero. If a species either doesn’t receive enough calories or depletes its own, the chain becomes unsustainable, leading to failure in the game.

To quickly and efficiently establish a sustainable chain, you must:

  • Carefully analyze caloric values: Assess the ‘calories provided’ and ‘calories needed’ for each species to determine the feeding order and the sustainability of the chain.
  • Ensure continuity: Verify that every animal in your chain is connected and that there’s continuity in the food chain.
  • Balance the ecosystem: Maintain a balance where no species runs out of calories while ensuring each one’s dietary needs are met.

By following these steps and paying close attention to the caloric requirements and relationships between species as well as the eating rules algorithm about who eats first, second, third, etc., you can successfully create a sustainable food chain within way less than the allotted time in the McKinsey Ecosystem Game.

Once you have successfully identified the 8 species for your ecosystem, the next critical step is to choose an appropriate location for this ecosystem on the island.

What you need to know when approaching the location selection

How to Approach the Location Selection:

  1. Navigating the map: The game presents you with a map where you can use your cursor to explore different potential locations for your ecosystem.
  2. Analyzing location conditions: Each location on the map comes with seven different environmental conditions. However, not all of these conditions are relevant to your task. Your focus should be on the variables that you identified as important in the previous step while choosing your species, usually just 2 to 4 variables.
  3. Identifying relevant variables: Recall the parameters you noted earlier for each species. These are the variables you need to match in the location selection process.
  4. Utilizing the interface for matching: As you hover your cursor over different locations on the map, you can refer to the top-right menu on your screen. This menu displays the environmental variables at the current cursor position. You need to check if they are all within the required range for your selected species. If you approach this effectively, you can do this in less than 1 minute.

By methodically checking these variables and finding a location that aligns with the environmental requirements of your 8 species, you can complete this task efficiently. Proper selection of species in the first task significantly simplifies this process, allowing you to quickly identify a suitable location without getting distracted by irrelevant data.

This streamlined approach helps ensure that your ecosystem is not only sustainable in terms of species interdependence but also well-suited to the chosen location’s environmental conditions.

Plant Defense

the image introduces the mckinsey plant defense game

In this scenario, which was active until March 2023, you need to defend a plant species from invaders using several tools at your disposal in a static, round-based tower defense-style game. The tools consist of barriers that slow down invaders and predators that damage and eradicate them.

In this game you need to defend a plant at the center of a map from an invasive species for as long as possible. This scenario is broken down into 3 rounds. Each round lasts between 8 and 12 minutes and presents a slight variation of the game with increasing complexity and a larger map. For each round, invaders spawn in several turns per map.

Each round is divided into two parts.

In the first part, you can actively manage your defense strategy in order to react to new invaders that spawn every 3 to 5 turns. You can manage 15 turns by initially placing your defense units on the map, adjusting their positioning after every turn, and selecting new defense units every 5 turns.

Your goal is to have the plants survive each of these increasingly difficult turns. You can slow the invaders down so that they do not arrive at your plant within the number of turns or eliminate them fully before they do so.

In the second part, the endgame, you are no longer able to change your strategy and the placement of your defense units. The game fast-forwards until your plant is defeated. Depending on the quality of your last placement strategy it might take the invaders many turns to kill the plant, ideally more than 30.

Your goal is to optimize for the plant to survive as many turns as possible. Your product score is the direct result of the turns survived, while your process score focuses on how well you adjust to the changing behaviors of attackers and how much you can learn and adapt over the course of the turns and over the course of the 3 rounds.

In order to do this, you need to choose certain animals that eat the invasive species and natural barriers/terrain to slow them down and block them, in a static, turn-based environment, contrary to most other tower defense games, which are dynamic.

You are presented with information about what each tool such as animals or geographical/terrain barriers can do, e.g., how many invasive species an animal can kill in a given time or how much a forest can slow the invaders down. These animals have different stats in terms of their reach/sphere of influence (shown as squares) as well as the damage that they are able to inflict on the invaders.

For instance, there could be a dog and an eagle as animals. The eagle has a large radius but inflicts less damage, whereas the dog has high damage but a smaller range of effectiveness (e.g., one square only). Some animals have a large radius and high damage (usually during the last game). The damage inflicted might also differ depending on the type of invader. The barriers are elements such as mountains, rocks, and forests. Mountains block invaders and make them change their pathway toward the plant (ideally making the pathway longer). Rocks and forests slow invaders down (with different effectiveness for different invaders).

Once the invaders reach the plant at the center, they start attacking it, and the game ends.

While you will initially be able to kill the invaders, they show up in greater numbers in each consecutive wave, and it is possible that your plant will be defeated. This is not, per se, a bad thing, since the plant will die eventually in the fast-forward mode of the game anyway. The goal is to keep it alive for as long as possible.

The aim is to defend the plant in the center for as long as possible, hence, to kill all invaders before they reach the plant. It is very important to make use of both defending animals and barriers to unlock their synergistic effects and keep the invaders as long as possible in the sphere of influence of the animals.

Use the untimed tutorial to think about the most effective combinations and layouts of the tools before starting the game. Prepare using video games in the tower defense niche to train yourself for this scenario. The key in this game is to show adaptability by being able to learn quickly and improve your strategies and reactions with each turn and with each game.

Creating a strategy

Let’s again break down your approach into several steps.

  1. Familiarize yourself with the map
  2. Create your initial strategy
  3. Focus on new invaders first
  4. Secure the plant from future attacks
  5. Adjust your strategy as the game evolves

Disease Identification

the image is a screenshot of the imbellus disease and disaster identification game

McKinsey briefly reintroduced a game, with a slight variation, that was already present in the beta testing stages of the PSG. It replaced the tower defense game for roughly 5% of the candidates over the course of late 2020 and early 2021. By June 2021, it appeared that the game never really made it out of the testing stage, and we have not heard about any reappearance in 2022. Nonetheless, let's look into it, since we cannot guarantee that it won't come back in one form or another.

As a player, you are tasked with identifying which animals on the map will be infected by a given disease. The nature of the disease is not important. What is important is to identify patterns of the disease and ultimately identify which animals would be infected in the next turn.

The game has many animals on the map. There are also three time periods, which they call Time 1, Time 2, and Time 3. In Time 1, a small subset of animals is already infected. When you click on Time 2, that same map will show which additional animals got infected. Your goal is to identify which animals will get infected in Time 3. The approach to this game is relatively simple:

  1. Figure out what the key variables are that could give a hint about the disease progression.
  2. Create an array of different filters and look at them through different points in time to see the changes in the animal population.
  3. Move to time 3 and select the next animals that will be affected by the disease based on your tested hypotheses from step 2 (e.g., if you know that all animals above 6 years are affected by the disease and in time 3 there are 20 new animals that are above 6 years of age, select them)

Contrary to the old version which was used in beta tests before the game was actually launched, you do not need to provide a remedy or a treatment plan.

Disaster Identification

Another game, Disaster Identification, has not made a new appearance since 2021. In this game, candidates had to figure out the nature of a natural disaster impacting an animal population and then place the animals in another area of the map so that the greatest number of animals survive. The mechanics are similar to the Ecosystem game.

In this game, you can display three things: a map, species, and a list of events. You can tackle the game in 4 steps:

  • Identify what event has happened in an area (a natural disaster such as a tornado or a flood) by combining information from an event description with variables on the screen.
  • Identify dominant ranges to move the animals to an area that is best suited for their survival.
  • Select the location by clicking on it and check for the relevant ranges you identified before. Prioritize characteristics that allow for the greatest number of animals to survive.
  • Sanity check your selection in a similar manner as for the ecosystem game.

Migration Planning

the image depicts the mckinsey imbellus migration management game

A new game was briefly tested in 2022. We call it the Migration Planning game.

Your task is to plan the migration of 30 to 50 animals from a starting position to an endpoint on a map by selecting the best route out of several alternatives.

You have to solve up to 15 different scenarios within 35 to 40 minutes. Each scenario consists of 3 to 5 turns in which you decide on the next step of your route. In turn 1 you select the first step on your route, in turn 2 the second leg, and so on until you reach the desired endpoint.

You start with a given number of animals and a specific set of resources (consumables such as food or water). With each turn of the game, a predetermined number of animals will die, and resources will be reduced by a specific amount, depending on your selected route. Alternatively, you can also select intermediate points on your route that will replenish and multiply existing resources as well as collect additional animals along the way.

The objective of the game is two-fold: First, you need to ensure that the highest number of animals survive until you reach the destination. Second, you need to arrive at the endpoint with some of the resources preserved as well. As said before, there are up to 15 different scenarios with 3 to 5 turns each, which leads to 45 to 75 unique decisions you must make along the way.

In short: manage animals and resources from start to finish across 3 to 5 turns per scenario, select the route that best preserves both, and repeat for up to 15 rounds in total.

Map the routes on a piece of paper or in an Excel sheet.

  • Write down each available route
  • Calculate the outcome variables for resources and animals for every route
  • Select the route where most animals survive and resource requirements are met
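The three steps above can be sketched as a small simulation: apply each leg's losses and replenishments in turn, discard routes that run out of animals or resources, and pick the feasible route with the most survivors. The route names and numbers below are purely illustrative assumptions, not values from the actual game.

```python
# Hypothetical sketch of evaluating migration routes turn by turn.
# Each leg costs animals and resources; all numbers are illustrative.

def simulate(route, animals, resources):
    """Apply each leg's losses; return (animals, resources) or None if infeasible."""
    for leg in route:
        animals -= leg["animal_loss"]
        resources -= leg["resource_cost"]
        resources += leg.get("replenish", 0)  # optional stopover bonus
        if animals <= 0 or resources < 0:
            return None  # route fails mid-way
    return animals, resources

routes = {
    "direct":   [{"animal_loss": 6, "resource_cost": 8},
                 {"animal_loss": 5, "resource_cost": 7}],
    "stopover": [{"animal_loss": 4, "resource_cost": 6},
                 {"animal_loss": 2, "resource_cost": 5, "replenish": 6},
                 {"animal_loss": 3, "resource_cost": 4}],
}

results = {name: simulate(r, animals=40, resources=20) for name, r in routes.items()}
feasible = {n: r for n, r in results.items() if r is not None}
best = max(feasible, key=lambda n: feasible[n][0])  # most surviving animals
print(best, feasible[best])  # stopover (31, 11)
```

Note that the longer stopover route wins here despite the extra leg, because replenishment more than offsets the added cost. That is exactly why writing out every route before committing, as recommended above, pays off.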

Preparing for the McKinsey Solve Game

McKinsey’s official message suggests you cannot meaningfully prepare for the Solve Game. In practice, candidates who prepare properly perform dramatically better. The reason is simple: the assessment is not random. It tests a defined set of behaviors, in a defined set of game formats, under predictable constraints. Preparation turns it from an opaque experience into an execution task.

Why Preparation is Non-Negotiable

The baseline pass rate is low. Candidates who walk in unprepared routinely underestimate the time pressure, the need for structured decision-making, and how quickly small mistakes compound. Preparation fundamentally shifts the odds. If you have never seen the games before or do not know which actions to take at each stage, you will spend valuable time figuring things out while the clock is running. The goal is to enter the first game already knowing how to navigate the environment, what to prioritize, and how to commit to decisions.

The downside is significant. In many candidate tracks, failing the assessment can materially delay your recruiting timeline. While the formal reapplication wait period varies by program and office, a weak result typically means you must return later with a stronger profile and a better story.

The game rewards learnable behaviors. The Solve Game is built to measure higher-order thinking: structuring, hypothesis testing, prioritization, and decision quality under uncertainty. These are trainable skills. The best candidates do not “wing it”; they execute a method.

Digital familiarity is an advantage, unless you neutralize it. Comfort with interfaces, drag-and-drop mechanics, and time-based navigation can create an edge when playing the games. The fastest way to remove this bias is targeted practice in a realistic environment.

The Preparation Model That Works

Most candidates fail because they do either strategy without execution, or execution without strategy. You need both.

Step 1: Build the strategy layer (what to do and why).
This is where most generic advice stops. You need a clear mental model of each game’s objective, scoring drivers, and the common traps. That is exactly what our strategy guide and video course are designed to provide: step-by-step approaches that are robust across variations, plus the logic behind them so you can adapt under pressure.

Step 2: Convert strategy into muscle memory (how to execute under time pressure).
Knowing the right approach is not enough if you cannot execute it quickly and calmly. This is why realistic simulations matter. They train interface speed, decision sequencing, note-taking discipline, and time management. In other words, they turn “I understand” into “I can deliver in 65 minutes” (usually much faster after practice).

What “Good Preparation” Actually Looks Like

Use these principles to guide your practice, regardless of which resources you use:

Master the objective first, then optimize. Start every game by clarifying the goal, constraints, and success metric. Candidates lose points by optimizing the wrong thing.

Operate with time-boxes and stopping rules. Decide in advance how long you will spend exploring, analyzing, and executing. If a decision exceeds your time-box, choose the best available option and move.

Be hypothesis-driven, not exhaustive. Explore with intent. Form an initial hypothesis, gather only the data needed to validate it, then iterate. Random clicking and “reading everything” wastes time and weakens process signals.

Document like a consultant, not a student. Keep notes short, structured, and actionable. Label key variables, assumptions, and constraints so you can act quickly without re-reading.

Practice the interface, not just the logic. Drag-and-drop accuracy, navigation speed, and tool familiarity directly influence performance because the timed games cannot be paused. If you don’t use a simulation beforehand, take extra time in the tutorials to practice.

Recommended Path

If you want the highest probability of success with the least wasted effort, follow this order:

  1. Learn the strategies: Use the guide and videos to understand what each game tests, how scoring works, and which decisions matter.
  2. Practice realistically: Use full simulations to build speed, execution discipline, and confidence under real timing.
  3. Refine with feedback: Identify where you lose time or make repeat errors, then re-run with specific improvement targets. Our simulations come with a detailed feedback report for exactly that reason.

That is the difference between hoping you will “figure it out” on test day and walking in with a plan you can execute.

Test-taking Tips and Advice

To excel in the McKinsey Solve Game, you need two things: a repeatable process and disciplined time control. The game is designed to reward structured thinking under pressure, not perfection. Use the tips below to maximize both your product score (outcomes) and your process score (how you worked).

Do not chase “the right answer,” execute a winning process: Every candidate sees different numbers and variations. Trying to replicate outcomes from other people’s experiences is a trap. Focus on clean problem definition, hypothesis-driven exploration, and consistent decision logic.

Work forward, not backward: The Solve Game rewards coherence. In all games, verify your inputs and assumptions before you commit, then execute decisively. Avoid frequent backtracking (especially in the Red Rock game when you switch from the Analysis Stage back to the Investigation Stage), as it burns time and often signals an unstructured approach.

Use 80/20 decision-making with explicit stopping rules: You rarely have time for perfect optimization, especially in the Sea Wolf game. Aim for the best answer you can justify within the constraints. Set clear cutoffs, for example: “I will explore for X minutes, then decide,” or “I will test up to Y options, then lock the best-performing one.” Sometimes, a 100% perfect outcome is not possible in the Sea Wolf game.

Treat the tutorial as free points: The tutorials are untimed. Use them to understand mechanics, scoring drivers, and available tools before the clock starts. Most candidates waste this advantage and pay for it later with avoidable trial-and-error. If you train with our game simulations, you will already know how the games operate, where to click, how to use drag-and-drop mechanics, and how to apply the right strategies, allowing you to execute confidently and feel fully at home in the assessment.

Read instructions like a contract: Many failures come from missing a constraint or objective, not from weak math. Before you act, confirm: objective, constraints, success metric, and what exactly is being asked. A single missed detail can invalidate an otherwise strong approach in the Red Rock game.

Build a clean research journal, then leverage it: In the Red Rock game, use consistent labels, short headings, and highlight key numbers and insights. This improves your analysis speed and visibly supports a structured process. Your notes should let you resume instantly after any decision point.

Time-box every phase, then enforce it: Once the timed section starts, you cannot pause. In the Red Rock, enter with a plan for how long you will spend on exploration, analysis, and execution. If a step is running long, cut scope, decide, and move. In the Sea Wolf, allocate an equal amount of time to each game. Candidates typically fail by over-investing early and rushing critical decisions late.

Do not “hunt” for data, target it: Explore with a purpose. Start with a hypothesis, then collect only the information required to confirm or reject it. Random clicking and exhaustive reading look thorough but usually score poorly and kill your clock.

Stay stable operationally: Take the assessment on a reliable device, with strong connectivity, and close unnecessary apps. If your system tends to run hot or slow under load, avoid running anything in parallel. You want zero distractions and zero technical surprises.

Beat the McKinsey Solve Game

The original and most comprehensive guide from former McKinsey consultants with

89% pass rate in the Solve Game*

Your benefit

  • Crack every game: Proprietary guide and video insights detailing the exact steps and strategies used by successful candidates
  • Score high: Tailored tactics and gameplay walkthroughs based on real test-taker feedback
  • Prepare efficiently: Focus on what matters most and avoid wasted preparation time with proven methods to master all required skills
  • Interview-ready bonus: Includes a free 14-page McKinsey Interview Primer with essential guidance for case and PEI preparation

*Based on customer feedback from November to December 2025
Latest update: January 2026

Our Credentials

  • 9,000+ candidates supported from more than 70 countries since November 2019
  • 600+ test-taker interviews informing continuous refinement, combined with expert game designer input and firsthand McKinsey experience
  • Complete preparation suite including fully playable game simulations, a 129-page strategy guide, automated Excel tools for Ecosystem Creation, and a video course covering gameplay and winning strategies
  • 100% proprietary content

Click on the image below to learn more about our Solve Game Guide and Simulation.

McKinsey Solve Game Guide 23rd Edition

SALE: $169 / $99

McKinsey Solve Game FAQ

Navigating the McKinsey Solve Game can be a challenging part of your journey towards joining a top-tier consulting firm. To help demystify the process and enhance your preparation, we’ve compiled a list of frequently asked questions. Whether you’re wondering about the skills assessed or looking for the best preparation resources, you’ll find the answers here.

What specific skills does the McKinsey Solve Game assess?

  • The game evaluates problem identification, strategic solution development, decision-making under pressure, adaptability, and quantitative reasoning.

Can you really prepare for the McKinsey Solve Game, and how?

  • Yes, preparation is possible and beneficial. Focusing on having the right tools at your disposal such as an Excel Solver, playing similar simulation games, and developing a strategic approach to problem-solving and quantitative questions can enhance your readiness.

What are the key strategies for succeeding in the ecosystem simulation?

  • Success involves understanding ecosystem balance, prioritizing tasks, managing time effectively, and applying logic to predict the outcomes of different actions.

Are there any official practice tests available for the Game?

  • McKinsey does not provide official practice tests.

How does the Solve Game differ from traditional consulting firm recruitment tests?

  • Unlike traditional pen-and-paper tests or computer-based case assessments that focus on business scenarios, the Solve Game uses gamified simulations to assess a wider range of problem-solving and strategic thinking skills in diverse contexts.

What resources are recommended for McKinsey Solve Game preparation?

  • Comprehensive guidebooks and videos, third-party practice simulations, and online forums sharing candidate experiences.

How important is game familiarity in succeeding in McKinsey’s Solve Game?

  • Familiarity with the game’s format and the types of challenges presented can significantly improve performance by reducing the learning curve and anxiety during the actual assessment.

Can playing similar digital games improve my performance in the Solve Game?

  • Not anymore. Playing similar strategy games was only useful for the former Plant Defense scenario, which followed classic tower-defense mechanics. The current Solve Game modules do not have close consumer-game equivalents. Effective practice today requires dedicated preparation simulations that are specifically designed to mirror the real Solve Game environment.

What is the most challenging aspect of the McKinsey Solve Game, according to past participants?

  • Many participants find the time pressure and the requirement to make strategic decisions with incomplete information to be the most challenging aspects. It’s hard to figure out what to do and then execute when everything is new and unexpected.

How does McKinsey use the Solve Game results in the recruitment process?

  • The results are used alongside resume screenings to provide a holistic view of a candidate’s problem-solving abilities and potential as a consultant, influencing the decision on whether to proceed with the candidate. The unofficial cut-off score floating around is being in the top 20% of test-takers alongside a strong resume.

15 Responses

  1. Lenka says:

    Hello, thank you for this introduction. I would like to ask about one thing. In the ecosystem… From all 8 species – they have to survive? Or they can be eaten by predators? I understand how to create the food chain, but still…if you create a food chain and the species do not replicate, they will be eaten by predators…

    • StrategyCase says:

      Dear Lenka,
      All species in the food chain (animals and plants) need to survive. For each species, the calories it provides minus the calories its predators consume should always be positive.
      Cheers,
      Florian

  2. Angelina says:

    hi Florian,

    I only have 3 hours before the PSG is due, is it possible or useful to buy the guide given such a short time limit?
    Thank you

    • StrategyCase says:

      Dear Angelina,
      3 hours would be enough to read through the strategy section, watch the videos and familiarize yourself with the Excel. While not ideal, and we recommend more time to practice, it would still make sense.
      Cheers,
      Florian

  3. […] using digital badges to recognise learning and, for example, the consultant company McKinsey uses a game during its recruitment process,” adds Nikoletta-Zampeta […]

  4. Keyi Yu says:

    Hello Florian Daniel or Colleague,
    I am very pleasantly surprised to see this guide that you have masterfully compiled. Having tips from insiders is such a confidence boost! I purchased this pack without hesitation and am hoping to try it out before investing in the comprehensive 6h coaching program.
    Nonetheless, I wonder if you can email me back by helping me with downloading the actual guide? I encountered a technical issue whereby I completed my payment on my phone, but it became impossible to download it via my laptop. I am very worried as the deadline of the test is approaching so could you please get back to me asap?

    Many Thanks
    Aspiring Consultant

    • StrategyCase says:

      Hey there,

      I have just sent you your documents, which also contain access to the video program.

      Please let me know if I can assist further.

      Kind regards,
      Florian

  5. Emmanuel says:

    Hi, how long would you suggest I prepare for the McKinsey digital assessment test after purchasing the digital assessment guide? 2 weeks? 4 weeks?

    • StrategyCase says:

      Hi Emmanuel,

      We have candidates that prepare between 2 days and 1 month. The shorter your preparation time, the more your focus should be on learning the proven strategies we outline in our guide (so that you can implement them properly on the game day) and go through and practice the most effective and important tools we provide you with to quickly raise your skill levels.

      Obviously, when you have more time on your hands, you can prepare in a much more relaxed way and go deeper with all our exercises and tools. Generally, I would say that 2 weeks is the sweet spot we have seen with our candidates and it is rare for them to fail after they have gone through all exercises and tools, practiced the preparation tips, and have our game-plan and strategies internalized over this time period.

      4 weeks would give you enough time to prepare without a rush, and in parallel to the case interview practice. In any case, should something change in the game between your purchase and the testing date, we will send you a new version of the guide and the videos free of charge!

      Let me know if you have any further questions!

      All the best for your preparation and your application.

      Florian

  6. Luiz says:

    I heard that there are also other games that could be part of the PSG like predicting and preventing an environmental disaster. Are you sure that there are ‘only’ the 2 two games you describe?

    • StrategyCase says:

      Hi Luiz, we talk briefly about these potential other scenarios in our Problem Solving Game Guide. Be aware that they were used during the trial stages in 2018/19 only and none of our more than 700 customers has reported on them pro-actively. From the 80+ customers we interviewed since November 2019, all went solely through the ecosystem game and the tower defense-like game. In the ecosystem game, recent candidates report having done the mountain ridge scenario and not the reef (even though this has no impact on the actual gameplay).

  7. Patrícia says:

    Hi, how do I know if I passed the ecosystem simulation task?

    • StrategyCase says:

      Hi Patricia, on an aggregate level the game looks at both your product score (did you produce a good outcome?) and your process score (did you perform well under stress while working towards the outcome?).

      In order to pass the ecosystem simulation, ideally, you reach the threshold McKinsey set for both scores (which is unknown). For the product score, you should be able to test your hypotheses during the game and see if your food chain is actually sustainable and works out. However, for the process score, you can only take a guess. McKinsey and Imbellus record every movement of your mouse, every click, as well as how long you pause, go back and forth in the menus, etc. In short, the more you have worked in a calm and collected manner towards selecting your food chain, the higher the chances to reach a solid process score.

  8. Federico Minotti says:

    Hi, I have one question, Is McKinsey problem-solving game material included in Mc Kinsey program?
