Design Space Exploration Planner
Planning a design study is the difference between spending weeks of compute time on the wrong method and finishing in days with the right one. This tool compares seven DOE and sampling methods side by side, calculates the number of simulation runs each requires, estimates your total project time, and shows how intelligent search with Simcenter HEEDS can cut your evaluation budget dramatically.
How Many Simulations Do You Actually Need?
Find out how many simulation runs your design study actually requires. Then see how intelligent sampling methods can cut that number dramatically.
Free DOE Calculator for Simulation Design Space Exploration
- Describe your study: variables, levels, and simulation time
- Compare methods: Full Factorial vs. LHS vs. intelligent search
- See what you save: compare compute time across methods
Each dot is one simulation. Full Factorial uses a rigid grid. LHS spreads samples evenly with far fewer points. This is a 2D illustration. In higher dimensions, gaps between samples grow and coverage becomes less uniform.
| Method | Runs | Wall Time | Relative Cost |
|---|---|---|---|
Estimates based on published DOE references and Simcenter HEEDS benchmarks. Wall time excludes setup overhead (typically 5-15% extra).
- Full Factorial
- Tests every single combination of all variable levels. Gives complete interaction data, but the number of runs grows exponentially. Only practical for 2 to 4 variables with few levels.
- Fractional Factorial
- Runs a carefully chosen subset of the full factorial using a 2-level (high/low) design. The number of runs follows 2^(k-p), where p is chosen to maintain Resolution IV or higher when possible. Captures main effects and two-factor interactions. The go-to screening method for many variables. Closely related to Taguchi orthogonal arrays.
- Central Composite (CCD)
- Adds axial (star) points and center runs around a factorial core. Lets you fit quadratic models for response surface optimization. Best suited for 2 to 6 continuous variables.
- Box-Behnken
- A response surface design that avoids testing at extreme corner combinations. Uses fewer runs than CCD for 3 to 6 variables. Useful when extreme settings are physically unreliable or expensive.
- Latin Hypercube (LHS)
- Divides each variable range into N equal slices and places exactly one sample per slice. Covers the entire design space uniformly. Common rule of thumb: use 10 times the number of variables as a starting point.
- Quasi-Random (R₂)
- A mathematically generated sequence that fills space more evenly than pure random sampling. No grid structure, making it flexible for high-dimensional problems. Good for initial exploration.
- Simcenter HEEDS (SHERPA)
- Not a fixed sampling plan but an intelligent search. The SHERPA algorithm inside HEEDS runs multiple optimization strategies in parallel, learning from each result. Published case studies show 30 to 50% fewer runs in many problems, though results depend on problem complexity and objectives.
LHS explores the design space uniformly but does not search for the best design. Simcenter HEEDS actively optimizes, learning from every run to converge on the best solution faster.
Ready to optimize smarter?
Simcenter HEEDS integrates with STAR-CCM+, Nastran, Abaqus, and 50+ CAE tools. No optimization expertise required.
Used by engineering teams in automotive, aerospace, marine, and energy across Europe.
Frequently Asked Questions
**What is design space exploration (DSE)?**
DSE systematically evaluates how design variable changes affect performance, using structured sampling strategies to map behavior across the entire parameter range.

**Why is Full Factorial impractical for larger studies?**
It tests every combination. 5 variables at 10 levels = 100,000 simulations. The number grows exponentially. Smart sampling achieves comparable coverage with far fewer runs.

**How does Latin Hypercube Sampling work?**
LHS divides each variable range into N equal intervals with one sample per interval, guaranteeing stratified coverage. Rule of thumb: 10 × k samples for screening.

**How does Simcenter HEEDS reduce the number of runs?**
Simcenter HEEDS uses SHERPA, an adaptive algorithm that combines multiple global and local search strategies simultaneously, learning from each run. Published benchmarks show it can find better designs in substantially fewer runs than static sampling, with typical reductions of 30 to 50% in many engineering applications.

**Which method should I choose?**
Screening: Fractional Factorial. Space-filling: LHS. Response surfaces: CCD or Box-Behnken. Many variables (>8): Simcenter HEEDS typically outperforms static methods.

**Does Simcenter HEEDS work with STAR-CCM+?**
Yes. Simcenter HEEDS integrates directly with STAR-CCM+ for automated geometry, meshing, solving, and post-processing. Volupe can help set up your first study.
Built by Volupe · Siemens Platinum Smart Expert Solutions Partner
Also try: GCI Calculator · Turbulence BC · Y⁺ Calculator
How to Plan a Simulation Design Study Without Wasting Compute Time
Every simulation engineer eventually faces the same question: how do you systematically explore a design space without running an unrealistic number of simulations? If you have 6 geometry parameters, each with 10 possible values, a brute-force approach means 1,000,000 simulations. At 2 hours per run, that is over 200 years of compute time on a single machine.
The calculator above helps you answer this question. It compares seven established sampling methods side by side, showing you exactly how many simulation runs each method requires and how long the study will take given your compute capacity. Below, we explain the thinking behind each method so you can make an informed choice for your specific problem.
The Full Factorial Problem: Why Brute Force Does Not Scale
Full Factorial testing means evaluating every combination of every variable at every level. For 2 variables with 3 levels each, that is just 9 runs. Manageable. But the number of combinations grows exponentially. This is sometimes called the curse of dimensionality.
With 3 variables and 5 levels you need 125 runs. With 5 variables and 10 levels: 100,000. With 10 variables and 10 levels: 10 billion. No engineering team has the time or budget for that. The entire field of Design of Experiments exists to solve this problem: how do you learn as much as possible about a system with as few experiments as possible?
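The exponential growth is easy to verify yourself. A minimal Python sketch (standard library only) of the full factorial run-count formula, reproducing the figures above:

```python
# Full factorial: every combination of every level -> levels ** variables
def full_factorial_runs(variables, levels):
    return levels ** variables

for k, lv in [(2, 3), (3, 5), (5, 10), (10, 10)]:
    print(f"{k} variables x {lv} levels -> {full_factorial_runs(k, lv):,} runs")
```

Running this prints 9, 125, 100,000, and 10,000,000,000 runs, matching the progression described in the text.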
Understanding the Sampling Methods
Fractional Factorial: Screening with Minimal Runs
If your goal is to identify which variables actually matter before investing in a full study, Fractional Factorial is the standard starting point. It uses a carefully chosen subset of a 2-level (high/low) full factorial design. Instead of testing every combination, it tests a strategically selected fraction that still captures main effects and key two-factor interactions.
The number of runs follows 2^(k-p), where k is the number of variables and p determines the fraction size. A higher p means fewer runs but more confounding between effects. The designs are chosen to maintain the highest possible resolution, meaning main effects remain distinguishable from interactions.
Fractional Factorial is particularly useful when you have 5 or more variables and suspect that only a few of them significantly affect the response. It is closely related to Taguchi orthogonal arrays, which apply the same mathematical principles with a focus on robust design.
Important to note: Fractional Factorial always uses a 2-level design (high and low values for each variable), regardless of how many levels you specify in the calculator above. This makes it excellent for screening but less suitable for detecting nonlinear behavior.
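The 2^(k-p) run count can be sketched in a few lines of Python. The example uses a standard 2^(7-3) design, which is a common Resolution IV screening choice; the function itself only computes run counts, not the actual design matrix:

```python
# Fractional factorial run count: 2 ** (k - p)
# k = number of variables, p = size of the fraction (larger p -> fewer runs,
# but more confounding between effects)
def fractional_factorial_runs(k, p):
    if not 0 <= p < k:
        raise ValueError("p must satisfy 0 <= p < k")
    return 2 ** (k - p)

# A 2^(7-3) design screens 7 variables in 16 runs
# instead of the 128 runs of the full 2^7 factorial.
print(fractional_factorial_runs(7, 3))  # 16
```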
Central Composite Design (CCD): Building Response Surfaces
When you move beyond screening and want to build a mathematical model of how your response varies with the inputs, Central Composite Design is the established approach. CCD extends a factorial design by adding axial (star) points along each variable axis and center point replicates.
The structure is: a factorial core (2^k runs), axial points (2k runs), and center points (typically 6 for up to 6 variables). This allows fitting of full quadratic models, which capture linear effects, two-factor interactions, and curvature.
CCD works well for 2 to 6 continuous variables. Beyond that, the factorial core grows quickly (2^7 = 128 runs just for the core), making it less practical for high-dimensional problems. For those cases, consider Latin Hypercube Sampling or adaptive methods like Simcenter HEEDS.
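The CCD run count follows directly from the structure described above: factorial core plus axial points plus center replicates. A quick sketch (the default of 6 center points is the assumption used in this article; textbooks vary):

```python
# CCD run count: 2**k factorial core + 2k axial points + center replicates
def ccd_runs(k, center_points=6):
    return 2 ** k + 2 * k + center_points

for k in range(2, 8):
    print(f"{k} variables -> {ccd_runs(k)} runs")
```

For 6 variables this gives 64 + 12 + 6 = 82 runs; at 7 variables the factorial core alone jumps to 128, illustrating why CCD is rarely used beyond 6 variables.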
Box-Behnken: When Extreme Corners Are Problematic
Box-Behnken designs are an alternative to CCD for response surface modeling. The key difference: Box-Behnken avoids testing at the extreme corner points of the design space. This is valuable when extreme combinations are physically unreliable, difficult to manufacture, or likely to cause simulation failures.
For 3 to 6 variables, Box-Behnken typically requires fewer runs than CCD while still supporting quadratic model fitting. The trade-off is that it provides less information about behavior at the edges of the design space.
Latin Hypercube Sampling (LHS): Efficient Space-Filling
Latin Hypercube Sampling takes a fundamentally different approach. Instead of placing samples at specific factorial or axial locations, LHS divides each variable range into N equal intervals and places exactly one sample in each interval. The result is a design that fills the entire space uniformly, without the rigid grid structure of factorial methods.
The widely used rule of thumb is to start with 10 times the number of variables as the sample count. So for 6 variables, begin with 60 LHS samples. This provides reasonable coverage for initial exploration and surrogate model fitting.
LHS is the default choice for many CAE engineers because it works with any number of variables, handles both continuous and discrete parameters naturally, and does not assume anything about the shape of the response. It is space-filling rather than model-based, meaning it covers the design space regardless of whether the underlying behavior is linear, quadratic, or highly nonlinear.
The calculator uses LHS samples (not levels) because LHS treats variables as continuous ranges. The “Levels per Variable” setting in the calculator above does not affect LHS or Simcenter HEEDS. Those methods work with continuous variables and are not constrained to discrete levels.
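The stratification idea is simple enough to sketch with the Python standard library alone. This is an illustrative implementation, not production code; real studies would use a dedicated DOE library with optimized (e.g. maximin) designs:

```python
import random

def latin_hypercube(n_samples, bounds, seed=None):
    """LHS sketch: one sample per equal-width interval in every
    dimension, with interval order shuffled independently per variable."""
    rng = random.Random(seed)
    columns = []
    for lo, hi in bounds:
        # place one random point inside each of the n_samples intervals
        pts = [lo + (hi - lo) * (i + rng.random()) / n_samples
               for i in range(n_samples)]
        rng.shuffle(pts)          # decouple the dimensions
        columns.append(pts)
    # transpose: one row per sample, one column per variable
    return [list(row) for row in zip(*columns)]

# 6 variables, 10x rule of thumb -> 60 samples, each variable in [0, 1]
samples = latin_hypercube(60, [(0.0, 1.0)] * 6, seed=42)
```

Each of the 6 variables ends up with exactly one sample in each of its 60 intervals, which is the stratification guarantee that distinguishes LHS from pure random sampling.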
Quasi-Random Sequences (R₂): Mathematically Uniform Coverage
Quasi-random sequences fill space more evenly than both random sampling and LHS by using low-discrepancy mathematical sequences. The R₂ sequence (based on the generalized golden ratio) provides particularly good uniformity in moderate dimensions.
In practice, quasi-random sampling is used for initial exploration when you want guaranteed uniform coverage without the stratification constraints of LHS. For most engineering applications, LHS and quasi-random methods produce similar results, but quasi-random sequences can be incrementally extended without redesigning the entire sample plan.
Simcenter HEEDS with SHERPA: Intelligent Adaptive Search
All the methods described above are static sampling plans. You decide the number of samples, generate the design, run all simulations, and analyze the results afterward. Simcenter HEEDS takes a fundamentally different approach through its SHERPA algorithm.
SHERPA (Simultaneous Hybrid Exploration that is Robust, Progressive, and Adaptive) runs multiple optimization strategies in parallel. After each batch of simulations completes, SHERPA analyzes the results and decides where to sample next. It focuses computational effort on the most promising regions of the design space rather than covering everything uniformly.
Published case studies and benchmarks from Siemens show typical reductions of 30 to 50% in the number of evaluations needed to find near-optimal designs, compared to static sampling approaches. The actual improvement depends on problem complexity, the number of objectives and constraints, noise in the response, and the shape of the design landscape.
The key difference between LHS and HEEDS is what you get at the end. LHS gives you a map of the design space: you understand how each variable affects performance. HEEDS gives you a near-optimal design: it finds the best solution within your evaluation budget. For many engineering teams, the distinction matters because the goal is not to understand everything about the design space but to find the best design within a project deadline.
Simcenter HEEDS integrates directly with Simcenter STAR-CCM+, Nastran, Abaqus, and over 50 other CAE tools. It handles the automation of geometry updates, meshing, solving, and post-processing without requiring optimization expertise from the user.
How to Choose the Right Method for Your Problem
The choice depends primarily on two things: what you want to learn and how many variables you have.
Screening (which variables matter?): Use Fractional Factorial for up to about 12 variables. It identifies the important factors with minimal runs, after which you can focus a more detailed study on just the significant variables.
Response surface modeling (how do variables interact?): Use CCD or Box-Behnken for 2 to 6 variables. These methods support fitting quadratic surrogate models that capture curvature and interactions. Choose Box-Behnken when extreme corner combinations are problematic.
Space-filling exploration (what does the design space look like?): Use LHS for any number of variables. Start with 10 times the number of variables and increase if the surrogate model fit is poor. This is the safest general-purpose choice when you do not have strong assumptions about the response shape.
Optimization (what is the best design?): Use Simcenter HEEDS when you want to find the optimum rather than map the entire space. Particularly effective with 8 or more variables where static methods become expensive, or when you have multiple competing objectives and constraints.
Many experienced teams combine methods: start with a Fractional Factorial screen, narrow down to the important variables, run an LHS study on those, and then let HEEDS optimize within the most promising region.
Practical Considerations for Simulation Studies
Wall Time vs. Compute Time
The calculator estimates wall-clock time based on perfect parallelization: total runs divided by simultaneous jobs, multiplied by time per run. In practice, expect 5 to 15% additional time for simulation startup and teardown, job scheduling overhead, and the occasional failed run that needs to be restarted.
How Many Simultaneous Runs Can You Afford?
This depends on your infrastructure. On a single engineering workstation with 32 to 64 cores, running simulations that each use 8 to 16 cores, you typically manage 2 to 4 simultaneous runs. With access to an HPC cluster, 8, 16, or more parallel jobs become possible. The calculator lets you adjust this number to see how parallelization affects your project timeline.
When to Use This Calculator
This tool is designed for the planning phase of a design study. Use it to answer questions like: How long will a 10-variable study take with our current hardware? Is it worth adding more compute capacity? Should we screen first or go straight to optimization? How much time does intelligent search save compared to traditional DOE?
The calculator provides estimates based on published DOE references and Simcenter HEEDS benchmarks. Actual results depend on problem complexity, simulation stability, and the specific characteristics of your design space.
About This Tool
The Design Space Exploration Planner was built by Volupe, a Siemens Platinum Smart Expert Solutions Partner specializing in simulation software and consulting across Europe. We help engineering teams set up, run, and analyze design studies using Simcenter STAR-CCM+, Simcenter HEEDS, and related tools.
If you want help planning your first design study or optimizing an existing workflow, get in touch with our team. We can help you go from the numbers in this calculator to a running optimization study in your actual simulation environment.
You might also find these tools useful:
- Y+ Calculator for CFD mesh sizing
- Grid Convergence Index (GCI) Calculator for mesh independence studies
- Turbulence Boundary Conditions Calculator for inlet condition estimation