Decision Boundary Calculator

Set means, spreads, priors, and costs fast. Get one or two thresholds, plus clear rules. Export a report and share your decision basis securely.

Calculator
Inputs:
  • μ₁: center of class 1 distribution.
  • σ₁: spread of class 1; must be positive.
  • μ₂: center of class 2 distribution.
  • σ₂: spread of class 2; must be positive.
  • P₁: prior for class 1; any positive value, auto-normalized with P₂.
  • P₂: prior for class 2; any positive value, auto-normalized with P₁.
  • C₁₂: penalty for missing class 1.
  • C₂₁: penalty for false alarms of class 1.
  • x-min: only affects the plotted range.
  • x-max: must be greater than x-min.
  • Plot resolution: higher values look smoother.
Tip: If σ₁ = σ₂, you’ll get exactly one threshold; if σ₁ ≈ σ₂ but not equal, the second threshold usually falls far outside the plotted range.
Example data table
A typical setup for two overlapping measurement distributions.
Scenario               μ₁     σ₁    μ₂     σ₂    P₁     P₂     C₁₂   C₂₁   Interpretation
Quality pass vs. fail  0.0    1.0   2.0    1.5   0.70   0.30   1     4     False pass is costly; shift boundary stricter.
Control vs. treatment  10.0   2.0   12.0   2.0   0.50   0.50   1     1     Equal variances; one clean threshold.
Signal vs. noise       -1.0   0.8   1.2    0.5   0.85   0.15   2     1     Prefer catching signal; accept some false alarms.
Use one row’s values to test quickly.
Formula used
Bayes risk rule for two 1D Gaussian classes.

Let class-conditional densities be f₁(x)=𝒩(μ₁,σ₁²) and f₂(x)=𝒩(μ₂,σ₂²), with priors P₁, P₂, and misclassification costs C₁₂, C₂₁.

The boundary occurs where expected conditional risks tie: C₁₂·P₁·f₁(x) = C₂₁·P₂·f₂(x).

Define k = (C₂₁·P₂·σ₁)/(C₁₂·P₁·σ₂). Taking logs, the boundary satisfies

  (x − μ₂)²/(2σ₂²) − (x − μ₁)²/(2σ₁²) = ln k

  • Equal variances (σ₁ = σ₂): the equation is linear in x, giving a single threshold.
  • Unequal variances: the equation is quadratic in x and can yield 0, 1, or 2 real thresholds.
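Both cases above reduce to one small routine. A sketch in Python (the function name and the zero-variance-difference tolerance are our choices, not part of the calculator):

```python
import math

def decision_thresholds(mu1, s1, mu2, s2, p1, p2, c12, c21):
    """Solve C12*P1*f1(x) = C21*P2*f2(x) for two 1D Gaussian classes.

    Returns 0, 1, or 2 real thresholds, sorted ascending.
    """
    k = (c21 * p2 * s1) / (c12 * p1 * s2)
    # Expand the log boundary equation into a*x^2 + b*x + c = 0.
    a = s1**2 - s2**2
    b = 2 * (s2**2 * mu1 - s1**2 * mu2)
    c = s1**2 * mu2**2 - s2**2 * mu1**2 - 2 * s1**2 * s2**2 * math.log(k)
    if abs(a) < 1e-12:                  # equal variances: linear equation
        return [-c / b]
    disc = b**2 - 4 * a * c
    if disc < 0:                        # scaled densities never tie
        return []
    r = math.sqrt(disc)
    return sorted([(-b - r) / (2 * a), (-b + r) / (2 * a)])

# "Control vs. treatment" row from the example table: one clean threshold.
print(decision_thresholds(10.0, 2.0, 12.0, 2.0, 0.50, 0.50, 1, 1))  # [11.0]
```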
How to use this calculator
A practical workflow for decision thresholds.
  1. Enter μ and σ for both classes from your data.
  2. Set P₁ and P₂ to reflect base rates.
  3. Choose C₁₂ and C₂₁ to match real impact.
  4. Click Compute decision boundary to get thresholds.
  5. Review the decision rule and the density chart.
  6. Export CSV or PDF to document your assumptions.
Operational meaning of a decision boundary
Data-focused guidance for boundary setting.

In this calculator, a decision boundary is the x value where scaled evidence for two classes is equal. Evidence combines the normal density f(x) with business context through priors and costs. When you compute a threshold, you are selecting the cut point that minimizes expected loss under the specified assumptions.

Inputs that drive the boundary most

The means μ₁ and μ₂ set separation, while σ₁ and σ₂ control overlap. As overlap increases, the boundary becomes more sensitive to priors and costs. If P₂ grows relative to P₁, the boundary moves toward class 1’s mean, enlarging the region decided as class 2 and reducing the chance that true class 2 cases are mislabeled. If C₂₁ increases, the boundary becomes stricter against predicting class 1.

Equal-variance case and single threshold behavior

When σ₁ equals σ₂, the log likelihood ratio is linear in x, so the boundary is a single threshold. With μ₁ = 0, μ₂ = 2, σ = 1, equal priors, and equal costs, the boundary is at x = 1.00. Changing the priors to P₁ = 0.70 and P₂ = 0.30 shifts the cut toward class 2, to about x ≈ 1.42, enlarging the region decided as class 1.
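In the equal-variance case the threshold has a closed form, x* = (μ₁ + μ₂)/2 − σ²·ln k / (μ₂ − μ₁). A minimal sketch checking the numbers in this section (the function name is ours):

```python
import math

def equal_sigma_threshold(mu1, mu2, sigma, p1, p2, c12, c21):
    # sigma1 == sigma2 == sigma, so the sigma ratio inside k cancels.
    k = (c21 * p2) / (c12 * p1)
    return (mu1 + mu2) / 2 - sigma**2 * math.log(k) / (mu2 - mu1)

print(equal_sigma_threshold(0, 2, 1, 0.5, 0.5, 1, 1))            # 1.0
print(round(equal_sigma_threshold(0, 2, 1, 0.7, 0.3, 1, 1), 2))  # 1.42
```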

Unequal variances and two-threshold regions

When σ1 differs from σ2, the equality becomes quadratic and can produce zero, one, or two real solutions. Two thresholds mean the preferred class can switch twice as x increases. This often occurs when one class is narrow and peaked while the other is wider. Read the rule text to see which intervals map to each class.
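A two-threshold configuration can be checked directly from the quadratic form of the boundary equation. A sketch with illustrative parameters (a wide class 1 with a narrow, peaked class 2 inside it; none of these numbers come from the article):

```python
import math

# Illustrative parameters: wide class 1, narrow class 2 centered inside it.
mu1, s1 = 0.0, 2.0
mu2, s2 = 1.0, 0.5
p1 = p2 = 0.5            # equal priors
c12 = c21 = 1.0          # equal costs

k = (c21 * p2 * s1) / (c12 * p1 * s2)
# Coefficients of a*x^2 + b*x + c = 0 from the log boundary equation.
a = s1**2 - s2**2
b = 2 * (s2**2 * mu1 - s1**2 * mu2)
c = s1**2 * mu2**2 - s2**2 * mu1**2 - 2 * s1**2 * s2**2 * math.log(k)
r = math.sqrt(b**2 - 4 * a * c)
roots = sorted([(-b - r) / (2 * a), (-b + r) / (2 * a)])
print(roots)  # roughly 0.17 and 1.97: class 2 is preferred between them
```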

Using priors and costs as decision levers

Priors represent base rates from production, sampling, or historical logs. Costs translate consequences like refunds, safety incidents, or missed detections. The calculator uses k = (C₂₁·P₂·σ₁)/(C₁₂·P₁·σ₂); increasing k weights class 2 more heavily, shrinking the region decided as class 1. Document priors and costs, then export a report to keep choices auditable.
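The lever effect of k is easiest to see in the equal-variance closed form. A small sweep with illustrative values (μ₁ = 0 below μ₂ = 2):

```python
import math

mu1, mu2, sigma = 0.0, 2.0, 1.0   # class 1 sits below class 2
for k in (0.5, 1.0, 2.0):
    x = (mu1 + mu2) / 2 - sigma**2 * math.log(k) / (mu2 - mu1)
    print(f"k={k}: threshold={x:.3f}")
# k=0.5: threshold=1.347  (class 1 region grows)
# k=1.0: threshold=1.000
# k=2.0: threshold=0.653  (class 1 region shrinks)
```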

Practical validation and reporting

Validate inputs by estimating μ and σ from labeled samples and checking the normal fit, for example with a Q-Q plot. Then stress test: vary priors by ±20% and costs by 2× to see how stable the thresholds are. If they move drastically, improve measurement quality or gather more labels. Store the exported CSV or PDF alongside model notes for traceable governance.
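The stress test described above can be scripted. A sketch assuming equal variances and a hypothetical base configuration (all parameter values here are illustrative):

```python
import math
from itertools import product

def threshold(mu1, mu2, sigma, p1, p2, c12, c21):
    # Closed form for the equal-variance case.
    k = (c21 * p2) / (c12 * p1)
    return (mu1 + mu2) / 2 - sigma**2 * math.log(k) / (mu2 - mu1)

base_p1 = 0.70                       # hypothetical base rate for class 1
for p_scale, c_scale in product((0.8, 1.0, 1.2), (1.0, 2.0)):
    p1 = base_p1 * p_scale           # priors varied by -20% to +20%
    p2 = 1.0 - p1
    t = threshold(0.0, 2.0, 1.0, p1, p2, 1.0, 1.0 * c_scale)
    print(f"P1={p1:.2f}  C21={c_scale:.0f}x  ->  threshold {t:.2f}")
```

If the printed thresholds cluster tightly, the decision is robust to the assumed priors and costs; a wide spread signals that those assumptions need better evidence.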

FAQs
Common questions when setting thresholds.

1) Why can there be two thresholds?

With unequal variances, the boundary equation is quadratic. The preferred class can switch twice as x increases, creating a middle interval where the other class is optimal.

2) What do priors change in the result?

Priors scale each density by base rate. Increasing a class prior increases its influence, shifting the boundary to reduce expected errors against that class under the same costs.

3) How should I pick misclassification costs?

Translate consequences into relative penalties. Use higher cost for the error you want to avoid most, such as safety misses or expensive false approvals.

4) What if the calculator shows no real boundary?

The scaled densities may never intersect, meaning one class dominates for all x in the plotted range. Verify parameters and consider whether the normal assumption or ranges are appropriate.

5) Does the plot affect the threshold?

No. The threshold is computed analytically from μ, σ, priors, and costs. The plot only visualizes the two PDFs and the computed boundary lines.

6) Can I use this for real data that is not normal?

You can as an approximation, but accuracy depends on fit. If distributions are skewed or multimodal, consider transforming data or using a nonparametric density estimate and recomputing boundaries.


Important Note: All the calculators listed on this site are for educational purposes only, and we do not guarantee the accuracy of results. Please consult other sources as well.