Anomaly Detection Series Calculator

Turn raw signals into trustworthy alerts you can act on. Tune thresholds, compare methods, and review scores. Results appear above the form, and charts and tables export in seconds.

Calculator

Paste your series, pick one or more methods, then run detection. Use labels for timestamps, IDs, or sample names.
Accepts commas, spaces, or new lines. Non-numeric tokens are ignored.
Use dates, sensor IDs, or sample names. If counts mismatch, labels are ignored.
Select multiple methods to get broader coverage.
Log transform needs values greater than -1.
Useful when trends hide anomalies.
Trailing moving average window in points.
Typical range: 2.5 to 4.0.
Common robust cutoff: 3.5.
Classic Tukey fences use 1.5.
Uses previous window points as baseline.
Best for local regime changes.
Higher alpha reacts faster to changes.
Higher L reduces false positives.
Limits on-page table length only.
Results appear above, right under the header.

Example data table

This example contains two spikes and one sudden drop.
Label       Value  Note
2026-01-01     10  Normal range
2026-01-02     11  Normal range
2026-01-03     12  Normal range
2026-01-04     13  Normal range
2026-01-05     55  High spike
2026-01-06     14  Normal range
2026-01-07     15  Normal range
2026-01-08     16  Normal range
2026-01-09      4  Sudden drop
2026-01-10     17  Recovery
2026-01-11     60  Second spike
2026-01-12     18  Normal range
Click “Load example” above to populate the form.

Formulas used

  • Global Z-score: z = (x − μ) / σ. Flag if |z| ≥ threshold.
  • Modified Z (MAD): mz = 0.6745 · (x − median) / MAD. Robust under heavy tails.
  • IQR fences: bounds = [Q1 − k·IQR, Q3 + k·IQR], where IQR = Q3 − Q1.
  • Rolling Z: compute μᵣ and σᵣ from previous window; zᵣ = (x − μᵣ)/σᵣ.
  • EWMA: mₜ = αxₜ + (1−α)mₜ₋₁, residual rₜ = xₜ − mₜ; flag if |rₜ| ≥ L·σ·√(α/(2−α)), where σ is the series standard deviation.
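
The three global formulas above can be sketched in Python using only the standard library. This is a minimal illustration, not the calculator's own code; the function names and the assumption that MAD and σ are nonzero are ours.

```python
import statistics

def zscore_flags(xs, threshold=3.0):
    # Global Z-score: flag points at least `threshold` deviations from the mean.
    mu = statistics.mean(xs)
    sigma = statistics.pstdev(xs)  # population deviation; use stdev() for sample
    return [abs((x - mu) / sigma) >= threshold for x in xs]

def modified_z_flags(xs, threshold=3.5):
    # Modified Z-score: median/MAD version, robust to heavy tails (assumes MAD > 0).
    med = statistics.median(xs)
    mad = statistics.median(abs(x - med) for x in xs)
    return [abs(0.6745 * (x - med) / mad) >= threshold for x in xs]

def iqr_flags(xs, k=1.5):
    # Tukey fences: flag values outside [Q1 - k*IQR, Q3 + k*IQR].
    q1, _, q3 = statistics.quantiles(xs, n=4)
    iqr = q3 - q1
    return [x < q1 - k * iqr or x > q3 + k * iqr for x in xs]
```

On the example table above, the robust detectors flag both spikes while the global Z-score at its default cutoff of 3.0 flags nothing, because the spikes themselves inflate the standard deviation.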

How to use this calculator

  1. Paste your numeric series into the values field.
  2. Optional: add matching labels for timestamps or IDs.
  3. Select one or more detection methods to compare.
  4. Tune thresholds, window sizes, and direction filters.
  5. Run detection and review flagged points above the form.
  6. Export CSV or PDF for sharing and audit trails.

Data preparation for time series alerts

Reliable anomaly detection starts with clean inputs. This calculator accepts values separated by commas, spaces, or new lines, then ignores non‑numeric tokens. Optional labels let you track timestamps, batch IDs, or sensor names. If label counts do not match the series, default point names are used to prevent misalignment. Use log(1+x) when magnitudes vary, and standardization when features need comparable scale. First differencing can remove drift before scoring extremes.
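
The parsing and preprocessing rules described above can be sketched as follows (an illustrative approximation, not the page's actual parser; function names are our own):

```python
import math
import re

def parse_series(text):
    # Split on commas, spaces, or new lines; skip non-numeric tokens.
    values = []
    for token in re.split(r"[,\s]+", text.strip()):
        try:
            values.append(float(token))
        except ValueError:
            continue  # non-numeric tokens are ignored
    return values

def preprocess(values, log_transform=False, difference=False):
    # log(1+x) compresses large magnitudes; requires every x > -1.
    if log_transform:
        values = [math.log1p(x) for x in values]
    # First differencing turns levels into changes, removing drift.
    if difference:
        values = [b - a for a, b in zip(values, values[1:])]
    return values
```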

Choosing robust versus parametric detectors

Method choice depends on distribution shape and outlier frequency. Global Z‑score uses the mean and standard deviation, working best when data is roughly normal and stable. Modified Z replaces mean with median and scales by MAD, making it resistant to heavy tails and sudden spikes. IQR fences compare each value to quartile‑based bounds, often effective for skewed metrics. Select multiple methods to cross‑validate flags and reduce blind spots overall.

Windowed baselines for changing behavior

Nonstationary series benefit from local baselines. Rolling Z computes mean and deviation from a trailing window, so regime shifts are handled without overreacting to old history. EWMA builds a smoothed baseline using alpha, then evaluates residuals against a dynamic limit; larger alpha reacts faster, while larger L reduces alerts. Pair smoothing with a small window when measurements are noisy. For seasonal data, consider labeling cycles and comparing within periods. Keep windows consistent.
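
Both windowed baselines can be sketched in a few lines of Python (a simplified reading of the formulas above, not the calculator's implementation; names are ours):

```python
import statistics

def rolling_z_flags(xs, window=5, threshold=3.0):
    # Score each point against the mean and deviation of the previous `window` points.
    flags = [False] * len(xs)
    for i in range(window, len(xs)):
        hist = xs[i - window:i]
        mu = statistics.mean(hist)
        sigma = statistics.pstdev(hist)
        if sigma > 0 and abs(xs[i] - mu) / sigma >= threshold:
            flags[i] = True
    return flags

def ewma_flags(xs, alpha=0.3, L=3.0):
    # EWMA baseline with a control-chart limit L * sigma * sqrt(alpha / (2 - alpha)).
    sigma = statistics.pstdev(xs)
    limit = L * sigma * (alpha / (2 - alpha)) ** 0.5
    m = xs[0]
    flags = [False]  # the first point seeds the baseline and is never flagged
    for x in xs[1:]:
        m = alpha * x + (1 - alpha) * m
        flags.append(abs(x - m) >= limit)
    return flags
```

Note how the rolling detector misses a spike that falls inside its warm-up window, while EWMA catches it; running both illustrates why combining methods reduces blind spots.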

Threshold tuning for practical monitoring

Thresholds should reflect business risk, not only statistics. Start with common defaults, then measure the anomaly rate reported in the results panel. If too many points are flagged, increase Z or Modified Z cutoffs, raise the IQR multiplier, or increase L for EWMA. If you miss known incidents, lower thresholds or reduce smoothing. Direction filtering is useful for one‑sided KPIs, such as latency spikes or inventory drops. Recalibrate after data changes.

Reporting, sharing, and audit readiness

Operational teams need explanations they can audit. The flagged table lists raw values, processed values, and the exact rule that triggered each alert, enabling fast triage. CSV export supports downstream dashboards and model monitoring pipelines, while PDF export provides a snapshot for incident reviews and compliance records. Record your chosen settings alongside the report so reruns are comparable. When sharing results, include labels to pinpoint the time or asset impacted with precision.
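
A CSV export of the flagged table can be sketched like this (an illustrative stand-in for the export button, with column names of our choosing):

```python
import csv
import io

def export_flags_csv(labels, values, flags):
    # Serialize label/value/flag triples so reruns and reviews are comparable.
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["label", "value", "flagged"])
    for label, value, flagged in zip(labels, values, flags):
        writer.writerow([label, value, flagged])
    return buf.getvalue()
```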

FAQs

What series length is recommended?

At least 20 points helps stabilize quartiles and deviations, but the calculator runs with 3+. For rolling methods, use a window that leaves enough history, such as 12–50 points.

Should I use population or sample deviation?

Choose population when the series represents the full period you care about. Choose sample when values are a sample from a larger process and you want an unbiased estimate. Differences are small for long series.
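
The two estimators differ only in the divisor (n versus n − 1), as a toy series of our own shows with Python's standard library:

```python
import statistics

data = [10, 11, 12, 13, 14]
pop = statistics.pstdev(data)   # divides by n: the series IS the whole period
samp = statistics.stdev(data)   # divides by n - 1: the series samples a larger process
# samp is always slightly larger than pop; the gap shrinks as the series grows.
```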

When is Modified Z better than Z-score?

Modified Z is preferred when the data is skewed, contains repeated spikes, or violates normality. Median and MAD reduce the influence of extreme points, so the baseline stays stable while outliers still stand out.

What do transform and differencing change?

Transforms change scale before scoring. Log(1+x) compresses large ranges, and standardization rescales to comparable units. First differencing converts levels into changes, which can reveal sudden shifts even when the original series trends upward.
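
A tiny made-up example shows why differencing helps on trending data:

```python
# Hypothetical trending series: the stall at the repeated 16 is invisible to
# global cutoffs because every raw value sits inside the overall range.
levels = [10, 12, 14, 16, 16, 20, 22]
diffs = [b - a for a, b in zip(levels, levels[1:])]
# First differences flatten the steady +2 trend,
# so the stall (0) and the catch-up jump (4) stand out.
```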

Why do labels sometimes disappear?

Labels are only applied when their count matches the number of values. This prevents accidental off‑by‑one pairing that would misreport which timepoint was anomalous. Fix by adding or removing labels to match exactly.

How should I interpret “processed value”?

Processed value is the number after transform, differencing, and smoothing. Detection rules run on processed values, but raw values are shown for business context. If processed becomes NA, it was not evaluated due to invalid math or missing history.

Related Calculators

ARIMA Forecast Calculator · GRU Forecast Calculator · Moving Average Forecast · Seasonality Detection Tool · Time Series Decomposition · Auto ARIMA Selector · Forecast Accuracy Calculator · MAPE Error Calculator · RMSE Forecast Error · MAE Error Calculator

Important Note: All calculators listed on this site are for educational purposes only, and we do not guarantee the accuracy of results. Please consult other sources as well.