Calculator Inputs
Example Data Table
| Case | Feature 1 | Feature 2 | Feature 3 | Feature 4 | Observed Target | Use Case |
|---|---|---|---|---|---|---|
| A | 0.82 | 0.45 | 0.66 | 0.30 | 1 | Strong positive tendency |
| B | 0.64 | 0.52 | 0.48 | 0.41 | 1 | Moderate probability pattern |
| C | 0.22 | 0.30 | 0.18 | 0.72 | 0 | Low-score classification pattern |
| D | 0.91 | 0.78 | 0.57 | 0.20 | 1 | High signal combination |
| E | 0.35 | 0.88 | 0.27 | 0.61 | 0 | Conflicting feature profile |
Use these rows to test the default settings or compare your own feature patterns.
Formula Used
This calculator models a compact feed-forward network: four feature inputs, one hidden layer of three neurons, a selectable activation function, editable weights and biases, and optional min-max normalization of the inputs.
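As a minimal sketch of that structure, the forward pass can be written as below. The weights, biases, and the choice of tanh here are hypothetical placeholders, not the calculator's defaults.

```python
import math

def predict(x, W_h, b_h, w_o, b_o, act=math.tanh):
    """Forward pass: 4 feature inputs -> 3 hidden neurons -> 1 output score."""
    # Each hidden neuron: activation of (weighted sum of inputs + bias)
    hidden = [act(sum(w * xi for w, xi in zip(row, x)) + b)
              for row, b in zip(W_h, b_h)]
    # Output: weighted sum of hidden outputs + output bias (raw score)
    return sum(w * h for w, h in zip(w_o, hidden)) + b_o

# Feature values from case A in the table above; weights are made up
x = [0.82, 0.45, 0.66, 0.30]
W_h = [[0.5, -0.2, 0.3, 0.1],
       [0.1, 0.4, -0.3, 0.2],
       [-0.2, 0.1, 0.5, -0.1]]
b_h = [0.0, 0.1, -0.1]
w_o = [0.6, -0.4, 0.3]
b_o = 0.05
score = predict(x, W_h, b_h, w_o, b_o)
```

In classification mode the raw `score` would then pass through a probability mapping and threshold; in regression mode it is reported as-is.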
How to Use This Calculator
- Choose the activation function that best fits your experiment.
- Select classification for binary output or regression for numeric output.
- Enter four feature values representing the observation you want predicted.
- Provide minimum and maximum values if you want automatic normalization.
- Adjust hidden-layer weights, biases, output weights, and output bias.
- Set a classification threshold when using binary decision output.
- Press Calculate Prediction to display the result above the form.
- Use the CSV or PDF buttons to export the calculated report.
FAQs
1. What does this calculator estimate?
It estimates either a binary class or a numeric score using a small neural network structure. You control the inputs, weights, biases, activation function, and threshold.
2. Why are there four features and three hidden neurons?
That layout keeps the model rich enough for experimentation while remaining small enough for manual review. It also makes each weighted path easier to inspect and export.
3. When should I enable normalization?
Enable normalization when your feature scales differ widely. Scaling helps prevent large raw values from dominating the hidden-layer sums and improves interpretability across inputs.
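A sketch of the min-max formula assumed here, including the out-of-range behavior described in FAQ 7:

```python
def min_max(x, lo, hi):
    # Standard min-max scaling against the entered minimum and maximum
    return (x - lo) / (hi - lo)

min_max(350.0, 0.0, 1000.0)  # 0.35 -- a large raw scale brought into [0, 1]
min_max(0.45, 0.0, 1.0)      # 0.45 -- already in range, value unchanged
min_max(12.0, 0.0, 10.0)     # 1.2  -- input outside the entered range exceeds 1
```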
4. What is the difference between classification and regression here?
Classification converts the output score into a probability and final class. Regression keeps the raw output score as the predicted numeric result without thresholding.
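A minimal sketch of that distinction, assuming a sigmoid probability mapping and a default threshold of 0.5:

```python
import math

def to_probability(score):
    # Sigmoid squashes the raw network score into (0, 1)
    return 1.0 / (1.0 + math.exp(-score))

def classify(score, threshold=0.5):
    # Classification mode: probability compared against the threshold
    return 1 if to_probability(score) >= threshold else 0

# Regression mode would simply report the raw score unchanged
```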
5. How should I choose an activation function?
Use sigmoid for smooth bounded responses, tanh for centered nonlinear behavior, ReLU for simple sparse activations, and linear for direct proportional influence.
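Those four choices can be sketched in one lookup table; the bounds noted in the comments match the descriptions above.

```python
import math

activations = {
    "sigmoid": lambda z: 1.0 / (1.0 + math.exp(-z)),  # smooth, bounded (0, 1)
    "tanh":    math.tanh,                             # centered, bounded (-1, 1)
    "relu":    lambda z: max(0.0, z),                 # sparse: zero for negative z
    "linear":  lambda z: z,                           # direct proportional influence
}
```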
6. Can I use this as a training tool?
This calculator is a prediction and experimentation tool, not a training engine. It helps you test known parameters rather than fit parameters to historical datasets.
7. Why can normalized values fall outside 0 to 1?
If an input falls outside the entered minimum and maximum range, min-max scaling will naturally produce values below zero or above one. That behavior can be informative.
8. What do the export buttons include?
The export tools include the result summary, network score, hidden outputs, normalized inputs, and selected model settings. They are useful for reporting or comparison snapshots.
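As a rough illustration (the field names and values below are hypothetical, not the calculator's exact export schema), a CSV snapshot of such a report could be assembled like this:

```python
import csv
import io

# Hypothetical report rows mirroring the fields the export describes
report = [
    ("prediction", "1"),
    ("network_score", "0.73"),
    ("hidden_outputs", "0.61; 0.24; 0.48"),
    ("normalized_inputs", "0.82; 0.45; 0.66; 0.30"),
    ("activation", "sigmoid"),
]

buf = io.StringIO()
csv.writer(buf).writerows(report)  # one key,value pair per line
csv_text = buf.getvalue()
```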