Least Squares Error Formula:
The least squares error measures the discrepancy between observed values (y) and values predicted by a linear model (Xβ). It's the sum of squared residuals, minimized in linear regression to find the best-fitting model.
The calculator uses the matrix algebra formula:
SSE = (y − Xβ)ᵀ(y − Xβ)
Where:
y is the n×1 vector of observed values, X is the n×p design matrix, and β is the p×1 vector of coefficients.
Explanation: The formula computes the sum of squared differences between observed values and model predictions.
Details: Minimizing this error yields the best linear unbiased estimator (BLUE) under Gauss-Markov assumptions. It's fundamental in regression analysis and machine learning.
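As an illustration, here is a minimal NumPy sketch of this computation; the data values are made up purely for the example and are not part of the calculator.

```python
import numpy as np

# Hypothetical example data, for illustration only
y = np.array([2.0, 3.1, 4.9, 6.2])              # observed values, n x 1
X = np.array([[1, 1], [1, 2], [1, 3], [1, 4]])   # design matrix, n x p (intercept + slope)
beta = np.array([1.0, 1.3])                      # coefficient vector, p x 1

# Residual vector: observed minus predicted values
residuals = y - X @ beta

# Sum of squared errors: (y - Xb)^T (y - Xb)
sse = residuals @ residuals
print(f"SSE = {sse:.4f}")
```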
Tips: Enter y as comma-separated values, X matrix with rows separated by semicolons and values by commas, and β as comma-separated values. Dimensions must match (X: n×p, y: n×1, β: p×1).
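For reference, a small sketch of how inputs in that format could be parsed; the parse_vector and parse_matrix helpers are hypothetical and not part of the calculator itself.

```python
import numpy as np

def parse_vector(text: str) -> np.ndarray:
    """Parse comma-separated values like '2.0, 3.1, 4.9' into a 1-D array."""
    return np.array([float(v) for v in text.split(",")])

def parse_matrix(text: str) -> np.ndarray:
    """Parse rows separated by semicolons and values by commas, e.g. '1,1; 1,2'."""
    return np.array([[float(v) for v in row.split(",")] for row in text.split(";")])

y = parse_vector("2.0, 3.1, 4.9, 6.2")
X = parse_matrix("1,1; 1,2; 1,3; 1,4")
beta = parse_vector("1.0, 1.3")

# Dimension check mirroring the tip above: X is n x p, y is n x 1, beta is p x 1
assert X.shape == (y.size, beta.size), "dimension mismatch"
```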
Q1: What units does the error have?
A: The error is in squared units of the dependent variable (y).
Q2: How is this related to R-squared?
A: R-squared = 1 - (SSE/SST), where SSE is this error and SST is total sum of squares.
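A short sketch of that relationship, again using hypothetical data:

```python
import numpy as np

# Hypothetical example data, for illustration only
y = np.array([2.0, 3.1, 4.9, 6.2])
X = np.array([[1, 1], [1, 2], [1, 3], [1, 4]])
beta = np.array([1.0, 1.3])

sse = np.sum((y - X @ beta) ** 2)     # residual sum of squares (this calculator's output)
sst = np.sum((y - y.mean()) ** 2)     # total sum of squares about the mean
r_squared = 1.0 - sse / sst
print(f"R^2 = {r_squared:.4f}")
```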
Q3: When would I use this calculation?
A: When evaluating linear models, comparing model fits, or performing regression diagnostics.
Q4: What's the difference between SSE and MSE?
A: In regression, MSE (Mean Squared Error) is SSE divided by the residual degrees of freedom (n − p); in machine learning contexts it is often defined as SSE divided by n.
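A minimal sketch of the regression-style MSE, assuming the same hypothetical data as above:

```python
import numpy as np

# Hypothetical example: n = 4 observations, p = 2 coefficients
y = np.array([2.0, 3.1, 4.9, 6.2])
X = np.array([[1, 1], [1, 2], [1, 3], [1, 4]])
beta = np.array([1.0, 1.3])

n, p = X.shape
sse = np.sum((y - X @ beta) ** 2)
mse = sse / (n - p)   # residual mean square: SSE over degrees of freedom
print(f"MSE = {mse:.4f}")
```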
Q5: Can this handle weighted least squares?
A: This calculator implements ordinary least squares. Weighted versions require additional inputs.
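For completeness, a sketch of what a weighted version would compute, where the per-observation weight vector w is the additional input mentioned above (values hypothetical):

```python
import numpy as np

y = np.array([2.0, 3.1, 4.9, 6.2])
X = np.array([[1, 1], [1, 2], [1, 3], [1, 4]])
beta = np.array([1.0, 1.3])
w = np.array([1.0, 0.5, 2.0, 1.0])   # hypothetical per-observation weights

residuals = y - X @ beta
# Weighted SSE: (y - Xb)^T W (y - Xb) with W = diag(w)
weighted_sse = np.sum(w * residuals ** 2)
print(f"Weighted SSE = {weighted_sse:.4f}")
```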