Questions tagged [linear-regression]

Techniques for analyzing the relationship between one or more "dependent" variables and one or more "independent" variables.

"Regression" is a general term for a wide variety of techniques for analyzing the relationship between one or more dependent variables and one or more independent variables. Typically the dependent variables are modeled with probability distributions whose parameters are assumed to vary (deterministically) with the independent variables.

Ordinary least squares (OLS) regression affords a simple example in which the expectation of one dependent variable is assumed to depend linearly on the independent variables. The unknown coefficients in the assumed linear function are estimated by choosing values for them that minimize the sum of squared differences between the values of the dependent variable and the corresponding fitted values.
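The minimization described above can be sketched in a few lines of NumPy (the data here is synthetic; `np.linalg.lstsq` finds the coefficients minimizing the sum of squared residuals):

```python
import numpy as np

# Toy data: y depends roughly linearly on x (synthetic example)
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 2.0 + 3.0 * x + rng.normal(scale=1.0, size=x.size)

# Design matrix with an intercept column
X = np.column_stack([np.ones_like(x), x])

# OLS: choose coefficients minimizing the sum of squared residuals
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta)  # approximately [2.0, 3.0]
```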

769 questions
10
votes
1 answer

Assumptions of linear regression

In simple terms, what are the assumptions of linear regression? I just want to know when I can apply a linear regression model to a dataset.
Anvay Joshi
  • 119
  • 4
9
votes
3 answers

What is the difference between residual sum of squares and ordinary least squares?

They look like the same thing to me but I'm not sure. Update: in retrospect, this was not a very good question. OLS refers to fitting a line to data, and RSS is the cost function that OLS uses. It finds the parameters that give the least residual…
sebastianspiegel
  • 891
  • 4
  • 11
  • 16
7
votes
2 answers

The Why Behind Sum of Squared Errors in a Linear Regression

I'm just starting to learn about linear regressions and was wondering why it is that we opt to minimize the sum of squared errors. I understand the squaring helps us balance positive and negative individual errors (so say e1 = -2 and e2 = 4, we'd…
stk1234
  • 583
  • 1
  • 6
  • 6
7
votes
3 answers

Understanding Locally Weighted Linear Regression

I'm having trouble understanding how we choose the weight function. In Andrew Ng's notes, as a method for calculating a local weight, a standard choice of weights is given by: What I don't understand is, what exactly is the x here? Apparently Note…
lte__
  • 1,320
  • 5
  • 18
  • 27
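For the question above: in Ng's notes the standard choice is the Gaussian-like weight $w^{(i)} = \exp(-(x^{(i)}-x)^2/(2\tau^2))$, where $x$ is the query point at which the prediction is made. A minimal sketch (the data and bandwidth $\tau$ are made up):

```python
import numpy as np

def lwr_predict(x_query, X, y, tau=0.3):
    """Locally weighted linear regression at a single query point.

    Each training point x_i gets weight exp(-(x_i - x_query)^2 / (2 tau^2)),
    so points near the query dominate the fit (x_query is the 'x' in the notes).
    """
    w = np.exp(-((X - x_query) ** 2) / (2 * tau ** 2))
    A = np.column_stack([np.ones_like(X), X])   # intercept + slope
    W = np.diag(w)
    # Weighted normal equations: theta = (A^T W A)^{-1} A^T W y
    theta = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)
    return theta[0] + theta[1] * x_query

rng = np.random.default_rng(1)
X = np.linspace(0, 2 * np.pi, 100)
y = np.sin(X) + rng.normal(scale=0.1, size=X.size)
print(lwr_predict(np.pi / 2, X, y))  # close to sin(pi/2) = 1
```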
5
votes
2 answers

How do I fit a curve to non-linear data?

I did an experiment at my university and collected data $(ω,υ(ω))$ modeled by the equation: $$ v(ω)=\frac{C}{\sqrt{(ω^2-ω_0^2 )^2 +γ^2 ω^2}} $$ where $ω_0$ is known. Do you know how I can fit a curve to my data $(ω,υ(ω))$, and how to find the parameter…
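A common way to fit a curve of the form above is nonlinear least squares, e.g. `scipy.optimize.curve_fit`. A sketch with synthetic data standing in for the experiment (the values of $ω_0$, $C$ and $γ$ here are placeholders):

```python
import numpy as np
from scipy.optimize import curve_fit

w0 = 5.0  # known resonance frequency (placeholder value)

def resonance(w, C, gamma):
    """v(w) = C / sqrt((w^2 - w0^2)^2 + gamma^2 w^2)"""
    return C / np.sqrt((w**2 - w0**2) ** 2 + gamma**2 * w**2)

# Synthetic data standing in for the experimental (w, v(w)) pairs
rng = np.random.default_rng(2)
w = np.linspace(1, 10, 80)
v = resonance(w, C=40.0, gamma=1.5) + rng.normal(scale=0.01, size=w.size)

# Initial guesses for C and gamma help the nonlinear fit converge
(C_hat, gamma_hat), _ = curve_fit(resonance, w, v, p0=[10.0, 1.0])
print(C_hat, gamma_hat)  # approximately 40.0 and 1.5
```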
4
votes
2 answers

Difference between non-linear regression and polynomial regression

I have been reading a couple of articles on polynomial regression vs. non-linear regression, but they say the two are different concepts. I mean, when you say polynomial regression, doesn't it in fact imply that it's non-linear? Then why…
jsr
  • 43
  • 1
  • 6
3
votes
2 answers

Why does R-squared always keep increasing?

I have read in multiple articles that R-squared always increases with the number of features, even though a feature may not be of any significance. The formula for R-squared is $$1 - \frac{\sum(y-\hat{y})^2}{\sum(y-\bar{y})^2}$$ If the denominator is…
Hitesh Somani
  • 399
  • 2
  • 10
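The claim in the question above can be checked directly: with $R^2 = 1 - \mathrm{RSS}/\mathrm{TSS}$, adding any feature (even pure noise) can only decrease RSS, so $R^2$ never goes down. A sketch with made-up data:

```python
import numpy as np

def r_squared(X, y):
    """R^2 = 1 - RSS/TSS for an OLS fit with intercept."""
    A = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    rss = np.sum((y - A @ beta) ** 2)
    tss = np.sum((y - y.mean()) ** 2)
    return 1 - rss / tss

rng = np.random.default_rng(3)
x = rng.normal(size=(100, 1))
y = 1.0 + 2.0 * x[:, 0] + rng.normal(size=100)

noise_feature = rng.normal(size=(100, 1))  # irrelevant predictor
r2_small = r_squared(x, y)
r2_large = r_squared(np.hstack([x, noise_feature]), y)
print(r2_small, r2_large)  # r2_large >= r2_small, even though the new feature is noise
```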
3
votes
1 answer

For a linear model without an intercept, why does the redundant term in one-hot encoding function as an intercept?

In this question Elias Strehle pointed out that if we keep all the levels during one-hot encoding on a linear model without an intercept, the redundant feature will function as an intercept. Why is this the case? Isn't it true that in a linear model, the…
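One way to see it: the kept one-hot columns sum to 1 in every row, so their span contains the constant vector, and a constant can be absorbed into the per-level coefficients. A small demonstration (synthetic data):

```python
import numpy as np

rng = np.random.default_rng(7)
cat = rng.integers(0, 3, size=100)           # categorical feature with 3 levels
onehot = np.eye(3)[cat]                      # keep ALL levels (no column dropped)
y = 2.0 + np.array([0.0, 1.0, -1.0])[cat] + rng.normal(scale=0.1, size=100)

# Fit WITHOUT an explicit intercept: the one-hot columns sum to 1 in every row,
# so a constant can be absorbed into the per-level coefficients.
beta, *_ = np.linalg.lstsq(onehot, y, rcond=None)
print(beta)  # roughly [2.0, 3.0, 1.0]: each coefficient = intercept + level effect
```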
3
votes
1 answer

Is it ok to trust regression predictions when none of the coefficients are statistically significant?

Background to the problem: I am estimating individual treatment effects using a double machine learning model. I do not know the true treatment effects for my problem. Double ML: Given Y (outcome), T (treatment) and X (features), Y = aT + bX +…
Chandra
  • 131
  • 2
3
votes
1 answer

Elements of Statistical Learning - question on p. 12

I am starting to work through Elements of Statistical Learning, and right off the bat I am coming across things that I don't understand. I would be grateful for any help from this community. Please let me know if this is not the appropriate forum to…
2
votes
0 answers

Reproduce Figure 3.2 in Introduction to Statistical Learning

Has anyone reproduced Figure 3.2 in Introduction to Statistical Learning (James et al)? https://trevorhastie.github.io/ISLR/ISLR%20Seventh%20Printing.pdf They have a contour plot with circles. Here is my code, which produces a contour plot with…
2
votes
2 answers

Does it violate the assumptions of linear regression to perform it on time series data?

One of the assumptions of linear regression says that the errors must be independent i.e., the residuals must not depend on each other. Let's say we are using linear regression to model the temperature on a given day. If it is 13:00 and 20 degrees,…
codeananda
  • 278
  • 5
  • 11
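The independence concern in the question above can be checked empirically, for instance via the lag-1 autocorrelation of the residuals (a rough stand-in for a formal test such as Durbin-Watson). A sketch with synthetic AR(1) errors:

```python
import numpy as np

rng = np.random.default_rng(8)
t = np.arange(200)

# AR(1) errors: each error depends on the previous one (violates independence)
e = np.zeros(200)
for i in range(1, 200):
    e[i] = 0.8 * e[i - 1] + rng.normal(scale=1.0)
y = 10.0 + 0.05 * t + e

# Fit a line, then check the lag-1 autocorrelation of the residuals
X = np.column_stack([np.ones(len(t)), t])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
lag1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]
print(lag1)  # well above 0: residuals are not independent
```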
2
votes
0 answers

Least squares with non-negative eigenvalues

I am trying to use least squares to solve a problem of the form $u = -Kv$, where $u$ and $v$ are vectors of size 3 and $K$ is a 3×3 matrix, and I want to estimate $K$ given $u$ and $v$. I have multiple data for $u$ and $v$, set up with the hope that the…
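Ignoring the eigenvalue constraint for a moment, the unconstrained estimate of $K$ from stacked observations is an ordinary least-squares problem: since $u_i^T = -v_i^T K^T$, solving $V K^T \approx -U$ row-wise recovers $K$. A sketch (the matrix and data are synthetic, and no non-negativity constraint is enforced here):

```python
import numpy as np

rng = np.random.default_rng(4)
K_true = rng.normal(size=(3, 3))  # ground-truth matrix (synthetic)

# Multiple (u, v) observations with u = -K v plus small noise
V = rng.normal(size=(20, 3))                       # each row is one v sample
U = -V @ K_true.T + rng.normal(scale=1e-3, size=(20, 3))

# u_i = -K v_i  =>  U = -V K^T, so solve the linear system for K^T
KT, *_ = np.linalg.lstsq(V, -U, rcond=None)
K_hat = KT.T
print(np.max(np.abs(K_hat - K_true)))  # small residual error
```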
2
votes
2 answers

Difference between various linear regression implementations

Aim: To find the coefficients for the regression line (hyperplane in the case of multiple variables?) that models the data best. Let's call this $w$. What is the difference between: 1) estimating using MAP: $w=(XX^T+\lambda I )^{-1}Xy^T$, where $X$ is the…
rahs
  • 160
  • 6
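The MAP estimate quoted above is the ridge-regression closed form (here following the excerpt's convention that the columns of $X$ are samples). A sketch comparing it with the $\lambda \to 0$ ordinary least-squares solution, on made-up data:

```python
import numpy as np

rng = np.random.default_rng(5)
n, d = 200, 3
X = rng.normal(size=(d, n))                      # columns are samples, as in the excerpt
w_true = np.array([1.0, -2.0, 0.5])
y = w_true @ X + rng.normal(scale=0.1, size=n)   # y plays the role of the row vector y

lam = 1.0
# MAP / ridge closed form from the excerpt: w = (X X^T + lambda I)^{-1} X y^T
w_map = np.linalg.solve(X @ X.T + lam * np.eye(d), X @ y)

# With lambda -> 0 this reduces to the ordinary least-squares solution
w_ols = np.linalg.solve(X @ X.T, X @ y)
print(w_map, w_ols)  # both near w_true; w_map is shrunk slightly toward zero
```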
2
votes
3 answers

Linear Regression and learning rate

Could you please tell me why we use a learning rate to move in the direction of the derivative to find the minimum? Why is it not good to simply compute the point where the derivative is 0?
user3435407
  • 121
  • 1
  • 3
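For linear regression specifically, setting the derivative to zero does work: it gives the normal equations in closed form. Gradient descent with a learning rate is used when that closed-form solve is too expensive (very many features) or when no closed form exists (most other models). A sketch showing both arrive at the same answer on toy data:

```python
import numpy as np

rng = np.random.default_rng(6)
X = np.column_stack([np.ones(100), rng.normal(size=100)])
y = X @ np.array([1.0, 3.0]) + rng.normal(scale=0.1, size=100)

# Option 1: set the derivative to zero analytically (normal equations).
# Possible here because the cost is quadratic; infeasible for many other models.
w_closed = np.linalg.solve(X.T @ X, X.T @ y)

# Option 2: gradient descent with a learning rate.
w = np.zeros(2)
lr = 0.1
for _ in range(500):
    grad = X.T @ (X @ w - y) / len(y)   # gradient of mean squared error
    w -= lr * grad

print(w_closed, w)  # the two solutions agree closely
```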