Testing whether the slope in a simple linear regression equals a given constant in R

l3zydbqr · posted 2023-02-10 · Other
Follow (0) | Answers (4) | Views (151)

I want to test whether the slope in a simple linear regression equals a given constant other than zero.

> x <- c(1,2,3,4)
> y <- c(2,5,8,13)
> fit <- lm(y ~ x)
> summary(fit)

Call:
lm(formula = y ~ x)

Residuals:
   1    2    3    4 
 0.4 -0.2 -0.8  0.6 

Coefficients:
            Estimate Std. Error t value Pr(>|t|)   
(Intercept)  -2.0000     0.9487  -2.108  0.16955   
x             3.6000     0.3464  10.392  0.00913 **
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.7746 on 2 degrees of freedom
Multiple R-squared:  0.9818,    Adjusted R-squared:  0.9727 
F-statistic:   108 on 1 and 2 DF,  p-value: 0.009133
> confint(fit)
                2.5 %   97.5 %
(Intercept) -6.081855 2.081855
x            2.109517 5.090483

In this example, I want to test whether the slope equals 5. I know I would not reject that hypothesis, since 5 lies inside the 95% confidence interval. But is there a function that gives me the p-value directly?

8gsdolmq1#

You can simply construct the t statistic for the null hypothesis slope = 5:

# Compute the model summary; the coefficient estimates and their
# standard errors are in sfit$coefficients
sfit <- summary(fit)
# t statistic for H0: slope = 5
tstats <- (5 - sfit$coefficients[2, 1]) / sfit$coefficients[2, 2]
# Two-tailed p-value
pval <- 2 * pt(abs(tstats), df = df.residual(fit), lower.tail = FALSE)
print(pval)
# 0.0561 -- agreeing with the offset and emmeans answers below
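The same calculation can be wrapped in a small helper so it is reusable. This is just a sketch; the function name `slope_test` is made up here, and it assumes a single-predictor `lm` fit (the slope is the second coefficient):

```r
# Hypothetical helper: two-sided t-test of H0: slope = null_value
# for a one-predictor lm fit
slope_test <- function(fit, null_value) {
  co <- summary(fit)$coefficients
  tstat <- (co[2, 1] - null_value) / co[2, 2]     # (estimate - null) / SE
  pval <- 2 * pt(abs(tstat), df = df.residual(fit), lower.tail = FALSE)
  c(estimate = co[2, 1], t = tstat, p.value = pval)
}

slope_test(fit, 5)  # p.value ~ 0.0561, matching the answers below
```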

pieyvz9o2#

One way to test whether a fitted coefficient differs significantly from a specific value is to construct an "offset", in which that value is applied as a multiplier to the x values. Think of this as resetting the "zero", so that the null hypothesis for the slope becomes zero again. The intercept remains "free" to move, er, to be estimated.

fit2 <- lm( y~x +offset(5*x) )
#----------------
 summary(fit2)
#--------
Call:
lm(formula = y ~ x + offset(5 * x))

Residuals:
   1    2    3    4 
 0.4 -0.2 -0.8  0.6 

Coefficients:
            Estimate Std. Error t value Pr(>|t|)  
(Intercept)  -2.0000     0.9487  -2.108   0.1695  
x            -1.4000     0.3464  -4.041   0.0561 .
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.7746 on 2 degrees of freedom
Multiple R-squared:  0.9818,    Adjusted R-squared:  0.9727 
F-statistic:   108 on 1 and 2 DF,  p-value: 0.009133

Now compare with the results from the fit object: the coefficient on x differs by exactly 5, the model-fit statistics are identical, but, as you suspected, the p-value for the x variable is much higher, i.e. less significant.

> summary(fit)

Call:
lm(formula = y ~ x)

Residuals:
   1    2    3    4 
 0.4 -0.2 -0.8  0.6 

Coefficients:
            Estimate Std. Error t value Pr(>|t|)   
(Intercept)  -2.0000     0.9487  -2.108  0.16955   
x             3.6000     0.3464  10.392  0.00913 **
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.7746 on 2 degrees of freedom
Multiple R-squared:  0.9818,    Adjusted R-squared:  0.9727 
F-statistic:   108 on 1 and 2 DF,  p-value: 0.009133

k97glaaz3#

My impression is that the linearHypothesis function in the car package provides a standard way to do this.
For example:

library(car)

x <- 1:4
y <- c(2, 5, 8, 13)
model <- lm(y ~ x)

linearHypothesis(model, "x = 1")
#> Linear hypothesis test
#> 
#> Hypothesis:
#> x = 1
#> 
#> Model 1: restricted model
#> Model 2: y ~ x
#> 
#>   Res.Df  RSS Df Sum of Sq      F  Pr(>F)  
#> 1      3 35.0                              
#> 2      2  1.2  1      33.8 56.333 0.01729 *
#> ---
#> Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Here the hypothesis test shows that the restricted model (i.e. the one where the coefficient on x equals 1) explains significantly less variance than the full model, as assessed by the F statistic.
This is more flexible than using offset in the formula, because you can test multiple restrictions at once:

model <- lm(y ~ x + I(x^2))
linearHypothesis(model, c("I(x^2) = 1", "x = 1"))
#> Linear hypothesis test
#> 
#> Hypothesis:
#> I(x^2) = 1
#> x = 1
#> 
#> Model 1: restricted model
#> Model 2: y ~ x + I(x^2)
#> 
#>   Res.Df  RSS Df Sum of Sq    F  Pr(>F)  
#> 1      3 30.0                            
#> 2      1  0.2  2      29.8 74.5 0.08165 .
#> ---
#> Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

1u4esq0p4#

I like the solution using the emmeans package (mostly because I always have emmeans loaded anyway, since it is so useful...):

> library(emmeans)
> fit.emt <- emtrends(fit, ~1, var="x") 
> fit.emt
 1       x.trend    SE df lower.CL upper.CL
 overall     3.6 0.346  2     2.11     5.09

Confidence level used: 0.95 
> test(fit.emt, null=5)
 1       x.trend    SE df null t.ratio p.value
 overall     3.6 0.346  2    5  -4.041  0.0561
