
Showing posts from March, 2019

False positive, False negative, Type I error, Type II error

**Binary classification**

| | Actual Class Positive | Actual Class Negative |
|---|---|---|
| Assigned Positive | True Positive | False Positive |
| Assigned Negative | False Negative | True Negative |

**Statistics**

> A positive result corresponds to rejecting the null hypothesis, while a negative result corresponds to failing to reject the null hypothesis; "false" means the conclusion drawn is incorrect. Thus a type I error is a false positive, and a type II error is a false negative. — Wikipedia

| | Null hypothesis ($H_0$) True | Null hypothesis ($H_0$) False |
|---|---|---|
| Fail to reject | Correct (True Negative) ($1-\alpha$, confidence level) | Type II error (False Negative) ($\beta$) |
| Reject | Type I error (False Positive) ($\alpha$, significance level) | Correct (True Positive) ($1-\beta$, power) |
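As a minimal sketch of the classification table above (plain Python, no libraries; the function name `confusion_counts` is just illustrative), the four cells can be counted directly from actual and assigned labels:

```python
def confusion_counts(actual, assigned):
    """Count the confusion-matrix cells for binary labels (True = positive class)."""
    tp = sum(1 for a, p in zip(actual, assigned) if a and p)        # True Positive
    fp = sum(1 for a, p in zip(actual, assigned) if not a and p)    # False Positive
    fn = sum(1 for a, p in zip(actual, assigned) if a and not p)    # False Negative
    tn = sum(1 for a, p in zip(actual, assigned) if not a and not p)  # True Negative
    return {"TP": tp, "FP": fp, "FN": fn, "TN": tn}

actual   = [True, True, False, False, True, False]
assigned = [True, False, True, False, True, False]
print(confusion_counts(actual, assigned))  # {'TP': 2, 'FP': 1, 'FN': 1, 'TN': 2}
```

In the hypothesis-testing reading, the `FP` count plays the role of Type I errors and the `FN` count that of Type II errors.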

inequalities in fundamental statistics

Hoeffding's Inequality

Given $n$ ($n > 0$) i.i.d. random variables $X_1, X_2, \ldots, X_n \overset{iid}{\sim} X$ that are almost surely bounded, meaning $\mathbf{P}(X \notin [a,b]) = 0$:

$$\mathbf{P}\left(\left|\bar{X}_n - \mathbb{E}[X]\right| \ge \epsilon\right) \le 2\exp\left(-\frac{2n\epsilon^2}{(b-a)^2}\right) \qquad \text{for all } \epsilon > 0$$

Unlike for the central limit theorem, here the sample size $n$ does not need to be large.

Markov inequality

For a random variable $X \ge 0$ with mean $\mu > 0$, and any number $t > 0$:

$$\mathbf{P}(X \ge t) \le \frac{\mu}{t}$$
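A quick simulation makes both bounds concrete. The sketch below (standard library only; the distributions and parameters are chosen for illustration) checks Hoeffding's bound for Bernoulli(0.5) samples on $[a,b]=[0,1]$ and Markov's bound for Exponential(1) samples:

```python
import math
import random

random.seed(0)
trials = 10_000

# Hoeffding: Bernoulli(0.5) is bounded in [a, b] = [0, 1] with E[X] = 0.5.
n, eps = 100, 0.1
deviations = 0
for _ in range(trials):
    xbar = sum(random.random() < 0.5 for _ in range(n)) / n
    if abs(xbar - 0.5) >= eps:
        deviations += 1
empirical_hoeffding = deviations / trials
hoeffding_bound = 2 * math.exp(-2 * n * eps**2 / (1 - 0) ** 2)  # ≈ 0.271
print(empirical_hoeffding, "<=", hoeffding_bound)

# Markov: Exponential(1) is nonnegative with mean mu = 1.
t = 3.0
samples = [random.expovariate(1.0) for _ in range(trials)]
empirical_markov = sum(x >= t for x in samples) / trials
print(empirical_markov, "<=", 1.0 / t)  # mu / t = 1/3
```

Note that both bounds are loose here: the empirical deviation probabilities are well below the guarantees, which is typical since the inequalities hold for *every* bounded (resp. nonnegative) distribution.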