Hey Tony, your "answer" is based on a sample that has been taken. We take
the sample to provide evidence for a conclusion about the null. And even
with our best-laid plans, sometimes samples do not reflect the population.
So if we happen to get one of these weird samples, it can cause us to
reject a null that is actually true, or fail to reject a null that is
false. This is how we get the errors mentioned.
In the pass rate example, do you mean that the average grade of the 150
students in the sample is 88/100, or that the proportion of students who
passed the final exam (in pass/fail terms) was 88%? I'm asking because, if
the latter is the case, what is the meaning of a standard deviation of 4%
on that proportion? Thanks! And of course thanks for all your videos!!!
How would you make an error, though? Do you mean BEFORE you run the
statistical test? Because if you run the test you would have the answer, so
how could you have a Type I or Type II error??
Very good and crisp explanations of the Z test, t test, and Type I and
Type II errors. Good work. Would it be possible for you to upload videos on
residuals, principal components, and scatter plots?
I'm still confused... but the good thing about YouTube is you can play it
over and over again, lol... hopefully it will stick in my head. I wish you
had a few more examples.
Thanks for this video. Very clear and well paced.
Do you have more on power?
I was hoping to find more about how power, variance and sample size
interact.
+Matthew Taylor I have a video that walks through the factors that affect the power (of a Z test), available here: https://www.youtube.com/watch?v=K6tado8Xcug.
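For anyone wanting to see those factors interact directly, the power of a one-sided Z test can be computed in a few lines. This is a hypothetical sketch (the hypotheses, σ, and sample sizes below are my own, not from the linked video):

```python
from statistics import NormalDist

def z_test_power(mu0, mu_alt, sigma, n, alpha=0.05):
    """Power of a one-sided Z test of H0: mu = mu0 vs H1: mu > mu0,
    evaluated at a specific alternative mean mu_alt."""
    z_crit = NormalDist().inv_cdf(1 - alpha)   # rejection cutoff in z units
    se = sigma / n ** 0.5                      # standard error of the mean
    # Probability the sample mean lands past the cutoff when mu = mu_alt:
    return 1 - NormalDist().cdf(z_crit - (mu_alt - mu0) / se)

# Power rises with sample size (and with effect size and alpha),
# and falls as the population variance grows.
for n in (25, 50, 100):
    print(n, round(z_test_power(mu0=100, mu_alt=104, sigma=15, n=n), 3))
```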
Statistics 101: Controlling Type II Error using Sample Size
Statistics 101: Controlling Type II Error using Sample Size. How can Type II error be controlled in the same manner as Type I error? In a single sample ...
Just a quick question... so would we only control the Type II error if we
were testing from the perspective of the alternative hypothesis? What I'm
actually asking is: in practical situations, when would we want to control
the Type II error?
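In practice, Type II error matters whenever missing a real effect is costly (e.g., concluding a drug doesn't work when it does), and the usual lever is sample size. The sketch below uses the standard one-sided Z-test sample-size formula with numbers I made up for illustration:

```python
from math import ceil
from statistics import NormalDist

def n_for_power(mu0, mu_alt, sigma, alpha=0.05, power=0.80):
    """Smallest n giving a one-sided Z test at least the target power
    against a specific alternative mean mu_alt."""
    z_a = NormalDist().inv_cdf(1 - alpha)   # critical value for alpha
    z_b = NormalDist().inv_cdf(power)       # z score for the desired power
    n = ((z_a + z_b) * sigma / (mu_alt - mu0)) ** 2
    return ceil(n)

# How many observations to detect a shift from 100 to 104 (sigma = 15)
# with 80% power at alpha = 0.05?
print(n_for_power(mu0=100, mu_alt=104, sigma=15))
```

Note the design choice this formula forces on you: you must commit to one specific alternative mean, which connects to the next comment below about why Type II error can't be controlled in general.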
This is the best series I have come across on hypothesis testing. Thanks a
lot for all your efforts in making these videos. I have become your fan! I
will surely watch all the other playlists you have uploaded on statistics.
I think controlling Type II error is impossible, since we cannot choose one
point value for the alternative mean (μ). This means that we can never say
that we accept the null hypothesis. Am I right?
Thank you so much for sharing this, Brandon. You can never imagine how much
a group of Chinese students is benefiting from your videos; this is a lot
more helpful than our textbook here!!
The fact that you replace σ with s seems risky to me. For example, when you
study a normal (or non-normal) population and you don't know σ, you use the
same notation to estimate σ through s, but you never plug in an arithmetic
value for s (later, in that case, you switch to t scores and get it done).
It's like saying: I got a sample and I found this standard deviation, so I
am going to use it to solve my problem. By the same logic, I found the
sample mean, so since it estimates the population mean, why not just use
that in place of μ as well?
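That worry is exactly why the t distribution exists: plugging s in for σ while still using z critical values rejects too often for small n, and the fatter tails of t compensate. A small simulation sketch (the parameters are made up, not from the video):

```python
import random
import statistics

random.seed(0)

# With sigma KNOWN, (xbar - mu)/(sigma/sqrt(n)) exceeds 1.96 in magnitude
# about 5% of the time. Substituting the sample stdev s inflates that rate
# for small n, which is the extra uncertainty t scores account for.
mu, sigma, n, trials = 50, 10, 5, 20_000
exceed = 0
for _ in range(trials):
    sample = [random.gauss(mu, sigma) for _ in range(n)]
    t = (statistics.mean(sample) - mu) / (statistics.stdev(sample) / n ** 0.5)
    exceed += abs(t) > 1.96

print(f"|t| > 1.96 in {exceed / trials:.3f} of samples (vs 0.05 if s were sigma)")
```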
Great as always, Sal. I was just thinking: what if you posted, under the
downloads section of the Khan Academy home page, the Paint screenshots from
all your videos? That way I think we could get a quick preview of all the
info in a lecture and would not always need to play the whole video when
just refreshing our memory ;) I hope you get what I mean. You are just
great!!! Thanks a lot!
Actually he qualifies that statement by saying he is "reasonably confident"
that there is a 95% chance that the true mean is in that interval, and he
makes the point over and over that this is really a best guess, because we
don't know the true standard deviation of the sampling distribution.
Actually, your wording is a bit incorrect: you don't say there is a 95%
chance that the true mean is in that interval; you say that intervals
constructed this way would contain it 95 out of 100 times. Once a
particular confidence interval has been computed, it has nothing to do with
chance!
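The repeated-sampling reading of a confidence interval can be checked by simulation: across many samples, about 95% of the intervals constructed this way cover the true mean, even though any single interval either covers it or doesn't. A sketch with assumed numbers (μ = 100, σ = 15, n = 40 are mine):

```python
import random
import statistics

random.seed(1)

mu, sigma, n, trials = 100, 15, 40, 10_000
covered = 0
for _ in range(trials):
    sample = [random.gauss(mu, sigma) for _ in range(n)]
    half = 1.96 * sigma / n ** 0.5            # 95% margin, sigma known
    m = statistics.mean(sample)
    covered += (m - half) <= mu <= (m + half)

print(f"Coverage: {covered / trials:.3f}  (the '95 out of 100 times' reading)")
```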
This question has popped up several times while I've been watching these
videos. Why can you say that μx̄ = μ = p? I agree that x̄ would approximate
μ = p, dependent on the size of n. But wouldn't it rather be μx̄ ≈ μ = p?
@BreakerByte Thanks, I've actually viewed the entire playlist. The answer
dawned on me basically as soon as I posted the question, but thanks anyway.
I need to get through the probability playlist as well, though.
At 5:30, why is the sample variance divided by (n − 1) instead of n? In the
most recent videos, the standard error for the sample mean was simply
divided by n. Is Bernoulli different? Why is it (n − 1)?
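Part of the confusion here is that these are two different quantities: the standard error divides the *population variance* by n, while the n − 1 (Bessel's correction) appears when *estimating* that variance from a sample, because dividing by n systematically underestimates it. A simulation sketch with made-up parameters shows the bias:

```python
import random

random.seed(7)

mu, sigma, n, trials = 0, 10, 5, 50_000
var_n, var_n1 = 0.0, 0.0
for _ in range(trials):
    xs = [random.gauss(mu, sigma) for _ in range(n)]
    m = sum(xs) / n
    ss = sum((x - m) ** 2 for x in xs)        # sum of squared deviations
    var_n += ss / n                           # biased: deviations are taken
    var_n1 += ss / (n - 1)                    # from the SAMPLE mean, not mu

print(f"true variance      : {sigma ** 2}")
print(f"divide by n   (avg): {var_n / trials:.1f}")   # runs low, ~(n-1)/n of truth
print(f"divide by n-1 (avg): {var_n1 / trials:.1f}")  # close to the true value
```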
It's actually .2476, because the digit after the 5 is a 7, so you round up.
AP Stat Ch 21 Video 2 Tests and Intervals.mp4
Table of Contents: 00:03 - Reducing Both Type I and Type II Error 00:24 - Reducing Both Type I and Type II Error (cont.) 01:04 - Reducing Both Type I and Type II ...
Thank you so much! I always watch your videos days before we get to these
topics! Love you a lot! :)
20. Examples of Errors in Hypothesis Testing and Impact of Alpha
Building up on previous lecture on Type 1 and Type 2 errors in Hypothesis testing, this video discusses how changing the value of alpha can impact the ...