
Review and intuition why we divide by n-1 for the unbiased sample | Khan Academy





Information


Title: Review and intuition why we divide by n-1 for the unbiased sample | Khan Academy
Duration: 9:44
Date of publication:
Views: 338K


Comments



Julio Cesar Jovelina
Why do we not use |Xi - x̄| instead of (Xi - x̄)²?
Comment from : Julio Cesar Jovelina


Julio Cesar Jovelina
What if you take the 3 highest values?
Comment from : Julio Cesar Jovelina


Kevins Math Class
6:10 Why we divide by n - 1 in variance
Comment from : Kevins Math Class


Dann
Who's this man? He knows so much and explains so majestically. I wonder why he doesn't have a statue in the main square of my city? He deserves a few.
Comment from : Dann


posthocprior
That was unclear
Comment from : posthocprior


carlneedsajob
thank you sal :4)
Comment from : carlneedsajob


Ben Wearne
N-1 is "better", but it is still very flawed
Comment from : Ben Wearne


Jeremy Falcon
Just gotta say, your videos are awesome. Glad they exist.
Comment from : Jeremy Falcon


Sabreen Elzein
So instead of the sample lying somewhere much lower than the true population mean, what if it's lying much higher? Would it be correct to use n+1 instead of n-1 in order to deliberately make the sample variance smaller?
Comment from : Sabreen Elzein


adarsh tiwari
After 3 videos, I finally understood this n-1. Basically, when we take a sample from our population and calculate its mean, it may or may not be close to the overall population mean (which is the mean that matters), so to lower the possibility of a highly distinct sample mean/variance, we use n-1 to get at least near the population mean.
Comment from : adarsh tiwari


sh di
A very interesting and important discussion. I took a break in the middle and thought about it by myself. I have a rather short explanation: if the sample size n is very small, such as 3, the variance calculated for the sample has more chance of being very different from the actual variance. The smaller n is, the more effect this '-1' has on the result.
Why do we use '-1' and not some other value like '-2'? I think it is just a tradition. For the smallest sample size of 2, this unbiased variance can still be calculated. However, it is not really purely 'unbiased', just relatively 'unbiased'.
Comment from : sh di


Leo M
This is not explained at all
Comment from : Leo M


Shivay Shakti
But the same can be there for the other end where we would overestimate it?
Comment from : Shivay Shakti


AJ
Awesome video! Thank you!
Comment from : AJ


MANU PANDIT
Hi,
How is this s² variance of the sample different from the σ²/n formula (population variance / n), which is also called the sample variance?
Thanks
Comment from : MANU PANDIT


DarkTealGlasses
Much better than what my school teacher taught me
Comment from : DarkTealGlasses


Arthur Pletcher
Because of the upper and lower boundaries, samples are biased to be less spread out compared to the population mean, which is typically more centralized.
Comment from : Arthur Pletcher


Swapnamay Sen
What bogus logic. Khan Academy is a jack of all trades, master of none.
Comment from : Swapnamay Sen


scott lomagistro
I get the math. What I don't get is how you're able to write with the drawing/annotation feature so freakin' nicely?!?!? Either you missed your calling as a steady-handed microsurgeon or there is some sort of stabilization assistance in the program you're using.
Comment from : scott lomagistro


Prabhjyot SIngh
By this logic it could be n+1 also, I guess.
Comment from : Prabhjyot SIngh


Kanishk Vishwakarma
8:40 I think you should not represent the true variance and the sample variance on the same number line you drew for the population points. A consequence of putting them together is that you're visualizing the distance between the sample variance and the population variance on that same number line, leading to the conclusion that because the sample points are far from the population mean, the variance is far off too. Ponder over it, you'll realize.
Love your lectures BTW 😃
Comment from : Kanishk Vishwakarma


VGF80
Let's say a report comes out that mentions standard deviation. How are we supposed to know which formula was used to calculate that standard deviation?
Comment from : VGF80


liu shao min
The analogy you're using is probably not convincing/intuitive enough, because there's also a likelihood that the sample is over-estimating the population mean, so why don't we divide by n+1?
Comment from : liu shao min


Jack
This is terrible. Still no explanation of why it is unbiased if using n-1.
Comment from : Jack


john Hendrickson
I would like to know why we use the square of the difference between x and xbar, and not the absolute value of the difference?
Comment from : john Hendrickson


MrVpassenheim
NOT one of Khan Academy's shining moments. Your other video (thanks Dhiraj Budhrani) is MUCH better (with the simulation & a mathematical explanation!).
Comment from : MrVpassenheim


Arvin Pillai
Starts at 5:00
Comment from : Arvin Pillai


ASomewhatLongAndMeaninglessUserame
9:08 - You are just as likely to be overestimating; you just chose to pick the bottom points rather than the top ones. This offers literally NO explanation, let alone an intuitive one, as to why I should expect there to be a downward bias.
Comment from : ASomewhatLongAndMeaninglessUserame


Jayrald Basan
So this means that the n-1 of the sample variance equation was just an arbitrarily chosen value because it's empirically closer to the actual population variance? Or is there an equation or a logical path for deriving the n-1? I kinda see that it's the former, but I kinda feel that there might be a theory that could explain why n-1 is the most appropriate and not any other value, and that it's just a natural consequence of our math. Anyone who does have one, please tell me!
Thank you for the video Khan Academy! It was very informative!
Comment from : Jayrald Basan


imbolc
I can't understand why we would underestimate variance in general this way. Let's take population [0, 10, 20] and its sample [0, 20]. They have the same mean, 10; the variance of the population is (100 + 100 + 0) / 3, while the variance of the sample is (100 + 100) / 2, so we overestimate the variance.
Comment from : imbolc
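The underestimation claim is about the estimator's expected value under independent (with-replacement) draws, not about any single sample; a particular sample, like the one in the comment above, can overestimate. A quick Monte Carlo sketch of the expected behavior (the population, sample size, and trial count here are arbitrary illustrative choices):

```python
import random
import statistics

# Draw many size-n samples with replacement from a fixed population and
# average the two estimators: divide-by-n vs divide-by-(n-1).
random.seed(0)
population = list(range(0, 101))
true_var = statistics.pvariance(population)  # population variance (divide by N)

n, trials = 5, 100_000
biased_total = 0.0
unbiased_total = 0.0
for _ in range(trials):
    sample = [random.choice(population) for _ in range(n)]
    m = sum(sample) / n
    ss = sum((x - m) ** 2 for x in sample)
    biased_total += ss / n          # biased estimator
    unbiased_total += ss / (n - 1)  # Bessel-corrected estimator

biased_mean = biased_total / trials
unbiased_mean = unbiased_total / trials
print(true_var, biased_mean, unbiased_mean)
```

Averaged over many samples, the divide-by-n estimator comes in low by roughly a factor of (n-1)/n, while the divide-by-(n-1) version centers on the true variance.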


ᴠᴧᴨᴛᴧᴃᴌᴧcᴋ
So I guess the biased variance is better if your sample is still close to the entire population?
Comment from : ᴠᴧᴨᴛᴧᴃᴌᴧcᴋ


Baptiste Roussel
I had the intuition that overestimation and underestimation would compensate for each other. Why is that not the case?
Comment from : Baptiste Roussel
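One way to see why the over- and undershoots don't cancel: the sample mean x̄ is, by construction, the number that minimizes the sum of squared deviations of the sample, so measuring spread around x̄ instead of the unknown true mean μ can only shrink (or tie) that sum. A minimal sketch with made-up numbers:

```python
# x̄ minimizes the sum of squared deviations, so the sum around x̄ is
# never larger than the sum around the true mean μ (values are made up).
sample = [2.0, 4.0, 9.0]
mu = 6.0                              # hypothetical true population mean
xbar = sum(sample) / len(sample)      # 5.0
ss_xbar = sum((x - xbar) ** 2 for x in sample)  # sum of squares around x̄
ss_mu = sum((x - mu) ** 2 for x in sample)      # sum of squares around μ
assert ss_xbar <= ss_mu               # holds for ANY sample and ANY mu
```

Dividing that systematically-too-small sum by n inherits the shortfall; dividing by n-1 compensates on average.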


Lucia Breccia
Why isn't this video on the statistics playlist?
Comment from : Lucia Breccia


Alberto Rivero
starts at 5:05
Comment from : Alberto Rivero


clancym1
This does not give an explanation for why it is exactly n-1.
Comment from : clancym1


J S
Still don't get it. Yes, you would be underestimating it if you take the sample cluster below the mean, but if the cluster is above the mean? You would be overestimating it! Seems arbitrary to me.
Comment from : J S


f lotars
So why minus 1? Why not minus 2? Or minus 6,345? This is still not an explanation of the n - 1 :-(
Comment from : f lotars


Casey
Didn't say anything about n-1, misleading title
Comment from : Casey


Upgrad3r
I love you, fuck the rest of explanations on internet, this made me understand
Comment from : Upgrad3r


Matthew
If you want a more technical explanation/proof, see the Wikipedia article on Bessel's correction. This video has some good intuition, though.
Comment from : Matthew
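For anyone following that pointer, the core algebra behind Bessel's correction is short. Assuming i.i.d. draws $X_1,\dots,X_n$ with mean $\mu$ and variance $\sigma^2$, and using the identity $\sum_i (X_i-\bar X)^2 = \sum_i X_i^2 - n\bar X^2$:

```latex
\mathbb{E}\!\left[\sum_{i=1}^{n}(X_i-\bar X)^2\right]
  = \sum_{i=1}^{n}\mathbb{E}[X_i^2] - n\,\mathbb{E}[\bar X^2]
  = n(\sigma^2+\mu^2) - n\left(\frac{\sigma^2}{n}+\mu^2\right)
  = (n-1)\,\sigma^2
```

So dividing the sum by exactly n-1, and no other constant, makes the estimator's expectation equal σ², which is why the answer to "why not n-2?" is that only n-1 removes the bias.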



Drop Dead Fred
I GET IT! I had to work out the proof and think about it really hard, but I get it! I have an intuition for why n-1 makes sense! Message me with your questions, because I don't think I can explain it easily in the comment boxes
Comment from : Drop Dead Fred


Archer WhiteDragon
We all hold the key to our part: save the world by using and combining knowledge to promote peace throughout the world. I'm starting it off as an inventor and entrepreneur.
Comment from : Archer WhiteDragon


Archer WhiteDragon
Knowledge is the only hope for world peace. We must save Trench Town. It's an actual real-world issue that could mathematically save the world from this little island. If you can, do it!
Comment from : Archer WhiteDragon


Archer WhiteDragon
Hey, help me resolve world economics by bringing knowledge.
Comment from : Archer WhiteDragon


Johann Schmidt
Say there's a population with a known population mean, and you take N random values from it; is there a way to calculate a probability density for the deviation of the sample mean from the population mean? I hope that was a coherent question.
Comment from : Johann Schmidt


Blake Shurtz
Tackle chance variability first
Comment from : Blake Shurtz


Drop Dead Fred
As n approaches N, s_n approaches sigma, but s_{n-1} approaches something that is not sigma. So what gives?
Comment from : Drop Dead Fred


alkalait
By the way, for anyone curious: the "degrees of freedom" of some statistic, say a sum across the x's, is n, because this quantity has n ways or parameters (the x's themselves) by which it can vary. Using this simple notion of "freedom", you can state the dof of any statistic that is written in terms of some data points. As another example, the sum across the x's squared also has n dof.
Comment from : alkalait


Euroliite
You said that the biased variance was an underestimate, so is it possible to overestimate?
Comment from : Euroliite


Euroliite
Perhaps that tends to be overdoing it?
Comment from : Euroliite


glavgad
We can't. But we divide by n-1 even if we have 10,000 samples; what difference will n-1 make?
Comment from : glavgad


glavgad
I don't get it. Yes, the error will be smaller, but why don't we divide by n-2, or n-3, or n-4, etc.?
Comment from : glavgad


alkalait
Thanks for this video, Sal. Though intuitive and true, some viewers might find this approach (to dealing with the "bias" in the estimator) heuristic. For instance, one might argue "why not n-2?" and so on. If you decide to invest a bit more in this stats playlist, I hope you'll get to deeper concepts like degrees of freedom of estimators, which lie at the heart of this video's concept. Please don't take this as criticism; the video is in the right direction :)
Comment from : alkalait


Piecakesman
thank you so fucking much for this!!!
Comment from : Piecakesman


WGBraves24
n-1 D:
Comment from : WGBraves24



MrLullumbonum
FIRST!
Comment from : MrLullumbonum



Related videos

Garena DDTank: Combo 2000 Tốc Độ Sẽ Kinh Khủng Như Thế Nào? Best Cướp Turn Cân Team Lật Kèo
From: Review Game N.B.H

Coin Toss Probability || An unbiased coin is tossed 5 times || Prepare a sample space
From: আমার পরিসংখ্যান My Statistics

#107: Scikit-learn 104: Unsupervised Learning 8: Intuition for Clustering
From: learndataa

Unglaublich - Das schnelle Geld - Trailer - Intuition / Vorahnung
From: Remote Viewing

Latin and Greek roots and affixes | Reading | Khan Academy
From: Khan Academy

Mujhay Qabool Nahin Episode 21 - [Eng Sub] - Ahsan Khan - Madiha Imam - Sami Khan - 13th Sep 2023
From: HAR PAL GEO

Kalank Episode 18 - [Eng Sub] - Hira Mani - Junaid Khan - Nazish Jahangir - Sami Khan - 13th Sep 23
From: HAR PAL GEO

Unbiased Estimators (Why n-1 ???) : Data Science Basics
From: ritvikmath

Circular flow of income and expenditures | Macroeconomics | Khan Academy
From: Khan Academy

Observational learning: Bobo doll experiment and social cognitive theory | MCAT | Khan Academy
From: khanacademymedicine