Convergence in probability and convergence in distribution
I'm a little confused about the difference between these two concepts, especially convergence in probability. I understand that $X_n \overset{p}{\to} Z$ if $\Pr(|X_n - Z|>\epsilon) \to 0$ as $n \rightarrow \infty$ for any $\epsilon >0$.
I just need some clarification on what the subscript $n$ means and what $Z$ means. Is $n$ the sample size? Is $Z$ a specific value, or another random variable? If it is another random variable, wouldn't that mean that convergence in probability implies convergence in distribution? Also, could you please give me some examples of things that converge in distribution but not in probability?
Tags: econometrics, statistics
asked 14 hours ago by Martin (new contributor)
– afreelunch (13 hours ago): See: quora.com/…
1 Answer
I will attempt to explain the distinction using the simplest example: the sample mean. Suppose we have an i.i.d. sample of random variables $\{X_i\}_{i=1}^n$ and define the sample mean as $\bar{X}_n$. As the sample size grows, the value of the sample mean changes, hence the subscript $n$ to emphasize that the sample mean depends on the sample size.
Noting that $\bar{X}_n$ is itself a random variable, we can define a sequence of random variables whose elements are indexed by the growing sample size, i.e. $\{\bar{X}_n\}_{n=1}^{\infty}$. The weak law of large numbers (WLLN) tells us that, so long as $E(X_1^2)<\infty$,
$$\operatorname{plim} \bar{X}_n = \mu,$$
or equivalently
$$\bar{X}_n \rightarrow_P \mu,$$
where $\mu=E(X_1)$. This means that, for a sufficiently large sample size, the sample mean is arbitrarily close to the true mean with arbitrarily high probability. Formally,
$$\forall \epsilon>0,\ \forall \delta>0,\ \exists\, n_{\epsilon,\delta} \in \mathbb{N}:\ \forall n>n_{\epsilon,\delta},\quad P(|\bar{X}_n - \mu| <\epsilon) > 1-\delta,$$
or, equivalently, $\lim_{n \rightarrow \infty} P(|\bar{X}_n - \mu|>\epsilon)=0$ for every $\epsilon>0$.
In other words, if we want our estimate to be within $\epsilon$ of the true value with probability at least $1-\delta$, there exists a sample size $n_{\epsilon,\delta}$ such that any sample at least that large guarantees it. Convergence in probability gives us confidence that our estimators perform well in large samples.
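To make this concrete, here is a minimal simulation sketch; the exponential distribution, $\mu = 2$, and $\epsilon = 0.1$ are purely illustrative assumptions, not part of the original statement. It estimates $P(|\bar{X}_n - \mu| > \epsilon)$ by Monte Carlo for several sample sizes:

```python
import numpy as np

rng = np.random.default_rng(0)

mu = 2.0                 # true mean (illustrative choice)
eps = 0.1                # tolerance epsilon
reps = 5000              # Monte Carlo replications per sample size

for n in (10, 100, 1000, 10000):
    # reps independent samples of size n from an exponential with mean mu
    samples = rng.exponential(scale=mu, size=(reps, n))
    xbar = samples.mean(axis=1)
    # Monte Carlo estimate of P(|Xbar_n - mu| > eps)
    p_far = np.mean(np.abs(xbar - mu) > eps)
    print(f"n = {n:>6}: P(|Xbar_n - mu| > {eps}) ≈ {p_far:.3f}")
```

The estimated exceedance probability should fall toward zero as $n$ grows, which is exactly what $\bar{X}_n \rightarrow_P \mu$ requires.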
Convergence in distribution tells us something very different and is primarily used for hypothesis testing. Under the same assumptions as above, the central limit theorem (CLT) gives us
$$\sqrt{n}(\bar{X}_n-\mu) \rightarrow_D N(0,\operatorname{Var}(X_1)).$$
Convergence in distribution means that the cdf of the left-hand side converges, at all continuity points, to the cdf of the right-hand side, i.e.
$$\lim_{n \rightarrow \infty} F_n(x) = F(x),$$
where $F_n(x)$ is the cdf of $\sqrt{n}(\bar{X}_n-\mu)$ and $F(x)$ is the cdf of the $N(0,\operatorname{Var}(X_1))$ distribution. Knowing the limiting distribution allows us to test hypotheses about the sample mean (or whatever estimate we are generating).
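Again as a rough sketch (the exponential distribution and the values of $n$ and $\mu$ are illustrative assumptions), one can check this pointwise agreement of cdfs by comparing the empirical cdf of $\sqrt{n}(\bar{X}_n-\mu)$ with the $N(0,\operatorname{Var}(X_1))$ cdf:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

mu = 2.0                 # mean of the exponential, so Var(X_1) = mu**2
sigma = mu               # an exponential's standard deviation equals its mean
n, reps = 500, 20000

# Monte Carlo draws of the standardized statistic sqrt(n) * (Xbar_n - mu)
samples = rng.exponential(scale=mu, size=(reps, n))
z = np.sqrt(n) * (samples.mean(axis=1) - mu)

# compare the empirical cdf with the N(0, Var(X_1)) cdf at a few points
for x in (-4.0, -2.0, 0.0, 2.0, 4.0):
    ecdf = np.mean(z <= x)
    ncdf = stats.norm.cdf(x, loc=0.0, scale=sigma)
    print(f"x = {x:+.1f}:  F_n(x) ≈ {ecdf:.3f}   F(x) = {ncdf:.3f}")
```

For moderate $n$ the two columns should already be close, and the agreement improves as $n$ grows; that is what convergence in distribution asserts, and it is what justifies using the normal limit for hypothesis tests about the mean.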
answered 14 hours ago (edited 14 hours ago) by dlnB (new contributor)