Why would a Clustered Index Seek return a higher “Actual Number of Rows” than there are rows in the table?
I'm troubleshooting an issue related to SQL Server (Azure SQL Database, technically) occasionally choosing a bad execution plan, presumably due to skewed stats. sp_updatestats fixes it every time, until a few hours or days later when a bad plan gets cached again.

Looking at the "bad" plan, I noticed something that strikes me as odd: there is a Clustered Index Seek on a table that currently has about 1.7 million rows. The "Estimated Number of Rows" for this operation is about 1,200, which is definitely in line with the average row count I would expect from that operation in this case, but the "Actual Number of Rows" is in excess of 60 million! Following the fat line from this leaf node, various downstream operations such as joins and sorts are being performed on all 60 million rows, causing excessive slowness, spills to tempdb, and other badness.

I must be misunderstanding what a Clustered Index Seek actually does, because I wouldn't think it's possible for it to "output" more rows than are in the underlying table. What could cause this? And better yet, any pointers on how to fix it?

[Bonus points for including something like "sp_updatestats fixes it every time but can't figure out how to fix it permanently? Go read this article." This has been a general problem for us on a few different fronts lately.]
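For reference, the kind of more targeted statistics maintenance we could presumably schedule in place of the blanket sp_updatestats call is sketched below; the table name is only an example, and this is an untested sketch rather than something we currently run:

    -- Sketch only: refresh statistics on one suspect table with a full scan,
    -- rather than relying on sp_updatestats' default sampling across the database.
    UPDATE STATISTICS dbo.ProductCatalog WITH FULLSCAN;

    -- Check how stale the statistics on that table currently are.
    SELECT s.name,
           sp.last_updated,
           sp.rows,
           sp.rows_sampled,
           sp.modification_counter
    FROM sys.stats AS s
    CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
    WHERE s.object_id = OBJECT_ID('dbo.ProductCatalog');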
sql-server optimization execution-plan azure-sql-database
asked Dec 12 at 16:23, edited Dec 12 at 16:36 – Todd Menier
Can you post the plan on Paste The Plan, please? – George.Palacios, Dec 12 at 16:23

Certainly: brentozar.com/pastetheplan/?id=BktRIhC1V – Todd Menier, Dec 12 at 16:27

The 60M-row index seek in question is on ProductCatalog. I do see there's an index scan in the plan as well, and I may look into it, but the "good" plan contains that too, and time-wise it looks to be a non-factor in both cases. – Todd Menier, Dec 12 at 16:30
1 Answer
The Seek returns more rows because it is on the inner (bottom) side of a Nested Loop. Every row returned by the outer operation results in a new Seek operation, so you're not getting 60 million rows from a single Seek, but from over 9,000 of them (the number of executions).

Also of note: when looking at estimates, the total estimated row count will be Estimated Number of Executions multiplied by Estimated Number of Rows.
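To make the arithmetic concrete, here is a minimal sketch of that plan shape (the table and column names are illustrative, not the real schema from the question):

    -- Force a nested loops join so the seek on the inner table runs once per outer row.
    SELECT cb.CompanyID
    FROM dbo.CatalogBuyer AS cb                 -- outer side: ~9,000 rows in this scenario
    INNER LOOP JOIN dbo.ProductCatalog AS pc    -- inner side: one clustered index seek per outer row
        ON pc.CompanyID = cb.CompanyID;

    -- In the actual plan, the inner seek reports:
    --   Number of Executions  ~ rows arriving from the outer side (about 9,000 here)
    --   Actual Number of Rows = rows per execution * Number of Executions
    --                           (roughly 6,600 * 9,000, i.e. about 60 million)
    -- which is how it can exceed the 1.7 million rows physically in the table.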
answered Dec 12 at 16:41, edited Dec 12 at 16:53 – Forrest (accepted)
Ah, that makes perfect sense. 60 million / 9,000 = a much more reasonable number. :) – Todd Menier, Dec 12 at 16:49

Thanks. I'm accepting this since it directly answers the "why", but I sure wish I knew where to go from here. I'm joining 9 tables, all of which look to be indexed properly. The plan that goes off the rails is (to simplify) joining tables A and B first, resulting in 60 million rows, and later joining C, which brings it down to zero. I basically want to ensure C is considered earlier, preferably without "hacks" like index hints or forced execution plans. – Todd Menier, Dec 12 at 18:41

@ToddMenier That might be worth asking as a separate question, then. There is a large community of query tuners here who will love to help. Please note that you will want to share the view definition and probably a "good" plan as well. – Forrest, Dec 12 at 18:53

@ToddMenier You need to see if you can get accurate estimates for SELECT * FROM Buyer B JOIN CatalogBuyer CB ON CB.CompanyID = B.BuyerID JOIN Party P ON P.CompanyID = CB.CompanyID WHERE B.SellerID = 10424 AND P.Type IN (2,3). The plan estimates 58 rows but has an actual 9,090 leading into that nested loops join you point out. The most reliable method of getting accurate estimates would be to materialize that result into a temp table and join onto that instead, though this would mean rewriting the query to not use the view. Or try updating stats or creating filtered/multi-column stats. – Martin Smith, Dec 12 at 19:13
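A minimal sketch of the temp-table materialization Martin Smith describes, using the table and column names from his comment (the temp table name and the selected columns are assumptions):

    -- Materialize the selective intermediate result first, so the optimizer
    -- has an exact row count (~9,090 rows) to plan the remaining joins against.
    SELECT B.BuyerID, CB.CompanyID
    INTO #BuyerParty
    FROM Buyer B
    JOIN CatalogBuyer CB ON CB.CompanyID = B.BuyerID
    JOIN Party P ON P.CompanyID = CB.CompanyID
    WHERE B.SellerID = 10424
      AND P.Type IN (2, 3);

    -- ...then rewrite the main query to join onto #BuyerParty instead of going
    -- through the view.

    -- Alternatively, multi-column statistics on the poorly estimated predicate
    -- might improve the 58-row estimate (the statistics name is hypothetical):
    -- CREATE STATISTICS st_Party_Type_CompanyID ON Party (Type, CompanyID);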