Friday, March 23, 2012
Higher level of fragmentation and data corruption.
Can maintaining a higher level of fragmentation result in data corruption over
time, or are the two totally unrelated?
I would like to know because rebuilding indexes takes resources, and that is
an issue in a 24 x 7 environment. In other words, if we have some spare hard
disk space that lets us absorb some abnormal growth, can we safely leave the
data device fragmented for a while and avoid rebuilding indexes?
Can someone with SQL Server internal data storage knowledge please reply?
Also, please let me know if my question is not clear enough and you need
further explanation.
--
Regards,
MZeeshan

Fragmentation in and of itself should not cause data corruption. Most
corruption these days is due to hardware issues and not the database itself.
I don't follow what you are saying about the extra space and abnormal growth.
Are you saying you have lots of page splits and that is causing your data
size to increase? If so, you either do not have the clustered index on the
right column or you need to adjust your fill factor.
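For reference, on SQL Server of that vintage you could measure fragmentation and rebuild with a lower fill factor roughly like this (the table name is a placeholder, and 80 is just an illustrative fill factor, not a recommendation):

```sql
-- Inspect logical/extent fragmentation (SQL Server 7.0/2000-era syntax)
DBCC SHOWCONTIG ('dbo.MyTable');

-- Rebuild all indexes on the table, leaving 20% free space per page
-- so that inserts cause fewer page splits
DBCC DBREINDEX ('dbo.MyTable', '', 80);
```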
--
Andrew J. Kelly SQL MVP
"MZeeshan" <mzeeshan@.community.nospam> wrote in message
news:DEC26292-C91A-4DD6-B001-E362FC320E63@.microsoft.com...
> CAN maintaining higher level of fragmentation result in data corruption
> over
> time? OR they are totally unrelated.
> I would like to know because rebuilding indexes take resources and that is
> an issue for a 24 x 7 environment. In other words, should we keep data
> device
> safely fragmented for sometime if we have some hard disk space available
> that
> allows us to maintain some abnormal growth, allowing us not to rebuild
> indexes?
> Can someone with SQL Server Internal Data Storage Knowledge please reply?
> Also, please let me know if my question is not very clear enough and you
> need further explanation.
> --
> Regards,
> MZeeshan
|||Inline...
--
Paul Randal
Dev Lead, Microsoft SQL Server Storage Engine
This posting is provided "AS IS" with no warranties, and confers no rights.
"MZeeshan" <mzeeshan@.community.nospam> wrote in message
news:DEC26292-C91A-4DD6-B001-E362FC320E63@.microsoft.com...
> CAN maintaining higher level of fragmentation result in data corruption
> over
> time? OR they are totally unrelated.
>
Absolutely not - they are unrelated. Corruption is caused by hardware
problems (and highly infrequent bugs in various layers of software and
firmware, obviously).
> I would like to know because rebuilding indexes take resources and that is
> an issue for a 24 x 7 environment. In other words, should we keep data
> device
> safely fragmented for sometime if we have some hard disk space available
> that
> allows us to maintain some abnormal growth, allowing us not to rebuild
> indexes?
> Can someone with SQL Server Internal Data Storage Knowledge please reply?
I'm your man. What are you trying to achieve? Minimal space usage, minimal
perf drop from disk/index fragmentation? Are you correlating a perf drop
with increasing fragmentation, or are you just worried about it?
> Also, please let me know if my question is not very clear enough and you
> need further explanation.
> --
> Regards,
> MZeeshan
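On the 24 x 7 concern raised in the question: from SQL Server 2000 onwards, DBCC INDEXDEFRAG offers an online alternative to a full rebuild - it defragments the leaf level in a series of short transactions, so it can be stopped and restarted without losing work already done. A hedged sketch (object names are placeholders):

```sql
-- Online defragmentation: compacts and reorders leaf-level pages in place.
-- Long-running but interruptible; far less blocking than DBCC DBREINDEX.
DBCC INDEXDEFRAG ('MyDatabase', 'dbo.MyTable', 'MyClusteredIndex');
```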
Wednesday, March 21, 2012
High fragmentation with 0 rows in the table
I have a job in SQL 7 with sp3 that goes in a loop and inserts about 2 to 6
rows into an empty table and then deletes them and keeps going in the loop
for about 6 hours. If the table is not truncated before the job starts again
the following night, the job performance degrades considerably. It turns out
that after the job is done running, there are hundreds of data pages
allocated for this table with high extent fragmentation.
My question is why these pages are not de-allocated when there are no rows
in the table to begin with? If I truncate the table and add 6 rows manually,
then only one data page is allocated to the table. The table has about 6
columns with 4 columns of data type smallint, one decimal and a couple of
char(5).
Hi,
There could be several reasons. Are you committing your transactions after
each loop? Also, the DELETE statement is a fully logged operation: every time
your script deletes data, the rows are written to the transaction log, and
the pages they occupied can remain allocated to the table. Not sure how your
script is coded, so I can't really be too specific.
The TRUNCATE statement, by contrast, is minimally logged - it deallocates
whole pages rather than logging each row - so it wipes the table clean. That
is why you are seeing this behaviour.
HTH
DeeJay Puar
MCDBA
"Adam" wrote:
> I have a job in SQL 7 with sp3 that goes in a loop and inserts about 2 to 6
> rows into an empty table and then deletes them and keeps going in the loop
> for about 6 hours. If the table is not truncated before the job starts again
> the following night, the job performance degrades considerably. It turns out
> that after the job is done running, there are hundreds of data pages
> allocated for this table with high extent fragmentation.
> My question is why these pages are not de-allocated when there are no rows
> in the table to begin with? If I truncate the table and add 6 rows manually,
> then only one data page is allocated to the table. The table has about 6
> columns with 4 columns of data type smallint, one decimal and a couple of
> char(5).
>
|||do you have any indexes on this table, specifically a clustered one, or is
this a heap?
and how is the "delete" done, with a 'delete tablename'?
"Adam" <Adam@.discussions.microsoft.com> wrote in message
news:58830C32-198D-4DFF-9149-DBA76F8FB444@.microsoft.com...
>I have a job in SQL 7 with sp3 that goes in a loop and inserts about 2 to 6
> rows into an empty table and then deletes them and keeps going in the loop
> for about 6 hours. If the table is not truncated before the job starts
> again
> the following night, the job performance degrades considerably. It turns
> out
> that after the job is done running, there are hundreds of data pages
> allocated for this table with high extent fragmentation.
> My question is why these pages are not de-allocated when there are no rows
> in the table to begin with? If I truncate the table and add 6 rows
> manually,
> then only one data page is allocated to the table. The table has about 6
> columns with 4 columns of data type smallint, one decimal and a couple of
> char(5).
>
|||On Tue, 13 Dec 2005 15:12:23 -0500, "David J. Cartwright"
<davidcartwright@.hotmail.com> wrote:
>do you have any indexes on this table, specificaly a clustered one, or is
>this a heap ?
>and how is the "delete" done, with a 'delete tablename' ?
What he said.
Would a simple "truncate table" be a quick fix?
(not that it's so hard to defrag anyway)
J.
|||I am not 100% sure that the app is doing a commit after each delete, but even if that
was not the case, after the final commit at the end of the 6th hour,
shouldn't the empty pages get deallocated?
"DeeJay Puar" wrote:
> Hi,
> There could be several reasons. Are you commiting your transactions after
> each loop? Also, the delete statement is a logged transaction, that means
> everytime your script deletes the data, it is written in the transaction log
> that causes space (data pages) to be allocated. Not sure how your script is
> coded, so can't really be too specific.
> The truncate statement is NOT logged so when you issue it, it wipes out the
> table clean. That is why you are seeing this behaviour.
> HTH
> DeeJay Puar
> MCDBA
> "Adam" wrote:
|||At the moment, there are no indexes and yes, my next plan of attack is to put
a clustered index to see how that pans out. I don't know for sure if that
will solve the problem 100% because now, the delete would also have to do
more IOs to maintain the clustered index. Also, a table with the clustered
index will get fragmented over time.
The SQL is doing "delete table".
"David J. Cartwright" wrote:
> do you have any indexes on this table, specificaly a clustered one, or is
> this a heap ?
> and how is the "delete" done, with a 'delete tablename' ?
> "Adam" <Adam@.discussions.microsoft.com> wrote in message
> news:58830C32-198D-4DFF-9149-DBA76F8FB444@.microsoft.com...
>
>
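The change being discussed amounts to something like the following sketch (table and column names are hypothetical - on SQL 7 the index would go on whichever column best matches the insert order):

```sql
-- With a clustered index, pages emptied by deletes can be released back
-- promptly instead of lingering as empty heap pages
CREATE CLUSTERED INDEX CIX_LoopTable ON dbo.LoopTable (KeyCol);
```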
|||The current fix in fact is to truncate the table before the batch starts;
however, if SQL Server is accumulating empty pages while the table has 0 rows,
then performance degrades over time while the 6-hour job is running. If I can
figure out why SQL Server is not deallocating empty pages, and how to make it
do so, then I can stop it from wasting time table-scanning through those empty
pages, and I can tweak many other nightly jobs that I am sure face a similar
situation.
thoughts?
"jxstern" wrote:
> On Tue, 13 Dec 2005 15:12:23 -0500, "David J. Cartwright"
> <davidcartwright@.hotmail.com> wrote:
> What he said.
> Would a simple "truncate table" be a quick fix?
> (not that it's so hard to defrag anyway)
> J.
>
>
|||On Thu, 15 Dec 2005 06:21:02 -0800, "Adam"
<Adam@.discussions.microsoft.com> wrote:
>At the moment, there are no indexes and yes, my next plan of attack is to put
>a clustered index to see how that pans out.
It should.
If you delete from a table without a clustered index, SQLServer leaves
the pages allocated until ... the ghost cleanup task wakes up and reclaims
them, which it does when it feels like it, and then it takes as long as it
takes.
Fun, huh?
Add the clustered index, if for no other reason than this.
Google the newsgroup archives or the web for more about the ghost,
there's very little about it in MSDN or BOL.
Josh
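The behaviour described above is easy to see by comparing reserved space after a DELETE versus a TRUNCATE. A rough repro sketch (names invented, column list based on the description in the question; numbers will vary by server):

```sql
-- Compare space usage after DELETE vs TRUNCATE on a heap
CREATE TABLE dbo.LoopTest (
    a smallint, b smallint, c smallint, d smallint,
    e decimal(9, 2), f char(5), g char(5)
);

-- ...run the insert/delete loop here...

DELETE FROM dbo.LoopTest;            -- rows gone, but heap pages can remain allocated
EXEC sp_spaceused 'dbo.LoopTest';    -- 'reserved' often stays high

TRUNCATE TABLE dbo.LoopTest;         -- deallocates the pages outright
EXEC sp_spaceused 'dbo.LoopTest';    -- 'reserved' drops back to (nearly) zero
```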
rows into an empty table and then deletes them and keeps going in the loop
for about 6 hours. If the table is not truncated before the job starts again
the following night, the job performance degrades considerably. It turns out
that after the job is done running, there are hundreds of data pages
allocated for this table with high extent fragmentation.
My question is why these pages are not de-allocated when there are no rows
in the table to begin with? If I truncate the table and add 6 rows manually,
then only one data page is allocated to the table. The table has about 6
columns with 4 columns of data type smallint, one decimal and a couple of
char(5).
Hi,
There could be several reasons. Are you commiting your transactions after
each loop? Also, the delete statement is a logged transaction, that means
everytime your script deletes the data, it is written in the transaction log
that causes space (data pages) to be allocated. Not sure how your script is
coded, so can't really be too specific.
The truncate statement is NOT logged so when you issue it, it wipes out the
table clean. That is why you are seeing this behaviour.
HTH
DeeJay Puar
MCDBA
"Adam" wrote:
> I have a job in SQL 7 with sp3 that goes in a loop and inserts about 2 to 6
> rows into an empty table and then deletes them and keeps going in the loop
> for about 6 hours. If the table is not truncated before the job starts again
> the following night, the job performance degrades considerably. It turns out
> that after the job is done running, there are hundreds of data pages
> allocated for this table with high extent fragmentation.
> My question is why these pages are not de-allocated when there are no rows
> in the table to begin with? If I truncate the table and add 6 rows manually,
> then only one data page is allocated to the table. The table has about 6
> columns with 4 columns of data type smallint, one decimal and a couple of
> char(5).
>
|||do you have any indexes on this table, specificaly a clustered one, or is
this a heap ?
and how is the "delete" done, with a 'delete tablename' ?
"Adam" <Adam@.discussions.microsoft.com> wrote in message
news:58830C32-198D-4DFF-9149-DBA76F8FB444@.microsoft.com...
>I have a job in SQL 7 with sp3 that goes in a loop and inserts about 2 to 6
> rows into an empty table and then deletes them and keeps going in the loop
> for about 6 hours. If the table is not truncated before the job starts
> again
> the following night, the job performance degrades considerably. It turns
> out
> that after the job is done running, there are hundreds of data pages
> allocated for this table with high extent fragmentation.
> My question is why these pages are not de-allocated when there are no rows
> in the table to begin with? If I truncate the table and add 6 rows
> manually,
> then only one data page is allocated to the table. The table has about 6
> columns with 4 columns of data type smallint, one decimal and a couple of
> char(5).
>
|||On Tue, 13 Dec 2005 15:12:23 -0500, "David J. Cartwright"
<davidcartwright@.hotmail.com> wrote:
>do you have any indexes on this table, specificaly a clustered one, or is
>this a heap ?
>and how is the "delete" done, with a 'delete tablename' ?
What he said.
Would a simple "truncate table" be a quick fix?
(not that it's so hard to defrag anyway)
J.
|||I am not 100% if the app is doing a commit after each delete but even if that
was not the case, after the final commit at the end of the 6th hour,
shouldn't the empty pages get deallocated?
"DeeJay Puar" wrote:
[vbcol=seagreen]
> Hi,
> There could be several reasons. Are you commiting your transactions after
> each loop? Also, the delete statement is a logged transaction, that means
> everytime your script deletes the data, it is written in the transaction log
> that causes space (data pages) to be allocated. Not sure how your script is
> coded, so can't really be too specific.
> The truncate statement is NOT logged so when you issue it, it wipes out the
> table clean. That is why you are seeing this behaviour.
> HTH
> DeeJay Puar
> MCDBA
> "Adam" wrote:
|||At the moment, there are no indexes and yes, my next plan of attack is to put
a clustered index to see how that pans out. I don't know for sure if that
will solve the problem 100% because now, the delete would also have to do
more IOs to maintain the clustered index. Also, a table with the clustered
index will get fragmented over time.
The SQL is doing "delete table".
"David J. Cartwright" wrote:
> do you have any indexes on this table, specificaly a clustered one, or is
> this a heap ?
> and how is the "delete" done, with a 'delete tablename' ?
> "Adam" <Adam@.discussions.microsoft.com> wrote in message
> news:58830C32-198D-4DFF-9149-DBA76F8FB444@.microsoft.com...
>
>
|||The current fix in fact is to truncate the table before the batch starts,
however, if SQL Server is accumulating empty pages with 0 rows in the table
then the performance starts to degrade over time while the 6 hour job is
running. If I can figure out the reason and the solution to why SQL Server is
not deallocating empty pages, then I can stop it from taking the time for
doing table scan through these empty pages and can tweek many other nightly
jobs that I am sure are facing a similar situation.
thoughts?
"jxstern" wrote:
> On Tue, 13 Dec 2005 15:12:23 -0500, "David J. Cartwright"
> <davidcartwright@.hotmail.com> wrote:
> What he said.
> Would a simple "truncate table" be a quick fix?
> (not that it's so hard to defrag anyway)
> J.
>
>
|||On Thu, 15 Dec 2005 06:21:02 -0800, "Adam"
<Adam@.discussions.microsoft.com> wrote:
>At the moment, there are no indexes and yes, my next plan of attack is to put
>a clustered index to see how that pans out.
It should.
If you delete from a table without a clustered index, SQLServer leaves
the pages deallocated until ... The Ghost wakes up and reclaims them,
which it does when it feels like it, and then it takes as long as it
takes.
Fun, huh?
Add the clustered index, if for no other reason than this.
Google the newsgroup archives or the web for more about the ghost,
there's very little about it in MSDN or BOL.
Josh
High fragmentation with 0 rows in the table
I have a job in SQL 7 with sp3 that goes in a loop and inserts about 2 to 6
rows into an empty table and then deletes them and keeps going in the loop
for about 6 hours. If the table is not truncated before the job starts again
the following night, the job performance degrades considerably. It turns out
that after the job is done running, there are hundreds of data pages
allocated for this table with high extent fragmentation.
My question is why these pages are not de-allocated when there are no rows
in the table to begin with? If I truncate the table and add 6 rows manually,
then only one data page is allocated to the table. The table has about 6
columns with 4 columns of data type smallint, one decimal and a couple of
char(5).Hi,
There could be several reasons. Are you commiting your transactions after
each loop? Also, the delete statement is a logged transaction, that means
everytime your script deletes the data, it is written in the transaction log
that causes space (data pages) to be allocated. Not sure how your script is
coded, so can't really be too specific.
The truncate statement is NOT logged so when you issue it, it wipes out the
table clean. That is why you are seeing this behaviour.
HTH
DeeJay Puar
MCDBA
"Adam" wrote:
> I have a job in SQL 7 with sp3 that goes in a loop and inserts about 2 to 6
> rows into an empty table and then deletes them and keeps going in the loop
> for about 6 hours. If the table is not truncated before the job starts again
> the following night, the job performance degrades considerably. It turns out
> that after the job is done running, there are hundreds of data pages
> allocated for this table with high extent fragmentation.
> My question is why these pages are not de-allocated when there are no rows
> in the table to begin with? If I truncate the table and add 6 rows manually,
> then only one data page is allocated to the table. The table has about 6
> columns with 4 columns of data type smallint, one decimal and a couple of
> char(5).
>|||do you have any indexes on this table, specificaly a clustered one, or is
this a heap ?
and how is the "delete" done, with a 'delete tablename' ?
"Adam" <Adam@.discussions.microsoft.com> wrote in message
news:58830C32-198D-4DFF-9149-DBA76F8FB444@.microsoft.com...
>I have a job in SQL 7 with sp3 that goes in a loop and inserts about 2 to 6
> rows into an empty table and then deletes them and keeps going in the loop
> for about 6 hours. If the table is not truncated before the job starts
> again
> the following night, the job performance degrades considerably. It turns
> out
> that after the job is done running, there are hundreds of data pages
> allocated for this table with high extent fragmentation.
> My question is why these pages are not de-allocated when there are no rows
> in the table to begin with? If I truncate the table and add 6 rows
> manually,
> then only one data page is allocated to the table. The table has about 6
> columns with 4 columns of data type smallint, one decimal and a couple of
> char(5).
>|||On Tue, 13 Dec 2005 15:12:23 -0500, "David J. Cartwright"
<davidcartwright@.hotmail.com> wrote:
>do you have any indexes on this table, specificaly a clustered one, or is
>this a heap ?
>and how is the "delete" done, with a 'delete tablename' ?
What he said.
Would a simple "truncate table" be a quick fix?
(not that it's so hard to defrag anyway)
J.|||I am not 100% if the app is doing a commit after each delete but even if that
was not the case, after the final commit at the end of the 6th hour,
shouldn't the empty pages get deallocated?
"DeeJay Puar" wrote:
> Hi,
> There could be several reasons. Are you committing your transactions after
> each loop? Also, the delete statement is a logged transaction, which means
> every time your script deletes the data, it is written in the transaction log
> that causes space (data pages) to be allocated. Not sure how your script is
> coded, so can't really be too specific.
> The truncate statement is only minimally logged (it records page
> deallocations rather than individual row deletes), so when you issue it, it
> wipes the table clean. That is why you are seeing this behaviour.
> HTH
> DeeJay Puar
> MCDBA
> "Adam" wrote:
> > I have a job in SQL 7 with sp3 that goes in a loop and inserts about 2 to 6
> > rows into an empty table and then deletes them and keeps going in the loop
> > for about 6 hours. If the table is not truncated before the job starts again
> > the following night, the job performance degrades considerably. It turns out
> > that after the job is done running, there are hundreds of data pages
> > allocated for this table with high extent fragmentation.
> >
> > My question is why these pages are not de-allocated when there are no rows
> > in the table to begin with? If I truncate the table and add 6 rows manually,
> > then only one data page is allocated to the table. The table has about 6
> > columns with 4 columns of data type smallint, one decimal and a couple of
> > char(5).
> >|||At the moment, there are no indexes and yes, my next plan of attack is to put
a clustered index to see how that pans out. I don't know for sure whether that
will solve the problem 100%, because now the delete would also have to do
more I/Os to maintain the clustered index. Also, a table with a clustered
index will get fragmented over time.
The SQL is doing "delete table".
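Not from the original posts, but a quick way to watch this behaviour on a SQL 7/2000-era server (the table name and values below are invented for illustration) is to compare the page count sp_spaceused reports after a plain DELETE with the count after TRUNCATE TABLE:

```sql
-- Hypothetical repro of the heap described above (names/values invented)
CREATE TABLE LoopWork (
    a smallint, b smallint, c smallint, d smallint,
    e decimal(9,2), f char(5), g char(5)
)

INSERT LoopWork VALUES (1, 2, 3, 4, 1.0, 'aaaaa', 'bbbbb')
DELETE LoopWork   -- "delete table" style: rows gone, heap pages may stay allocated

EXEC sp_spaceused 'LoopWork', 'true'   -- the 'data' figure can stay high

TRUNCATE TABLE LoopWork                -- deallocates the pages immediately
EXEC sp_spaceused 'LoopWork', 'true'   -- 'data' drops back to (nearly) zero
```

Run in a scratch database; repeating the insert/delete pair in a loop makes the retained-page effect much more visible.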
"David J. Cartwright" wrote:
> do you have any indexes on this table, specifically a clustered one, or is
> this a heap ?
> and how is the "delete" done, with a 'delete tablename' ?
> "Adam" <Adam@.discussions.microsoft.com> wrote in message
> news:58830C32-198D-4DFF-9149-DBA76F8FB444@.microsoft.com...
> >I have a job in SQL 7 with sp3 that goes in a loop and inserts about 2 to 6
> > rows into an empty table and then deletes them and keeps going in the loop
> > for about 6 hours. If the table is not truncated before the job starts
> > again
> > the following night, the job performance degrades considerably. It turns
> > out
> > that after the job is done running, there are hundreds of data pages
> > allocated for this table with high extent fragmentation.
> >
> > My question is why these pages are not de-allocated when there are no rows
> > in the table to begin with? If I truncate the table and add 6 rows
> > manually,
> > then only one data page is allocated to the table. The table has about 6
> > columns with 4 columns of data type smallint, one decimal and a couple of
> > char(5).
> >
>
>|||The current fix in fact is to truncate the table before the batch starts;
however, if SQL Server is accumulating empty pages with 0 rows in the table,
then performance starts to degrade over time while the 6-hour job is
running. If I can figure out why SQL Server is
not deallocating empty pages, then I can stop it from spending time
scanning through these empty pages and can tweak many other nightly
jobs that I am sure are facing a similar situation.
thoughts?
"jxstern" wrote:
> On Tue, 13 Dec 2005 15:12:23 -0500, "David J. Cartwright"
> <davidcartwright@.hotmail.com> wrote:
> >do you have any indexes on this table, specifically a clustered one, or is
> >this a heap ?
> >and how is the "delete" done, with a 'delete tablename' ?
> What he said.
> Would a simple "truncate table" be a quick fix?
> (not that it's so hard to defrag anyway)
> J.
>
>|||On Thu, 15 Dec 2005 06:21:02 -0800, "Adam"
<Adam@.discussions.microsoft.com> wrote:
>At the moment, there are no indexes and yes, my next plan of attack is to put
>a clustered index to see how that pans out.
It should.
If you delete from a table without a clustered index, SQL Server leaves
the pages allocated until ... The Ghost wakes up and reclaims them,
which it does when it feels like it, and then it takes as long as it
takes.
Fun, huh?
Add the clustered index, if for no other reason than this.
Google the newsgroup archives or the web for more about the ghost,
there's very little about it in MSDN or BOL.
Josh
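A minimal sketch of the fix Josh suggests (table, column, and index names here are invented, not from the posts): put a clustered index on the work table so that deletes hand pages back promptly instead of leaving them for ghost cleanup.

```sql
-- Same hypothetical work table, now with a clustered index (names invented)
CREATE TABLE LoopWork2 (
    id int IDENTITY(1,1) NOT NULL,
    a smallint, b smallint, e decimal(9,2), f char(5)
)
CREATE CLUSTERED INDEX cix_LoopWork2 ON LoopWork2 (id)

-- the nightly job's pattern:
INSERT LoopWork2 (a, b, e, f) VALUES (1, 2, 1.0, 'aaaaa')
DELETE LoopWork2
-- with the clustered index, emptied pages are deallocated instead of
-- accumulating the way heap pages can

-- on SQL 7, DBCC SHOWCONTIG takes an object id rather than a name:
DECLARE @id int
SELECT @id = OBJECT_ID('LoopWork2')
DBCC SHOWCONTIG (@id)   -- check Pages Scanned after the run
```

The trade-off raised above still applies: each insert and delete now maintains the index, but for this access pattern that cost is usually far smaller than scanning hundreds of retained empty pages.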
High fragmentation with 0 rows in the table
Monday, March 19, 2012
High Extent Scan Fragmentation
We have a certain table that returns a high Extent Scan Fragmentation
number from a dbcc showcontig.
Even right after a DBCC DBREINDEX, the table comes back with an Extent
Scan Fragmentation number of 40 - 50%.
The table does have a clustered index -- although it is a identity
field.
The table is on a db built on a RAID 5 array. When I take the exact
same table and data, and restore it to my laptop (single hard drive),
a DBCC DBREINDEX will drop the Extent Scan Fragmentation to 0%.
Any ideas what could be driving the Extent Scan Fragmentation up on
the other server? Could it be the fact that it's on a RAID drive?
Thanks|||yes.
I read an article that suggested "Extent Scan Fragmentation" was not a
reliable number to look at when data spanned multiple disks.
I believe the more reliable number to use is scan density.
Hope this helps
Greg Jackson
PDX, Oregon|||Hi,
Just wanted to send a minor correction (I think). Here is
what BOL says:
Understanding Logical Scan Fragmentation and Extent Scan
Fragmentation values:
Logical Scan Fragmentation and, to a lesser extent, Extent
Scan Fragmentation values give the best indication of a
table's fragmentation level. Both these values should be
as close to zero as possible (although a value from 0%
through 10% may be acceptable). It should be noted that
the Extent Scan Fragmentation value will be high if the
index spans multiple files. Both methods of reducing
fragmentation can be used to reduce these values.
IMHO, RAID level has no effect on fragmentation. Do you
have an Index/table that has multiple files? Also, how are
you moving the data to your laptop? Backing up and restore
a database file anywhere (RAID 0, 1, 2, 5, etc) does NOT
remove fragmentation. You can test this by restoring the
database to a different RAID 5 server and compare the DBCC
output. It will be the same.
hth
DeeJay
>--Original Message--
>yes.
>I read an article that suggested "Extent Scan
Fragmentation" was not a
>reliable number to look at when data spanned multiple
disks.
>I believe the more reliable number to use is scan density.
>Hope this helps
>
>Greg Jackson
>PDX, Oregon
>
>.
>|||The Extent Scan Fragmentation algorithm does not work on multiple files -
this is documented. You should use Logical Scan Fragmentation. Please read
the whitepaper below which will explain everything to you. Let me know if
you have any more questions.
http://www.microsoft.com/technet/prodtechnol/sql/2000/maintain/ss2kidbp.mspx
Regards
--
Paul Randal
Dev Lead, Microsoft SQL Server Storage Engine
This posting is provided "AS IS" with no warranties, and confers no rights.
"DeeJay Puar" <deejaypuar@.yahoo.com> wrote in message
news:1d6ce01c453ac$22a84570$a001280a@.phx.gbl...
> Hi,
> Just wanted to send a minor correction (I think). Here is
> what BOL says:
> Understanding Logical Scan Fragmentation and Extent Scan
> Fragmentation values:
> Logical Scan Fragmentation and, to a lesser extent, Extent
> Scan Fragmentation values give the best indication of a
> table's fragmentation level. Both these values should be
> as close to zero as possible (although a value from 0%
> through 10% may be acceptable). It should be noted that
> the Extent Scan Fragmentation value will be high if the
> index spans multiple files. Both methods of reducing
> fragmentation can be used to reduce these values.
> IMHO, RAID level has no effect on fragementation. Do you
> have an Index/table that has multiple files? Also, how are
> you moving the data to your laptop? Backing up and restore
> a database file anywhere (RAID 0, 1, 2, 5, etc) does NOT
> remove fragmentation. You can test this by restoring the
> database to a different RAID 5 server and compare the DBCC
> output. It will be the same.
> hth
> DeeJay
>
> >--Original Message--
> >yes.
> >
> >I read an article that suggested "Extent Scan
> Fragmentation" was not a
> >reliable number to look at when data spanned multiple
> disks.
> >
> >I believe the more reliable number to use is scan density.
> >
> >Hope this helps
> >
> >
> >Greg Jackson
> >PDX, Oregon
> >
> >
> >.
> >|||Hi Paul,
This table and its indexes are not built on multiple files. Both the
table and its indexes reside on the same RAID array, and all on the
PRIMARY filegroup.
Everything I've found so far says that a dbcc dbreindex should fix the
problem with extent scan fragmentation as long as a clustered index
exists on the table. We've tried everything and I'm stumped.
Actually it's not just one table, but several that show high (40-50%)
extent scan fragmentation, even though we do dbcc dbreindexes on them
every night. The Logical Scan Fragmentation goes down as we'd expect,
but the dbreindex has very minimal (maybe 2 or 3%) effect on the
extent scan fragmentation.
Thanks for your response and any further clues!
"Paul S Randal [MS]" <prandal@.online.microsoft.com> wrote in message news:<OcSydv9UEHA.1292@.TK2MSFTNGP10.phx.gbl>...
> The Extent Scan Fragmentation algorithm does not work on multiple files -
> this is documented. You shoudl use Logical Scan Fragmentation. Please read
> the whitepaper below which will explain everything to you. Let me know if
> you have any more questions.
> http://www.microsoft.com/technet/prodtechnol/sql/2000/maintain/ss2kidbp.mspx
> Regards
> --
> Paul Randal
> Dev Lead, Microsoft SQL Server Storage Engine
> This posting is provided "AS IS" with no warranties, and confers no rights.
> "DeeJay Puar" <deejaypuar@.yahoo.com> wrote in message
> news:1d6ce01c453ac$22a84570$a001280a@.phx.gbl...
> > Hi,
> >
> > Just wanted to send a minor correction (I think). Here is
> > what BOL says:
> >
> > Understanding Logical Scan Fragmentation and Extent Scan
> > Fragmentation values:
> >
> > Logical Scan Fragmentation and, to a lesser extent, Extent
> > Scan Fragmentation values give the best indication of a
> > table's fragmentation level. Both these values should be
> > as close to zero as possible (although a value from 0%
> > through 10% may be acceptable). It should be noted that
> > the Extent Scan Fragmentation value will be high if the
> > index spans multiple files. Both methods of reducing
> > fragmentation can be used to reduce these values.
> >
> > IMHO, RAID level has no effect on fragementation. Do you
> > have an Index/table that has multiple files? Also, how are
> > you moving the data to your laptop? Backing up and restore
> > a database file anywhere (RAID 0, 1, 2, 5, etc) does NOT
> > remove fragmentation. You can test this by restoring the
> > database to a different RAID 5 server and compare the DBCC
> > output. It will be the same.
> >
> > hth
> >
> > DeeJay
> >
> >
> > >--Original Message--
> > >yes.
> > >
> > >I read an article that suggested "Extent Scan
> Fragmentation" was not a
> > >reliable number to look at when data spanned multiple
> disks.
> > >
> > >I believe the more reliable number to use is scan density.
> > >
> > >Hope this helps
> > >
> > >
> > >Greg Jackson
> > >PDX, Oregon
> > >
> > >
> > >.
> > >|||Hi DeeJay,
> IMHO, RAID level has no effect on fragmentation. Do you
> have an Index/table that has multiple files?
No, the table and its indexes all exist on the PRIMARY filegroup.
> Backing up and restore
> a database file anywhere (RAID 0, 1, 2, 5, etc) does NOT
> remove fragmentation.
Right, I agree. What I'm puzzled by is this: we decided to run a very
simple test to try to figure out whether this was a hardware or software
problem. We wrote a script that creates a very simple table and fills
it with data (30,000 rows -- the table is about 10 MB). This table ends
up having extent scan fragmentation because of the 30,000 inserts. On
my laptop (a single hard drive), a DBCC DBREINDEX clears up this extent
scan fragmentation right away, no problem. On the production server
with the RAID array, the DBCC DBREINDEX has almost no effect on the
extent scan fragmentation.
|||Hi Brett,
Before Paul posted the article link, I had read it a couple of
weeks ago, and I re-read it just now. Here is something else I found:
DBCC INDEXDEFRAG does not help to untangle indexes that
have become interleaved within a data file. Likewise, DBCC
INDEXDEFRAG does not correct extent fragmentation on
indexes. Interleaving occurs when index extents (a group
of eight index pages) for an index are not completely
contiguous within the data file, leaving extents from one
or more indexes intermingled in the file. Interleaving can
occur even when there is no logical fragmentation, because
all index pages are not necessarily contiguous, even when
logical ordering matches physical ordering.
Even with this limitation, the tests showed that DBCC
INDEXDEFRAG can be as effective at improving performance
of workloads as DBCC DBREINDEX. In fact, the tests showed
that even if you can rebuild indexes so that there is
minimal interleaving, this does not have a significant
effect on performance. Reducing the logical fragmentation
levels had a much greater impact on workload performance.
This is why it is recommended that you focus on logical
fragmentation and page density levels when examining
fragmentation on your indexes.
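As a side-by-side sketch of the two approaches the BOL excerpt compares (shown here against the FragTest table from the test script later in this thread; database, table, and index names are placeholders):

```sql
-- Online defragmentation: reorders and compacts leaf pages,
-- but (per the BOL excerpt above) does not fix extent
-- fragmentation or interleaving
DBCC INDEXDEFRAG (pubs, 'FragTest', 'idxFragTestId')

-- Offline rebuild: drops and recreates the index; can also reduce
-- extent fragmentation, but only if contiguous free space exists
DBCC DBREINDEX ('FragTest', 'idxFragTestId', 100)
```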
My Notes:
How big is the table in production? If it's 10 MB like your
test, it should work just fine. Also, the 'higher' extent
scan fragmentation seems to occur when there is
interleaving. This could happen if your production table
is very large. I do not think it is RAID-level related;
it's the amount of data and how it is spread across the
disk.
Also, did you size the database properly in production, or
did it autogrow during its use?
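To answer the sizing/autogrow question, something like this (run in the database in question) will show each file's current size and growth setting:

```sql
-- One row per file; size is in 8 KB pages, growth is pages or percent
EXEC sp_helpfile

-- Or query the system table directly
SELECT name, filename, size, growth
FROM sysfiles
```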
DeeJay
|||I'd guess the free space in the file is fragmented then, preventing
contiguous extents being allocated when the index is rebuilt - although this
much continuously present extent fragmentation really smacks of multiple
files. Are you sure there is only one file in the filegroup (I noticed you
used primary file and primary filegroup in different responses)? Can you
post the output of dbcc checkalloc (as text, not as an attachment)?
DBCC DBREINDEX will only fix the problem if there is contiguous free space
in the file equal to the size of the index being rebuilt.
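A quick sketch of both checks suggested above (the database name is a placeholder):

```sql
-- How many data files does each filegroup actually have?
-- More than one row for groupid 1 means PRIMARY has multiple files.
SELECT groupid, COUNT(*) AS file_count
FROM sysfiles
WHERE groupid > 0   -- groupid 0 is the log
GROUP BY groupid

-- The allocation output requested above
DBCC CHECKALLOC ('pubs')
```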
Btw, why are you reindexing every night?
Thanks
--
Paul Randal
Dev Lead, Microsoft SQL Server Storage Engine
This posting is provided "AS IS" with no warranties, and confers no rights.
"brett" <brettolges@.gmail.com> wrote in message
news:6de7e854.0406170436.21ae2e17@.posting.google.com...
|||Hi Paul,
Thanks for your help on this.
As far as the dbreindex every night, I guess that's just being done in
an attempt to squeeze every bit of performance possible out of the db.
I'm actually researching this on behalf of a client. I think he
understands the dbreindex every night is probably overkill -- but he's
experimenting trying to get the best performance possible.
The white paper you pointed us to seems to imply that reindexing may
not even have a very significant effect on performance, depending on
the characteristics of the database. So we realize we may be barking
up the wrong tree, but the fact that the fragmentation cannot be
eliminated seems strange, so we're wondering if it could be a clue
to some hardware or configuration problem we may be able to
fix.
I'm going to double check and make sure there's just one file in the
filegroup.
Until I verify that, here's what we're working on:
To try and eliminate as many variables as we could, we set up a simple
test that we could take and run on different servers.
The test just creates a dummy table and fills it with data with lots
of single insert statements (to try and create fragmentation). Then
we check the ext. scan frag before and after the dbcc dbreindex.
Here's the test itself...
==============================================================================
use pubs
go
if exists (select * from dbo.sysobjects where id = object_id(N'[dbo].[FragTest]') and OBJECTPROPERTY(id, N'IsUserTable')
= 1)
drop table [dbo].[FragTest]
GO
CREATE TABLE [dbo].[FragTest] (
[FragTestId] [int] IDENTITY (1, 1) NOT NULL ,
[Field1] [varchar] (10) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[Field2] [varchar] (50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL
) ON [PRIMARY]
GO
CREATE CLUSTERED INDEX [idxFragTestId] ON
[dbo].[FragTest]([FragTestId]) ON [PRIMARY]
GO
ALTER TABLE [dbo].[FragTest] ADD
CONSTRAINT [PK_FragTest] PRIMARY KEY NONCLUSTERED
(
[FragTestId]
) ON [PRIMARY]
GO
CREATE INDEX [idxField1] ON [dbo].[FragTest]([Field1]) ON [PRIMARY]
GO
set nocount on
declare @i as integer
set @i = 0
while @i < 30000
begin
insert FragTest (Field1,Field2) values
('aaa','aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa')
insert FragTest (Field1,Field2) values
('bbb','bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb')
insert FragTest (Field1,Field2) values
('ccc','ccccccccccccccccccccccccccccccccccccccccccccccc')
insert FragTest (Field1,Field2) values
('ddd','dddddddddddddddddddddddddddddddddddddddddddddd')
insert FragTest (Field1,Field2) values
('eee','eeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeee')
insert FragTest (Field1,Field2) values
('fff','fffffffffffffffffffffffffffffffffffffffffffffff')
insert FragTest (Field1,Field2) values
('ggg','gggggggggggggggggggggggggggggggggggggggggggggg')
insert FragTest (Field1,Field2) values
('hhh','hhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhh')
insert FragTest (Field1,Field2) values
('iii','iiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiii')
insert FragTest (Field1,Field2) values
('jjj','jjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjj')
set @i = @i + 1
end
dbcc showcontig('FragTest')
DBCC DBREINDEX (FragTest, '', 100)
dbcc showcontig('FragTest')
=======================================================================
Here are the results we got on the 'problem' server...
Before DBCC REINDEX:
DBCC SHOWCONTIG scanning 'FragTest' table...
Table: 'FragTest' (1187767955); index ID: 1, database ID: 7
TABLE level scan performed.
- Pages Scanned........................: 2632
- Extents Scanned.......................: 334
- Extent Switches.......................: 333
- Avg. Pages per Extent..................: 7.9
- Scan Density [Best Count:Actual Count]......: 98.50% [329:334]
- Logical Scan Fragmentation ..............: 0.04%
- Extent Scan Fragmentation ...............: 59.88%
- Avg. Bytes Free per Page................: 48.9
- Avg. Page Density (full)................: 99.40%
DBCC execution completed. If DBCC printed error messages, contact your
system administrator.
After DBCC REINDEX:
DBCC SHOWCONTIG scanning 'FragTest' table...
Table: 'FragTest' (1187767955); index ID: 1, database ID: 7
TABLE level scan performed.
- Pages Scanned........................: 2632
- Extents Scanned.......................: 330
- Extent Switches.......................: 329
- Avg. Pages per Extent..................: 8.0
- Scan Density [Best Count:Actual Count]......: 99.70% [329:330]
- Logical Scan Fragmentation ..............: 0.00%
- Extent Scan Fragmentation ...............: 31.82%
- Avg. Bytes Free per Page................: 48.9
- Avg. Page Density (full)................: 99.40%
DBCC execution completed. If DBCC printed error messages, contact your
system administrator.
"Paul S Randal [MS]" <prandal@.online.microsoft.com> wrote in message news:<u3zdscOVEHA.3540@.TK2MSFTNGP11.phx.gbl>...
|||This looks like you've got multiple files. You're running showcontig on the
clustered index over an identity column - which is going to monotonically
increase with each insert, so the inserts are ordered in the index
perfectly - that's why there's no logical scan fragmentation. The allocation
system allocates one extent at a time when doing regular inserts, and
round-robins them through the available files, thus creating extent scan
fragmentation. I'd guess there are 3 or 4 data files from the data below.
When you rebuild the index, it's probably allocating 4 extents at a time per
file, thus reducing the extent scan fragmentation reported. Even though the
extent scan fragmentation algorithm doesn't work very well on multiple
files, it will still respond to bigger groups of contiguous extents per
file.
What are the results of running your test on a database you've set up and can
prove to yourself has only a single data file?
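One way to run that check, sketched here with placeholder file names, paths, and sizes:

```sql
-- Create a throwaway database with exactly one data file,
-- then rerun the FragTest script against it
CREATE DATABASE FragTestDB
ON PRIMARY
  (NAME = FragTestDB_data,
   FILENAME = 'C:\FragTestDB.mdf',   -- placeholder path
   SIZE = 50MB)
LOG ON
  (NAME = FragTestDB_log,
   FILENAME = 'C:\FragTestDB.ldf',   -- placeholder path
   SIZE = 10MB)
GO

USE FragTestDB
GO
-- Verify there is only one data file before testing
EXEC sp_helpfile
```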
Thanks
--
Paul Randal
Dev Lead, Microsoft SQL Server Storage Engine
This posting is provided "AS IS" with no warranties, and confers no rights.
"brett" <brettolges@.gmail.com> wrote in message
news:6de7e854.0406180528.4dc7243@.posting.google.com...
|||I have a table that consistently shows a high Extent Scan Fragmentation
number from a dbcc showcontig.
Even right after a DBCC REINDEX, the table comes back with an Extent
Scan Fragmentation number of 40 - 50%.
The table does have a clustered index -- although it is on an identity
field.
The table is on a db built on a RAID 5 array. When I take the exact
same table and data, and restore it to my laptop (single hard drive),
a DBCC REINDEX will drop the Extent Scan Fragmentation to 0%.
Any ideas what could be driving the Extent Scan Fragmentation up on
the other server? Could it be the fact that it's on a RAID drive?
Thanks|||yes.
I read an article that suggested "Extent Scan Fragmentation" was not a
reliable number to look at when data spanned multiple disks.
I believe the more reliable number to use is scan density.
Hope this helps
Greg Jackson
PDX, Oregon|||Hi,
Just wanted to send a minor correction (I think). Here is
what BOL says:
Understanding Logical Scan Fragmentation and Extent Scan
Fragmentation values:
Logical Scan Fragmentation and, to a lesser extent, Extent
Scan Fragmentation values give the best indication of a
table's fragmentation level. Both these values should be
as close to zero as possible (although a value from 0%
through 10% may be acceptable). It should be noted that
the Extent Scan Fragmentation value will be high if the
index spans multiple files. Both methods of reducing
fragmentation can be used to reduce these values.
IMHO, RAID level has no effect on fragmentation. Do you
have an Index/table that has multiple files? Also, how are
you moving the data to your laptop? Backing up and restore
a database file anywhere (RAID 0, 1, 2, 5, etc) does NOT
remove fragmentation. You can test this by restoring the
database to a different RAID 5 server and compare the DBCC
output. It will be the same.
hth
DeeJay
|||The Extent Scan Fragmentation algorithm does not work on multiple files -
this is documented. You should use Logical Scan Fragmentation. Please read
the whitepaper below which will explain everything to you. Let me know if
you have any more questions.
http://www.microsoft.com/technet/prodtechnol/sql/2000/maintain/ss2kidbp.mspx
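For example, the table-format output below makes it easy to pick out the logical scan fragmentation column across all of a table's indexes (the table name is a placeholder):

```sql
-- TABLERESULTS returns the fragmentation statistics as a result set;
-- read the LogicalFragmentation column rather than ExtentFragmentation
-- when the index spans multiple files
DBCC SHOWCONTIG ('FragTest') WITH TABLERESULTS, ALL_INDEXES
```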
Regards
--
Paul Randal
Dev Lead, Microsoft SQL Server Storage Engine
This posting is provided "AS IS" with no warranties, and confers no rights.
"DeeJay Puar" <deejaypuar@.yahoo.com> wrote in message
news:1d6ce01c453ac$22a84570$a001280a@.phx.gbl...
|||Hi Paul,
This table and its indexes are not built on multiple files. Both the
table and its indexes reside on the same RAID array, and all on the
PRIMARY file.
Everything I've found so far says that a dbcc dbreindex should fix the
problem with extent scan fragmentation as long as a clustered index
exists on the table. We've tried everything and I'm stumped.
Actually it's not just one table, but several that show high (40-50%)
extent scan fragmentation, even though we do dbcc dbreindexes on them
every night. The Logical Scan Fragmentation goes down as we'd expect,
but the dbreindex has very minimal (maybe 2 or 3%) effect on the
extent scan fragmentation.
Thanks for your response and any further clues!
"Paul S Randal [MS]" <prandal@.online.microsoft.com> wrote in message news:<OcSydv9UEHA.1292@.TK2MSFTNGP10.phx.gbl>...
|||Hi DeeJay,
> IMHO, RAID level has no effect on fragementation. Do you
> have an Index/table that has multiple files?
No, the table and its indexes all exist on the PRIMARY filegroup.
> Backing up and restore
> a database file anywhere (RAID 0, 1, 2, 5, etc) does NOT
> remove fragmentation.
Right, I agree. What I'm puzzled by is this: We decided to run a very
simple test to try and figure out if this was a hardware or software
problem. We wrote a script that creates a very simple table and fills
it with data (30,000 rows -- the table is about 10 MB). This table ends
up having extent scan fragmentation because of the 30,000 inserts. On
my laptop (a single hard drive), a dbcc dbreindex clears up this extent
scan fragmentation right away, no problem. On the production server
with the RAID array, the dbcc dbreindex has almost no effect on the
extent scan fragmentation.
"DeeJay Puar" <deejaypuar@.yahoo.com> wrote in message news:<1d6ce01c453ac$22a84570$a001280a@.phx.gbl>...
|||Hi Brett,
Before Paul posted the article link, I had read it a couple of
weeks ago, and I re-read it. Here is something else I found:
DBCC INDEXDEFRAG does not help to untangle indexes that
have become interleaved within a data file. Likewise, DBCC
INDEXDEFRAG does not correct extent fragmentation on
indexes. Interleaving occurs when index extents (a group
of eight index pages) for an index are not completely
contiguous within the data file, leaving extents from one
or more indexes intermingled in the file. Interleaving can
occur even when there is no logical fragmentation, because
all index pages are not necessarily contiguous, even when
logical ordering matches physical ordering.
Even with this limitation, the tests showed that DBCC
INDEXDEFRAG can be as effective at improving performance
of workloads as DBCC DBREINDEX. In fact, the tests showed
that even if you can rebuild indexes so that there is
minimal interleaving, this does not have a significant
effect on performance. Reducing the logical fragmentation
levels had a much greater impact on workload performance.
This is why it is recommended that you focus on logical
fragmentation and page density levels when examining
fragmentation on your indexes.
My Notes:
How big is the table in production? If it's 10 MB like your
test, it should work just fine. Also, the 'higher' extent
scan fragmentation seems to occur when there is
interleaving. This could happen if your production table
is very large. I do not think it is RAID-level related;
it's the amount of data and how it is spread across the
disk.
Also, did you size the database properly in production or
did it autogrow during its use?
DeeJay
|||I'd guess the free space in the file is fragmented then, preventing
contiguous extents from being allocated when the index is rebuilt - although
this much persistent extent fragmentation really smacks of multiple
files. Are you sure there is only one file in the filegroup? (I noticed you
used "primary file" and "primary filegroup" in different responses.) Can you
post the output of dbcc checkalloc (as text, not as an attachment)?
DBCC DBREINDEX will only fix the problem if there is contiguous free space
in the file equal to the size of the index being rebuilt.
Btw, why are you reindexing every night?
Thanks
--
Paul Randal
Dev Lead, Microsoft SQL Server Storage Engine
This posting is provided "AS IS" with no warranties, and confers no rights.
"brett" <brettolges@.gmail.com> wrote in message
news:6de7e854.0406170436.21ae2e17@.posting.google.com...
|||Hi Paul,
Thanks for your help on this.
As far as the dbreindex every night, I guess that's just being done in
an attempt to squeeze every bit of performance possible out of the db.
I'm actually researching this on behalf of a client. I think he
understands the dbreindex every night is probably overkill -- but he's
experimenting trying to get the best performance possible.
The white paper you pointed us to seems to imply that reindexing may
not even have a very significant effect on performance, depending on
the characteristics of the database. So we realize we may be barking
up the wrong tree, but the fact that the fragmentation cannot be
eliminated seems strange, so we're wondering if it could be a clue
to some hardware or configuration problem we may be able to
fix.
I'm going to double check and make sure there's just one file in the
filegroup.
Until I verify that, here's what we're working on:
To try and eliminate as many variables as we could, we set up a simple
test that we could take and run on diff't servers.
The test just creates a dummy table and fills it with data with lots
of single insert statements (to try and create fragmentation). Then
we check the ext. scan frag before and after the dbcc dbreindex.
Here's the test itself...
==============================================================================
use pubs
go
if exists (select * from dbo.sysobjects
           where id = object_id(N'[dbo].[FragTest]')
             and OBJECTPROPERTY(id, N'IsUserTable') = 1)
drop table [dbo].[FragTest]
GO
CREATE TABLE [dbo].[FragTest] (
[FragTestId] [int] IDENTITY (1, 1) NOT NULL ,
[Field1] [varchar] (10) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[Field2] [varchar] (50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL
) ON [PRIMARY]
GO
CREATE CLUSTERED INDEX [idxFragTestId] ON
[dbo].[FragTest]([FragTestId]) ON [PRIMARY]
GO
ALTER TABLE [dbo].[FragTest] ADD
CONSTRAINT [PK_FragTest] PRIMARY KEY NONCLUSTERED
(
[FragTestId]
) ON [PRIMARY]
GO
CREATE INDEX [idxField1] ON [dbo].[FragTest]([Field1]) ON [PRIMARY]
GO
set nocount on
declare @i as integer
set @i = 0
while @i < 30000
begin
insert FragTest (Field1,Field2) values
('aaa','aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa')
insert FragTest (Field1,Field2) values
('bbb','bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb')
insert FragTest (Field1,Field2) values
('ccc','ccccccccccccccccccccccccccccccccccccccccccccccc')
insert FragTest (Field1,Field2) values
('ddd','dddddddddddddddddddddddddddddddddddddddddddddd')
insert FragTest (Field1,Field2) values
('eee','eeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeee')
insert FragTest (Field1,Field2) values
('fff','fffffffffffffffffffffffffffffffffffffffffffffff')
insert FragTest (Field1,Field2) values
('ggg','gggggggggggggggggggggggggggggggggggggggggggggg')
insert FragTest (Field1,Field2) values
('hhh','hhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhh')
insert FragTest (Field1,Field2) values
('iii','iiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiii')
insert FragTest (Field1,Field2) values
('jjj','jjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjj')
set @i = @i + 1
end
dbcc showcontig('FragTest')
DBCC DBREINDEX (FragTest, '', 100)
dbcc showcontig('FragTest')
=======================================================================
Here are the results we got on the 'problem' server...
Before DBCC REINDEX:
DBCC SHOWCONTIG scanning 'FragTest' table...
Table: 'FragTest' (1187767955); index ID: 1, database ID: 7
TABLE level scan performed.
- Pages Scanned........................: 2632
- Extents Scanned.......................: 334
- Extent Switches.......................: 333
- Avg. Pages per Extent..................: 7.9
- Scan Density [Best Count:Actual Count]......: 98.50% [329:334]
- Logical Scan Fragmentation ..............: 0.04%
- Extent Scan Fragmentation ...............: 59.88%
- Avg. Bytes Free per Page................: 48.9
- Avg. Page Density (full)................: 99.40%
DBCC execution completed. If DBCC printed error messages, contact your
system administrator.
After DBCC REINDEX:
DBCC SHOWCONTIG scanning 'FragTest' table...
Table: 'FragTest' (1187767955); index ID: 1, database ID: 7
TABLE level scan performed.
- Pages Scanned........................: 2632
- Extents Scanned.......................: 330
- Extent Switches.......................: 329
- Avg. Pages per Extent..................: 8.0
- Scan Density [Best Count:Actual Count]......: 99.70% [329:330]
- Logical Scan Fragmentation ..............: 0.00%
- Extent Scan Fragmentation ...............: 31.82%
- Avg. Bytes Free per Page................: 48.9
- Avg. Page Density (full)................: 99.40%
DBCC execution completed. If DBCC printed error messages, contact your
system administrator.
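A quick arithmetic check on those two SHOWCONTIG runs: Scan Density is just the best-case extent count (pages divided by 8, rounded up, since an extent holds 8 pages) over the actual extents scanned, and Avg. Pages per Extent is pages over extents. This hedged Python sketch (the function name is ours, not anything SQL Server exposes) reproduces the before/after numbers exactly:

```python
import math

def showcontig_summary(pages_scanned, extents_scanned):
    """Recompute two DBCC SHOWCONTIG metrics from the raw counts."""
    best_count = math.ceil(pages_scanned / 8)  # an extent holds 8 pages
    return {
        "best_count": best_count,
        "scan_density_pct": round(100.0 * best_count / extents_scanned, 2),
        "avg_pages_per_extent": round(pages_scanned / extents_scanned, 1),
    }

before = showcontig_summary(2632, 334)  # before DBCC DBREINDEX
after = showcontig_summary(2632, 330)   # after DBCC DBREINDEX
print(before)  # {'best_count': 329, 'scan_density_pct': 98.5, 'avg_pages_per_extent': 7.9}
print(after)   # {'best_count': 329, 'scan_density_pct': 99.7, 'avg_pages_per_extent': 8.0}
```

Note that Extent Scan Fragmentation cannot be recomputed from these totals alone: it depends on the physical placement of the extents, which SHOWCONTIG does not report directly.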
"Paul S Randal [MS]" <prandal@.online.microsoft.com> wrote in message news:<u3zdscOVEHA.3540@.TK2MSFTNGP11.phx.gbl>...
|||This looks like you've got multiple files. You're running showcontig on the
clustered index over an identity column - which is going to monotonically
increase with each insert, so the inserts are ordered in the index
perfectly - that's why there's no logical scan fragmentation. The allocation
system allocates one extent at a time when doing regular inserts, and
round-robins them through the available files, thus creating extent scan
fragmentation. I'd guess there are 3 or 4 data files from the data below.
When you rebuild the index, it's probably allocating 4 extents at a time per
file, thus reducing the extent scan fragmentation reported. Even though the
extent scan fragmentation algorithm doesn't work very well on multiple
files, it will still respond to bigger groups of contiguous extents per
file.
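The round-robin effect described above can be illustrated with a toy model (hypothetical Python, not SQL Server's actual allocation code): treat each extent as a (file, position) pair handed out round-robin across files, and count the scan-order transitions that land on a physically non-adjacent extent. One extent per file per turn makes every transition a break; four at a time drops the break rate sharply, which is the same direction of change seen in the 59.88% vs 31.82% figures above.

```python
def alloc_order(n_files, batch, total):
    """Allocate `total` extents round-robin, `batch` extents per file per turn."""
    order, next_extent, f = [], [0] * n_files, 0
    while len(order) < total:
        for _ in range(batch):
            if len(order) >= total:
                break
            order.append((f, next_extent[f]))  # (file id, position in file)
            next_extent[f] += 1
        f = (f + 1) % n_files
    return order

def frag_pct(order):
    """% of scan-order transitions that jump to a non-adjacent physical extent."""
    transitions = list(zip(order, order[1:]))
    breaks = sum(1 for a, b in transitions
                 if not (b[0] == a[0] and b[1] == a[1] + 1))
    return 100.0 * breaks / len(transitions)

print(frag_pct(alloc_order(1, 1, 32)))  # single file: 0.0 - fully contiguous
print(frag_pct(alloc_order(4, 1, 32)))  # 4 files, 1 extent/turn: 100.0
print(frag_pct(alloc_order(4, 4, 32)))  # 4 files, 4 extents/turn: ~22.6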
What are the results of running your test on a database you've setup and can
prove to yourself only has a single data file?
Thanks
--
Paul Randal
Dev Lead, Microsoft SQL Server Storage Engine
This posting is provided "AS IS" with no warranties, and confers no rights.
"brett" <brettolges@.gmail.com> wrote in message
news:6de7e854.0406180528.4dc7243@.posting.google.com...
> Hi Paul,
> Thanks for your help on this.
> As far as the dbreindex every night, I guess that's just being done in
> an attempt to squeeze every bit of performance possible out of the db.
> I'm actually researching this on behalf of a client. I think he
> understands the dbreindex every night is probably overkill -- but he's
> experimenting trying to get the best performance possible.
> The white paper you pointed us to seems to imply that reindexing may
> not even have a very significant effect on performance, depending on
> the characteristics of the database. So we realize we may be barking
> up the wrong tree, but the fact that the fragmentation can not be
> eliminated seems strange, so we're wondering if it could be a clue
> into some hardware problem or configuration problem we may be able to
> fix.
> I'm going to double check and make sure there's just one file in the
> filegroup.
> Until I verify that, here's what we're working on:
> To try and eliminate as many variables as we could, we set up a simple
> test that we could take and run on diff't servers.
> The test just creates a dummy table and fills it with data with lots
> of single insert statements (to try and create fragmentation). Then
> we check the ext. scan frag before and after the dbcc dbreindex.
>
> Here's the test itself...
>
==============================================================================> use pubs
> go
> if exists (select * from dbo.sysobjects where id => object_id(N'[dbo].[FragTest]') and OBJECTPROPERTY(id, N'IsUserTable')
> = 1)
> drop table [dbo].[FragTest]
> GO
> CREATE TABLE [dbo].[FragTest] (
> [FragTestId] [int] IDENTITY (1, 1) NOT NULL ,
> [Field1] [varchar] (10) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
> [Field2] [varchar] (50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL
> ) ON [PRIMARY]
> GO
> CREATE CLUSTERED INDEX [idxFragTestId] ON
> [dbo].[FragTest]([FragTestId]) ON [PRIMARY]
> GO
> ALTER TABLE [dbo].[FragTest] ADD
> CONSTRAINT [PK_FragTest] PRIMARY KEY NONCLUSTERED
> (
> [FragTestId]
> ) ON [PRIMARY]
> GO
> CREATE INDEX [idxField1] ON [dbo].[FragTest]([Field1]) ON [PRIMARY]
> GO
>
> set nocount on
> declare @.i as integer
> set @.i = 0
> while @.i < 30000
> begin
> insert FragTest (Field1,Field2) values
> ('aaa','aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa')
> insert FragTest (Field1,Field2) values
> ('bbb','bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb')
> insert FragTest (Field1,Field2) values
> ('ccc','ccccccccccccccccccccccccccccccccccccccccccccccc')
> insert FragTest (Field1,Field2) values
> ('ddd','dddddddddddddddddddddddddddddddddddddddddddddd')
> insert FragTest (Field1,Field2) values
> ('eee','eeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeee')
> insert FragTest (Field1,Field2) values
> ('fff','fffffffffffffffffffffffffffffffffffffffffffffff')
> insert FragTest (Field1,Field2) values
> ('ggg','gggggggggggggggggggggggggggggggggggggggggggggg')
> insert FragTest (Field1,Field2) values
> ('hhh','hhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhh')
> insert FragTest (Field1,Field2) values
> ('iii','iiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiii')
> insert FragTest (Field1,Field2) values
> ('jjj','jjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjj')
> set @.i = @.i + 1
> end
>
> dbcc showcontig('FragTest')
> DBCC DBREINDEX (FragTest, '', 100)
> dbcc showcontig('FragTest')
>
> =======================================================================>
> Here are the results we got on the 'problem' server...
>
> Before DBCC REINDEX:
> DBCC SHOWCONTIG scanning 'FragTest' table...
> Table: 'FragTest' (1187767955); index ID: 1, database ID: 7
> TABLE level scan performed.
> - Pages Scanned........................: 2632
> - Extents Scanned.......................: 334
> - Extent Switches.......................: 333
> - Avg. Pages per Extent..................: 7.9
> - Scan Density [Best Count:Actual Count]......: 98.50% [329:334]
> - Logical Scan Fragmentation ..............: 0.04%
> - Extent Scan Fragmentation ...............: 59.88%
> - Avg. Bytes Free per Page................: 48.9
> - Avg. Page Density (full)................: 99.40%
> DBCC execution completed. If DBCC printed error messages, contact your
> system administrator.
>
> After DBCC REINDEX:
> DBCC SHOWCONTIG scanning 'FragTest' table...
> Table: 'FragTest' (1187767955); index ID: 1, database ID: 7
> TABLE level scan performed.
> - Pages Scanned........................: 2632
> - Extents Scanned.......................: 330
> - Extent Switches.......................: 329
> - Avg. Pages per Extent..................: 8.0
> - Scan Density [Best Count:Actual Count]......: 99.70% [329:330]
> - Logical Scan Fragmentation ..............: 0.00%
> - Extent Scan Fragmentation ...............: 31.82%
> - Avg. Bytes Free per Page................: 48.9
> - Avg. Page Density (full)................: 99.40%
> DBCC execution completed. If DBCC printed error messages, contact your
> system administrator.
>
>
>
>
> "Paul S Randal [MS]" <prandal@.online.microsoft.com> wrote in message
news:<u3zdscOVEHA.3540@.TK2MSFTNGP11.phx.gbl>...
> > I'd guess the free space in the file is fragmented then, preventing
> > contiguous extents being allocated when the index is rebuilt - although
this
> > much continuously present extent fragmentation really smacks of multiple
> > files. Are you sure there is only one file in the filegroup (I noticed
you
> > used primary file and primary filegroup in different responses)? Can you
> > post the output of dbcc checkalloc (as text, not as an attachment)?
> >
> > DBCC DBREINDEX will only fix the problem if there is contiguous free
space
> > in the file equal to the size of the index being rebuilt.
> >
> > Btw, why are you reindexing every night?
> >
> > Thanks
> >
> > --
> > Paul Randal
> > Dev Lead, Microsoft SQL Server Storage Engine
> >
> > This posting is provided "AS IS" with no warranties, and confers no
rights.
> >
> > "brett" <brettolges@.gmail.com> wrote in message
> > news:6de7e854.0406170436.21ae2e17@.posting.google.com...
> > > Hi Paul,
> > >
> > > This table and it's indexes are not built on multiple files. Both the
> > > table and it's indexes reside on the same RAID array, and all on the
> > > PRIMARY file.
> > >
> > > Everything i've found so far says that a dbcc dbreindex should fix the
> > > problem with extent scan fragmentation as long as a clustered index
> > > exists on the table. We've tried everything and I'm stumped.
> > >
> > > Actually it's not just one table, but several that show high (40-50%)
> > > extent scan fragmentation, even though we do dbcc dbreindexes on them
> > > every night. The Logical Scan Fragmentation goes down as we'd expect,
> > > but the dbreindex has very minimal (maybe 2 or 3%) effect on the
> > > extent scan fragmentation.
> > >
> > > Thanks for your response and any further clues!
> > >
> > >
> > >
> > > "Paul S Randal [MS]" <prandal@.online.microsoft.com> wrote in message
> > news:<OcSydv9UEHA.1292@.TK2MSFTNGP10.phx.gbl>...
> > >
> > > > The Extent Scan Fragmentation algorithm does not work on multiple files -
> > > > this is documented. You should use Logical Scan Fragmentation. Please read
> > > > the whitepaper below which will explain everything to you. Let me know if
> > > > you have any more questions.
> > > >
> > > >
> > > > http://www.microsoft.com/technet/prodtechnol/sql/2000/maintain/ss2kidbp.mspx
> > > >
> > > > Regards
> > > >
> > > > --
> > > > Paul Randal
> > > > Dev Lead, Microsoft SQL Server Storage Engine
> > > >
> > > > This posting is provided "AS IS" with no warranties, and confers no rights.
> > > >
> > > > "DeeJay Puar" <deejaypuar@.yahoo.com> wrote in message
> > > > news:1d6ce01c453ac$22a84570$a001280a@.phx.gbl...
> > > > > Hi,
> > > > >
> > > > > Just wanted to send a minor correction (I think). Here is
> > > > > what BOL says:
> > > > >
> > > > > Understanding Logical Scan Fragmentation and Extent Scan
> > > > > Fragmentation values:
> > > > >
> > > > > Logical Scan Fragmentation and, to a lesser extent, Extent
> > > > > Scan Fragmentation values give the best indication of a
> > > > > table's fragmentation level. Both these values should be
> > > > > as close to zero as possible (although a value from 0%
> > > > > through 10% may be acceptable). It should be noted that
> > > > > the Extent Scan Fragmentation value will be high if the
> > > > > index spans multiple files. Both methods of reducing
> > > > > fragmentation can be used to reduce these values.
> > > > >
> > > > > IMHO, RAID level has no effect on fragmentation. Do you
> > > > > have an Index/table that has multiple files? Also, how are
> > > > > you moving the data to your laptop? Backing up and restore
> > > > > a database file anywhere (RAID 0, 1, 2, 5, etc) does NOT
> > > > > remove fragmentation. You can test this by restoring the
> > > > > database to a different RAID 5 server and compare the DBCC
> > > > > output. It will be the same.
> > > > >
> > > > > hth
> > > > >
> > > > > DeeJay
> > > > >
> > > > >
> > > > > >--Original Message--
> > > > > >yes.
> > > > > >
> > > > > >I read an article that suggested "Extent Scan Fragmentation" was not a
> > > > > >reliable number to look at when data spanned multiple disks.
> > > > > >
> > > > > >I believe the more reliable number to use is scan density.
> > > > > >
> > > > > >Hope this helps
> > > > > >
> > > > > >
> > > > > >Greg Jackson
> > > > > >PDX, Oregon
> > > > > >
> > > > > >
> > > > > >.
> > > > > >
High Extent Scan Fragmentation
We have a certain table that returns a high Extent Scan Fragmentation
number from a dbcc showcontig.
Even right after a DBCC REINDEX, the table comes back with an Extent
Scan Fragmentation number of 40 - 50%.
The table does have a clustered index -- although it is an identity
field.
The table is on a db built on a RAID 5 array. When I take the exact
same table and data, and restore it to my laptop (single hard drive),
a DBCC REINDEX will drop the Extent Scan Fragmentation to 0%.
Any ideas what could be driving the Extent Scan Fragmentation up on
the other server? Could it be the fact that it's on a RAID drive?
Thanks|||yes.
I read an article that suggested "Extent Scan Fragmentation" was not a
reliable number to look at when data spanned multiple disks.
I believe the more reliable number to use is scan density.
Hope this helps
Greg Jackson
PDX, Oregon|||Hi,
Just wanted to send a minor correction (I think). Here is
what BOL says:
Understanding Logical Scan Fragmentation and Extent Scan
Fragmentation values:
Logical Scan Fragmentation and, to a lesser extent, Extent
Scan Fragmentation values give the best indication of a
table's fragmentation level. Both these values should be
as close to zero as possible (although a value from 0%
through 10% may be acceptable). It should be noted that
the Extent Scan Fragmentation value will be high if the
index spans multiple files. Both methods of reducing
fragmentation can be used to reduce these values.
IMHO, RAID level has no effect on fragmentation. Do you
have an Index/table that has multiple files? Also, how are
you moving the data to your laptop? Backing up and restore
a database file anywhere (RAID 0, 1, 2, 5, etc) does NOT
remove fragmentation. You can test this by restoring the
database to a different RAID 5 server and compare the DBCC
output. It will be the same.
hth
DeeJay
>--Original Message--
>yes.
>I read an article that suggested "Extent Scan Fragmentation" was not a
>reliable number to look at when data spanned multiple disks.
>I believe the more reliable number to use is scan density.
>Hope this helps
>
>Greg Jackson
>PDX, Oregon
>
>.
>|||The Extent Scan Fragmentation algorithm does not work on multiple files -
this is documented. You should use Logical Scan Fragmentation. Please read
the whitepaper below which will explain everything to you. Let me know if
you have any more questions.
http://www.microsoft.com/technet/pr...n/ss2kidbp.mspx
Regards
Paul Randal
Dev Lead, Microsoft SQL Server Storage Engine
This posting is provided "AS IS" with no warranties, and confers no rights.
"DeeJay Puar" <deejaypuar@.yahoo.com> wrote in message
news:1d6ce01c453ac$22a84570$a001280a@.phx.gbl...
> Hi,
> Just wanted to send a minor correction (I think). Here is
> what BOL says:
> Understanding Logical Scan Fragmentation and Extent Scan
> Fragmentation values:
> Logical Scan Fragmentation and, to a lesser extent, Extent
> Scan Fragmentation values give the best indication of a
> table's fragmentation level. Both these values should be
> as close to zero as possible (although a value from 0%
> through 10% may be acceptable). It should be noted that
> the Extent Scan Fragmentation value will be high if the
> index spans multiple files. Both methods of reducing
> fragmentation can be used to reduce these values.
> IMHO, RAID level has no effect on fragmentation. Do you
> have an Index/table that has multiple files? Also, how are
> you moving the data to your laptop? Backing up and restore
> a database file anywhere (RAID 0, 1, 2, 5, etc) does NOT
> remove fragmentation. You can test this by restoring the
> database to a different RAID 5 server and compare the DBCC
> output. It will be the same.
> hth
> DeeJay
>
|||Hi Paul,
This table and its indexes are not built on multiple files. Both the
table and its indexes reside on the same RAID array, and all on the
PRIMARY file.
Everything I've found so far says that a dbcc dbreindex should fix the
problem with extent scan fragmentation as long as a clustered index
exists on the table. We've tried everything and I'm stumped.
Actually it's not just one table, but several that show high (40-50%)
extent scan fragmentation, even though we do dbcc dbreindexes on them
every night. The Logical Scan Fragmentation goes down as we'd expect,
but the dbreindex has very minimal (maybe 2 or 3%) effect on the
extent scan fragmentation.
Thanks for your response and any further clues!
"Paul S Randal [MS]" <prandal@.online.microsoft.com> wrote in message
news:<OcSydv9UEHA.1292@.TK2MSFTNGP10.phx.gbl>...
> The Extent Scan Fragmentation algorithm does not work on multiple files -
> this is documented. You should use Logical Scan Fragmentation. Please read
> the whitepaper below which will explain everything to you. Let me know if
> you have any more questions.
> http://www.microsoft.com/technet/prodtechnol/sql/2000/maintain/ss2kidbp.mspx
> Regards
> --
> Paul Randal
> Dev Lead, Microsoft SQL Server Storage Engine
> This posting is provided "AS IS" with no warranties, and confers no rights.
> "DeeJay Puar" <deejaypuar@.yahoo.com> wrote in message
> news:1d6ce01c453ac$22a84570$a001280a@.phx.gbl...
|||Hi DeeJay,
> IMHO, RAID level has no effect on fragmentation. Do you
> have an Index/table that has multiple files?
No, the table and its indexes all exist on the PRIMARY filegroup.
> Backing up and restore
> a database file anywhere (RAID 0, 1, 2, 5, etc) does NOT
> remove fragmentation.
Right, I agree. What I'm puzzled by is this: We decided to run a very
simple test to try and figure out if this was a hardware or software
problem. We wrote a script that creates a very simple table and fills
it with data (30,000 rows -- table is about 10 MB). This table ends
up having extent scan fragmentation because of the 30,000 inserts. On
my laptop (1 single hard drive), a dbcc dbreindex clears up this extent
scan fragmentation right away, no problem. On the production server
with the RAID array, the dbcc reindex has almost no effect on the
extent scan fragmentation.
"DeeJay Puar" <deejaypuar@.yahoo.com> wrote in message news:<1d6ce01c453ac$22a84570$a001280a@.phx.gbl>...
> Hi,
> Just wanted to send a minor correction (I think). Here is
> what BOL says:
> Understanding Logical Scan Fragmentation and Extent Scan
> Fragmentation values:
> Logical Scan Fragmentation and, to a lesser extent, Extent
> Scan Fragmentation values give the best indication of a
> table's fragmentation level. Both these values should be
> as close to zero as possible (although a value from 0%
> through 10% may be acceptable). It should be noted that
> the Extent Scan Fragmentation value will be high if the
> index spans multiple files. Both methods of reducing
> fragmentation can be used to reduce these values.
> IMHO, RAID level has no effect on fragmentation. Do you
> have an Index/table that has multiple files? Also, how are
> you moving the data to your laptop? Backing up and restore
> a database file anywhere (RAID 0, 1, 2, 5, etc) does NOT
> remove fragmentation. You can test this by restoring the
> database to a different RAID 5 server and compare the DBCC
> output. It will be the same.
> hth
> DeeJay
>
|||Hi Brett,
Before Paul posted the article link, I had read it a couple
weeks ago, and I re-read it. Here is something else I found:
DBCC INDEXDEFRAG does not help to untangle indexes that
have become interleaved within a data file. Likewise, DBCC
INDEXDEFRAG does not correct extent fragmentation on
indexes. Interleaving occurs when index extents (a group
of eight index pages) for an index are not completely
contiguous within the data file, leaving extents from one
or more indexes intermingled in the file. Interleaving can
occur even when there is no logical fragmentation, because
all index pages are not necessarily contiguous, even when
logical ordering matches physical ordering.
Even with this limitation, the tests showed that DBCC
INDEXDEFRAG can be as effective at improving performance
of workloads as DBCC DBREINDEX. In fact, the tests showed
that even if you can rebuild indexes so that there is
minimal interleaving, this does not have a significant
effect on performance. Reducing the logical fragmentation
levels had a much greater impact on workload performance.
This is why it is recommended that you focus on logical
fragmentation and page density levels when examining
fragmentation on your indexes.
My Notes:
How big is the table in production? If it's 10 MB like your
test, it should just work fine. Also, the 'higher' extent
scan fragmentation seems to occur when there is
interleaving. This could happen if your production table
is very large. I do not think it is RAID level related,
its the amount of data and how it is spread across on the
disk.
Also, did you size the database properly in production or
did it autogrow during its use?
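For reference, the defrag alternative the excerpt describes can be tried like this (a sketch only; the database, table, and index names are taken from Brett's test script, so substitute your own):

```sql
-- Sketch: measure, defragment in place, measure again.
-- DBCC INDEXDEFRAG compacts pages and fixes logical ordering online,
-- but per the whitepaper excerpt it does NOT untangle interleaved extents.
USE pubs
GO
DBCC SHOWCONTIG ('FragTest')
DBCC INDEXDEFRAG (pubs, 'FragTest', 'idxFragTestId')
DBCC SHOWCONTIG ('FragTest')
```

Comparing the two SHOWCONTIG outputs should show Logical Scan Fragmentation drop while Extent Scan Fragmentation stays roughly where it was.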
DeeJay
>--Original Message--
>Hi DeeJay,
>
>No, the table and its indexes all exist on the PRIMARY filegroup.
>
>Right, I agree. What I'm puzzled by is this: We decided to run a very
>simple test to try and figure out if this was a hardware or software
>problem. We wrote a script that creates a very simple table and fills
>it with data (30,000 rows -- table is about 10 MB). This table ends
>up having extent scan fragmentation because of the 30,000 inserts. On
>my laptop (1 single hard drive), a dbcc dbreindex clears up this extent
>scan fragmentation right away, no problem. On the production server
>with the RAID array, the dbcc reindex has almost no effect on the
>extent scan fragmentation.
>
>
>"DeeJay Puar" <deejaypuar@.yahoo.com> wrote in message
news:<1d6ce01c453ac$22a84570$a001280a@.phx.gbl>...
>.
>|||I'd guess the free space in the file is fragmented then, preventing
contiguous extents being allocated when the index is rebuilt - although this
much continuously present extent fragmentation really smacks of multiple
files. Are you sure there is only one file in the filegroup (I noticed you
used primary file and primary filegroup in different responses)? Can you
post the output of dbcc checkalloc (as text, not as an attachment)?
DBCC DBREINDEX will only fix the problem if there is contiguous free space
in the file equal to the size of the index being rebuilt.
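As a quick sanity check alongside the checkalloc output, something like this (a sketch; run it in the suspect database, 'pubs' is a placeholder) lists every physical file and its filegroup:

```sql
-- Sketch: confirm how many data files the database really has.
USE pubs  -- substitute the real database name
GO
-- One row per physical file; the filegroup column shows the owning filegroup.
EXEC sp_helpfile
GO
-- The allocation detail requested above:
DBCC CHECKALLOC ('pubs')
```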
Btw, why are you reindexing every night?
Thanks
Paul Randal
Dev Lead, Microsoft SQL Server Storage Engine
This posting is provided "AS IS" with no warranties, and confers no rights.
"brett" <brettolges@.gmail.com> wrote in message
news:6de7e854.0406170436.21ae2e17@.posting.google.com...
> Hi Paul,
> This table and it's indexes are not built on multiple files. Both the
> table and it's indexes reside on the same RAID array, and all on the
> PRIMARY file.
> Everything i've found so far says that a dbcc dbreindex should fix the
> problem with extent scan fragmentation as long as a clustered index
> exists on the table. We've tried everything and I'm stumped.
> Actually it's not just one table, but several that show high (40-50%)
> extent scan fragmentation, even though we do dbcc dbreindexes on them
> every night. The Logical Scan Fragmentation goes down as we'd expect,
> but the dbreindex has very minimal (maybe 2 or 3%) effect on the
> extent scan fragmentation.
> Thanks for your response and any further clues!
>
> "Paul S Randal [MS]" <prandal@.online.microsoft.com> wrote in message
> news:<OcSydv9UEHA.1292@.TK2MSFTNGP10.phx.gbl>...
|||Hi Paul,
Thanks for your help on this.
As far as the dbreindex every night, I guess that's just being done in
an attempt to squeeze every bit of performance possible out of the db.
I'm actually researching this on behalf of a client. I think he
understands the dbreindex every night is probably overkill -- but he's
experimenting trying to get the best performance possible.
The white paper you pointed us to seems to imply that reindexing may
not even have a very significant effect on performance, depending on
the characteristics of the database. So we realize we may be barking
up the wrong tree, but the fact that the fragmentation cannot be
eliminated seems strange, so we're wondering if it could be a clue
into some hardware problem or configuration problem we may be able to
fix.
I'm going to double check and make sure there's just one file in the
filegroup.
Until I verify that, here's what we're working on:
To try and eliminate as many variables as we could, we set up a simple
test that we could take and run on diff't servers.
The test just creates a dummy table and fills it with data with lots
of single insert statements (to try and create fragmentation). Then
we check the ext. scan frag before and after the dbcc dbreindex.
Here's the test itself...
==========================================================================
use pubs
go
if exists (select * from dbo.sysobjects where id =
object_id(N'[dbo].[FragTest]') and OBJECTPROPERTY(id, N'IsUserTable') = 1)
drop table [dbo].[FragTest]
GO
CREATE TABLE [dbo].[FragTest] (
[FragTestId] [int] IDENTITY (1, 1) NOT NULL ,
[Field1] [varchar] (10) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[Field2] [varchar] (50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL
) ON [PRIMARY]
GO
CREATE CLUSTERED INDEX [idxFragTestId] ON
[dbo].[FragTest]([FragTestId]) ON [PRIMARY]
GO
ALTER TABLE [dbo].[FragTest] ADD
CONSTRAINT [PK_FragTest] PRIMARY KEY NONCLUSTERED
(
[FragTestId]
) ON [PRIMARY]
GO
CREATE INDEX [idxField1] ON [dbo].[FragTest]([Field1]) ON [PRIMARY]
GO
set nocount on
declare @i as integer
set @i = 0
while @i < 30000
begin
insert FragTest (Field1,Field2) values
('aaa','aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa')
insert FragTest (Field1,Field2) values
('bbb','bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb')
insert FragTest (Field1,Field2) values
('ccc','cccccccccccccccccccccccccccccccccccccccccccccc')
insert FragTest (Field1,Field2) values
('ddd','dddddddddddddddddddddddddddddddddddddddddddddd')
insert FragTest (Field1,Field2) values
('eee','eeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeee')
insert FragTest (Field1,Field2) values
('fff','ffffffffffffffffffffffffffffffffffffffffffffff')
insert FragTest (Field1,Field2) values
('ggg','gggggggggggggggggggggggggggggggggggggggggggggg')
insert FragTest (Field1,Field2) values
('hhh','hhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhh')
insert FragTest (Field1,Field2) values
('iii','iiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiii')
insert FragTest (Field1,Field2) values
('jjj','jjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjj')
set @i = @i + 1
end
dbcc showcontig('FragTest')
DBCC DBREINDEX (FragTest, '', 100)
dbcc showcontig('FragTest')
==========================================================================
Here are the results we got on the 'problem' server...
Before DBCC REINDEX:
DBCC SHOWCONTIG scanning 'FragTest' table...
Table: 'FragTest' (1187767955); index ID: 1, database ID: 7
TABLE level scan performed.
- Pages Scanned........................: 2632
- Extents Scanned.......................: 334
- Extent Switches.......................: 333
- Avg. Pages per Extent..................: 7.9
- Scan Density [Best Count:Actual Count]......: 98.50% [329:334]
- Logical Scan Fragmentation ..............: 0.04%
- Extent Scan Fragmentation ...............: 59.88%
- Avg. Bytes Free per Page................: 48.9
- Avg. Page Density (full)................: 99.40%
DBCC execution completed. If DBCC printed error messages, contact your
system administrator.
After DBCC REINDEX:
DBCC SHOWCONTIG scanning 'FragTest' table...
Table: 'FragTest' (1187767955); index ID: 1, database ID: 7
TABLE level scan performed.
- Pages Scanned........................: 2632
- Extents Scanned.......................: 330
- Extent Switches.......................: 329
- Avg. Pages per Extent..................: 8.0
- Scan Density [Best Count:Actual Count]......: 99.70% [329:330]
- Logical Scan Fragmentation ..............: 0.00%
- Extent Scan Fragmentation ...............: 31.82%
- Avg. Bytes Free per Page................: 48.9
- Avg. Page Density (full)................: 99.40%
DBCC execution completed. If DBCC printed error messages, contact your
system administrator.
"Paul S Randal [MS]" <prandal@.online.microsoft.com> wrote in message news:<u3zdscOVEHA.3540@.TK2MSFTNGP11.phx.gbl>...
> I'd guess the free space in the file is fragmented then, preventing
> contiguous extents being allocated when the index is rebuilt - although this
> much continuously present extent fragmentation really smacks of multiple
> files. Are you sure there is only one file in the filegroup (I noticed you
> used primary file and primary filegroup in different responses)? Can you
> post the output of dbcc checkalloc (as text, not as an attachment)?
> DBCC DBREINDEX will only fix the problem if there is contiguous free space
> in the file equal to the size of the index being rebuilt.
> Btw, why are you reindexing every night?
> Thanks
> --
> Paul Randal
> Dev Lead, Microsoft SQL Server Storage Engine
> This posting is provided "AS IS" with no warranties, and confers no rights.
> "brett" <brettolges@.gmail.com> wrote in message
> news:6de7e854.0406170436.21ae2e17@.posting.google.com...
> news:<OcSydv9UEHA.1292@.TK2MSFTNGP10.phx.gbl>...
|||This looks like you've got multiple files. You're running showcontig on the
clustered index over an identity column - which is going to monotonically
increase with each insert, so the inserts are ordered in the index
perfectly - that's why there's no logical scan fragmentation. The allocation
system allocates one extent at a time when doing regular inserts, and
round-robins them through the available files, thus creating extent scan
fragmentation. I'd guess there are 3 or 4 data files from the data below.
When you rebuild the index, it's probably allocating 4 extents at a time per
file, thus reducing the extent scan fragmentation reported. Even though the
extent scan fragmentation algorithm doesn't work very well on multiple
files, it will still respond to bigger groups of contiguous extents per
file.
What are the results of running your test on a database you've setup and can
prove to yourself only has a single data file?
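One way to set up that re-test (a sketch; 'FragTestDB' is a placeholder name) is to create a scratch database and confirm its file layout before running the repro script in it:

```sql
-- Sketch: build a database known to have exactly one data file.
CREATE DATABASE FragTestDB
GO
USE FragTestDB
GO
-- Expect two rows: one data file (groupid 1) and one log file (groupid 0).
SELECT fileid, groupid, name, filename FROM sysfiles
```

If the scratch database shows the same stubborn extent scan fragmentation after a rebuild, the multiple-files explanation can be ruled out.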
Thanks
Paul Randal
Dev Lead, Microsoft SQL Server Storage Engine
This posting is provided "AS IS" with no warranties, and confers no rights.
"brett" <brettolges@.gmail.com> wrote in message
news:6de7e854.0406180528.4dc7243@.posting.google.com...
> Hi Paul,
> Thanks for your help on this.
> As far as the dbreindex every night, I guess that's just being done in
> an attempt to squeeze every bit of performance possible out of the db.
> I'm actually researching this on behalf of a client. I think he
> understands the dbreindex every night is probably overkill -- but he's
> experimenting trying to get the best performance possible.
> The white paper you pointed us to seems to imply that reindexing may
> not even have a very significant effect on performance, depending on
> the characteristics of the database. So we realize we may be barking
> up the wrong tree, but the fact that the fragmentation can not be
> eliminated seems strange, so we're wondering if it could be a clue
> into some hardware problem or configuration problem we may be able to
> fix.
> I'm going to double check and make sure there's just one file in the
> filegroup.
> Until I verify that, here's what we're working on:
> To try and eliminate as many variables as we could, we set up a simple
> test that we could take and run on diff't servers.
> The test just creates a dummy table and fills it with data with lots
> of single insert statements (to try and create fragmentation). Then
> we check the ext. scan frag before and after the dbcc dbreindex.
>
> Here's the test itself...
>
> ==========================================================================
> use pubs
> go
> if exists (select * from dbo.sysobjects where id =
> object_id(N'[dbo].[FragTest]') and OBJECTPROPERTY(id, N'IsUserTable') = 1)
> drop table [dbo].[FragTest]
> GO
> CREATE TABLE [dbo].[FragTest] (
> [FragTestId] [int] IDENTITY (1, 1) NOT NULL ,
> [Field1] [varchar] (10) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
> [Field2] [varchar] (50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL
> ) ON [PRIMARY]
> GO
> CREATE CLUSTERED INDEX [idxFragTestId] ON
> [dbo].[FragTest]([FragTestId]) ON [PRIMARY]
> GO
> ALTER TABLE [dbo].[FragTest] ADD
> CONSTRAINT [PK_FragTest] PRIMARY KEY NONCLUSTERED
> (
> [FragTestId]
> ) ON [PRIMARY]
> GO
> CREATE INDEX [idxField1] ON [dbo].[FragTest]([Field1]) ON [PRIMARY]
> GO
>
> set nocount on
> declare @i as integer
> set @i = 0
> while @i < 30000
> begin
> insert FragTest (Field1,Field2) values
> ('aaa','aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa')
> insert FragTest (Field1,Field2) values
> ('bbb','bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb')
> insert FragTest (Field1,Field2) values
> ('ccc','cccccccccccccccccccccccccccccccccccccccccccccc')
> insert FragTest (Field1,Field2) values
> ('ddd','dddddddddddddddddddddddddddddddddddddddddddddd')
> insert FragTest (Field1,Field2) values
> ('eee','eeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeee')
> insert FragTest (Field1,Field2) values
> ('fff','ffffffffffffffffffffffffffffffffffffffffffffff')
> insert FragTest (Field1,Field2) values
> ('ggg','gggggggggggggggggggggggggggggggggggggggggggggg')
> insert FragTest (Field1,Field2) values
> ('hhh','hhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhh')
> insert FragTest (Field1,Field2) values
> ('iii','iiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiii')
> insert FragTest (Field1,Field2) values
> ('jjj','jjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjj')
> set @i = @i + 1
> end
>
> dbcc showcontig('FragTest')
> DBCC DBREINDEX (FragTest, '', 100)
> dbcc showcontig('FragTest')
>
> ==========================================================================
>
> Here are the results we got on the 'problem' server...
>
> Before DBCC REINDEX:
> DBCC SHOWCONTIG scanning 'FragTest' table...
> Table: 'FragTest' (1187767955); index ID: 1, database ID: 7
> TABLE level scan performed.
> - Pages Scanned........................: 2632
> - Extents Scanned.......................: 334
> - Extent Switches.......................: 333
> - Avg. Pages per Extent..................: 7.9
> - Scan Density [Best Count:Actual Count]......: 98.50% [329:334]
> - Logical Scan Fragmentation ..............: 0.04%
> - Extent Scan Fragmentation ...............: 59.88%
> - Avg. Bytes Free per Page................: 48.9
> - Avg. Page Density (full)................: 99.40%
> DBCC execution completed. If DBCC printed error messages, contact your
> system administrator.
>
> After DBCC REINDEX:
> DBCC SHOWCONTIG scanning 'FragTest' table...
> Table: 'FragTest' (1187767955); index ID: 1, database ID: 7
> TABLE level scan performed.
> - Pages Scanned........................: 2632
> - Extents Scanned.......................: 330
> - Extent Switches.......................: 329
> - Avg. Pages per Extent..................: 8.0
> - Scan Density [Best Count:Actual Count]......: 99.70% [329:330]
> - Logical Scan Fragmentation ..............: 0.00%
> - Extent Scan Fragmentation ...............: 31.82%
> - Avg. Bytes Free per Page................: 48.9
> - Avg. Page Density (full)................: 99.40%
> DBCC execution completed. If DBCC printed error messages, contact your
> system administrator.
>
>
>
>
> "Paul S Randal [MS]" <prandal@.online.microsoft.com> wrote in message
> news:<u3zdscOVEHA.3540@.TK2MSFTNGP11.phx.gbl>...
number from a dbcc showcontig.
Even right after a DBCC REINDEX, the table comes back with an Extent
Scan Fragmentation number of 40 - 50%.
The table does have a clustered index -- although it is a identity
field.
The table is on a db built on a RAID 5 array. When I take the exact
same table and data, and restore it to my laptop (single hard drive),
a DBCC REINDEX will drop the Extent Scan Fragmentation to 0%.
Any ideas what could be driving the Extent Scan Fragmentation up on
the other server? Could it be the fact that it's on a RAID drive?
Thanksyes.
I read an article that suggested "Extent Scan Fragmentation" was not a
reliable number to look at when data spanned multiple disks.
I believe the more reliable number to use is scan density.
Hope this helps
Greg Jackson
PDX, Oregon|||Hi,
Just wanted to send a minor correction (I think). Here is
what BOL says:
Understanding Logical Scan Fragmentation and Extent Scan
Fragmentation values:
Logical Scan Fragmentation and, to a lesser extent, Extent
Scan Fragmentation values give the best indication of a
table's fragmentation level. Both these values should be
as close to zero as possible (although a value from 0%
through 10% may be acceptable). It should be noted that
the Extent Scan Fragmentation value will be high if the
index spans multiple files. Both methods of reducing
fragmentation can be used to reduce these values.
IMHO, RAID level has no effect on fragementation. Do you
have an Index/table that has multiple files? Also, how are
you moving the data to your laptop? Backing up and restore
a database file anywhere (RAID 0, 1, 2, 5, etc) does NOT
remove fragmentation. You can test this by restoring the
database to a different RAID 5 server and compare the DBCC
output. It will be the same.
hth
DeeJay
>--Original Message--
>yes.
>I read an article that suggested "Extent Scan
Fragmentation" was not a
>reliable number to look at when data spanned multiple
disks.
>I believe the more reliable number to use is scan density.
>Hope this helps
>
>Greg Jackson
>PDX, Oregon
>
>.
>|||The Extent Scan Fragmentation algorithm does not work on multiple files -
this is documented. You shoudl use Logical Scan Fragmentation. Please read
the whitepaper below which will explain everything to you. Let me know if
you have any more questions.
http://www.microsoft.com/technet/pr...n/ss2kidbp.mspx
Regards
Paul Randal
Dev Lead, Microsoft SQL Server Storage Engine
This posting is provided "AS IS" with no warranties, and confers no rights.
"DeeJay Puar" <deejaypuar@.yahoo.com> wrote in message
news:1d6ce01c453ac$22a84570$a001280a@.phx
.gbl...[vbcol=seagreen]
> Hi,
> Just wanted to send a minor correction (I think). Here is
> what BOL says:
> Understanding Logical Scan Fragmentation and Extent Scan
> Fragmentation values:
> Logical Scan Fragmentation and, to a lesser extent, Extent
> Scan Fragmentation values give the best indication of a
> table's fragmentation level. Both these values should be
> as close to zero as possible (although a value from 0%
> through 10% may be acceptable). It should be noted that
> the Extent Scan Fragmentation value will be high if the
> index spans multiple files. Both methods of reducing
> fragmentation can be used to reduce these values.
> IMHO, RAID level has no effect on fragementation. Do you
> have an Index/table that has multiple files? Also, how are
> you moving the data to your laptop? Backing up and restore
> a database file anywhere (RAID 0, 1, 2, 5, etc) does NOT
> remove fragmentation. You can test this by restoring the
> database to a different RAID 5 server and compare the DBCC
> output. It will be the same.
> hth
> DeeJay
>
> Fragmentation" was not a
> disks.|||Hi Paul,
This table and it's indexes are not built on multiple files. Both the
table and it's indexes reside on the same RAID array, and all on the
PRIMARY file.
Everything i've found so far says that a dbcc dbreindex should fix the
problem with extent scan fragmentation as long as a clustered index
exists on the table. We've tried everything and I'm stumped.
Actually it's not just one table, but several that show high (40-50%)
extent scan fragmentation, even though we do dbcc dbreindexes on them
every night. The Logical Scan Fragmentation goes down as we'd expect,
but the dbreindex has very minimal (maybe 2 or 3%) effect on the
extent scan fragmentation.
Thanks for your response and any further clues!
"Paul S Randal [MS]" <prandal@.online.microsoft.com> wrote in message
news:<OcSydv9UEHA.1292@.TK2MSFTNGP10.phx.gbl>...
> The Extent Scan Fragmentation algorithm does not work on multiple files -
> this is documented. You should use Logical Scan Fragmentation. Please read
> the whitepaper below, which will explain everything to you. Let me know if
> you have any more questions.
> http://www.microsoft.com/technet/prodtechnol/sql/2000/maintain/ss2kidbp.mspx
> Regards
> --
> Paul Randal
> Dev Lead, Microsoft SQL Server Storage Engine
> This posting is provided "AS IS" with no warranties, and confers no rights.
|||Hi DeeJay,
> IMHO, RAID level has no effect on fragmentation. Do you
> have an index/table that has multiple files?
No, the table and its indexes all exist on the PRIMARY filegroup.
> Backing up and restore
> a database file anywhere (RAID 0, 1, 2, 5, etc) does NOT
> remove fragmentation.
Right, I agree. What I'm puzzled by is this: we decided to run a very
simple test to try to figure out whether this was a hardware or
software problem. We wrote a script that creates a very simple table
and fills it with data (30,000 rows -- the table is about 10 MB). This
table ends up having extent scan fragmentation because of the 30,000
inserts. On my laptop (a single hard drive), a DBCC DBREINDEX clears
up this extent scan fragmentation right away, no problem. On the
production server with the RAID array, the DBCC DBREINDEX has almost
no effect on the extent scan fragmentation.
"DeeJay Puar" <deejaypuar@.yahoo.com> wrote in message
news:<1d6ce01c453ac$22a84570$a001280a@.phx.gbl>...
> Hi,
> Just wanted to send a minor correction (I think). Here is
> what BOL says:
> Understanding Logical Scan Fragmentation and Extent Scan
> Fragmentation values:
> Logical Scan Fragmentation and, to a lesser extent, Extent
> Scan Fragmentation values give the best indication of a
> table's fragmentation level. Both these values should be
> as close to zero as possible (although a value from 0%
> through 10% may be acceptable). It should be noted that
> the Extent Scan Fragmentation value will be high if the
> index spans multiple files. Both methods of reducing
> fragmentation can be used to reduce these values.
> IMHO, RAID level has no effect on fragmentation. Do you
> have an index/table that has multiple files? Also, how are
> you moving the data to your laptop? Backing up and restoring
> a database file anywhere (RAID 0, 1, 2, 5, etc.) does NOT
> remove fragmentation. You can test this by restoring the
> database to a different RAID 5 server and comparing the DBCC
> output. It will be the same.
> hth
> DeeJay
|||Hi Brett,
Before Paul posted the article link, I had read it a couple of
weeks ago, and I re-read it. Here is something else I found:
DBCC INDEXDEFRAG does not help to untangle indexes that
have become interleaved within a data file. Likewise, DBCC
INDEXDEFRAG does not correct extent fragmentation on
indexes. Interleaving occurs when index extents (a group
of eight index pages) for an index are not completely
contiguous within the data file, leaving extents from one
or more indexes intermingled in the file. Interleaving can
occur even when there is no logical fragmentation, because
all index pages are not necessarily contiguous, even when
logical ordering matches physical ordering.
Even with this limitation, the tests showed that DBCC
INDEXDEFRAG can be as effective at improving performance
of workloads as DBCC DBREINDEX. In fact, the tests showed
that even if you can rebuild indexes so that there is
minimal interleaving, this does not have a significant
effect on performance. Reducing the logical fragmentation
levels had a much greater impact on workload performance.
This is why it is recommended that you focus on logical
fragmentation and page density levels when examining
fragmentation on your indexes.
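For reference, a minimal sketch of the two commands being compared above, using SQL Server 2000 syntax (the database, table, and index names are placeholders taken from the test script later in this thread):

```sql
-- DBCC INDEXDEFRAG: online; compacts and reorders leaf-level pages in
-- place, but per the passage above does NOT fix extent fragmentation
-- or interleaving.
DBCC INDEXDEFRAG ('pubs', 'FragTest', 'idxFragTestId')
GO
-- DBCC DBREINDEX: offline rebuild; can fix extent fragmentation, but
-- only if enough contiguous free space exists in the file.
DBCC DBREINDEX ('FragTest', 'idxFragTestId', 100)
```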
My Notes:
How big is the table in production? If it's 10 MB like your
test, it should work just fine. Also, the 'higher' extent
scan fragmentation seems to occur when there is
interleaving. This could happen if your production table
is very large. I do not think it is RAID-level related;
it's the amount of data and how it is spread across the
disk.
Also, did you size the database properly in production, or
did it autogrow during its use?
DeeJay
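A quick way to answer DeeJay's last two questions -- how many files the database has and how it is configured to grow -- is something like the following (SQL Server 2000; substitute the real database name for pubs):

```sql
USE pubs
GO
-- One row per file; the 'growth' column shows the autogrow increment
EXEC sp_helpfile
GO
-- Or query sysfiles directly: size is reported in 8 KB pages
SELECT name, filename, size * 8 / 1024 AS size_mb, growth
FROM sysfiles
```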
|||I'd guess the free space in the file is fragmented then, preventing
contiguous extents being allocated when the index is rebuilt - although this
much continuously present extent fragmentation really smacks of multiple
files. Are you sure there is only one file in the filegroup (I noticed you
used primary file and primary filegroup in different responses)? Can you
post the output of dbcc checkalloc (as text, not as an attachment)?
DBCC DBREINDEX will only fix the problem if there is contiguous free space
in the file equal to the size of the index being rebuilt.
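To produce the text output Paul is asking for, something along these lines can be run from Query Analyzer with results-to-text (Ctrl+T) enabled:

```sql
-- Allocation report for the whole database; run with results-to-text
-- so the output can be pasted into a post rather than attached
DBCC CHECKALLOC ('pubs')
GO
-- The per-table fragmentation counters discussed in this thread
DBCC SHOWCONTIG ('FragTest')
```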
Btw, why are you reindexing every night?
Thanks
Paul Randal
Dev Lead, Microsoft SQL Server Storage Engine
This posting is provided "AS IS" with no warranties, and confers no rights.
"brett" <brettolges@.gmail.com> wrote in message
news:6de7e854.0406170436.21ae2e17@.posting.google.com...
|||Hi Paul,
Thanks for your help on this.
As for the dbreindex every night, I guess that's just being done in
an attempt to squeeze every bit of performance possible out of the db.
I'm actually researching this on behalf of a client. I think he
understands that the dbreindex every night is probably overkill -- but
he's experimenting, trying to get the best performance possible.
The white paper you pointed us to seems to imply that reindexing may
not even have a very significant effect on performance, depending on
the characteristics of the database. So we realize we may be barking
up the wrong tree, but the fact that the fragmentation cannot be
eliminated seems strange, and we're wondering if it could be a clue to
some hardware or configuration problem we may be able to fix.
I'm going to double check and make sure there's just one file in the
filegroup.
Until I verify that, here's what we're working on:
To eliminate as many variables as we could, we set up a simple
test that we could take and run on different servers.
The test just creates a dummy table and fills it with data via lots
of single insert statements (to try to create fragmentation). Then
we check the extent scan fragmentation before and after the DBCC
DBREINDEX.
Here's the test itself...
======================================================================
use pubs
go
if exists (select * from dbo.sysobjects where id =
object_id(N'[dbo].[FragTest]') and OBJECTPROPERTY(id, N'IsUserTable') = 1)
drop table [dbo].[FragTest]
GO
CREATE TABLE [dbo].[FragTest] (
[FragTestId] [int] IDENTITY (1, 1) NOT NULL ,
[Field1] [varchar] (10) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[Field2] [varchar] (50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL
) ON [PRIMARY]
GO
CREATE CLUSTERED INDEX [idxFragTestId] ON
[dbo].[FragTest]([FragTestId]) ON [PRIMARY]
GO
ALTER TABLE [dbo].[FragTest] ADD
CONSTRAINT [PK_FragTest] PRIMARY KEY NONCLUSTERED
(
[FragTestId]
) ON [PRIMARY]
GO
CREATE INDEX [idxField1] ON [dbo].[FragTest]([Field1]) ON [PRIMARY]
GO
set nocount on
declare @i as integer
set @i = 0
while @i < 30000
begin
insert FragTest (Field1,Field2) values ('aaa','aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa')
insert FragTest (Field1,Field2) values ('bbb','bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb')
insert FragTest (Field1,Field2) values ('ccc','ccccccccccccccccccccccccccccccccccccccccccccccc')
insert FragTest (Field1,Field2) values ('ddd','dddddddddddddddddddddddddddddddddddddddddddddd')
insert FragTest (Field1,Field2) values ('eee','eeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeee')
insert FragTest (Field1,Field2) values ('fff','fffffffffffffffffffffffffffffffffffffffffffffff')
insert FragTest (Field1,Field2) values ('ggg','gggggggggggggggggggggggggggggggggggggggggggggg')
insert FragTest (Field1,Field2) values ('hhh','hhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhh')
insert FragTest (Field1,Field2) values ('iii','iiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiii')
insert FragTest (Field1,Field2) values ('jjj','jjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjj')
set @i = @i + 1
end
dbcc showcontig('FragTest')
DBCC DBREINDEX (FragTest, '', 100)
dbcc showcontig('FragTest')
======================================================================
Here are the results we got on the 'problem' server...
Before DBCC REINDEX:
DBCC SHOWCONTIG scanning 'FragTest' table...
Table: 'FragTest' (1187767955); index ID: 1, database ID: 7
TABLE level scan performed.
- Pages Scanned........................: 2632
- Extents Scanned.......................: 334
- Extent Switches.......................: 333
- Avg. Pages per Extent..................: 7.9
- Scan Density [Best Count:Actual Count]......: 98.50% [329:334]
- Logical Scan Fragmentation ..............: 0.04%
- Extent Scan Fragmentation ...............: 59.88%
- Avg. Bytes Free per Page................: 48.9
- Avg. Page Density (full)................: 99.40%
DBCC execution completed. If DBCC printed error messages, contact your
system administrator.
After DBCC REINDEX:
DBCC SHOWCONTIG scanning 'FragTest' table...
Table: 'FragTest' (1187767955); index ID: 1, database ID: 7
TABLE level scan performed.
- Pages Scanned........................: 2632
- Extents Scanned.......................: 330
- Extent Switches.......................: 329
- Avg. Pages per Extent..................: 8.0
- Scan Density [Best Count:Actual Count]......: 99.70% [329:330]
- Logical Scan Fragmentation ..............: 0.00%
- Extent Scan Fragmentation ...............: 31.82%
- Avg. Bytes Free per Page................: 48.9
- Avg. Page Density (full)................: 99.40%
DBCC execution completed. If DBCC printed error messages, contact your
system administrator.
"Paul S Randal [MS]" <prandal@.online.microsoft.com> wrote in message
news:<u3zdscOVEHA.3540@.TK2MSFTNGP11.phx.gbl>...
|||This looks like you've got multiple files. You're running showcontig on the
clustered index over an identity column - which is going to monotonically
increase with each insert, so the inserts are ordered in the index
perfectly - that's why there's no logical scan fragmentation. The allocation
system allocates one extent at a time when doing regular inserts, and
round-robins them through the available files, thus creating extent scan
fragmentation. I'd guess there are 3 or 4 data files from the data below.
When you rebuild the index, it's probably allocating 4 extents at a time per
file, thus reducing the extent scan fragmentation reported. Even though the
extent scan fragmentation algorithm doesn't work very well on multiple
files, it will still respond to bigger groups of contiguous extents per
file.
What are the results of running your test on a database you've set up
and can prove to yourself has only a single data file?
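One way to do that (hypothetical file paths and sizes; adjust for the server under test) is to create a throwaway database with exactly one data file and rerun the repro script in it:

```sql
-- Single data file, pre-sized so autogrow does not fire mid-test
CREATE DATABASE FragTestDb
ON PRIMARY
( NAME = FragTestDb_data, FILENAME = 'C:\FragTestDb.mdf', SIZE = 50MB )
LOG ON
( NAME = FragTestDb_log, FILENAME = 'C:\FragTestDb.ldf', SIZE = 20MB )
GO
USE FragTestDb
GO
-- Confirm there is exactly one data file before running the test
EXEC sp_helpfile
```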
Thanks
Paul Randal
Dev Lead, Microsoft SQL Server Storage Engine
This posting is provided "AS IS" with no warranties, and confers no rights.
"brett" <brettolges@.gmail.com> wrote in message
news:6de7e854.0406180528.4dc7243@.posting.google.com...