restoring from the backup would optimize the database? (Like it would if I
backed up a drive to tape, formatted the drive, then restored from the
tape.)
And what maintenance can I perform on the database to make things
faster?
Any help would be greatly appreciated.
Rob
It would not optimize the database internally. However, reformatting your disk
would have the effect of defragging it. What's your specific problem?
-- Tom
---
Thomas A. Moreau, BSc, PhD, MCSE, MCDBA
SQL Server MVP
Columnist, SQL Server Professional
Toronto, ON Canada
www.pinnaclepublishing.com/sql
"RKay" <robkayinto@.yahoo.com> wrote in message =news:2adcc671.0309120934.21827c1a@.posting.google.com...
I was wondering if backing up a database, whacking the database, then
restoring from the backup would optimize the database? (Like it would if I
backed up a drive to tape, formatted the drive, then restored from the
tape.)
And what maintenance can I perform on the database to make things
faster?
Any help would be greatly appreciated.
Rob
I want to optimize the layout of the data and indexes in the database
to speed up the queries.
"Tom Moreau" <tom@.dont.spam.me.cips.ca> wrote in message news:<OJbgHUVeDHA.560@.tk2msftngp13.phx.gbl>...
> It would not optimize the database internally. However, reformatting
> your disk would have the effect of defragging it. What's your specific
> problem?
> --
> Tom
> ---
> Thomas A. Moreau, BSc, PhD, MCSE, MCDBA
> SQL Server MVP
> Columnist, SQL Server Professional
> Toronto, ON Canada
> www.pinnaclepublishing.com/sql
>
> "RKay" <robkayinto@.yahoo.com> wrote in message
> news:2adcc671.0309120934.21827c1a@.posting.google.com...
> I was wondering if backing up a database, whacking the database, then
> restoring from the backup optimize the database? (like it would if i
> backed up a drive to tape, formatted the drive, then restored from the
> tape).
> And what maintenance can i perform on the database to make things
> faster?
> Any help would be greatly appreciated.
> Rob
Rob,
What do you mean by optimising the data? Usually you tune your queries by providing indexes on
the relevant columns, updating the statistics, etc. in order to get a faster
response time for your queries. Is that what you are looking for? Regarding
the data, you can check whether it is fragmented by using DBCC SHOWCONTIG. Also,
to make sure that your database is consistent, run DBCC CHECKDB
on it at off-peak times.
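As a rough sketch of that kind of routine maintenance (the table, column and
index names here are made up, not taken from your database):

-- check fragmentation of the table's clustered index
DBCC SHOWCONTIG ('Orders', 1)
-- index a column the queries actually filter or join on
CREATE NONCLUSTERED INDEX IX_Orders_CustomerId ON dbo.Orders (CustomerId)
-- refresh the statistics the optimizer uses for that table
UPDATE STATISTICS dbo.Orders WITH FULLSCAN
-- consistency check, best run at off-peak times
DBCC CHECKDB ('MyDatabase')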
--
Dinesh.
SQL Server FAQ at
http://www.tkdinesh.com
"RKay" <robkayinto@.yahoo.com> wrote in message
news:2adcc671.0309121404.62467127@.posting.google.com...
> I want to optimize the layout of the data and indexes in the database
> to speed up the queries.
>
> "Tom Moreau" <tom@.dont.spam.me.cips.ca> wrote in message
news:<OJbgHUVeDHA.560@.tk2msftngp13.phx.gbl>...
> > It would not optimize the database internally. However, reformatting
> > your disk would have the effect of defragging it. What's your specific
> > problem?
> >
> > --
> > Tom
> >
> > ---
> > Thomas A. Moreau, BSc, PhD, MCSE, MCDBA
> > SQL Server MVP
> > Columnist, SQL Server Professional
> > Toronto, ON Canada
> > www.pinnaclepublishing.com/sql
> >
> >
> > "RKay" <robkayinto@.yahoo.com> wrote in message
> > news:2adcc671.0309120934.21827c1a@.posting.google.com...
> > I was wondering if backing up a database, whacking the database, then
> > restoring from the backup optimize the database? (like it would if i
> > backed up a drive to tape, formatted the drive, then restored from the
> > tape).
> >
> > And what maintenance can i perform on the database to make things
> > faster?
> >
> > Any help would be greatly appreciated.
> >
> > Rob
Consider running a test script against the Index Tuning Wizard. Physical design is based on how you use your data.
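For example, you might save a handful of representative queries to a workload
file and point the wizard at it (a minimal sketch; the table and column names
are hypothetical):

-- workload.sql: queries the application actually runs
SELECT ContractId, CustomerId, SignedDate
FROM dbo.Contracts
WHERE CustomerId = 42

SELECT CustomerId, COUNT(*)
FROM dbo.Contracts
WHERE SignedDate >= '20030101'
GROUP BY CustomerId

The wizard analyses how the workload touches the tables and recommends indexes
to support it.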
-- Tom
----
Thomas A. Moreau, BSc, PhD, MCSE, MCDBA
SQL Server MVP
Columnist, SQL Server Professional
Toronto, ON Canada
www.pinnaclepublishing.com/sql
"RKay" <robkayinto@.yahoo.com> wrote in message =news:2adcc671.0309121404.62467127@.posting.google.com...
I want to optimize the layout of the data and indexes in the database
to speed up the queries.
"Tom Moreau" <tom@.dont.spam.me.cips.ca> wrote in message =news:<OJbgHUVeDHA.560@.tk2msftngp13.phx.gbl>...
> It would not optimize the database internally. However, reformatting > your disk would have the effect of defragging it. What's your =specific > problem?
> > -- > Tom
> > ---
> Thomas A. Moreau, BSc, PhD, MCSE, MCDBA
> SQL Server MVP
> Columnist, SQL Server Professional
> Toronto, ON Canada
> www.pinnaclepublishing.com/sql
> > > "RKay" <robkayinto@.yahoo.com> wrote in message > news:2adcc671.0309120934.21827c1a@.posting.google.com...
> I was wondering if backing up a database, whacking the database, then
> restoring from the backup optimize the database? (like it would if i
> backed up a drive to tape, formatted the drive, then restored from the
> tape).
> > And what maintenance can i perform on the database to make things
> faster?
> > Any help would be greatly appreciated.
> > Rob
> --
"Bob Simms" <bob_simms@.hotmail.com> wrote in message news:<8Xr8b.1050$QO6.707@.news-binary.blueyonder.co.uk>...
> "RKay" <robkayinto@.yahoo.com> wrote in message
> news:2adcc671.0309121404.62467127@.posting.google.com...
> > I want to optimize the layout of the data and indexes in the database
> > to speed up the queries.
> >
> Backup does a page-level backup. It neither knows nor cares whether the
> pages it is backing up contain tables, indexes or anything else. The
> restore simply dumps the files back in the internal format it got them in.
> If you have the hardware, you might consider putting the tables and their
> corresponding indexes on different drives, thus reducing IO contention when
> updating indexed columns.
> Bob
>
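A minimal sketch of the layout Bob describes, assuming the database already has
a second filegroup on a separate drive (all names below are made up):

-- the table and its clustered index live on the default data filegroup
CREATE TABLE dbo.Orders (
    OrderId    int NOT NULL,
    CustomerId int NOT NULL,
    CONSTRAINT PK_Orders PRIMARY KEY CLUSTERED (OrderId)
) ON [PRIMARY]

-- nonclustered index placed on the second filegroup/drive to spread the IO
CREATE NONCLUSTERED INDEX IX_Orders_CustomerId ON dbo.Orders (CustomerId) ON [FG_Indexes]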
I think what Rob is looking for is physical optimization of the data:
how do you get the physical storage optimized so that all the table
data and indexes are stored unfragmented and in order, thereby improving
performance?
My experience in the past (with Oracle) is that if you did an export of
the database, dropped the database, then imported the whole thing
into a "fresh" db, it ran faster. I've seen greatly improved
speeds with Oracle, but I'm not familiar enough with the internals of SQL
Server to know if this will truly improve performance there.
Mike
Make sure you have a clustered index on the table and do a DBCC DBREINDEX.
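As a rough sketch (the table and index names are hypothetical):

-- give the table a clustered index if it does not already have one
CREATE CLUSTERED INDEX CIX_Orders_OrderId ON dbo.Orders (OrderId)

-- rebuild every index on the table ('' means all indexes) with a 90% fill factor
DBCC DBREINDEX ('Orders', '', 90)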
--
Andrew J. Kelly
SQL Server MVP
"RKay" <robkayinto@.yahoo.com> wrote in message
news:2adcc671.0309121404.62467127@.posting.google.com...
> I want to optimize the layout of the data and indexes in the database
> to speed up the queries.
>
> "Tom Moreau" <tom@.dont.spam.me.cips.ca> wrote in message
news:<OJbgHUVeDHA.560@.tk2msftngp13.phx.gbl>...
> > It would not optimize the database internally. However, reformatting
> > your disk would have the effect of defragging it. What's your specific
> > problem?
> >
> > --
> > Tom
> >
> > ---
> > Thomas A. Moreau, BSc, PhD, MCSE, MCDBA
> > SQL Server MVP
> > Columnist, SQL Server Professional
> > Toronto, ON Canada
> > www.pinnaclepublishing.com/sql
> >
> >
> > "RKay" <robkayinto@.yahoo.com> wrote in message
> > news:2adcc671.0309120934.21827c1a@.posting.google.com...
> > I was wondering if backing up a database, whacking the database, then
> > restoring from the backup optimize the database? (like it would if i
> > backed up a drive to tape, formatted the drive, then restored from the
> > tape).
> >
> > And what maintenance can i perform on the database to make things
> > faster?
> >
> > Any help would be greatly appreciated.
> >
> > Rob
Hello
> Make sure you have a clustered index on the table and do a DBCC DBREINDEX.
It is not always possible to stop a production system and rebuild indexes.
I think the physical location of data in the DB significantly affects
performance.
Neither DBCC DBREINDEX nor INDEXDEFRAG removes physical fragmentation.
Using the SQLFE tool you can see that objects in the database are still
heavily fragmented even after "defragmentation".
I don't know whether a command exists to swap two pages or extents. Such a
command would be enough to write a defragmentation program.
Serge Shakhov
> Neither DBCC DBREINDEX nor INDEXDEFRAG removes physical
> fragmentation
That is just plain not true. Both will indeed remove physical fragmentation
although INDEXDEFRAG will only work at the Leaf level of the index.
DBREINDEX will actually build a whole new table so if you have enough free
contiguous space in the db it will completely rebuild the indexes.
> I don't know whether a command exists to swap two pages or extents. Such a
> command would be enough to write a defragmentation program.
That is almost exactly how INDEXDEFRAG works. It's actually a little better
than just swapping, but it does indeed work similarly to that.
I suggest you get hold of "Inside SQL Server 2000" and read up on these
two commands. Books Online also does a pretty good job of explaining them. This
too may be of interest:
http://www.microsoft.com/technet/treeview/default.asp?url=/technet/prodtechnol/sql/maintain/Optimize/SS2KIDBP.asp
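As a minimal sketch of the online-friendly route on a system that cannot be
taken down (the database and table names here are only placeholders):

-- compact and reorder the leaf level of the clustered index (index id 1)
-- while the table stays available
DBCC INDEXDEFRAG ('MyDatabase', 'Contracts', 1)

-- then re-check how much logical fragmentation remains
DBCC SHOWCONTIG ('Contracts', 1)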
Andrew J. Kelly
SQL Server MVP
"Serge Shakhov" <ACETYLENE@.mail.ru> wrote in message
news:mht3kb.lv1.ln@.proxyserver.ctd.mmk.chel.su...
> Hello
> > Make sure you have a clustered index on the table and do a DBCC
DBREINDEX.
> It is not always possible to stop production system and rebuild
indexes.
> I think that physical location of data in DB significantly affects
> performance.
> Neither DBCC DBREINDEX nor INDEXDEFRAG doesn't remove physical
> fragmentation. Using SQLFE tool you can make sure that objects in
> database
> are heavily fragmented even after "defragmentation".
> I don't know if exist command to swap two pages or extents? This
command
> will be enough to write defragmentation program.
>
> Serge Shakhov
Hello
> > Neither DBCC DBREINDEX nor INDEXDEFRAG removes physical
> > fragmentation
> That is just plain not true. Both will indeed remove physical
> fragmentation
Unfortunately it is true. Let me show you a simple example:
dbcc showcontig ('contracts', 1)
DBCC SHOWCONTIG scanning 'CONTRACTS' table...
Table: 'CONTRACTS' (1973634124); index ID: 1, database ID: 7
TABLE level scan performed.
- Pages Scanned........................: 113
- Extents Scanned.......................: 16
- Extent Switches.......................: 24
- Avg. Pages per Extent..................: 7.1
- Scan Density [Best Count:Actual Count]......: 60.00% [15:25]
- Logical Scan Fragmentation ..............: 4.42%
- Extent Scan Fragmentation ...............: 37.50%
- Avg. Bytes Free per Page................: 121.5
- Avg. Page Density (full)................: 98.50%
So we can see that the table (clustered index) is fragmented.
dbcc indexdefrag ('gruz','contracts',1)
dbcc showcontig ('contracts', 1)
DBCC SHOWCONTIG scanning 'CONTRACTS' table...
Table: 'CONTRACTS' (1973634124); index ID: 1, database ID: 7
TABLE level scan performed.
- Pages Scanned........................: 112
- Extents Scanned.......................: 16
- Extent Switches.......................: 24
- Avg. Pages per Extent..................: 7.0
- Scan Density [Best Count:Actual Count]......: 56.00% [14:25]
- Logical Scan Fragmentation ..............: 4.46%
- Extent Scan Fragmentation ...............: 37.50%
- Avg. Bytes Free per Page................: 50.3
- Avg. Page Density (full)................: 99.38%
The table is still fragmented.
DBCC DBREINDEX ('cargo.dbo.contracts','PK_contracts')
dbcc showcontig ('contracts', 1)
DBCC SHOWCONTIG scanning 'CONTRACTS' table...
Table: 'CONTRACTS' (1973634124); index ID: 1, database ID: 7
TABLE level scan performed.
- Pages Scanned........................: 112
- Extents Scanned.......................: 15
- Extent Switches.......................: 14
- Avg. Pages per Extent..................: 7.5
- Scan Density [Best Count:Actual Count]......: 93.33% [14:15]
- Logical Scan Fragmentation ..............: 0.00%
- Extent Scan Fragmentation ...............: 20.00%
- Avg. Bytes Free per Page................: 50.3
- Avg. Page Density (full)................: 99.38%
Now the level of fragmentation has decreased significantly,
but the table is still FRAGMENTED! The entire table is not located in contiguous
space. For small tables the final Scan Density will be 100%, but the larger the
table, the worse the results we get.
> although INDEXDEFRAG will only work at the Leaf level of the index.
> DBREINDEX will actually build a whole new table so if you have enough free
> contiguous space in the db it will completely rebuild the indexes.
Yes, it will. But there is no guarantee that the rebuilt index will be
placed wholly in contiguous space. That is what I'm talking about.
By the way, I simply can't use DBREINDEX because my system runs
in 24x7 mode. That is why I'm trying to find a way to write my own
defragmentation procedure.
Serge Shakhov
Serge,
Do you have more than 1 file in your filegroup by any chance? If so the
Extent Fragmentation value that is reported will generally be high due to
the fact that Indexdefrag only works on a file at a time and does not swap
pages between files. This is well documented in BooksOnLine and elsewhere.
As for the DBREINDEX, you only have 15 extents and the switches were only 14.
That's not too bad; even though the stats might look high, that is due to
the small number of extents and possibly to the first extent being a
mixed one. This is quite normal. Even on larger tables you can never
guarantee you will get 100% logical and physical defragmentation due to the
factors mentioned already. And you don't need it either. Did you read the
link I posted? If not then I really suggest you do as it will explain a lot
and calm down your fears of needing to be 100% defragged.
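A quick way to check is to list the files behind the database (a minimal
sketch, run in the database in question against the SQL Server 2000 sysfiles
table):

-- one row per file; groupid shows which filegroup each file belongs to
SELECT groupid, name, filename
FROM sysfiles
ORDER BY groupid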
--
Andrew J. Kelly
SQL Server MVP
"Serge Shakhov" <ACETYLENE@.mail.ru> wrote in message
news:28j8kb.r1k.ln@.proxyserver.ctd.mmk.chel.su...
> Hello
> > > Neither DBCC DBREINDEX nor INDEXDEFRAG doesn't remove physical
> > > fragmentation
> >
> > That is just plain not true. Both will indeed remove physical
> fragmentation
> Unfortunately this is true. Let me show you simple example:
> dbcc showcontig ('contracts', 1)
> DBCC SHOWCONTIG scanning 'CONTRACTS' table...
> Table: 'CONTRACTS' (1973634124); index ID: 1, database ID: 7
> TABLE level scan performed.
> - Pages Scanned........................: 113
> - Extents Scanned.......................: 16
> - Extent Switches.......................: 24
> - Avg. Pages per Extent..................: 7.1
> - Scan Density [Best Count:Actual Count]......: 60.00% [15:25]
> - Logical Scan Fragmentation ..............: 4.42%
> - Extent Scan Fragmentation ...............: 37.50%
> - Avg. Bytes Free per Page................: 121.5
> - Avg. Page Density (full)................: 98.50%
> So we can see that table (clustered index) is fragmented.
> dbcc indexdefrag ('gruz','contracts',1)
> dbcc showcontig ('contracts', 1)
> DBCC SHOWCONTIG scanning 'CONTRACTS' table...
> Table: 'CONTRACTS' (1973634124); index ID: 1, database ID: 7
> TABLE level scan performed.
> - Pages Scanned........................: 112
> - Extents Scanned.......................: 16
> - Extent Switches.......................: 24
> - Avg. Pages per Extent..................: 7.0
> - Scan Density [Best Count:Actual Count]......: 56.00% [14:25]
> - Logical Scan Fragmentation ..............: 4.46%
> - Extent Scan Fragmentation ...............: 37.50%
> - Avg. Bytes Free per Page................: 50.3
> - Avg. Page Density (full)................: 99.38%
> The table is still fragmented.
> DBCC DBREINDEX ('cargo.dbo.contracts','PK_contracts')
> dbcc showcontig ('contracts', 1)
> DBCC SHOWCONTIG scanning 'CONTRACTS' table...
> Table: 'CONTRACTS' (1973634124); index ID: 1, database ID: 7
> TABLE level scan performed.
> - Pages Scanned........................: 112
> - Extents Scanned.......................: 15
> - Extent Switches.......................: 14
> - Avg. Pages per Extent..................: 7.5
> - Scan Density [Best Count:Actual Count]......: 93.33% [14:15]
> - Logical Scan Fragmentation ..............: 0.00%
> - Extent Scan Fragmentation ...............: 20.00%
> - Avg. Bytes Free per Page................: 50.3
> - Avg. Page Density (full)................: 99.38%
> Now level of fragmentation significantly decreased
> but table is still FRAGMENTED! Not entire table located in contiguous
> space. For small tables final Scan Density will be 100% but the more
> table size the worse results we can get.
>
> > although INDEXDEFRAG will only work at the Leaf level of the index.
> > DBREINDEX will actually build a whole new table so if you have enough
free
> > contiguous space in the db it will completely rebuild the indexes.
> Yes, it will. But there is no guarantee that rebuilded index will be
> placed
> in contiguous space wholly. This is what I'm talking about.
> By the way I simply can't use DBREINDEX because my system works
> in 24x7 mode. That is why I'm trying to find the way to write my own
> defragmentation procedure.
>
> Serge Shakhov
>
Andrew
> Do you have more than 1 file in your filegroup by any chance? If so the
> Extent Fragmentation value that is reported will generally be high due to
> the fact that Indexdefrag only works on a file at a time and does not swap
> pages between files.
Yes, there are two files in each filegroup and I wasn't surprised
by the results of INDEXDEFRAG.
> As for the DBREINDEX, you only have 15 extents and the switches were only 14.
> That's not too bad; even though the stats might look high, that is due to
> the small number of extents and possibly to the first extent being a
> mixed one. This is quite normal.
I know. This is just a small sample table. I cannot run DBREINDEX on
large tables in the production database.
> Even on larger tables you can never
> guarantee you will get 100% logical and physical defragmentation due to the
> factors mentioned already. And you don't need it either.
Of course, I'd be satisfied with 90-95% Scan Density.
The problem is that I can't achieve this result with INDEXDEFRAG
and can't stop the database to rebuild the indexes.
> Did you read the link I posted?
Not yet, I've just printed it... 17 pages, and English is not my native
language...
Of course, I'll read it.
Serge Shakhov