Hi,
My DB size (right-click on the DB name, Data Files tab, Space Allocated field) was 10914 MB.
I deleted a huge table (1.2 million records x 15 columns).
I checked the DB size again. It didn't change.
Shouldn't it decrease because I deleted a huge table?|||The size will only decrease if you use DBCC SHRINKDATABASE.
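You can actually see this with sp_spaceused - a quick sketch, where MyDB is just a placeholder for your database name:
USE MyDB
GO
EXEC sp_spaceused
-- 'database_size' stays the same after the delete; the freed pages just
-- move into 'unallocated space' until you shrink the files.
GO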
|||Originally posted by rokslide
The size will only decrease if you use DBCC SHRINKDATABASE.
When and why should we use DBCC SHRINKDATABASE?|||Originally posted by rokslide
The size will only decrease if you use DBCC SHRINKDATABASE.
Would the deleted data still be in the database's physical file?|||The use of DBCC SHRINKDATABASE depends on who you talk to... I really don't have a theory about it, except to say that if the database is taking up too much space, shrink it...
As for the data still being in the database file... I would guess (and it's just a guess) that it would still be there, but it would be inaccessible via the database. I think the delete would work much like deleting a file off your hard drive: all that is really deleted is the file header information; the data is still there, but you can't access it unless you use a tool to specifically locate it.
Does that make sense? Can anyone else offer more advice?|||Originally posted by rokslide
The use of DBCC SHRINKDATABASE depends on who you talk to... I really don't have a theory about it, except to say that if the database is taking up too much space, shrink it...
As for the data still being in the database file... I would guess (and it's just a guess) that it would still be there, but it would be inaccessible via the database. I think the delete would work much like deleting a file off your hard drive: all that is really deleted is the file header information; the data is still there, but you can't access it unless you use a tool to specifically locate it.
Does that make sense? Can anyone else offer more advice?
Can you give a quick example of how to use DBCC SHRINKDATABASE?|||Originally posted by forXLDB
Can you give a quick example of how to use DBCC SHRINKDATABASE?
How do I know what size to reduce it to?
(If the size doesn't decrease even after deleting tables, how do I know the exact size occupied by all the tables?)...
I must have deleted and created lots of huge tables, each involving around a million records, at least 10 times...
And above all, does the size affect SQL Server performance in any way?|||Straight from Books Online:
This example decreases the size of the files in the UserDB user database to allow 10 percent free space in the files of UserDB.
DBCC SHRINKDATABASE (UserDB, 10)
GO
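As for knowing what size to reduce to - one way (just a sketch, not from BOL) is to check the current file sizes and space usage first:
USE UserDB
GO
EXEC sp_helpfile           -- current size and growth settings of each data/log file
GO
DBCC SQLPERF (LOGSPACE)    -- log size and percent actually in use, per database
GO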
Or you can do it through EM (Enterprise Manager).|||Originally posted by rokslide
Straight from Books Online:
Or you can do it through EM (Enterprise Manager).
I just used it on another test database; it shrank by more than half...
Thanks anyway!!
Still looking for an answer: does a larger DB file size cause any performance degradation?|||Guessing once again, I would say that it shouldn't affect the speed to any great extent... fragmentation of the database file would... There are some commands/functions that you can use to find out this kind of stuff, but I have never used them, so I can't really help a lot, I'm sorry.|||Originally posted by rokslide
Guessing once again, I would say that it shouldn't affect the speed to any great extent... fragmentation of the database file would... There are some commands/functions that you can use to find out this kind of stuff, but I have never used them, so I can't really help a lot, I'm sorry.
Thanks again...
Let me know any functions!!!|||Since the log file usually takes up most of the database's size, you can use the following script to reduce the file size:
USE DatabaseName
GO
DBCC SHRINKFILE ('DatabaseName_Log', Size)
GO
Here "Size" is the number of MB, i.e., if you put 100, that means the file will be shrinked to 100 MB.|||Since you are already working in em - when you right click on the database go to all task ->shrink database ->files... Under database file, look at the data file and log file. What is the current size/space used ? Are you backing up the database/transaction log ? What is the recovery model ?
|||Since you are already working in EM - when you right-click on the database, go to All Tasks -> Shrink Database -> Files... Under database file, look at the data file and the log file. What is the current size/space used? Are you backing up the database/transaction log? What is the recovery model? What is the performance issue?|||You can use DBCC SHOWCONTIG to display fragmentation info. Yes, performance will be affected - you need to either drop and recreate the indexes, or use DBCC DBREINDEX or DBCC INDEXDEFRAG. Since you deleted so many records (more than half), fragmentation is a good place to start when chasing performance issues.
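For example, a sketch of what that might look like (YourBigTable and IX_YourBigTable are made-up names):
DBCC SHOWCONTIG ('YourBigTable')   -- check Scan Density and Logical Scan Fragmentation
GO
DBCC INDEXDEFRAG (DatabaseName, 'YourBigTable', 'IX_YourBigTable')  -- online defrag of one index
GO
DBCC DBREINDEX ('YourBigTable')    -- heavier option: rebuilds all indexes on the table
GO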
|||Do you have access to Books Online (BOL)?|||An addition to rnealejr's post on DBCC INDEXDEFRAG and recreating indexes: if you have tables with a large amount of data input or modification, take a good look at the indexes. If you use a non-sequential (non-seed) value for your index key, then leaving more free space on each page (a lower fill factor) can help reduce fragmentation in the future. E.g., if your customer table uses the first four characters of the name plus the zip code for a customerID, then you would be inserting new records into the middle of the table frequently. Once the free space left by the fill factor is used up, the next record inserted will split the page, and now your data is getting fragmented, which does impact performance, as mentioned by other posters.
Now that you've opened this Pandora's box, you could spend a week in BOL learning how to tweak each table's indexes based on usage, calculating how many records fit on a page, the use of varchar over char, the benefits/penalties of sequential (seed) indexes...
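For instance, a hedged sketch of creating such an index with room for mid-table inserts (all names made up for illustration):
CREATE CLUSTERED INDEX IX_Customer_CustomerID
ON Customer (CustomerID)
WITH FILLFACTOR = 70   -- leave about 30% free space per page to absorb inserts
GO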
|||Originally posted by rnealejr
Do you have access to Books Online (BOL)?
Yes, I can read BOL.