
Thursday, March 29, 2012

Does SQL Server support compressed drives?

Hi, all,

I tried to start SQL Server, but I got this error in the ERRORLOG:

Starting up database 'master'.

The file "C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\DATA\mastlog.ldf" is compressed but does not reside in a read-only database or filegroup. The file must be decompressed.

Error: 5118, Severity: 16, State: 1.

That means I have to decompress the file "mastlog.ldf", right?

How can I do this?

Thanks in advance!

It means somebody has changed the OS-level setting of this folder or its parent folder. In the Properties of the folder,

on the General tab, press the Advanced button and see whether the "Compress contents to save disk space" option is selected. If it is, clear it.

Madhu

|||

Thank you! Madhu,

Yes, I have done that. It works.
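For reference, the same decompression can also be done from the command line with Windows' built-in compact.exe utility; the path below is the one from the error message above:

```
compact /U "C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\DATA\mastlog.ldf"
```

The /U switch uncompresses the file; adding /S against the DATA folder clears the attribute for every file in it.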


Tuesday, March 27, 2012

Does SQL Server 2005 have something like setup.iss?

Is there a way to capture the steps in a manual installation of SQL Server 2005 into a file similar to setup.iss in 2000? If there is, it's not readily apparent.

I've modified a template.ini file and used it for a successful installation. It's just that it would be more convenient to replay a perfectly executed manual installation. It isn't always obvious which selections in a manual install correspond to specific keywords in template.ini.

No, sorry, this isn't available for SQL Server 2005.
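For completeness, the template.ini approach the question mentions is driven from the command line roughly like this (the path is illustrative, and switches can vary by service pack):

```
start /wait setup.exe /settings C:\install\template.ini /qn
```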

Sunday, March 25, 2012

Does the SQL CE 3.5 beta support remote access on an intranet?

I need to connect to a SQL CE data file via the intranet.
Does SQL CE support that?
When I attempt to connect to that remote file, it shows me the error message
"There is a file sharing violation. A different process might be using this file".

Does this mean I cannot do this?

This means that SQL Compact is a single-user, in-process database, so when opening a file on a network share it will acquire an exclusive lock on the database file. For more information, see this blog entry from Jim Wilson:

http://www.pluralsight.com/blogs/jimw/archive/2007/02/19/46151.aspx

|||Good answer.

Thank you ErikEJ

I like this forum.


Thursday, March 22, 2012

Does Reporting Services support Chinese?

My report has some Chinese. It can be displayed by IE, but when I export it
as PDF, the Chinese words are displayed as "?" in the PDF file.
I don't know how to handle it.

I have the same problem.
The Chinese characters show very well in HTML. However, when I export it to
PDF, all of these Chinese characters show as question marks -- ? -- even when I use
the Chinese version of Acrobat Reader.
Is it a Reporting Services bug?
Regards,
AndyZ
"八大山人" <zhoushaofeng@.hotmail.com> wrote in message
news:%23tsMrF1XEHA.2812@.TK2MSFTNGP11.phx.gbl...
> My report has some chinese. It can be display by IE, but when I export it
> as PDF, the chinese words are displayed by "'?" in PDF file.
> I don't know how to handle it.
>|||Update to SP1; it is working fine now.
But I am very sad that after I upgraded to SP1, PDF is OK but e-mail is in the wrong
encoding (GB2312; it should be UTF-8).
Tony
"Andy Zhang" <A@.A> wrote in message
news:eJPJRw2XEHA.4008@.TK2MSFTNGP09.phx.gbl...
> I have the same problem.
> The Chinese characters show very well in HTML. However, when I export it
to
> PDF, all of these Chinese characters show as question mark -- ? even I use
> Chinese version of Acrobat Reader.
> Is it a reporting service bug?
> Regards,
> AndyZ
> "八大山人" <zhoushaofeng@.hotmail.com> wrote in message
> news:%23tsMrF1XEHA.2812@.TK2MSFTNGP11.phx.gbl...
> > My report has some chinese. It can be display by IE, but when I export
it
> > as PDF, the chinese words are displayed by "'?" in PDF file.
> > I don't know how to handle it.
> >
> >
>|||You will need to use a FontFamily that is localized. The default FontFamily,
Arial, does not support Chinese characters.
--
Bruce Johnson [MSFT]
Microsoft SQL Server Reporting Services
This posting is provided "AS IS" with no warranties, and confers no rights.
"八大山人" <zhoushaofeng@.hotmail.com> wrote in message
news:%23tsMrF1XEHA.2812@.TK2MSFTNGP11.phx.gbl...
> My report has some chinese. It can be display by IE, but when I export it
> as PDF, the chinese words are displayed by "'?" in PDF file.
> I don't know how to handle it.
>
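Bruce's suggestion amounts to assigning the affected report items a font family that actually contains CJK glyphs. In RDL this looks roughly like the following; SimSun is one common choice, but any installed Chinese font should work:

```xml
<Style>
  <FontFamily>SimSun</FontFamily>
</Style>
```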

Monday, March 19, 2012

Is an MS Access installation required to run an application that uses an Access .mdb file?

Hi,

I am developing an application that uses an Access database (.mdb file) to store the user data. The user of this application is not interested in the database file itself (viewing it in the MS Access environment). Does the user's machine require an MS Access installation to run my application, or are a couple of DLLs (OLE DB driver, Access DB engine, ...) enough?

Thanks,

Rao

No, they don't need Access. They will simply need to have MDAC installed, and chances are it will already be there. If not, get it here:

http://msdn.microsoft.com/data/ref/mdac/
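With MDAC in place, the application reaches the .mdb through the Jet OLE DB provider; a typical connection string looks like this (the path is illustrative):

```
Provider=Microsoft.Jet.OLEDB.4.0;Data Source=C:\Data\MyApp.mdb;
```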

|||

The information is helpful.

Thanks

Sunday, March 11, 2012

Does it increase performance to use multiple data files for large databases?

If you have large databases (above 200 GB per file), does it help to have multiple smaller files (e.g., four 50 GB files)? Does it improve performance? Also, if a single table (200 GB in size) is allocated to a filegroup that has one data file, does it help to give that filegroup multiple files (e.g., four files of 50 GB each)? My thought was that if SQL Server is looking for certain data (assuming the file is not very fragmented), it is better for SQL Server to load only the specific file that corresponds to the data, instead of loading an entire 200 GB file.

Moving to engine forum|||In general, the size of the data files does not really matter that much, as SQL Server will not load an entire file. SQL Server uses an offset into a file to go to the particular page (an 8K chunk of data) that it needs to load.

What really matters is the placement of files on disks, and the type of disks you use. If you have a single large file that is backed by a RAID array, it does not really matter much. However, if you use ordinary (non-RAID) disks, multiple files might help, as you can place different files on different disks. In particular, placing the non-clustered indexes on different disks than the clustered indexes will help.

Also, placing tempdb on (one or more) different disks, and placing the log files on different disks will help performance.

If you need more information, could you explain the problem you are trying to solve in a little more detail?

Thanks,

Marcel van der Holst
[MSFT]|||

A lot of this depends on what kind of storage you have: whether it is direct-attached storage or a SAN, and if it's a SAN, what kind of SAN it is. Generally speaking, Microsoft recommends that you have one tempdb file per physical CPU (and dual-core CPUs count as physical CPUs here), and one data file for each physical CPU. Doing this can help with allocation bottlenecks. Some SANs (such as 3PAR) make this somewhat moot, since they spread a file among all of the physical disks. Having multiple small data files can also make backups and restores easier to manage.

You want your data files on one set of disks, log files on another set of disks, and TempDB on another set of disks.
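The multi-file layout described above is created with ALTER DATABASE; a sketch, with database name, file names, sizes, and paths all illustrative:

```sql
-- Add a filegroup backed by two files on different physical disks
ALTER DATABASE MyDb ADD FILEGROUP Data2;
ALTER DATABASE MyDb ADD FILE
    (NAME = MyDb_Data2a, FILENAME = 'D:\Data\MyDb_Data2a.ndf', SIZE = 50GB),
    (NAME = MyDb_Data2b, FILENAME = 'E:\Data\MyDb_Data2b.ndf', SIZE = 50GB)
TO FILEGROUP Data2;
```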


Does importing a trace file via FN_TRACE_GETTABLE use tempdb?

I need to look at some large trace files (3 - 8 GB). The drive that
has tempdb on it is small; it currently has < 2 GB of free space on
it.
Does the FN_TRACE_GETTABLE() function route the import through
tempdb?
If so, is there a way to avoid this?
I don't want to choke the root drive of the server importing this...

I have not seen it being spooled on a simple SELECT INTO from a trace file,
but you might just want to keep an eye on it.
--
Jason Massie
Web: http://statisticsio.com
RSS: http://feeds.feedburner.com/statisticsio
<kent@.kaeservices.com> wrote in message
news:29307ecd-75d3-42e1-ba4a-06d67f0c5c14@.d4g2000prg.googlegroups.com...
>i need to look at some large trace files (3 - 8 Gig). The drive that
> has the tempDB on is is small..currently has < 2 G of free space on
> it.
> does the FN_TRACE_GETTABLE() function route the import through the
> tempDB?
> if so is there a way to avoid this?
> I don't want to choke the root drive of the server importing this...
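The "simple select into" mentioned above looks like this (the table name and path are illustrative); importing into a permanent table in a user database keeps the bulk of the data off the system drive:

```sql
-- Materialize the trace file as a table; DEFAULT also reads rollover files
SELECT * INTO dbo.TraceData
FROM fn_trace_gettable('D:\Traces\big_trace.trc', DEFAULT);
```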


Friday, March 9, 2012

Does a format file for BULK INSERT allow a mix of native and character formats?

I tried to place this question on the .NET Framework Data Access and Storage forum and got no answer, so I am moving it to this forum.

I have a module that requires me to import a big amount of data. I believe that native-format data files with format files will be the most efficient implementation. I am trying to programmatically produce BCP-like exported files in native format (that is, without type conversion) from tables with nullable and non-nullable values. I prefer not to produce computed, identity, or rowguid fields, so I need format files.

I don't have problems producing the different kinds of int or float fields (which are the majority) or char or nchar fields.

Problems are emerging with the datetime, smalldatetime, and decimal fields, because I don't know how to convert to them from strings or from the CLR types.

So I am trying to find the native format of those fields, or to find out whether BULK INSERT will accept mixed format files with some of the fields in native format without field terminators and some with field terminators, or whether to use character format with field terminators only, perhaps plus format files.

If the answer to the above question is positive, I can partially resolve the problem; if negative, I will have to use the character format.

Unless you can educate me on the conversion to SQL Server's internal formats of datetimes and decimals from the CLR types.

See SQL Server 2005 Books Online topics:

SQL Server Data Types and Their .NET Framework Equivalents
http://msdn2.microsoft.com/en-us/library/ms131092.aspx
Specifying File Storage Type by Using bcp
http://msdn2.microsoft.com/en-US/library/ms189110.aspx

Data Type Conversion (Database Engine)
http://msdn2.microsoft.com/en-us/library/ms191530.aspx
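As the linked topics describe, bcp can generate the format file for you; a character-format round trip, which avoids the native datetime/decimal encodings entirely, looks roughly like this (database, table, and file names are illustrative):

```
bcp MyDb.dbo.MyTable format nul -c -f MyTable.fmt -T
bcp MyDb.dbo.MyTable out MyTable.dat -f MyTable.fmt -T
```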

Does file connection manager not work offline?

Hi there

I have a connection manager for a file (New File Connection...) along with some OLE DB connection managers.

Now, if I set the SSIS WorkOffline property to True (I need this for some reason), the file connection manager gives an error something like:

"Connection manager for File connection failed with error code 0xC0014019: Connections cannot acquire connections while work offline is true."

But the other OLE DB connections work fine.

So do we have to change some more settings with the file connection, or can we not work offline with a file connection?

Thanks and Regards

Rahul Kumar

logtorahul wrote:

"Connection manager for File connection failed with error code-0xc0014019,Connections cannot acquire connections while work offline is true."

Rahul,

The answer seems to be in the error message itself. I don't know your reason for working offline, but when I use it I do not expect any of the connections to work. I guess that option was put there to allow you to open packages while not online without receiving error messages on your connections.

|||

Yes Rafael,

You are right: when working offline we do not expect connections to work; even I do not.

But no other connection manager gives this error when we run the package.

Let me explain with a scenario:

Suppose we have an OLE DB connection (for our source and destination DB) and a file connection for our log file, and all the connection strings are provided by the package configuration file. WorkOffline is true and DelayValidation is also true.

Now if we run the package, all the other connection managers pick up their connection strings from the package config file, but the file connection gives the error

"it cannot acquire connection while working offline."

Now this sounds a little weird to me, as all the other connection managers can acquire connections while working offline and the file connection cannot.

Does DBCC SHRINKFILE on a data file cause the log to grow?

We have 60GB free of a 140GB data file in one of our databases in SQL
Server 2000 SP2. I've truncated the free space at the end, but it won't
shrink any further now. I tried running DBCC SHRINKFILE, but had to
cancel it and run a log backup when the transaction log grew to 40GB and
started threatening the free disk space. Is this expected behaviour?
If it is, can anybody recommend a way to shrink the data file without
growing the log - e.g. do I have to temporarily change the database
model to simple, or something like that?
Cheers,
Malc

Malcolm,
To make a long story short:
1) Perform a BACKUP LOG (it removes all inactive transactions)
2) Perform DBCC SHRINKFILE to reduce the physical size of the log file.
"Malcolm Ferguson" <Malcolm_Ferguson@.NO_SPAM_PLEASEyahoo.com> wrote in
message news:ekOInAj9DHA.2472@.TK2MSFTNGP10.phx.gbl...
> We have 60GB free of a 140GB data file in one of our databases in SQL
> Server 2000 SP2. I've truncated the free space at the end, but it won't
> shrink any further now. I tried running DBCC SHRINKFILE, but had to
> cancel it and run a log backup when the transaction log grew to 40GB and
> started threatening the free disk space. Is this expected behaviour.
> If it is, can anybody recommend a way to shrink the data file without
> growing the log - e.g. do I have to temporarily change the database
> model to simple, or something like that?
> Cheers,
> Malc
>|||Sometimes, if this fails, you have to put some transactions into the
database to roll the virtual log to the beginning of the physical log. I've
had success with:
1) Backup the log
2) Run DBCC SHRINKFILE
3) If step 2 does not work, create a temp table in the database and add 1000
rows.
4) Delete the rows and the temp table. This will create t-log entries that
will force the virtual log to roll to the front of the physical file (see
BOL for details on t-log architecture).
5) Backup the log.
6) Run DBCC SHRINKFILE again. It should work this time. If not, repeat
from 3.
The process is documented in a Q article somewhere for SQL 7. You are not
supposed to need it for SQL 2000, but I have found it comes in handy.
Christian Smith
"Malcolm Ferguson" <Malcolm_Ferguson@.NO_SPAM_PLEASEyahoo.com> wrote in
message news:ekOInAj9DHA.2472@.TK2MSFTNGP10.phx.gbl...
> We have 60GB free of a 140GB data file in one of our databases in SQL
> Server 2000 SP2. I've truncated the free space at the end, but it won't
> shrink any further now. I tried running DBCC SHRINKFILE, but had to
> cancel it and run a log backup when the transaction log grew to 40GB and
> started threatening the free disk space. Is this expected behaviour.
> If it is, can anybody recommend a way to shrink the data file without
> growing the log - e.g. do I have to temporarily change the database
> model to simple, or something like that?
> Cheers,
> Malc
>|||Yes this is expected behavior. When you shrink a data file it has to
physically move all data at the end of the file towards the beginning and
each move is fully logged. I would shrink in smaller increments and backup
the log after each one to keep it in check.
Andrew J. Kelly
SQL Server MVP
"Malcolm Ferguson" <Malcolm_Ferguson@.NO_SPAM_PLEASEyahoo.com> wrote in
message news:ekOInAj9DHA.2472@.TK2MSFTNGP10.phx.gbl...
> We have 60GB free of a 140GB data file in one of our databases in SQL
> Server 2000 SP2. I've truncated the free space at the end, but it won't
> shrink any further now. I tried running DBCC SHRINKFILE, but had to
> cancel it and run a log backup when the transaction log grew to 40GB and
> started threatening the free disk space. Is this expected behaviour.
> If it is, can anybody recommend a way to shrink the data file without
> growing the log - e.g. do I have to temporarily change the database
> model to simple, or something like that?
> Cheers,
> Malc
>|||Guys,
Thanks for the advice, but I'm not trying to shrink the log file. I'm
trying to shrink the *DATA* file. However, whenever I try this, the log
file starts growing like crazy.
Please advise further,
Malc
Christian Smith wrote:

>Sometimes, if this fails, you have to put some transactions into the
>database to roll the virtual log to the beginning of the physical log. I've
>had success with:
>1) Backup the log
>2) Run DBCC SHRINKFILE
>3) If step 2 does not work, create a temp table in the database and add 1000
>rows.
>4) Delete the rows and the temp table. This will create t-log entries that
>will force the virtual log to roll to the front of the physical file (See
>BOL for details on t-log architecture).
>5) Backup the log.
>6) Run DBCC SHRINKFILE again. It should work this time. If not, repeat
>from 3.
>The process is documented in a Q article somewhere for SQL 7. You are not
>supposed to need it for SQL 2000 but I have found it comes in handy.
>Christian Smith
>"Malcolm Ferguson" <Malcolm_Ferguson@.NO_SPAM_PLEASEyahoo.com> wrote in
>message news:ekOInAj9DHA.2472@.TK2MSFTNGP10.phx.gbl...
>
>
>|||Andrew J. Kelly wrote:

>Yes this is expected behavior. When you shrink a data file it has to
>physically move all data at the end of the file towards the beginning and
>each move is fully logged. I would shrink in smaller increments and backup
>the log after each one to keep it in check.
>
Andrew,
That's interesting. If I understand you correctly, when trying to
recover some of this 61,000MB of slack space, I should do the following:
DBCC SHRINKFILE (MyDb_Data, 140000)
-> Backup log
DBCC SHRINKFILE (MyDb_Data, 135000)
-> Backup log
DBCC ... etc.
rather than:
DBCC SHRINKFILE (MyDb_Data)
or
DBCC SHRINKFILE (MyDb_Data, 85000)
If I do this, you're saying it will consume less log space?
Cheers,
Malc|||Yes. Shrinking the data file(s) will fill up your log file. You could try
setting your recovery mode to simple before doing the shrinkfile. You'll still
need some available space in your log file even with simple recovery.
Malcolm Ferguson wrote:

> Thanks for the advice, but I'm not trying to shrink the log file. I'm
> trying to shrink the *DATA* file. However, whenever I try this, the log
> file starts growing like crazy.|||What I am saying is it will allow you to control your log file by giving you
time to issue log backups in between the shrinks. That way your log file
won't grow on you. It also allows you to manage the process a bit better.
By the way you don't want to remove any where near all of your free space.
The database needs lots of free space to operate properly. The less free
space you have the more you risk fragmenting your tables when you reindex.
When you reindex a table you should ensure you have 1.2 to 1.5 times the
size of the table and indexes free and hopefully contiguous.
Andrew J. Kelly
SQL Server MVP
"Malcolm Ferguson" <Malcolm_Ferguson@.NO_SPAM_PLEASEyahoo.com> wrote in
message news:%23nlIL7j9DHA.3880@.TK2MSFTNGP11.phx.gbl...
> Andrew J. Kelly wrote:
>
backup
> Andrew,
> That's interesting. If I understand you correctly, when trying to
> recover some of this 61,000MB of slack space, I should do the following:
> DBCC SHRINKFILE (MyDb_Data, 140000)
> -> Backup log
> DBCC SHRINKFILE (MyDb_Data, 135000)
> -> Backup log
> DBCC ... etc.
> rather than:
> DBCC SHRINKFILE (MyDb_Data)
> or
> DBCC SHRINKFILE (MyDb_Data, 85000)
> If I do this, you're saying it will consume less log space?
> Cheers,
> Malc
>|||Andrew J. Kelly wrote:

>What I am saying is it will allow you to control your log file by giving you
>time to issue log backups in between the shrinks. That way your log file
>won't grow on you. It also allows you to manage the process a bit better.
>By the way you don't want to remove any where near all of your free space.
>The database needs lots of free space to operate properly. The less free
>space you have the more you risk fragmenting your tables when you reindex.
>When you reindex a table you should ensure you have 1.2 to 1.5 times the
>size of the table and indexes free and hopefully contiguous.
>
Thanks all for your help and advice. I was able to remove 20GB using the
suggestions, which leaves plenty of slack space in the datafile, and
leaves the free disk space at a sane level.
Cheers,
Malc
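The incremental approach Andrew describes can be written out directly; a sketch, with the database name, target sizes (in MB), and backup paths all illustrative:

```sql
-- Shrink in steps, backing up the log after each one to keep it in check
DBCC SHRINKFILE (MyDb_Data, 135000)
BACKUP LOG MyDb TO DISK = 'E:\Backups\MyDb_log1.bak'
DBCC SHRINKFILE (MyDb_Data, 110000)
BACKUP LOG MyDb TO DISK = 'E:\Backups\MyDb_log2.bak'
DBCC SHRINKFILE (MyDb_Data, 85000)
BACKUP LOG MyDb TO DISK = 'E:\Backups\MyDb_log3.bak'
```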

Does DBCC SHRINKFILE on a data file cause the log to grow?

We have 60GB free of a 140GB data file in one of our databases in SQL
Server 2000 SP2. I've truncated the free space at the end, but it won't
shrink any further now. I tried running DBCC SHRINKFILE, but had to
cancel it and run a log backup when the transaction log grew to 40GB and
started threatening the free disk space. Is this expected behaviour.
If it is, can anybody recommend a way to shrink the data file without
growing the log - e.g. do I have to temporarily change the database
model to simple, or something like that?
Cheers,
MalcMalcolm
Make a long story shortly
1)Perfom BACKUP LOG file ( it removes all inactive transaction )
2)Pefrom DBCC SHRINKFILE to reduce physical size of the log file.
"Malcolm Ferguson" <Malcolm_Ferguson@.NO_SPAM_PLEASEyahoo.com> wrote in
message news:ekOInAj9DHA.2472@.TK2MSFTNGP10.phx.gbl...
> We have 60GB free of a 140GB data file in one of our databases in SQL
> Server 2000 SP2. I've truncated the free space at the end, but it won't
> shrink any further now. I tried running DBCC SHRINKFILE, but had to
> cancel it and run a log backup when the transaction log grew to 40GB and
> started threatening the free disk space. Is this expected behaviour.
> If it is, can anybody recommend a way to shrink the data file without
> growing the log - e.g. do I have to temporarily change the database
> model to simple, or something like that?
> Cheers,
> Malc
>|||Sometimes, if this fails, you have to put some transactions into the
database to roll the virtual log to the beginning of the physical log. I've
had success with:
1) Backup the log
2) Run DBCC SHRINKFILE
3) If step 2 does not work, create a temp table in the database and add 1000
rows.
4) Delete the rows and the temp table. This will create t-log entries that
will force the virtual log to roll to the frony of the physical file (See
BOL for details on t-log architecture).
5) Backup the log.
6) Run DBCC SHRINKFILE again. It should work this time. If not, repeat
from 3.
The process is documented in a Q article somewhere for SQL 7. You are not
supposed to need it for SQL 2000 but I havbe found it comes in handy.
Christian Smith
"Malcolm Ferguson" <Malcolm_Ferguson@.NO_SPAM_PLEASEyahoo.com> wrote in
message news:ekOInAj9DHA.2472@.TK2MSFTNGP10.phx.gbl...
> We have 60GB free of a 140GB data file in one of our databases in SQL
> Server 2000 SP2. I've truncated the free space at the end, but it won't
> shrink any further now. I tried running DBCC SHRINKFILE, but had to
> cancel it and run a log backup when the transaction log grew to 40GB and
> started threatening the free disk space. Is this expected behaviour.
> If it is, can anybody recommend a way to shrink the data file without
> growing the log - e.g. do I have to temporarily change the database
> model to simple, or something like that?
> Cheers,
> Malc
>|||Yes this is expected behavior. When you shrink a data file it has to
physically move all data at the end of the file towards the beginning and
each move is fully logged. I would shrink in smaller increments and backup
the log after each one to keep it in check.
--
Andrew J. Kelly
SQL Server MVP
"Malcolm Ferguson" <Malcolm_Ferguson@.NO_SPAM_PLEASEyahoo.com> wrote in
message news:ekOInAj9DHA.2472@.TK2MSFTNGP10.phx.gbl...
> We have 60GB free of a 140GB data file in one of our databases in SQL
> Server 2000 SP2. I've truncated the free space at the end, but it won't
> shrink any further now. I tried running DBCC SHRINKFILE, but had to
> cancel it and run a log backup when the transaction log grew to 40GB and
> started threatening the free disk space. Is this expected behaviour.
> If it is, can anybody recommend a way to shrink the data file without
> growing the log - e.g. do I have to temporarily change the database
> model to simple, or something like that?
> Cheers,
> Malc
>|||Guys,
Thanks for the advice, but I'm not trying to shrink the log file. I'm
trying to shrink the *DATA* file. However, whenever I try this, the log
file starts growing like crazy.
Please advice further,
Malc
Christian Smith wrote:
>Sometimes, if this fails, you have to put some transactions into the
>database to roll the virtual log to the beginning of the physical log. I've
>had success with:
>1) Backup the log
>2) Run DBCC SHRINKFILE
>3) If step 2 does not work, create a temp table in the database and add 1000
>rows.
>4) Delete the rows and the temp table. This will create t-log entries that
>will force the virtual log to roll to the frony of the physical file (See
>BOL for details on t-log architecture).
>5) Backup the log.
>6) Run DBCC SHRINKFILE again. It should work this time. If not, repeat
>from 3.
>The process is documented in a Q article somewhere for SQL 7. You are not
>supposed to need it for SQL 2000 but I havbe found it comes in handy.
>Christian Smith
>"Malcolm Ferguson" <Malcolm_Ferguson@.NO_SPAM_PLEASEyahoo.com> wrote in
>message news:ekOInAj9DHA.2472@.TK2MSFTNGP10.phx.gbl...
>
>>We have 60GB free of a 140GB data file in one of our databases in SQL
>>Server 2000 SP2. I've truncated the free space at the end, but it won't
>>shrink any further now. I tried running DBCC SHRINKFILE, but had to
>>cancel it and run a log backup when the transaction log grew to 40GB and
>>started threatening the free disk space. Is this expected behaviour.
>>If it is, can anybody recommend a way to shrink the data file without
>>growing the log - e.g. do I have to temporarily change the database
>>model to simple, or something like that?
>>Cheers,
>>Malc
>>
>
>|||Andrew J. Kelly wrote:
>Yes this is expected behavior. When you shrink a data file it has to
>physically move all data at the end of the file towards the beginning and
>each move is fully logged. I would shrink in smaller increments and backup
>the log after each one to keep it in check.
>
Andrew,
That's interesting. If I understand you correctly, when trying to
recover some of this 61,000MB of slack space, I should do the following:
DBCC SHRINKFILE (MyDb_Data, 140000)
-> Backup log
DBCC SHRINKFILE (MyDb_Data, 135000)
-> Backup log
DBCC ... etc.
rather than:
DBCC SHRINKFILE (MyDb_Data)
or
DBCC SHRINKFILE (MyDb_Data, 85000)
If I do this, you're saying it will consume less log space?
Cheers,
Malc|||yes. shrinking the data file(s) will fill up your log file. you could try
setting your recovery mode to simple before doing the shrinkfile. you'll still
need some available space in your log file even with simple recovery.
Malcolm Ferguson wrote:
> Thanks for the advice, but I'm not trying to shrink the log file. I'm
> trying to shrink the *DATA* file. However, whenever I try this, the log
> file starts growing like crazy.|||What I am saying is it will allow you to control your log file by giving you
time to issue log backups in between the shrinks. That way your log file
won't grow on you. It also allows you to manage the process a bit better.
By the way you don't want to remove any where near all of your free space.
The database needs lots of free space to operate properly. The less free
space you have the more you risk fragmentating your tables when you reindex.
When you reindex a table you should ensure you have 1.2 to 1.5 times the
size of the table and indexes free and hopefully contiguous.
--
Andrew J. Kelly
SQL Server MVP
"Malcolm Ferguson" <Malcolm_Ferguson@.NO_SPAM_PLEASEyahoo.com> wrote in
message news:%23nlIL7j9DHA.3880@.TK2MSFTNGP11.phx.gbl...
> Andrew J. Kelly wrote:
> >Yes this is expected behavior. When you shrink a data file it has to
> >physically move all data at the end of the file towards the beginning and
> >each move is fully logged. I would shrink in smaller increments and
backup
> >the log after each one to keep it in check.
> >
> >
> Andrew,
> That's interesting. If I understand you correctly, when trying to
> recover some of this 61,000MB of slack space, I should do the following:
> DBCC SHRINKFILE (MyDb_Data, 140000)
> -> Backup log
> DBCC SHRINKFILE (MyDb_Data, 135000)
> -> Backup log
> DBCC ... etc.
> rather than:
> DBCC SHRINKFILE (MyDb_Data)
> or
> DBCC SHRINKFILE (MyDb_Data, 85000)
> If I do this, you're saying it will consume less log space?
> Cheers,
> Malc
>|||Andrew J. Kelly wrote:
>What I am saying is it will allow you to control your log file by giving you
>time to issue log backups in between the shrinks. That way your log file
>won't grow on you. It also allows you to manage the process a bit better.
>By the way you don't want to remove any where near all of your free space.
>The database needs lots of free space to operate properly. The less free
>space you have the more you risk fragmentating your tables when you reindex.
>When you reindex a table you should ensure you have 1.2 to 1.5 times the
>size of the table and indexes free and hopefully contiguous.
>
Thanks all for your help and advice. I was able to remove 20GB using the
suggestions, which leaves plenty of slack space in the data file, and
leaves the free disk space at a sane level.
Cheers,
Malc
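
Putting Andrew's advice into a sketch: shrink the data file in moderate steps and back up the log between steps. The database name, logical file name, target sizes, and backup paths below are placeholders.

```sql
-- Incremental shrink: each step moves (and fully logs) less data, and the
-- log backup in between keeps the log file from growing out of control.
DBCC SHRINKFILE (MyDb_Data, 140000);   -- target size in MB
BACKUP LOG MyDb TO DISK = 'C:\Backup\MyDb_log_1.trn';

DBCC SHRINKFILE (MyDb_Data, 135000);
BACKUP LOG MyDb TO DISK = 'C:\Backup\MyDb_log_2.trn';

-- ...and so on, stopping well short of removing all free space:
-- reindexing wants roughly 1.2 to 1.5 times the size of the table
-- and its indexes free (and ideally contiguous) in the data file.
```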

Wednesday, March 7, 2012

does backup database need to specify file name?

hi, we are using SQL Server 2000 SP4, i am very new. Since i normally use
DB2, when i backup a database, i only need to designate the backup folder and
it will automatically generate the image file by using the timestamp as the
file name. But on SQL Server, only specifying the backup folder gives me an error:
ODBC SQLState: 42000, cannot open backup device (which is the directory
name), device error. Can someone help me with this?
thanks
Yes, you need to specify a file name. It is easy to generate using some TSQL code, though. For
example:
DECLARE @f varchar(8000)
SET @f = 'C:\temp\pubs' + CONVERT(char(8), CURRENT_TIMESTAMP, 112) + '.bak'
BACKUP DATABASE pubs
TO DISK = @f
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
Blog: http://solidqualitylearning.com/blogs/tibor/
"tulip" <tulip@.discussions.microsoft.com> wrote in message
news:43F3E954-4DA6-4D3F-8F76-88DF689BBADD@.microsoft.com...
> hi, we are using SQL Server 2000 SP4, i am very new. Since i normally use
> DB2, when i backup a database, i only need to designate the backup folder and
> it will automatically generate the image file by using the timestamp as the
> file name. But on SQL server, only specify the backup folder gives me error :
> ODBC SQLState: 42000, cannot open backup device (which is the directory
> name), device error, can some one help me on this?
> thanks
|||but the backup image has no timestamp on it, only year, month, day, so if i
have multiple copies on the same day, what is the best way to distinguish them?
thank you
|||I only posted an example. Read in Books Online about the CONVERT function and use a suitable
formatting code (3:rd parameter).
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
Blog: http://solidqualitylearning.com/blogs/tibor/
"tulip" <tulip@.discussions.microsoft.com> wrote in message
news:9F07F525-8E2B-4984-BB97-D9810B6F730F@.microsoft.com...
> but the backup image has not timestamp on it, only year, month, day, so if i
> have multiple copies on the same day, what is the best way to distinct them?
> thank you
yes, i looked at the third parameter and tried the one with a time stamp
(e.g. 120), but they all contain spaces in the filename, which gives me the error:
Cannot open backup device 'C:\2005-09-06 15:53:53.bak'. Device error or
device off-line. See the SQL Server error log for more details. How do you do it
normally?
thanks
|||The character ":" is not allowed in a file name. That is the reason for your error message. Use
another style, or use REPLACE to change those characters to something else:
DECLARE @f varchar(8000)
SET @f = 'C:\temp\pubs ' + REPLACE(CONVERT(char(19), CURRENT_TIMESTAMP, 120) + '.bak', ':', '.')
SELECT @f
BACKUP DATABASE pubs
TO DISK = @f
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
"tulip" <tulip@.discussions.microsoft.com> wrote in message
news:74896E89-E698-446E-9D34-65380179FDB3@.microsoft.com...
> yes, i look at the third parameter, i try to use the one with time stamp
> (e.g. 120), but they all contain space in the filename , which gives me error
> ,Cannot open backup device 'C:\2005-09-06 15:53:53.bak'. Device error or
> device off-line. See the SQL Server error log for more details. how do you do
> normally?
> thanks
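
An alternative sketch along the same lines, producing a compact colon-free timestamp (yyyymmdd_hhmmss) so multiple backups taken on the same day get distinct names; the path and database name are just examples:

```sql
-- Style 112 gives yyyymmdd, style 108 gives hh:mi:ss; stripping the
-- colons from the time part keeps the file name legal on Windows.
DECLARE @f varchar(8000)
SET @f = 'C:\temp\pubs_'
       + CONVERT(char(8), CURRENT_TIMESTAMP, 112)
       + '_'
       + REPLACE(CONVERT(char(8), CURRENT_TIMESTAMP, 108), ':', '')
       + '.bak'
BACKUP DATABASE pubs
TO DISK = @f
```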

does anyone know how to run a batch file using vb script?

This is not working.

Set oShell = CreateObject("WScript.Shell")
oShell.Exec "test.bat"

Try the "call" command. It works in similar scenarios. I have not tested it in VBScript from SSIS but it should work just fine.

Here is the modified command:

Set oShell = CreateObject("WScript.Shell")
oShell.Exec "call test.bat"

Thanks,
Greg Van Mullem|||That doesn't seem to work. The error is can't find specified file.|||

Try Run instead of Exec.

Set oShell = CreateObject("WScript.Shell")
oShell.Run "test.bat"

|||Call Shell("c:\test.bat")
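
Pulling the replies together: the "can't find specified file" error usually means the relative path isn't being resolved, so a full, quoted path run through cmd /c (batch files need the command interpreter) is the usual fix. A sketch, with a placeholder path:

```vbscript
' Launch a batch file from VBScript. The path is a hypothetical example.
Dim oShell
Set oShell = CreateObject("WScript.Shell")

' cmd /c runs the batch file and then exits; the quoted full path avoids
' "file not found" when the script's working directory is somewhere else.
' 1 = normal window, True = wait for the batch file to finish.
oShell.Run "cmd /c ""C:\scripts\test.bat""", 1, True
```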

Does anyone know how to change the Document Map root text? - DISPLAYNAME property

hi, guys
Does anyone know how to change the Document Map root text? For example, i have a report, the file name is sc.rdl, and then the root is sc. Apparently this is not good. I am thinking is there a way to change it.|||If you are using the ReportViewer controls in VS 2005, set DisplayName on either ReportViewer.LocalReport or ReportViewer.ServerReport, depending on the mode you are using. Setting DisplayName will also affect the generated file name for export. If you are connecting to the server directly via url access or using Report Server 2000, there is no way to change this value.|||

Brian, I want to be able to assign displayname at report design time....then be able to extract it out for display purposes when using the reportviewer control connecting to a server report. Doesn't sound like it's possible but is this something for a future enhancement. Currently, using the report viewer control, there's limited properties exposed when using ReportViewer.ServerReport.<property>. Would be nice to get at the author, description, etc. (and have an additional displayname property available).
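
For reference, a sketch of Brian's suggestion as it might look in form code, assuming the WinForms ReportViewer control; the control name, report path, and display name below are placeholders:

```csharp
// Inside a form hosting a Microsoft.Reporting.WinForms.ReportViewer.
using Microsoft.Reporting.WinForms;

// Local processing mode: DisplayName becomes the document map root
// and the default export file name.
reportViewer1.ProcessingMode = ProcessingMode.Local;
reportViewer1.LocalReport.ReportPath = @"Reports\sc.rdl";
reportViewer1.LocalReport.DisplayName = "Sales Summary";

// Server mode equivalent:
// reportViewer1.ProcessingMode = ProcessingMode.Remote;
// reportViewer1.ServerReport.DisplayName = "Sales Summary";

reportViewer1.RefreshReport();
```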
