Showing posts with label production. Show all posts

Sunday, March 25, 2012

Does SQL degrade when DBCC Traceon 1222 is switched on?

Hi,

I'm seeing a few deadlocks on my SQL Server 2005 production database and wondered what I could expect if I switch on trace flag 1222. Can this cause more deadlocks to occur? Will SQL Server performance degrade? Is it safe to have this flag set on a production server, or is there another method you would recommend?

Thanks

Martin

In general, any extra logging causes a small performance impact, but it will not cause extra deadlocks. The impact is far more acceptable than having deadlocks, so until you have found the root cause of these deadlocks I would not hesitate to use the trace flag.

Do not worry too much about the performance impact; it is very small and definitely less than the cost of having deadlocks.
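A minimal sketch of switching the flag on and verifying it (standard DBCC syntax; the -1 argument makes the flag global for all sessions):

```sql
-- Write detailed deadlock reports to the SQL Server error log, for all sessions.
DBCC TRACEON (1222, -1);

-- Verify the flag is active (Status = 1, Global = 1).
DBCC TRACESTATUS (1222, -1);

-- Switch it off again once the root cause has been found.
DBCC TRACEOFF (1222, -1);
```

Note that a flag enabled this way is lost at the next service restart; add -T1222 as a startup parameter if it should survive restarts.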

WesleyB

Visit my SQL Server weblog @ http://dis4ea.blogspot.com


Thursday, March 22, 2012

Does a restore operation break the tlog backup chain?

Hi all!
I'm thinking of switching my production db to the full recovery model.
Now, I have some questions regarding the tlog backup chain. What can
possibly break the chain of transaction log backups?
And yet another question: does a restore operation break the transaction
log backup chain? I mean, should every restore operation be followed by a
full db backup?
with regards
Maciej Szymanski

> What can possibly break chain of transaction log backups?
In the FULL recovery model, truncating the log without creating a log backup
file (BACKUP LOG WITH TRUNCATE_ONLY or NO_LOG) will break the log backup
sequence.
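For illustration, this is what the chain-breaking operation looks like (SQL Server 2000/2005 syntax; both options were removed in SQL Server 2008):

```sql
-- Truncates the log WITHOUT producing a backup file: every subsequent
-- log backup will fail until a new full (or differential) backup
-- re-establishes the chain.
BACKUP LOG MyDatabase WITH TRUNCATE_ONLY;
```

Switching the database to the SIMPLE recovery model breaks the chain in the same way.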
> And yet another question. Does restore operation break transaction log
> backups chain? I mean, should every restore operation be followed by
> full db backup?
The script below illustrates that a backup isn't needed following a restore.
CREATE DATABASE MyDatabase
ALTER DATABASE MyDatabase
SET RECOVERY FULL
GO
USE MyDatabase
CREATE TABLE MyTable(Col1 int NOT NULL)
INSERT INTO MyTable VALUES(1)
BACKUP DATABASE MyDatabase
TO DISK='C:\Backups\MyDatabase.bak'
WITH INIT
INSERT INTO MyTable VALUES(2)
BACKUP LOG MyDatabase
TO DISK='C:\Backups\MyDatabase_Log1.bak'
WITH INIT
GO
USE master
RESTORE DATABASE MyDatabase
FROM DISK='C:\Backups\MyDatabase.bak'
WITH NORECOVERY
RESTORE LOG MyDatabase
FROM DISK='C:\Backups\MyDatabase_Log1.bak'
WITH RECOVERY
GO
USE MyDatabase
INSERT INTO MyTable VALUES(3)
BACKUP LOG MyDatabase
TO DISK='C:\Backups\MyDatabase_Log2.bak'
WITH INIT
GO
USE master
RESTORE DATABASE MyDatabase
FROM DISK='C:\Backups\MyDatabase.bak'
WITH NORECOVERY
RESTORE LOG MyDatabase
FROM DISK='C:\Backups\MyDatabase_Log1.bak'
WITH NORECOVERY
RESTORE LOG MyDatabase
FROM DISK='C:\Backups\MyDatabase_Log2.bak'
WITH RECOVERY
GO
USE MyDatabase
SELECT * FROM MyTable -- 3 rows returned
Hope this helps.
Dan Guzman
SQL Server MVP
"Maciej Szymanski" <maciek@kolobrzeg.com.pl> wrote in message
news:85a5f2.0401170757.4de3ca20@posting.google.com...
> Hi all!
> I'm thinking of switching my production db to full recovery model.
> Now, I've some questions regarding tlog backups chain. What can
> possibly break chain of transaction log backups?
> And yet another question. Does restore operation break transaction log
> backups chain? I mean, should every restore operation be followed by
> full db backup?
> with regards
> Maciej Szymanski


Wednesday, March 21, 2012

Do query plans change on static tables?

I have attached two query plans, which were generated by the production server
at different times (1 day apart). (I could not send mail with the attached
plans; I had to remove them.)
I have executed the same query every day over two weeks to see how it
performs on the prod server. I got two different plans. I don't
understand how the plan can change so drastically when there are not many
data updates on the tables referred to in the query. These tables are very
static and I am running DBCC DBREINDEX every night on them. Plan 1 has an
index scan and takes a long time to execute (11 sec); Plan 2 has no
index scan and so executes fast (<200 ms). I have no clue why it
goes for an index scan some of the time and not at other times. I hope
someone will have an answer for it.
Windows 2000 Server/SQL Server 2000 Standard SP4, 2 cpu, 1GB RAM, RAID 1
Disk
Thanks,
Subbu.
Statistics can influence the plan choice. Not all stats are tied to
indexes. So, you likely will want to make sure that all statistics are
up-to-date before making any conclusions. (update statistics <tablename>)
If you are close to a boundary between two plan choices, small changes can
push you from one plan to another on a recompile.
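The statistics refresh suggested above can be sketched as follows (MyTable is a placeholder table name):

```sql
-- Refresh all statistics on one table with a full scan.
-- (DBCC DBREINDEX refreshes index-linked statistics, but not
-- auto-created column statistics.)
UPDATE STATISTICS MyTable WITH FULLSCAN;

-- Or refresh statistics for every table in the database in one pass:
EXEC sp_updatestats;

-- SQL Server 2000: check when each statistic on the table was last updated.
SELECT name, STATS_DATE(id, indid) AS last_updated
FROM sysindexes
WHERE id = OBJECT_ID('MyTable');
```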
Hope that helps,
Conor
"subbu" <subbaiahd@hotmail.com> wrote in message
news:%23wq2y6NWGHA.1192@TK2MSFTNGP03.phx.gbl...
> Plan 1: No index SCAN
> Plan 2: Index SCAN
> I had to send in two separate mails as it crossed the max size limit
> Thanks,
> Subbu.
> "subbu" <subbaiahd@hotmail.com> wrote in message
> news:%238V3L3NWGHA.4212@TK2MSFTNGP03.phx.gbl...


Monday, March 19, 2012

Does Log Shipping Handle Schema Changes from Primary to Secondary

Anyone, we have a vendor application where we install modules or
software that updates our primary production database. We have a
reporting database that is updated via log shipping. I believe that
the article by Paul Ibison on log shipping v. replication answers my
question, but I wanted to know if I was missing something. So here goes:
when we do an install of a module or software, it modifies our primary
database. From what Paul is saying in his article, it looks like any
schema changes in the primary database would not be done in the reporting
database. Is this correct?
References:
Log Shipping v. Replication
http://www.sqlservercentral.com/columnists/pibison/logshippingvsreplication.asp
Using Secondary Servers for Query Processing
http://msdn2.microsoft.com/en-us/library/ms189572.aspx
Thanks,
Chad
Hi Chad - absolutely, log shipping handles this. It is one of the
differences compared with replication: because the secondary is built by
restoring the primary's transaction log, ALL schema changes are carried
across. In SQL Server 2005 you can replicate DDL changes, which brings
replication closer, but even then not all changes are replicated to the
subscribers.
Cheers,
Paul Ibison SQL Server MVP, www.replicationanswers.com

Friday, March 9, 2012

Does Disk space affect Database performance

We have around 12 databases running on a live production server which is
segmented into 3 physical drives.
The E drive is where data and logs are stored.
We have implemented a backup strategy over the past week. As a result,
free space on the E drive has shrunk.
It's a total of 135 GB but has only 125 MB free.
Consequently we have noticed that various VB6 and MS Access applications
are suffering from slow performance.
But the apps are merely querying data and bringing it back to the client.
My question is: is the lack of disk space affecting performance?
The lack of disk space will cause performance problems; IIRC the MS
recommendation is to not exceed about 80% capacity on a disk/disk set.
Not sure what you mean by backing up causing the disk to shrink in
size - presumably the backup files are consuming the free space. It is
also not a good idea to have log and data files on the same physical
drive.
ALI
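As a quick check, free space per drive can be queried from T-SQL; xp_fixeddrives is undocumented but widely used on SQL Server 2000/2005:

```sql
-- Lists each fixed drive letter with its free space in MB.
EXEC master..xp_fixeddrives;
```

If the E drive is really down to 125 MB, moving or pruning the backup files is the obvious first step, since data files, log growth, and backups are all competing for the same space.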


Friday, February 24, 2012

does "with (nolock)" absolutely guarantee no locks are taken?

I need to run a few short-running queries against a production system. I need to be absolutely certain that SS doesn't take out any locks on the table as a result.

Does "with (nolock)" absolutely guarantee this? I've read BOL on the topic and I understand isolation level read-uncommitted. But I want to validate that there's not any undocumented behavior in SS that might violate the documentation (which clearly states that no shared locks are issued).

Thanks!

You're talking two different things.

with (nolock) is a HINT. That means there is a possibility that SS will override it.

SET TRANSACTION ISOLATION LEVEL is a command. AFAIK, SS won't override this.
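For reference, the two forms under discussion look like this (MyTable is a placeholder):

```sql
-- Table hint: applies only to this one reference to MyTable.
SELECT * FROM MyTable WITH (NOLOCK);

-- Session setting: applies to every query on this connection until changed.
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT * FROM MyTable;
```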

SQL Server honors the read uncommitted hint. It is, however, possible that you might get some errors due to the dirty read behavior. The transaction isolation level is recommended if you do not want to specify the hint on every table that you query, and it applies for the whole session. You can also set it in the tools options so it is always automatic. Note that this does affect the behavior of every query, since you will be reading uncommitted data, i.e., performing dirty reads. In SQL Server 2005, you can use the READ COMMITTED SNAPSHOT option at the database level, or the new snapshot isolation level, which uses a versioning mechanism to provide a consistent view of data without taking locks.
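The SQL Server 2005 options mentioned can be enabled per database like this (MyDatabase is a placeholder; note that READ_COMMITTED_SNAPSHOT waits for exclusive access to the database before it takes effect):

```sql
-- Row-versioned read committed: readers no longer block on writers.
ALTER DATABASE MyDatabase SET READ_COMMITTED_SNAPSHOT ON;

-- Additionally allow the full SNAPSHOT isolation level for sessions
-- that explicitly request it.
ALTER DATABASE MyDatabase SET ALLOW_SNAPSHOT_ISOLATION ON;
```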

The answer I am about to give is not likely to be pertinent to your question since you state short-running, but just in case, since you were so adamant about NO LOCKS :) There is one tiny exception to the nolock/read uncommitted isolation level, in that it does take a shared schema lock (Sch-S) so other users cannot drop/alter the table while your query is running. In 2005, run this:

create table testLocks
(
testLockId int,
bigValue char(8000) default (replicate('*',8000))
)

insert into testLocks(testLockId)
select 1
go 1000

set transaction isolation level read uncommitted
select * from testLocks
cross join testLocks as t2

Then on another connection run this (the first query will return 1,000,000 16 KB rows, so it will take a while :)

select login_name,
case des.transaction_isolation_level
when 0 then 'Unspecified' when 1 then 'ReadUncommitted'
when 2 then 'ReadCommitted' when 3 then 'Repeatable'
when 4 then 'Serializable' when 5 then 'Snapshot'
end as transaction_isolation_level,
request_session_id, resource_type, resource_subtype, request_mode,
request_type, request_status, request_owner_type,
case when resource_type = 'object' then object_name(resource_associated_entity_id)
when resource_type = 'database' then db_name(resource_associated_entity_id)
when resource_type in ('key','page') then
(select object_name(object_id) from sys.partitions
where hobt_id = resource_associated_entity_id)
else cast(resource_associated_entity_id as varchar(20))
end as resource_name
from sys.dm_tran_locks dtl
left outer join sys.dm_exec_sessions des
on dtl.request_session_id = des.session_id
where request_session_id <> @@spid

On my 3 GHz, 512 MB machine running Express Edition (on a Media Center PC) I got two rows, one of which was the shared schema lock; the other was a bulk operation lock, likely in tempdb building these rows. Unless you are dropping and creating tables, this is unlikely to be a concern.

If you perform an insert/update in a transaction running at isolation level READ UNCOMMITTED, it will still acquire X locks. I suppose your short transactions are read-only, though.

We all kind of forgot about that one, huh? Good point :)

thanks everyone for your very helpful information.

Sorry, I should have mentioned that I'm only running queries.
