Thursday, March 29, 2012
Does SQL use an index in the following select statement?
I have the following table structure:
PK_Column1
PK_Column2
IndexedColumn
Column_ABC
Column_XYZ
Does SQL Server 2005 use the IndexedColumn index to find the MIN and MAX
values in the following select statement:
SELECT MIN(IndexedColumn), MAX(IndexedColumn) FROM MyTable WHERE
PK_Column1 = @MyParam
When I run this statement it is too slow, and I see a lot of reads in SQL
Server Profiler.
Is there any way to improve the performance in this case?
Message posted via webservertalk.com
http://www.webservertalk.com/Uwe/Forum...amming/200512/1|||Hi,
If IndexedColumn is clustered, it probably would, as it can do a clustered
index range scan.
Otherwise, it may not. It all depends on how up to date the statistics are, the
data types of the columns, how selective the indexes are, and the number of
rows.
Show the query plan and we can tell.
Regards
--
Mike Epprecht, Microsoft SQL Server MVP
Zurich, Switzerland
IM: mike@.epprecht.net
MVP Program: http://www.microsoft.com/mvp
Blog: http://www.msmvps.com/epprecht/
"Alex via webservertalk.com" <no@.spam.pls> wrote in message
news:58e799d08a0ce@.uwe...
>I have the following table structure:
> PK_Column1
> PK_Column2
> IndexedColumn
> Column_ABC
> Column_XYZ
> Does SQL Server 2005 uses the IndexedColumn index to find the MIN and MAX
> values in the following select statement:
> SELECT MIN(IndexedColumn), MAX(IndexedColumn) FROM MyTable WHERE
> PK_Column1=@.MyParam
> When I run this statement it works too slow and I see alot of reads in SQL
> Server Profiler.
> Is there any way to improve the performance in this case?
> --
> Message posted via webservertalk.com
> http://www.webservertalk.com/Uwe/Forum...amming/200512/1|||Thank you for your answer.
Here are more details:
PK_Column1 smallint
PK_Column2 int
IndexedColumn DateTime (NON-CLUSTERED and not unique)
Column_ABC varchar
Column_XYZ varbinary(BLOB)
The table has about 4M rows.
>Show the query plan and we can tell.
How can I get it? I am using the Standard edition of SQL Server 2005.
Message posted via webservertalk.com
http://www.webservertalk.com/Uwe/Forum...amming/200512/1|||If there is a clustered index on PK_Column1, then it might do an index
scan on the index of IndexedColumn. However, the query would benefit
more from an index on (PK_Column1, IndexedColumn).
I am not running SQL2K5, but I guess that SET SHOWPLAN_TEXT ON will
probably still work...
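Both suggestions above can be sketched together; this is a hedged example using the table and column names from this thread (the index name is invented):

```sql
-- Covering index for the query: seek on PK_Column1, then MIN/MAX come
-- straight from the second key column (index name is made up).
CREATE NONCLUSTERED INDEX IX_MyTable_PK1_Indexed
    ON MyTable (PK_Column1, IndexedColumn);
GO
-- Text query plan, which also works on SQL Server 2005:
SET SHOWPLAN_TEXT ON;
GO
DECLARE @MyParam smallint;
SELECT MIN(IndexedColumn), MAX(IndexedColumn)
FROM MyTable
WHERE PK_Column1 = @MyParam;
GO
SET SHOWPLAN_TEXT OFF;
GO
```

With the composite index in place, the plan should show an index seek rather than a scan.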
HTH,
Gert-Jan
"Alex via webservertalk.com" wrote:
> I have the following table structure:
> PK_Column1
> PK_Column2
> IndexedColumn
> Column_ABC
> Column_XYZ
> Does SQL Server 2005 uses the IndexedColumn index to find the MIN and MAX
> values in the following select statement:
> SELECT MIN(IndexedColumn), MAX(IndexedColumn) FROM MyTable WHERE
> PK_Column1=@.MyParam
> When I run this statement it works too slow and I see alot of reads in SQL
> Server Profiler.
> Is there any way to improve the performance in this case?
> --
> Message posted via webservertalk.com
> http://www.webservertalk.com/Uwe/Forum...amming/200512/1
Sunday, March 25, 2012
Does SQL Express support upgrade from MSDE2000 Chinese Edition?
I have no idea which language editions exist for SQL Express, since the download page doesn't have a language selection.
So does anyone know:
1. Will SQL Express have multi-language editions?
2. If SQL Express has only one language edition, how could I install it to upgrade from an existing MSDE 2000 Simplified Chinese edition?|||Upgrading across languages is not supported. If you want to upgrade your Chinese databases you'll need to wait until the Chinese edition of Express is released. The currently released version of Express is English only.
Does SQL 2005 take XML query time into account?
Is there anything (internally) like an automatic sub-select if there are standard
fields to compare, saving the XML analysis for later (to avoid unnecessary
calculations)?|||Hello Jerry,
XML queries do cache their query plans, of course, like any other T-SQL query.
However, unindexed XML has to reconstruct the node table each time. That's
an expensive unit of work (usually), so the best thing you can (usually)
do is create a primary XML index over those instances. That "kind of" does
what you're looking for here.
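A minimal sketch of that suggestion, with an invented table; note that a primary XML index requires a clustered primary key on the base table:

```sql
-- Hypothetical table holding XML instances.
CREATE TABLE dbo.Orders
(
    OrderID  int NOT NULL PRIMARY KEY CLUSTERED,
    OrderXml xml NOT NULL
);
GO
-- The primary XML index persists the shredded node table on disk, so
-- queries no longer rebuild it at run time.
CREATE PRIMARY XML INDEX PXML_Orders_OrderXml
    ON dbo.Orders (OrderXml);
GO
```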
Cheers,
Kent|||> XML queries do cache their query plans, of course, like any other T-SQL
> query. However, unindexed XML has to reconstruct the node table each time.
> That's an expensive unit of work (usually), so the best thing you can
> (usually) do is create an primary XML index over those instances. That
> "kind of" does that you're looking for here.
Thanks for all, Kent ;)
Thursday, March 22, 2012
Does Reporting Services allow you to select a dataset dynamically?
I need to call one of two stored procedures depending on which parameters were passed to the report. (Both stored procedures return the same fields.)
Does SQL Reporting Services allow you to switch the dataset or stored procedure name dynamically? If so, where do I put the logic? Right now I have it working with one stored procedure.
I'm thinking that I should just make one stored procedure that takes all parameters and calls one of the other two stored procedures. Do I have any options besides this?
Thanks in advance!
Your query can be expression-based, e.g.: =Iif(Parameters!SomeParameter.Value = 0, "EXEC sp1 <parameter list>", "EXEC sp2 <parameter list>"). Alternatively, if the sp choice can be based on a configuration convention, you can use the Report Server web.config file to store the config value. The latter case is demonstrated by the DynamicConnection report in this download. The report uses a dynamic connection string, but the same approach can be applied to the query.|||That worked. Thanks!
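The wrapper-procedure idea mentioned in the question could look like this sketch (procedure and parameter names are invented; both inner procedures are assumed to return the same columns):

```sql
CREATE PROCEDURE dbo.ReportData
    @Mode   int,
    @Param1 int
AS
BEGIN
    -- Dispatch to one of the two existing procedures based on @Mode.
    IF @Mode = 0
        EXEC dbo.ReportDataA @Param1;
    ELSE
        EXEC dbo.ReportDataB @Param1;
END
```

This keeps the report definition pointing at a single, static dataset query.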
Monday, March 19, 2012
Does MS know SQL2005 query performance is slower than SQL2000?
Hi,
I tested a simple query like SELECT colA FROM TableB WHERE colC = 'XX' on SQL2000 and SQL2005.
Of course, the indexes are the same and the number of records is the same.
After I executed the query I checked it with Profiler: SQL2000 took just 18 ms but SQL2005 took 118 ms in my environment. In fact, the machine SQL2005 is installed on has a better H/W spec than the SQL2000 machine.
I could not believe it, so I tested several times, but SQL2005 was slow.
After I searched this forum, I found that some people had the same problem with SQL2005 performance. I rebuilt the indexes in SQL2005, but that didn't help.
Even though I am using SP2, it is still slower than SQL2000.
Am I missing something? I could not understand how this could happen.
Does anybody have a solution?
Thank you in advance
James
Did you clear the proc cache before running the queries? Running the commands below will ensure that you are running both on common ground:
DBCC DROPCLEANBUFFERS
DBCC FREEPROCCACHE
Also, you can't always go by the time it takes to run a query. You really need to compare the logical reads: run the command SET STATISTICS IO ON before you run the statements and compare the reads. If your reads are drastically different, something may be off.
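Those measurement steps, put together as one script (the cache-clearing commands should only be run on a test server):

```sql
DBCC DROPCLEANBUFFERS;   -- flush data pages from the buffer pool
DBCC FREEPROCCACHE;      -- flush cached query plans

SET STATISTICS IO ON;    -- report logical/physical reads per statement
SET STATISTICS TIME ON;  -- report parse/compile and execution times

SELECT colA FROM TableB WHERE colC = 'XX';  -- the query from this thread

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;
```

Run the same script on both servers and compare the logical reads, not just the elapsed time.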
Tim|||
Tim,
Thank you for answer.
Actually, before I tested, I restarted all services, so it was not a caching problem.
In addition, the reads in the SQL2005 Profiler log are higher than in SQL2000, which is consistent with the result.
The funny thing is that the speed is similar once the data is cached. The problem occurs when the data has to come from disk.
I am not sure what is wrong.
I should discuss it with MS support soon.
Thank you
James
|||After you upgraded the database to SQL 2005 did you update the stats or rebuild the indexes? It's recommended to update the stats on the tables and indexes after upgrading to SQL 2005 to get proper query plans in SQL 2005.|||Thank you Denny for replying
Unfortunately, it didn't work
Actually, I didn't migrate the DB from SQL2000. I just created exactly the same DB as on SQL2000.
100 ms is not a big deal for a single SQL statement, but if a stored procedure has 1000 SQL statements it becomes 100,000 ms, which is significant.
This would explain why the same stored procedure is slower than on SQL2000.
James.
|||Try optimising the data disks:
1) Set the disk to Basic disk
2) If the disk subsystem is RAID, set the stripe size to 64k, and the controller cache to 100% write.
3) Using DISKPART, create the partition using the command CREATE PARTITION PRIMARY ALIGN=64
4) Format the data disk with a cluster size of 64k.
Now you're ready to go from the disk side of things!
|||Does the execution plans show the SQL Servers taking the same path to the data? Where do the cost differences show up?|||Danny and BigE
Thank you for replying.
Well, the execution plans are exactly the same. I am not sure where the difference comes from.
As I said, the machine with SQL2005 has the better H/W spec, so I don't think it is a H/W problem, as BigE suggested.
I might try what BigE suggested, but I can't accept that we should have to tune this just to run SQL2005.
Think about it: SQL2005 is a more advanced version than SQL2000, which is 7 years old! Why should a user have to worry about those kinds of disk settings? Even if it is true, what is the big benefit of upgrading for a user running a small or medium application?
Anyway, if you have two machines running SQL2000 and SQL2005, just try a select statement and check reads and duration.
You will notice what I am saying.
Hi James,
Did you get the solution as you described?
I have same problem and I cannot find any solutions.
Please help.
Clara
|||Thank you BigE
I think your suggestion might improve performance, but here is my concern about using SQL2005.
If it is a problem of disk speed, why doesn't MS provide a function to create a DATABASE on top of a MEMORY DISK?
In other words, MS could add a memory-disk database to SQL2008 for better performance!
Maybe they will say I am crazy, but... if you have a UPS, then SQL Server could dump that memory database to disk while the UPS is running.
Anyway, I couldn't accept that explanation because, as I said, SQL2005 is running slower even though it is on the better server.
Replying to Clara:
Sorry, I could not find the solution yet. An MS consultant I know gave me some suggestions, but they didn't work.
Maybe I should try the SQL2008 CTP instead of SQL2005.
Regards,
James Lim
Sunday, March 11, 2012
Does linked server solve performance issue?
Hi,
I've got this query using four tables in 3 different databases residing on
the same server:
select top 5 * from dblezen.dbo.ads ta
left outer join dbalgemeen.dbo.users tb on ta.ads_usrid = tb.usr_id
left outer join dbingeven.dbo.ads tc on ta.ads_ovid = tc.ads_adid
left outer join dbingeven.dbo.edition td on tc.ads_editionid = td.ed_id
where ta.ads_boonline > dateadd(n, -100, getdate())
In the near future we'll probably put the 3 databases on 3 different
physical servers. So I'll have to create linked servers, meaning I'll have
to execute something like this:
select top 5 * from server1.dblezen.dbo.ads ta
left outer join server2.dbalgemeen.dbo.users tb on ta.ads_usrid = tb.usr_id
left outer join server3.dbingeven.dbo.ads tc on ta.ads_ovid = tc.ads_adid
left outer join server3.dbingeven.dbo.edition td on tc.ads_editionid = td.ed_id
where ta.ads_boonline > dateadd(n, -100, getdate())
First question:
Can I arrange for the server names server1, server2 and server3 to be logical
names (different from the computer name), all referring to the same local
server?
If this is possible, I guess I can adapt all my stored procedures before
effectively putting the 3 databases on 3 different servers, and if
necessary, I can go back to 1 server again afterwards, without having to
change my stored procedures again.
Second question:
If it can be done, will it have consequences on the execution time of
the queries? Will there be overhead 1) because SQL server is going to
use distributed transaction instead of local transactions or 2) because
SQL server has to translate the logical server names into physical
server names while it's actually not necessary when the databases
reside on the same server.
Third question:
Is moving the 3 databases to 3 different servers and starting to use linked
servers the obvious best option to resolve the performance problem of our
database server?
At the moment we have one IIS-server, running ASP.NET and one database
server, running SQL Server, using 3 databases.
- dbingeven is mainly used to insert new rows
- dblezen is mainly used to read rows (full text indexed)
- dbalgemeen contains general data used by the other two (user data,
parameters, statistical data, ...)
Data is continuously inserted in dbingeven and continuously copied
(after processing) to dblezen.
All 3 databases contain stored procedures referring to each other all the
time (joins as in the query above, as well as calling each other's stored
procedures).
Fourth question:
Is it predictable that the performance gain (in CPU and disk access)
from spreading the data will be lost to network traffic and distributed
transaction processing, meaning our problem will not really be solved?
Thanks in Advance,
Peter Van Wilrijk.|||Why don't you replicate the data back to the server that your query is run
on, and select from there?
"Peter Van Wilrijk" <pro@.koopjeskrant.be> wrote in message
news:N_Vte.126388$I62.7018547@.phobos.telenet-ops.be...
> Hi,
> I've this query using four tables of 3 different databases residing on the
> same server
> select top 5 * from dblezen.dbo.ads ta
> left outer join dbalgemeen.dbo.users tb on ta.ads_usrid = tb.usr_id
> left outer join dbingeven.dbo.ads tc on ta.ads_ovid = tc.ads_adid
> left outer join dbingeven.dbo.edition td on tc.ads_editionid = td.ed_id
> where ta.ads_boonline > dateadd(n, -100, getdate())
> In the near future we'll propably put the 3 databases on 3 different
> physical servers. So I'll have to create linked servers, meaning I've
> got to execute something like this :
> select top 5 * from server1.dblezen.dbo.ads ta
> left outer join server2.dbalgemeen.dbo.users tb on ta.ads_usrid = tb.usr_id
> left outer join server3.dbingeven.dbo.ads tc on ta.ads_ovid = tc.ads_adid
> left outer join server3.dbingeven.dbo.edition td on tc.ads_editionid = td.ed_id
> where ta.ads_boonline > dateadd(n, -100, getdate())
> First question:
> Can I fix that the servernames server1, server2 and server3 are logical
> names (different than the computername) all refering to the same local
> server?
> If this is possible I guess I can adapt all my stored procedures before
> effectively put the 3 databases on 3 different servers, and if necessary,
> I can go back to 1 server again afterwards, without having to change my
> stored procedures again.
> Second question:
> If it can be done, will it have consequences on the execution time of the
> queries? Will there be overhead 1) because SQL server is going to use
> distributed transaction instead of local transactions or 2) because SQL
> server has to translate the logical server names into physical server
> names while it's actually not necessary when the databases
> reside on the same server.
> Third question:
> Is moving the 3 databases to 3 different servers and start using linked
> server the obvious best option to resolve the performance problem of our
> database server?
> At the moment we have one IIS-server, running ASP.NET and one database
> server, running SQL Server, using 3 databases.
> - dbingeven is mainly used to insert new rows
> - dblezen is mainly used to read rows (full text indexed)
> - dbalgemeen contains general data used by the other two (user data,
> parameters, statistical data, ...)
> Data is continuously inserted in dbingeven and continuously copied (after
> processing) to dblezen.
> All 3 databases contain stored procedures refering each other all the time
> (joins as in the query above as well as calling each others stored
> procedures).
> Fourth question:
> Is it predictable the gain of performance win (on CPU, and disk access)
> by spreading the data will be lost on network traffic and distributed
> transaction processes, meaning our problem will not really be solved?
>
> Thanks in Advance,
> Peter Van Wilrijk.
>
>
>
>
>|||ChrisR wrote:
> Why don't you replicate the data back to the server that you're query is run
> on, and select from there?
>
Thanks, good question!
I surely must start checking out how to implement replication... but I
guess it will not be an option for our website, because our users, while
surfing, read, insert and update data in all three databases. I
guess this means we should replicate continuously in two directions, so
updates, deletes and inserts on server 2 must be immediately available
on server 1 and vice versa. Can replication do that?
Kind regards,
Peter Roothans.|||Yes. Look up Transactional Replication in BOL. You will want to use the
Immediate Updating option.
"Peter Van Wilrijk" <pro@.koopjeskrant.be> wrote in message
news:Tu9ue.126847$Xj7.7072901@.phobos.telenet-ops.be...
> ChrisR wrote:
>> Why don't you replicate the data back to the server that you're query is
>> run on, and select from there?
> Thanks, Good question?
> I surely must start checking out how to implement replication ... but I
> guess it will not be an option for our website, because our users, while
> surfing ... read, insert and update data in all three databases. I guess
> this means we should replicate continuously in two directions, so updates,
> deletes and inserts on server 2 must be immediately available on server 1
> and vice versa. Can replication do that?
> Kind regards,
> Peter Roothans.
>|||Thanks ChrisR.
I just found out you can give 1 server multiple names as follows:
EXEC sp_addlinkedserver N'SRVDBI', ' ', N'SQLOLEDB', N'SRV-WEBDB'
EXEC sp_addlinkedserver N'SRVDBL', ' ', N'SQLOLEDB', N'SRV-WEBDB'
EXEC sp_addlinkedserver N'SRVDBA', ' ', N'SQLOLEDB', N'SRV-WEBDB'
So, this way I can move my databases to any server without the need to
adapt all the code. The only thing to do then is to make the logical
name refer to another physical server.
I've read BOL concerning Transactional Replication. I'm certainly going
to try it out, but since I'm not familiar with it, and since the
document warns about loopback detection when replicating multiple related
databases, I'll start with the linked server solution.
Thanks,
Kind regards,
Peter Van Wilrijk
ChrisR wrote:
> Yes. Look up Transactional Replication in BOL. You will want to use the
> Immediate Updating option.
>
> "Peter Van Wilrijk" <pro@.koopjeskrant.be> wrote in message
> news:Tu9ue.126847$Xj7.7072901@.phobos.telenet-ops.be...
>>ChrisR wrote:
>>Why don't you replicate the data back to the server that you're query is
>>run on, and select from there?
>>
>>Thanks, Good question?
>>I surely must start checking out how to implement replication ... but I
>>guess it will not be an option for our website, because our users, while
>>surfing ... read, insert and update data in all three databases. I guess
>>this means we should replicate continuously in two directions, so updates,
>>deletes and inserts on server 2 must be immediately available on server 1
>>and vice versa. Can replication do that?
>>Kind regards,
>>Peter Roothans.
>
>
Does LIKE operator have major performance issue with variables?
Below are two similar SQL statements that give the same results:
1. SELECT * FROM InvoiceDtl WHERE IvoNum LIKE ('Ivo-0510-00001')
2. DECLARE @IvoNum AS NVARCHAR (20)
SET @IvoNum = 'Ivo-0510-00001'
SELECT * FROM InvoiceDtl WHERE IvoNum LIKE (@IvoNum)
InvoiceDtl is a big table with 2.3++ million rows. IvoNum is of type
NVARCHAR (20) and has a non-clustered index.
I run both statements seperately in Query Analyzer. Statement 1 takes 1-2
seconds. But statement 2 takes 3-4 minutes (and makes my harddisk run mad)!
Cld anyone pls kindly advise why that is happening? TQ.SQL Server processes batches of SQL statements in 3 steps:
1) Parsing: check for invalid code
2) Compilation: generate an execution plan, which tables/indexes to use, and
the order to access them in etc
3) Execution: execute the execution plan generated in step 2
Now for the first statement SQL knows the value of IvoNum it has to look for
as early as step 2, because it is a literal. The Query optimizer can look up
statistics on the indexes and estimate how often the value 'Ivo-0510-00001'
appears in the column IvoNum, and generate the fastest execution plan to be
executed by step 3.
For the second statement, SQL Server does NOT know the value of IvoNum it
has to look for as early as step 2. @IvoNum is a variable, and the assignment of
a value to this variable only happens during execution in step 3. If T-SQL
had constants, you could declare @IvoNum as a constant, and the value would
be available in step 2, but T-SQL only has variables, not constants. So the
Query Optimizer does not know in step 2 what the value of @IvoNum
will be during execution. So it uses an estimate for the number of rows that
might match, and IIRC, that estimate is 30%. Remember that the value of
@IvoNum is unknown during step 2, so it might be 'Ivo-0510-00001', 'Ivo%',
'%0510-00001' or even '%' in step 3. This estimate leads to a very different
execution plan, which in some cases will include scanning all 2.3 million rows in
the table.
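One way around the variable problem (a sketch, not from the thread): pass the value as a real parameter via sp_executesql. Unlike local variables, parameters are visible to the optimizer at compile time, so it can use the index statistics for the actual value:

```sql
DECLARE @IvoNum nvarchar(20);
SET @IvoNum = N'Ivo-0510-00001';

-- The parameter @p is known to the optimizer when the inner
-- statement is compiled, unlike the outer local variable.
EXEC sp_executesql
    N'SELECT * FROM InvoiceDtl WHERE IvoNum = @p',
    N'@p nvarchar(20)',
    @p = @IvoNum;
```

This should produce the same index-seek plan as the literal version of the query.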
Jacco Schalkwijk
SQL Server MVP
"HardKhor" <HardKhor@.discussions.microsoft.com> wrote in message
news:43932C3C-D0D8-42EE-AE09-7388DBA8D6CE@.microsoft.com...
> Hi all,
> Below are two similar SQL statements that give the same results:
> 1. SELECT * FROM InvoiceDtl WHERE IvoNum LIKE ('Ivo-0510-00001')
> 2. DECLARE @.IvoNum AS NVARCHAR (20)
> SET @.IvoNum = 'Ivo-0510-00001'
> SELECT * FROM InvoiceDtl WHERE IvoNum LIKE (@.IvoNum)
> InvoiceDtl is a big table with 2.3++ million rows. IvoNum is of type
> NVARCHAR (20) and has a non-clustered index.
> I run both statements seperately in Query Analyzer. Statement 1 takes 1-2
> seconds. But statement 2 takes 3-4 minutes (and makes my harddisk run
> mad)!
> Cld anyone pls kindly advise why that is happening? TQ.|||HardKhor,
I got some questions for you here...
1) Why do you have nvarchar as the datatype here? Wouldn't varchar or char be
better?
2) Why 20 chars at most? If 'Ivo-0510-00001' is the longest, why not
char(14)?
3) Why use LIKE if 'Ivo-0510-00001' is an exact match? I.e. ... WHERE
Something = 'Ivo-0510-00001'
/Lasse
"HardKhor" <HardKhor@.discussions.microsoft.com> wrote in message
news:43932C3C-D0D8-42EE-AE09-7388DBA8D6CE@.microsoft.com...
> Hi all,
> Below are two similar SQL statements that give the same results:
> 1. SELECT * FROM InvoiceDtl WHERE IvoNum LIKE ('Ivo-0510-00001')
> 2. DECLARE @.IvoNum AS NVARCHAR (20)
> SET @.IvoNum = 'Ivo-0510-00001'
> SELECT * FROM InvoiceDtl WHERE IvoNum LIKE (@.IvoNum)
> InvoiceDtl is a big table with 2.3++ million rows. IvoNum is of type
> NVARCHAR (20) and has a non-clustered index.
> I run both statements seperately in Query Analyzer. Statement 1 takes 1-2
> seconds. But statement 2 takes 3-4 minutes (and makes my harddisk run
mad)!
> Cld anyone pls kindly advise why that is happening? TQ.
Does It Matter...
When creating a select statement with joins... does it matter where you place
additional where-clause criteria.
Considering the two examples below, is it more efficient to place additional
filtering criteria within the join section? Does it weed out extra rows before
joining them? Or should I put anything that is not pertinent to the join
itself down below in its own where clause?
EXAMPLE 1
SELECT
A.column_1
B.column_2
C.column_3
FROM
table_a A
inner join table_b B ON B.column_1 = A.column_1 and
B.column_2 = 1 and
B.column_3 = 'boys' and
B.column_4 between '01/01/06' and '01/31/06'
inner join table_c C ON C.column_2 = B.column_2
EXAMPLE 2
SELECT
A.column_1
B.column_2
C.column_3
FROM
table_a A
inner join table_b B ON B.column_1 = A.column_1
inner join table_c C ON C.column_2 = B.column_2
WHERE
B.column_2 = 1 and
B.column_3 = 'boys' and
B.column_4 between '01/01/06' and '01/31/06'|||For INNER JOINs, I like to put the JOIN criteria in the ON clause and the
filtering criteria in the WHERE clause. This makes it very clear to anyone
who inherits the code (or myself, when I go senile) which criteria are for
the relationship and which criteria are meant to limit the end result.
For OUTER JOINs, it can certainly matter, but it depends on your desired
result. You may exclude rows by moving criteria from ON to WHERE or vice
versa. I don't know of any situations in INNER JOIN where this is true, but
I bet Itzik or Steve will reproduce one if it exists.
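A sketch of the OUTER JOIN case Aaron describes (my addition, reusing the thread's table names): moving a filter on the inner table from ON to WHERE changes the result.

```sql
-- With a LEFT JOIN, a predicate on the right-hand table behaves
-- differently in ON versus WHERE.

-- Filter in ON: every row of table_a survives; where B does not match,
-- its columns come back as NULL.
SELECT A.column_1, B.column_2
FROM table_a A
LEFT JOIN table_b B
    ON B.column_1 = A.column_1
   AND B.column_2 = 1;

-- Filter in WHERE: applied after the join, so rows where B.column_2 is
-- NULL are discarded -- effectively an inner join again.
SELECT A.column_1, B.column_2
FROM table_a A
LEFT JOIN table_b B
    ON B.column_1 = A.column_1
WHERE B.column_2 = 1;
```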
"rmg66" <rgwathney__xXx__primepro.com> wrote in message
news:u8ljCwnfGHA.764@.TK2MSFTNGP03.phx.gbl...
MSSQl 2000
When creating a select statement with joins... does it matter where you
place additional where-clause criteria.
Considering the two examples below, is it more efficient to place additional
filtering criteria within the join section.? Does it weed out extra rows
before joining them? or should I put anything that is not pertinent to the
join itself down below in its own where clause?
EXAMPLE 1
SELECT
A.column_1
B.column_2
C.column_3
FROM
table_a A
inner join table_b B ON B.column_1 = A.column_1 and
B.column_2 = 1 and
B.column_3 = 'boys' and
B.column_4 between '01/01/06' and '01/31/06'
inner join table_c C ON C.column_2 = B.column_2
EXAMPLE 2
SELECT
A.column_1
B.column_2
C.column_3
FROM
table_a A
inner join table_b B ON B.column_1 = A.column_1
inner join table_c C ON C.column_2 = B.column_2
WHERE
B.column_2 = 1 and
B.column_3 = 'boys' and
B.column_4 between '01/01/06' and '01/31/06'|||Pre SQL 2000 SP4 I would say "Yes" it does matter.
Post SP4 I would say "No" it doesn't matter.
SP4 has given some huge performance gains at my site.
However, try it for yourself. Use Profiler / view the Execution plan etc.
To establish your version use SELECT @.@.VERSION.
http://www.aspfaq.com/SQL2000Builds.asp
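To follow Ryan's advice without Profiler, the two shapes can be compared directly in Query Analyzer (a sketch, my addition, using the thread's tables):

```sql
-- Compare I/O for the two query shapes; identical "logical reads"
-- figures (and identical graphical plans) mean the optimizer treated
-- the two forms the same.
SET STATISTICS IO ON;

-- criteria in the ON clause
SELECT A.column_1
FROM table_a A
INNER JOIN table_b B
    ON B.column_1 = A.column_1 AND B.column_2 = 1;

-- criteria in the WHERE clause
SELECT A.column_1
FROM table_a A
INNER JOIN table_b B
    ON B.column_1 = A.column_1
WHERE B.column_2 = 1;

SET STATISTICS IO OFF;
```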
--
HTH. Ryan
"rmg66" <rgwathney__xXx__primepro.com> wrote in message news:u8ljCwnfGHA.764
@.TK2MSFTNGP03.phx.gbl...
MSSQl 2000
When creating a select statement with joins... does it matter where you place
additional where-clause criteria.
Considering the two examples below, is it more efficient to place additional
filtering criteria within the join section? Does it weed out extra rows before
joining them? Or should I put anything that is not pertinent to the join
itself down below in its own where clause?
EXAMPLE 1
SELECT
A.column_1
B.column_2
C.column_3
FROM
table_a A
inner join table_b B ON B.column_1 = A.column_1 and
B.column_2 = 1 and
B.column_3 = 'boys' and
B.column_4 between '01/01/06' and '01/31/06'
inner join table_c C ON C.column_2 = B.column_2
EXAMPLE 2
SELECT
A.column_1
B.column_2
C.column_3
FROM
table_a A
inner join table_b B ON B.column_1 = A.column_1
inner join table_c C ON C.column_2 = B.column_2
WHERE
B.column_2 = 1 and
B.column_3 = 'boys' and
B.column_4 between '01/01/06' and '01/31/06'|||Thanks Aaron,
Actually I'm more concerned with performance at this point.
Any thoughts on that...
Robert
"Aaron Bertrand [SQL Server MVP]" <ten.xoc@.dnartreb.noraa> wrote in message news:OUZNW4nfGH
A.2032@.TK2MSFTNGP02.phx.gbl...
> For INNER JOINs, I like to put the JOIN criteria in the ON clause and the
filtering criteria in the WHERE clause. This makes it
> very clear to anyone who inherits the code (or myself, when I go senile) w
hich criteria are for the relationship and which
> criteria are meant to limit the end result.
> For OUTER JOINs, it can certainly matter, but it depends on your desired r
esult. You may exclude rows by moving criteria from ON
> to WHERE or vice versa. I don't know of any situations in INNER JOIN wher
e this is true, but I bet Itzik or Steve will reproduce
> one if it exists.
>
>
> "rmg66" <rgwathney__xXx__primepro.com> wrote in message news:u8ljCwnfGHA.7
64@.TK2MSFTNGP03.phx.gbl...
> MSSQl 2000
> When creating a select statement with joins... does it matter where you pl
ace additional where-clause criteria.
> Considering the two examples below, is it more efficient to place addition
al filtering criteria within the join section.? Does it
> weed out extra rows before joining them? or should I put anything that is
not pertinent to the join itself down below in its own
> where clause?
> EXAMPLE 1
> SELECT
> A.column_1
> B.column_2
> C.column_3
> FROM
> table_a A
> inner join table_b B ON B.column_1 = A.column_1 and
> B.column_2 = 1 and
> B.column_3 = 'boys' and
> B.column_4 between '01/01/06' and '01/31/06'
> inner join table_c C ON C.column_2 = B.column_2
>
> EXAMPLE 2
> SELECT
> A.column_1
> B.column_2
> C.column_3
> FROM
> table_a A
> inner join table_b B ON B.column_1 = A.column_1
> inner join table_c C ON C.column_2 = B.column_2
> WHERE
> B.column_2 = 1 and
> B.column_3 = 'boys' and
> B.column_4 between '01/01/06' and '01/31/06'
>
>
>
>|||I think as a rule, the optimizer will do the same thing regardless, although
as Ryan pointed out pre SQL 2000 SP4 it makes a difference. When optimizing
the engine will check only so many possible paths before determining which
one to use, so on larger more complex queries the order of the joins and
criteria can determine which paths get evaluated before it gives up and
chooses one.
I think the bottom line is theoretically it doesn't matter, but the only way
to be totally certain is to test it out both ways. Not just the location of
the criteria, but the order of the tables as well. For simpler queries with
a handful of joins and filter criteria, when the optimizer can afford to
calculate every possibility, it should work the same. If you have dozens of
tables and just as many filters involved, it is worth playing with different
scenarios to see if it makes a difference. With more complex queries the
optimizer can find literally billions of possible execution plans, and
influencing it to look at the right ones can be hit or miss.
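If you do decide to influence the search, a query hint can pin the join order to the order written (a sketch, my addition; hints override the optimizer's costing, so use them only after measuring):

```sql
-- OPTION (FORCE ORDER) makes the optimizer join the tables in the
-- order they appear in the FROM clause, shrinking its search space.
SELECT A.column_1, B.column_2, C.column_3
FROM table_a A
INNER JOIN table_b B ON B.column_1 = A.column_1
INNER JOIN table_c C ON C.column_2 = B.column_2
WHERE B.column_2 = 1
OPTION (FORCE ORDER);
```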
"rmg66" <rgwathney__xXx__primepro.com> wrote in message
news:O1pbODofGHA.4776@.TK2MSFTNGP05.phx.gbl...
> Thanks Aaron,
> Actually I'm more concerned with performance at this point.
> Any thoughts on that...
> Robert
>
> "Aaron Bertrand [SQL Server MVP]" <ten.xoc@.dnartreb.noraa> wrote in
message news:OUZNW4nfGHA.2032@.TK2MSFTNGP02.phx.gbl...
>|||I just did one of the MS Courses last week, and the reality seems to be
what the others have said. MS considers it more correct to place the items on
the join itself, as this will help SQL to choose the best execution plan. The
idea is that placing more items on the join means that the selected table
will return fewer results before the Join is executed.
The blurb says that the Join syntax is evaluated before the Where syntax.
The course is the optimising and tuning course for SQL 2005!
My personal view is to use the Join in preference to the Where clause, I
find that it helps to make the syntax clearer and easier to understand.
Regards
Colin Dawson
www.cjdawson.com
"Jim Underwood" <james.underwoodATfallonclinic.com> wrote in message
news:eqcg3SofGHA.2208@.TK2MSFTNGP05.phx.gbl...
>I think as a rule, the optimizer will do the same thing regardless,
>although
> as Ryan pointed out pre SQL 200 SP 4 it makes a difference. When
> optimizing
> the engine will check only so many possible paths before determining which
> one to use, so on larger more complex queries the order of the joins and
> criteria can determine which paths get evaluated before it gives up and
> chooses one.
> I think the bottom line is theoretically it doesn't matter, but the only
> way
> to be totally certain is to test it out both ways. Not just the location
> of
> the criteria, but the order of the tables as well. For simpler queries
> with
> a handful of joins and filter criteria, when the optimizer can afford to
> calculate every possibility, it should work the same. If you have dozens
> of
> tables and just as many filters involved, it is worth playing with
> different
> scenarios to see if it makes a difference. With more complex queries the
> optimizer can find literally billions of possible execution plans, and
> influencing it to look at the right ones can be hit or miss.
> "rmg66" <rgwathney__xXx__primepro.com> wrote in message
> news:O1pbODofGHA.4776@.TK2MSFTNGP05.phx.gbl...
> message news:OUZNW4nfGHA.2032@.TK2MSFTNGP02.phx.gbl...
>|||> The blurb says that the Join syntax is evaluated before the Where syntax.
That is only the logical order. For inner joins, it doesn't matter, and I don't even think that the
optimizer knows what join type you expressed (the query is transformed into a tree structure before
the optimizer gets hold of it). The optimizer is free to transform the query in any way as long as
it returns the same information as if it executed the query as per the rules for the logical order.
> The course is the optimizing and tuning course for SQL 2005!
Interesting. Which one? There are two such courses, one for "admins" and one for "developers". Also,
can you point to the module and perhaps even page number and I'll have a look at how they phrase it.
> My personal view is to use the Join in preference to the Where clause, I find that it helps to
> make the syntax clearer and easier to understand.
I absolutely agree.
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
"Colin Dawson" <newsgroups@.cjdawson.com> wrote in message
news:C1Icg.74278$wl.24621@.text.news.blueyonder.co.uk...
>I just did one of the MS Courses last week, and the reality seems to be what the others have said.
>MS considers it more correct to place the items on the join itself, as this will help SQL to choose
>the best execution plan. The idea is that placing more items on the join, means that the selected
>table will return less results, before the Join is executed.
> The blurb says that the Join syntax is evaluated before the Where syntax. The course is the
> optimising and tuning course for SQL 2005!
> My personal view is to use the Join in preference to the Where clause, I find that it helps to
> make the syntax clearer and easier to understand.
> Regards
> Colin Dawson
> www.cjdawson.com
>
> "Jim Underwood" <james.underwoodATfallonclinic.com> wrote in message
> news:eqcg3SofGHA.2208@.TK2MSFTNGP05.phx.gbl...
>|||Hello Tibor
The course is 2784A: Tuning and Optimising Database Queries Using Microsoft
SQL Server 2005
The bit that I was referring to is in Unit 3. Specifically the Query
logical flow diagram on page 2.
Basically it shows the flow as
From & Join --> Where --> Select -- > .... (lots more stuff)
From experience I do agree that it doesn't seem to matter as the query
optimiser does make changes to the query as typed, into how it wants to
produce the results.
Regards
Colin Dawson
www.cjdawson.com
"Tibor Karaszi" <tibor_please.no.email_karaszi@.hotmail.nomail.com> wrote in
message news:OHntJPpfGHA.4304@.TK2MSFTNGP05.phx.gbl...
> That is only the logical order. For inner joins, it doesn't matter, and I
> don't even think that the optimizer know what join type you expressed (the
> query is transformed into a tree structure before the optimizer gets hold
> of it). The optimizer is free to transform the query in any way as long as
> it returns the same information as if it executed the query as per the
> rules for the logical order.
>
> Interesting. Which one? There are two such courses, one for "admins" and
> one for "developers". Also, can you point to the module and perhaps even
> page number and I'll have a look at how they phrase it.
>
> I absolutely agree.
> --
> Tibor Karaszi, SQL Server MVP
> http://www.karaszi.com/sqlserver/default.asp
> http://www.solidqualitylearning.com/
>
> "Colin Dawson" <newsgroups@.cjdawson.com> wrote in message
> news:C1Icg.74278$wl.24621@.text.news.blueyonder.co.uk...
>|||My personal preference is to use the primary key/foreign key relation
columns in the join clause, and all other predicates in the where
clause. This is a very consistent syntax that underscores the table
relations and automatically moves all filters to the where clause.
As mentioned before it is a different story for outer joins...
Gert-Jan
Tibor Karaszi wrote:
>
> That is only the logical order. For inner joins, it doesn't matter, and I don't even think that the
> optimizer know what join type you expressed (the query is transformed into a tree structure before
> the optimizer gets hold of it). The optimizer is free to transform the query in any way as long as
> it returns the same information as if it executed the query as per the rules for the logical order.
>
> Interesting. Which one? There are two such courses, one for "admins" and one for "developers". Also,
> can you point to the module and perhaps even page number and I'll have a look at how they phrase it.
>
> I absolutely agree.
> --
> Tibor Karaszi, SQL Server MVP
> http://www.karaszi.com/sqlserver/default.asp
> http://www.solidqualitylearning.com/
> "Colin Dawson" <newsgroups@.cjdawson.com> wrote in message
> news:C1Icg.74278$wl.24621@.text.news.blueyonder.co.uk...|||Colin,
> The course is 2784A: Tuning and Optimising Database Queries Using Microsoft SQL Server 2005
> The bit that I was referring to is in Unit 3. Specifically the Query logical flow diagram on
> page 2.
Thanks. I had a quick look through the courses after I posted prior reply, and I guessed this was
the one. The important part here is that it is the *logical* flow. Quote from the same page:
"
Note that although there is a guaranteed logical order, this is not true of the actual
physical order. The query processor can process the query in a different order but still
ensure the same results, if it can find a more efficient method for doing so.
"
If the optimizer had to respect the logical flow, then almost every query would give us
horrendous performance:
FROM, grab all columns, and even cross join if old style join syntax
WHERE, remove the rows that don't satisfy the conditions (including the join if old-style join)
GROUP BY
HAVING
SELECT, until now we had all the columns from all the tables
ORDER BY, not until now could we sort the rows
TOP, ouch, all rows had to be sorted until we throw away all but "top n".
:-)
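The logical order Tibor lists can be annotated directly on a query (my sketch, reusing the thread's example tables):

```sql
-- Clauses written in the order T-SQL requires, numbered in the order
-- they are *logically* evaluated. The physical plan may do anything
-- that yields the same result.
SELECT B.column_2, COUNT(*) AS cnt        -- 5. project columns
FROM table_a A                            -- 1. FROM/JOIN builds the row set
INNER JOIN table_b B
    ON B.column_1 = A.column_1
WHERE B.column_3 = 'boys'                 -- 2. WHERE filters rows
GROUP BY B.column_2                       -- 3. GROUP BY forms groups
HAVING COUNT(*) > 1                       -- 4. HAVING filters groups
ORDER BY cnt DESC;                        -- 6. ORDER BY sorts the output
```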
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
"Colin Dawson" <newsgroups@.cjdawson.com> wrote in message
news:VsIcg.74297$wl.32163@.text.news.blueyonder.co.uk...
> Hello Tibor
> The course is 2784A: Tuning and Optimising Database Queries Using Microsoft SQL Server 2005
> The bit that I was referring to is in Unit 3. Specifically the Query logical flow diagram on
> page 2.
> Basically it shows the flow as
> From & Join --> Where --> Select -- > .... (lots more stuff)
>
> From experience I do agree that it doesn't seem to matter as the query optimiser does make changes
> to the query as typed, into how it wants to produce the results.
> Regards
> Colin Dawson
> www.cjdawson.com
>
> "Tibor Karaszi" <tibor_please.no.email_karaszi@.hotmail.nomail.com> wrote in message
> news:OHntJPpfGHA.4304@.TK2MSFTNGP05.phx.gbl...
>
Does it lock database/tables when only a Select query comes in a SQLTransaction
I want to make the SQLTransaction global and use it by checking its State.
But then where only Select queries are going to fire, it will open a transaction.
So, does it lock database/tables when only a Select query comes in a SQLTransaction?
If you have another successful way of doing this, please suggest.
Thanking you.
tats|||A select typically only takes a shared lock and releases the lock as soon as the select is done. If you want an exclusive lock, you will have to use the "tablockx" hint. Please see Books Online for detail.|||Thank you for your suggestion.|||
Just to top it....
SELECT COL1 FROM TBL2 WITH(NOLOCK) WHERE COL2 = @.VAL
You should avoid this technique or be sure to understand what is going on behind the scenes. Many people beginning with TSQL read somewhere that (NOLOCK) will not produce any problems because one can read without caring about any locking. On the one side this is true, but you can't be sure to have the most recent data, because you will do dirty reads on the database. HTH, Jens Suessmeyer.
http://www.sqlserver2005.de|||
What are you trying to accomplish? Every statement locks something, at least the schema so you can't drop a table while the user is executing a statement :)
Also, EVERY statement is in a transaction already, the difference between starting one using the client and just the implicit one that starts for every statement is that you might never reach the code to close it if the user gets bored and decides they want to go to lunch.
What locks are taken and how long locks are held is determined by the isolation level. The default is read committed (though your client default CAN BE DIFFERENT!) Note too that all of the locks I will talk about are shared locks for data that you read, NOT updated data. Updated data requires locks to be held until the end of a transaction.
READ UNCOMMITTED - No locks issued or obeyed other than schema locks
READ COMMITTED - Locks are (generally) taken only as long as needed to protect the lowest level thing you are working with (like a row in a table.) Generally speaking it "crabs" through the objects, grabbing a lock, releasing a lock. Another user can modify data you have already read.
REPEATABLE READ - Like Read Committed, only it doesn't release the locks it takes. Prevents another user from modifying data you have already read; doesn't prevent users from creating new rows (referred to as Phantoms).
SERIALIZABLE - Like Repeatable Read, only it also places a lock on Ranges of data, so if you say WHERE between 0 and 100, a range lock is places on 0 and 100 to say no other user can create data in that range either.
SNAPSHOT (2005) - Readers look at only committed data, and if other users have locked portions of the data, they will only see the database as it was when they started their command. Really cool, but it uses more tempdb than it would otherwise.
So the only real concern about executing your select in a transaction is that you will run the risk of a user failure locking other users until cleanup occurs and the connection is noticed to be stale and not in use.
I generally would suggest that you execute your transactions in a single batch and put the BEGIN TRAN and COMMIT TRAN in the same batch if possible and using TRY...CATCH blocks around statements if you have 2005. Then you lower the risk of hanging a transaction. The connection might still exist, but the commit/rollback will certainly execute.
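A sketch of that pattern (my addition; SQL Server 2005 syntax, table names illustrative):

```sql
-- Transaction contained in a single batch, with TRY...CATCH so the
-- COMMIT or ROLLBACK always executes even if a statement fails.
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;  -- the default level

BEGIN TRY
    BEGIN TRAN;
    UPDATE Accounts SET Balance = Balance - 100 WHERE AccountId = 1;
    UPDATE Accounts SET Balance = Balance + 100 WHERE AccountId = 2;
    COMMIT TRAN;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0
        ROLLBACK TRAN;        -- undo whatever the TRY block did
    RAISERROR('Transfer failed', 16, 1);
END CATCH;
```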
Friday, March 9, 2012
Does field exist in backup?
1) Rename table
2) Create new table (original name)
3) Create Indexes
4) Insert into new, Select from backup
My problem is that one table may have more fields than on another db, but I want to run the same script to update the tables.
What I'd like it to do, is in the insert select stuff, I want to put logic to select field from backup if it exists and insert in new.
If it doesn't exist in backup then insert null into new table, instead of having the line blow up because the field doesn't exist in backup.
Suggestions would be appreciated. Thanks!, Mitch|||If it's not in the list it will automatically put in nulls...as long as it is nullable...
and you could go crazy...but it might just be easier to code the dang thing...
USE Northwind
GO
sp_help Orders
GO
-- The lazy man's way to create a table
SELECT * INTO NewOrders FROM Orders WHERE 1=0
GO
ALTER TABLE NewOrders DROP Column RequiredDate
GO
DECLARE @.x varchar(8000)
SELECT @.x = 'INSERT INTO NewOrders ('
SELECT @.x = @.x + COLUMN_NAME
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'NewOrders' AND ORDINAL_POSITION = 1
SELECT @.x = @.x + ', '+COLUMN_NAME
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'NewOrders' AND ORDINAL_POSITION > 1
ORDER BY ORDINAL_POSITION
SELECT @.x = @.x + ') SELECT '
SELECT @.x = @.x + COLUMN_NAME
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'NewOrders' AND ORDINAL_POSITION = 1
SELECT @.x = @.x + ', '+COLUMN_NAME
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'NewOrders' AND ORDINAL_POSITION > 1
ORDER BY ORDINAL_POSITION
SELECT @.x = @.x + ' FROM Orders'
SELECT @.x
SET IDENTITY_INSERT NewOrders ON
EXEC(@.x)
SET IDENTITY_INSERT NewOrders OFF
GO
SELECT * FROM NewOrders
GO
DROP TABLE NewOrders
GO|||If you could send me a link or something, that would help me understand the logic below that would be awesome. I sort of follow the code below, but I'd like to see step by step what does what.
Thanks for your reply.
Mitch
Originally posted by Brett Kaiser
If it's not in the list it will automatically put in nulls...as long as it is nullable...
and you could go crazy...but it might just easier to code the dang thing...
USE Northwind
GO
sp_help Orders
GO
-- The lazy man's way to create a table
SELECT * INTO NewOrders FROM Orders WHERE 1=0
GO
ALTER TABLE NewOrders DROP Column RequiredDate
GO
DECLARE @.x varchar(8000)
SELECT @.x = 'INSERT INTO NewOrders ('
SELECT @.x = @.x + COLUMN_NAME
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'NewOrders' AND ORDINAL_POSITION = 1
SELECT @.x = @.x + ', '+COLUMN_NAME
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'NewOrders' AND ORDINAL_POSITION > 1
ORDER BY ORDINAL_POSITION
SELECT @.x = @.x + ') SELECT '
SELECT @.x = @.x + COLUMN_NAME
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'NewOrders' AND ORDINAL_POSITION = 1
SELECT @.x = @.x + ', '+COLUMN_NAME
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'NewOrders' AND ORDINAL_POSITION > 1
ORDER BY ORDINAL_POSITION
SELECT @.x = @.x + ' FROM Orders'
SELECT @.x
SET IDENTITY_INSERT NewOrders ON
EXEC(@.x)
SET IDENTITY_INSERT NewOrders OFF
GO
SELECT * FROM NewOrders
GO
DROP TABLE NewOrders
GO|||Mitch,
Just cut and paste the code into a query analyzer window...
Just execute...I already tested it and it runs like a champ...
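For the step-by-step Mitch asked for, the technique condenses to this (my sketch of Brett's approach; COALESCE shortens the two-part string building):

```sql
-- 1. Build a comma-separated list of the new table's columns from
--    INFORMATION_SCHEMA, so only columns present in NewOrders are named.
-- 2. Splice the list into an INSERT ... SELECT string.
-- 3. EXEC the string; any nullable column *not* named in the INSERT
--    list automatically receives NULL.
-- For Mitch's case, restrict the list to columns that exist in BOTH
-- tables (join INFORMATION_SCHEMA.COLUMNS on itself for the two names).
DECLARE @cols varchar(8000), @sql varchar(8000);

SELECT @cols = COALESCE(@cols + ', ', '') + COLUMN_NAME
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'NewOrders'
ORDER BY ORDINAL_POSITION;

SET @sql = 'INSERT INTO NewOrders (' + @cols + ') SELECT ' + @cols
         + ' FROM Orders';
EXEC (@sql);
```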
Is DTC needed to do linked server queries?
I have a server B registered as a linked server on Server A and have DTC off on both servers.
I could still run a query from Server A as
select * from serverB.db.dbo.tablename
So what is DTC used for then ?Hassan,
A select does not involve a transaction between the two servers so DTC is
not required. If you were to modify something you would need it.
--
Andrew J. Kelly
SQL Server MVP
"Hassan" <fatima_ja@.hotmail.com> wrote in message
news:%23kZqbgzbDHA.616@.TK2MSFTNGP11.phx.gbl...
> I have a server B registered as a linked server on Server A and have DTC
off
> on both servers.
> I could still run a query from Server A as
> select * from serverB.db.dbo.tablename
> So what is DTC used for then ?
>
>|||Try doing something that involves both servers. DTC is for distributed
transactions. That means where the changes are distributed across both
servers but must be contained in one transaction. Take a look at MS DTC in
BooksOnLine for lots of details.
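An example of an operation that does require MS DTC (a sketch, my addition; server and table names follow the thread's convention):

```sql
-- One atomic transaction spanning two servers. BEGIN DISTRIBUTED
-- TRANSACTION enlists MS DTC as the coordinator; with DTC off,
-- this fails rather than running locally.
BEGIN DISTRIBUTED TRANSACTION;

UPDATE dbo.tablename
SET col2 = 'XYZ' WHERE col1 = 'ABC';     -- local change on Server A

UPDATE serverB.db.dbo.tablename
SET col2 = 'XYZ' WHERE col1 = 'ABC';     -- remote change on Server B

COMMIT TRAN;  -- two-phase commit across both servers
```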
--
Andrew J. Kelly
SQL Server MVP
"Hassan" <fatima_ja@.hotmail.com> wrote in message
news:uIkA1wzbDHA.1128@.tk2msftngp13.phx.gbl...
> Well i did run an update such as
> update serverB.db.dbo.tablename
> set col2 ='XYZ'
> where col1='ABC'
> It ran successfully and had the DTCs off on both sides.Can you give me an
> example of what i need to run so that I need to have DTC running ?
> I am using SQL 2000 btw
>
> "Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message
> news:O1NMRozbDHA.3444@.tk2msftngp13.phx.gbl...
> > Hassan,
> >
> > A select does not involve a transaction between the two servers so DTC
is
> > not required. If you were to modify something you would need it.
> >
> > --
> >
> > Andrew J. Kelly
> > SQL Server MVP
> >
> >
> > "Hassan" <fatima_ja@.hotmail.com> wrote in message
> > news:%23kZqbgzbDHA.616@.TK2MSFTNGP11.phx.gbl...
> > > I have a server B registered as a linked server on Server A and have
DTC
> > off
> > > on both servers.
> > > I could still run a query from Server A as
> > > select * from serverB.db.dbo.tablename
> > >
> > > So what is DTC used for then ?
> > >
> > >
> > >
> >
> >
>
Does COUNT(*) use clusterd index?
I have a large table which has approximately 150 million records. I am
running a SELECT COUNT(*) query to find out the exact number of rows
present in that table. It is running for more than an hour, yet to
complete. Estimated execution plan shows that it uses clusted index
scan. But it does seem it is using the index. What is the best way to
find out the exact number of records present in that table quickly.
Thanks in advance.
--
*** Sent via Developersdex http://www.examnotes.net ***COUNT(*) is the best way but it sounds like you are being blocked. What
does sp_who2 say? And by the way SQL Server will choose the best index with
a count(*) and it does not have to be the clustered index.
Andrew J. Kelly SQL MVP
"Venkat" <nospam_venkat_asp@.yahoo.co.uk> wrote in message
news:evTgGq$zFHA.1028@.TK2MSFTNGP12.phx.gbl...
> Hi folks,
> I have a large table which has approximately 150 million records. I am
> running a SELECT COUNT(*) query to find out the exact number of rows
> present in that table. It is running for more than an hour, yet to
> complete. Estimated execution plan shows that it uses clusted index
> scan. But it does seem it is using the index. What is the best way to
> find out the exact number of records present in that table quickly.
> Thanks in advance.
> --
> *** Sent via Developersdex http://www.examnotes.net ***|||This might help.
http://toponewithties.blogspot.com/...count-them.html
Roji. P. Thomas
Net Asset Management
http://toponewithties.blogspot.com
"Venkat" <nospam_venkat_asp@.yahoo.co.uk> wrote in message
news:evTgGq$zFHA.1028@.TK2MSFTNGP12.phx.gbl...
> Hi folks,
> I have a large table which has approximately 150 million records. I am
> running a SELECT COUNT(*) query to find out the exact number of rows
> present in that table. It is running for more than an hour, yet to
> complete. Estimated execution plan shows that it uses clusted index
> scan. But it does seem it is using the index. What is the best way to
> find out the exact number of records present in that table quickly.
> Thanks in advance.
> --
> *** Sent via Developersdex http://www.examnotes.net ***|||If you don't ned to be up to the second, you can check sysindexes. To be
more accurate, you can issue DBCC UPDATEUSAGE against the table first, but
this will take time just like the scan does...
What's probably happening is that there is activity against the table. So,
another way to make the count return quicker (though, again, it may be out
of date by the time your brain processes it) is to use SELECT COUNT(*) FROM
table WITH (NOLOCK);
"Venkat" <nospam_venkat_asp@.yahoo.co.uk> wrote in message
news:evTgGq$zFHA.1028@.TK2MSFTNGP12.phx.gbl...
> Hi folks,
> I have a large table which has approximately 150 million records. I am
> running a SELECT COUNT(*) query to find out the exact number of rows
> present in that table. It is running for more than an hour, yet to
> complete. Estimated execution plan shows that it uses clusted index
> scan. But it does seem it is using the index. What is the best way to
> find out the exact number of records present in that table quickly.
> Thanks in advance.
> --
> *** Sent via Developersdex http://www.examnotes.net ***|||Ideally, you would have an non-clustered index on a narrow column so you wou
ld fit many rows on such
an index page. SQL Server would now scan that index instead of scanning the
data pages.
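A sketch of Tibor's suggestion (my addition; table and column names are hypothetical):

```sql
-- A narrow non-clustered index packs many rows per page, so COUNT(*)
-- can scan it instead of the much wider clustered index or heap.
CREATE NONCLUSTERED INDEX IX_BigTable_Narrow
ON BigTable (StatusCode);   -- e.g. a smallint column

-- The optimizer is free to answer this from the narrowest index:
SELECT COUNT(*) FROM BigTable;
```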
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
"Venkat" <nospam_venkat_asp@.yahoo.co.uk> wrote in message
news:evTgGq$zFHA.1028@.TK2MSFTNGP12.phx.gbl...
> Hi folks,
> I have a large table which has approximately 150 million records. I am
> running a SELECT COUNT(*) query to find out the exact number of rows
> present in that table. It is running for more than an hour, yet to
> complete. Estimated execution plan shows that it uses clusted index
> scan. But it does seem it is using the index. What is the best way to
> find out the exact number of records present in that table quickly.
> Thanks in advance.
> --
> *** Sent via Developersdex http://www.examnotes.net ***|||If you don't require an exact answer, it isn't necessary use a
SELECT count(*) query on the rows in a table to get the row
count. SQL Server keeps the row count in sysindexes and it
can be retrieved there. The key is to select the correct
record from sysindexes.
Sysindexes is a system table that exists in every database.
SQL Server maintains at least one row in sysindexes for every
user table. A few of the most important columns are:
Column Data Type Description
-- -- ---
id int ID of the table referred to by this row
indid int See the text that follows...
rowcnt bigint Number of rows in the index
The indid column tells us what part of the table structure this
row of sysindexes is referring to:
indid value Description
-- ----
0 Table data when there is no clustered index
1 Refers to the clustered index
2 - 254 Non-clustered indexes
255 Text or Image data pages
A table will only have an entry in sysindexes with an indid value
of 0 or 1, never both. That's the entry that we're
interested in because its rowcnt field gives us the number of
rows in the table. Here's a query that shows the table, index
and indid from the pubs database:
/-- Copy From Below this line --
USE pubs
GO
SELECT so.[name] as [table name]
, CASE WHEN si.indid between 1 and 254
THEN si.[name] ELSE NULL END
AS [Index Name]
, si.indid
FROM sysindexes si
INNER JOIN sysobjects so
ON si.id = so.id
WHERE si.indid < 2
AND so.type = 'U' -- Only User Tables
AND so.[name] != 'dtproperties'
ORDER BY so.[name]
\-- Stop copying above this line --/
(Results)
table name Index Name indid
-- -- --
authors UPKCL_auidind 1
discounts NULL 0
employee employee_ind 1
jobs PK__jobs__117F9D94 1
pub_info UPKCL_pubinfo 1
publishers UPKCL_pubind 1
roysched NULL 0
sales UPKCL_sales 1
stores UPK_storeid 1
titleauthor UPKCL_taind 1
titles UPKCL_titleidind 1
As you can see from the results, most of the indexes are
clustered (indid=1), but a few tables such as discounts
don't have a clustered index (indid=0).
I started this newsletter with "If you don't require an exact
answer..." That's because there are times when rowcnt is
not the exact number of records in the table. This can
be corrected by updating the table's usage counts with:
DBCC UPDATEUSAGE (0) WITH COUNT_ROWS -- 0 = current database
GO
Here's the CREATE FUNCTION script for udf_Tbl_RowCOUNT:
/-- Copy From Below this line --
CREATE FUNCTION dbo.udf_Tbl_RowCOUNT (
@sTableName sysname -- Table to retrieve Row Count
)
RETURNS INT -- Row count of the table, NULL if not found.
/*
* Returns the row count for a table by examining sysindexes.
* This function must be run in the same database as the table.
*
* Common Usage:
SELECT dbo.udf_Tbl_RowCOUNT ('authors')
* Test:
PRINT 'Test 1 Bad table ' + CASE WHEN
dbo.udf_Tbl_RowCOUNT ('foobar') IS NULL
THEN 'Worked' ELSE 'Error' END
* (C) Copyright 2002 Andrew Novick http://www.NovickSoftware.com
* You may use this function in any of your SQL Server databases
* including databases that you sell, so long as they contain
* other unrelated database objects. You may not publish this
* UDF either in print or electronically.
***************************************************************/
AS BEGIN
DECLARE @nRowCount INT -- the row count
DECLARE @nObjectID int -- Object ID
SET @nObjectID = OBJECT_ID(@sTableName)
-- Object might not be found
IF @nObjectID IS NULL RETURN NULL
SELECT TOP 1 @nRowCount = rows
FROM sysindexes
WHERE id = @nObjectID AND indid < 2
RETURN @nRowCount
END
GO
GRANT EXECUTE ON [dbo].[udf_Tbl_RowCOUNT] TO PUBLIC
GO
\-- Stop copying above this line --/
Let's use it:
/-- Copy From Below this line --
use pubs -- assuming the UDF was created in pubs
go
SELECT [name]
, dbo.udf_Tbl_RowCOUNT ([name]) as [Row Count]
FROM sysobjects
WHERE type='U' and name != 'dtproperties'
ORDER BY [name]
GO
\-- Stop copying above this line --/
(Results)
name          Row Count
----          ---------
authors 24
discounts 3
employee 43
jobs 14
pub_info 8
publishers 8
roysched 86
sales 21
stores 6
titleauthor 25
titles 18
That's all there is to it.
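A side note for readers on SQL Server 2005 or later: there, sysindexes is kept only for backward compatibility, and the same approximate counts are exposed through the catalog views. This is a sketch of an equivalent query under that assumption; the object names come from the 2005 catalog, not from anything in the post above.

```sql
-- Sketch (SQL Server 2005+): per-table row counts from the catalog views,
-- without touching the backward-compatibility view sysindexes.
SELECT o.name            AS [table name],
       SUM(ps.row_count) AS [row count]
FROM sys.dm_db_partition_stats AS ps
JOIN sys.objects AS o ON o.object_id = ps.object_id
WHERE ps.index_id < 2    -- 0 = heap, 1 = clustered index
  AND o.type = 'U'       -- user tables only
GROUP BY o.name
ORDER BY o.name;
```

Like rowcnt, these counts are maintained by the engine and can drift slightly; they are not a transactionally exact COUNT(*).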
--
thanks,
Jose de Jesus Jr. Mcp,Mcdba
Data Architect
Sykes Asia (Manila philippines)
MCP #2324787
"Venkat" wrote:
> Hi folks,
> I have a large table which has approximately 150 million records. I am
> running a SELECT COUNT(*) query to find out the exact number of rows
> present in that table. It has been running for more than an hour and has yet to
> complete. The estimated execution plan shows that it uses a clustered index
> scan, so it does seem to be using the index. What is the best way to
> find out the exact number of records present in that table quickly?
> Thanks in advance.
> --
> *** Sent via Developersdex http://www.examnotes.net ***
>
Does Computed Column Update on Select General Question
Okay, newb question alert!!!
I created a computed column that is based on the difference between the column start_date and getdate().
Does the computed column only update when you update the row, or does it change when you select it too?
Nevermind... Sorry to crowd the forums with stupid questions... I gave it another two seconds of thought and realized that if I changed it to seconds just for the sake of testing, I could see it updating (it was at months before, and I didn't want to wait a month to find out).
Sorry again about the wasted post!
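For anyone else who lands here: a non-persisted computed column stores no value at all; its expression is re-evaluated every time the column is read, which is why a GETDATE()-based column changes from one SELECT to the next. A minimal sketch (the table and column names are made up for illustration):

```sql
-- Sketch: the computed column stores nothing; DATEDIFF is re-evaluated
-- on every read, so its value changes as time passes.
CREATE TABLE dbo.demo_dates (                -- hypothetical table
    start_date  datetime NOT NULL,
    age_seconds AS DATEDIFF(second, start_date, GETDATE())
);
```

Note that a nondeterministic expression like this one cannot be marked PERSISTED, so there is no way to "freeze" it without using an ordinary column.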
Wednesday, March 7, 2012
Does Commit Transaction delete Temporary table?
I create a global temp table in one of my procedures:
select * into ##temp_update
from ( select distinct * from .....
after this I issue a commit transaction, and when I try to select from
the table I created (from my .Net code or Query Analyzer) it is not
there!
select * from ##temp_update
Invalid object name '##temp_update'.
Any idea why?|||From Books Online:
"Global temporary tables are automatically dropped when the session that
created the table ends and all other tasks have stopped referencing them. The
association between a task and a table is maintained only for the life of a
single Transact-SQL statement. This means that a global temporary table is
dropped at the completion of the last Transact-SQL statement that was
actively referencing the table when the creating session ended."
If the session that creates the global temporary table ends before a new
session that accesses the global temporary table is started, the table is
dropped.
ML|||Yikes!
Good to know that, but bad for my program :)
thanks much
ML wrote:
> From Books Online:
> "Global temporary tables are automatically dropped when the session that
> created the table ends and all other tasks have stopped referencing them.
The
> association between a task and a table is maintained only for the life of
a
> single Transact-SQL statement. This means that a global temporary table is
> dropped at the completion of the last Transact-SQL statement that was
> actively referencing the table when the creating session ended."
> If the session that creates the global temporary table ends before a new
> session that accesses the global temporary table is started, the table is
> dropped.
>
> ML
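If the data has to outlive the creating connection, one common workaround is to write to a permanent staging table instead of ##temp_update, tagging each batch with the creating session's id (or a GUID). A rough sketch; the table and source names here are invented for illustration:

```sql
-- Sketch: persist the rows in a permanent table keyed by session id,
-- so they survive after the creating session ends.
SELECT DISTINCT @@SPID AS session_id, t.*
INTO dbo.staging_update                 -- hypothetical permanent table
FROM dbo.SomeSourceTable AS t;          -- hypothetical source

-- Later, from .NET or Query Analyzer, read back only your batch:
-- SELECT * FROM dbo.staging_update WHERE session_id = @sid;
```

SELECT ... INTO creates the table on first use; subsequent batches would INSERT into the existing table instead (and clean up their own rows afterward).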
Does CLNG work in Sql Compact?
Hi,
When I execute the following query against SqlCE 3.0.53, I get an error. Could someone point out what is wrong?
SELECT SLCTD_MENUENTRYID FROM GEN_ENTRY WHERE EXAMID = 'EFBC0657145840CF88184BDCE71430EE'
AND TYPEVALUE IN ('SURG_PROC', 'CV_PROC') AND CLNG(SLCTD_MENUENTRYID) < 0
Error:
Major Error 0x80040E14, Minor Error 25921
> SELECT SLCTD_MENUENTRYID FROM GEN_ENTRY WHERE EXAMID = 'EFBC0657145840CF88184BDCE71430EE'
AND TYPEVALUE IN ('SURG_PROC', 'CV_PROC') AND CLNG(SLCTD_MENUENTRYID) < 0
The function is not recognized by SQL Server Compact Edition. [ Name of function = CLNG,Data type (if known) = ]
CLNG is not available in SQL CE. For documentation on the SQL CE SQL syntax, see http://msdn2.microsoft.com/en-us/library/ms173372.aspx. You could use CONVERT(int, SLCTD_MENUENTRYID) instead.
|||I feel the above CONVERT function always returns a non-fractional value. For example:
select CONVERT(int, 4.7)= 4
select CONVERT(int, 4.4)=4
select CONVERT(int, -4.7) =-4
select CONVERT(int, -4.4) =-4
But whereas the CLNG function does the following:
select CLNG(4.7)=5
select CLNG(4.4)=4
select CLNG(-4.7)=-5
select CLNG(-4.4)=-4
I feel the following line will work for us:
select CONVERT(int, round(4.7,0)) =5
select CONVERT(int, round(4.4,0)) =4
select CONVERT(int, round(-4.7,0)) =-5
select CONVERT(int, round(-4.4,0)) =-4
Please let me know if I am wrong.
Thanks|||If that works for you, you are of course right. CLNG is not available in SQL CE, but other conversion functions are.
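One caveat worth adding: if the values can fall exactly on .5, ROUND and VB-style CLNG are not quite interchangeable. T-SQL's ROUND rounds halves away from zero, while CLNG (as I understand VB's conversion) rounds halves to the nearest even integer (banker's rounding), so the two can differ by one on ties:

```sql
-- ROUND rounds halves away from zero:
SELECT CONVERT(int, ROUND(4.5, 0));    -- 5
SELECT CONVERT(int, ROUND(-4.5, 0));   -- -5
-- VB's CLNG would round 4.5 to 4 and 5.5 to 6 (ties to even),
-- so verify which behavior your data actually needs.
```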