Showing posts with label number. Show all posts

Sunday, March 25, 2012

Does the SQL Express license limit the number of users / connections?

I can't find the answer... it just talks about the CPU / RAM / database size limits. Thanks for your help!
Hi,

there is no concurrent query limit anymore; the query governor went into retirement :-). Only limitations like those you mentioned are applied.

HTH, Jens Suessmeyer.

http://www.sqlserver2005.de

Does SQL Express have a Simultaneous Connections Limit?

I have used MSDE in the past & it had a limit on the number of simultaneous
connections per the Microsoft paid support folks. I found various articles
on SQL Express with the limitations but nothing referencing the number of
connections. Has anyone already found this out?

"BCW" <nospam@.cfl.rr.com> wrote in message
news:uiyafD9PGHA.2912@.tk2msftngp13.phx.gbl...
>I have used MSDE in the past & it had a limit on the number of
>simultaneous
> connections per the Microsoft paid support folks. I found various
> articles
> on SQL Express with the limitations but nothing referencing the number of
> connections. Has anyone already found this out?
There is no limitation on connections in MSDE or in SQL Express. MSDE was
optimized for a "typical" load of up to 25 users, I believe, but this wasn't a
hard limit; it was just a potential throttle on scalability. AFAIK there is
no such constraint on SQL Express. If you check out the feature comparison,
however, you'll see that Express supports only a single processor and 1GB of
RAM. So those are the limits you have to work within:
http://www.microsoft.com/sql/prodinfo/features/compare-features.mspx
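If you want to check the configured cap on your own instance, a small sketch (hedged: 'user connections' is an advanced option, and 0, the default, means no fixed limit):

```sql
-- Show the hard maximum connections this build supports
SELECT @@MAX_CONNECTIONS AS max_connections;

-- Show the configured 'user connections' cap (0 = no fixed limit)
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'user connections';
```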
--
David Portas, SQL Server MVP
Whenever possible please post enough code to reproduce your problem.
Including CREATE TABLE and INSERT statements usually helps.
State what version of SQL Server you are using and specify the content
of any error messages.
SQL Server Books Online:
http://msdn2.microsoft.com/library/ms130214(en-US,SQL.90).aspx
--

Monday, March 19, 2012

Does not connect with Enterprise Manager

I have not been able to connect to SQL Server with Enterprise Manager since I
increased the number of databases to 170. Earlier I had no problem when I
had fewer databases. Moreover, there is no problem connecting through
Query Analyzer.
Could anyone help, please?
Regards
Musharraf Sultan

Hi,
Are you getting any errors or any timeouts? Can you try
Enterprise Manager from the SQL Server machine itself and see as well.
Thanks
Hari
SQL Server MVP
"NEWS" <musharrafs@.yahoo.com> wrote in message
news:eM$2fOlzGHA.4228@.TK2MSFTNGP06.phx.gbl...
>I am not been able to connect to SQL Server with Enterprise Manger as I
>increased the number of databases upto 170 .Earlier i had no problem when I
>had less of databases .Moreover there is no problem in connecting through
>Query Analyser.
> Could any one help please ?
> Regards
> Musharraf Sultan
>
>|||Thanks a lot ...
As soon as I try to connect from Enterprise Manager, an hourglass appears,
and after some time Task Manager
shows it as NOT RESPONDING. The same thing happens on the server as well as
from remote machines.
Regards
Musharraf
"Hari Prasad" <hari_prasad_k@.hotmail.com> wrote in message
news:ebZ7GopzGHA.4920@.TK2MSFTNGP06.phx.gbl...
> Hi,
> Are you getting any errors or any timeouts? Can you try the
> enterperisemanager from the SQL server machine and see as well.
> Thanks
> Hari
> SQL Server MVP
> "NEWS" <musharrafs@.yahoo.com> wrote in message
> news:eM$2fOlzGHA.4228@.TK2MSFTNGP06.phx.gbl...
>|||Hi,
Check whether the database option Auto Close is enabled. If it is, use sp_dboption to
disable Auto Close for all databases.
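A sketch of those commands, assuming SQL Server 2000 (the database name is a placeholder; repeat per database):

```sql
-- List databases that currently have Auto Close enabled
SELECT name
FROM master..sysdatabases
WHERE DATABASEPROPERTYEX(name, 'IsAutoClose') = 1;

-- Disable Auto Close for one of them
EXEC sp_dboption 'YourDatabase', 'autoclose', 'false';
```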
Thanks
Hari
SQL Server MVP
"NEWS" <musharrafs@.yahoo.com> wrote in message
news:ePChVYB0GHA.2072@.TK2MSFTNGP06.phx.gbl...
> Thanks a lot ...
> As soon as I try to connect from Enterprise Manager an Hourglass appears
> and after some time if I see from Task manager
> it shows NOT RESPONDING. The same thing happens on the Server as well as
> from remote machines
> Regards
> Musharraf
> "Hari Prasad" <hari_prasad_k@.hotmail.com> wrote in message
> news:ebZ7GopzGHA.4920@.TK2MSFTNGP06.phx.gbl...
>

Does MS know SQL2005 query performance is slower than SQL2000?

Hi,

I tested a simple query like Select colA from TableB where colC= 'XX' with SQL2000 and SQL2005.

Of course, the indexes are the same and the number of records is the same.

After I executed that query, I checked it with Profiler. SQL2000 took just 18 ms but SQL2005 took 118 ms in my environment. Actually, the machine SQL2005 is installed on has a better hardware spec than the SQL2000 one.

I could not believe that, so I tested several times, but SQL2005 was slow.

After I searched this forum, I found that some people had the same problem with SQL2005 performance. I rebuilt the indexes in SQL2005 but it didn't help.

Even though I am using SP2, it is still slower than SQL2000.

Am I missing something? I cannot understand how this could happen.

Does anybody have any solution?

Thank you in advance

James

Did you clear the proc cache before running the queries? Running the commands below will ensure that you are running both on common ground:

DBCC DROPCLEANBUFFERS
DBCC FREEPROCCACHE

Also, you can't always go by the time it takes to run a query. You really need to compare the logical reads returned by running the command SET STATISTICS IO ON before you run the statements to compare the reads. If your reads are drastically different, something may be funky.
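For example, run the same batch on each server and compare the read counts in the Messages output (table and column names are taken from the question):

```sql
DBCC DROPCLEANBUFFERS;   -- flush cached data pages
DBCC FREEPROCCACHE;      -- flush cached query plans

SET STATISTICS IO ON;
SET STATISTICS TIME ON;

SELECT colA FROM TableB WHERE colC = 'XX';

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;
-- Compare the 'logical reads' and 'physical reads' figures between servers
```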

Tim|||

Tim,

Thank you for answer.

Actually, before I tested, I restarted all services, so it was not a caching problem.

In addition, the reads in the SQL2005 Profiler log are higher than SQL2000's, which is consistent with the result.

The funny thing is that the speed is similar after the data is cached. This problem happens when I read data from disk.

I am not sure what is wrong.

I should discuss it with MS support soon.

Thank you

James

|||After you upgraded the database to SQL 2005 did you update the stats or rebuild the indexes? It's recommended to update the stats on the tables and indexes after upgrading to SQL 2005 to get proper query plans in SQL 2005.|||
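A minimal sketch of that post-upgrade maintenance (the table name is a placeholder; run in the upgraded database):

```sql
-- Refresh statistics on all tables in the current database
EXEC sp_updatestats;

-- Or, for a specific table: full-scan statistics update plus index rebuild
UPDATE STATISTICS TableB WITH FULLSCAN;
DBCC DBREINDEX ('TableB');
```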

Thank you Denny for replying

Unfortunately, it didn't work

Actually, I didn't migrate the DB from SQL2000. I just created exactly the same DB in SQL2005.

100 ms is not a big deal for a single SQL statement, but if a stored procedure has 1000 SQL statements, it becomes 100,000 ms, which is a big deal.

This can explain why the same stored procedure is slower than on SQL2000.

James.

|||

Try optimising the data disks:

1) Set the disk to Basic disk

2) If the disk subsystem is RAID, set the stripe size to 64k, and the controller cache to 100% write.

3) Using DISKPART, create the partition using the command CREATE PARTITION PRIMARY ALIGN=64

4) Format the data disk with a cluster size of 64k.

Now you're ready to go from the disk side of things!

|||Do the execution plans show the SQL Servers taking the same path to the data? Where do the cost differences show up?|||

Danny and BigE

Thank you for replying.

Well, the execution plans are exactly the same. I am not sure where the difference comes from.

As I said, the machine that has SQL2005 has a better hardware spec, so I don't think it is a hardware problem.

I might try what BigE suggested, but I cannot accept that we should have to set all this up just to run SQL2005.

Think about it. SQL2005 is a more advanced version than SQL2000, which is 7 years old!! Why should the user have to worry about those kinds of disk settings? Even if it is true, what is the big benefit of upgrading for a user running a small or medium application?

Anyway, if you have two machines running SQL2000 and SQL2005, just try a SELECT statement and check the reads and duration.

You will see what I am saying. :(

James.|||

Hi James,

Did you get the solution as you described?

I have same problem and I cannot find any solutions.

Please help.

Clara

|||

Thank you BigE

I think your suggestion might improve performance, but here is my concern about using SQL2005.

If it is a problem of disk speed, why doesn't MS provide a function to build a DATABASE on top of a MEMORY DISK?

In other words, MS could add a MEMORY DISK DATABASE in SQL2008 for better performance! :)

Maybe they will say I am crazy, but... if you have a UPS, SQL Server could dump that memory database to disk while the UPS is running.

Anyway, I could not buy that reason because, as I said, SQL2000 is running on a slower server than SQL2005. :)

To Clara:

Sorry, I have not found the solution yet. An MS consultant that I know gave me some suggestions, but they didn't work.

Maybe I should try the SQL2008 CTP instead of SQL2005 :(

Regards,

James Lim


Do many tables matter?

Does it matter if we have a huge number of tables vs. a few tables?
One example is this.
We have a table called Vendors where VendorID is the primary key, and
another table VendorNotes (VendorID int, Note varchar(500)) where VendorID
is the foreign key. This is a one-to-one relation, and not all vendors will have
notes.
Is it good practice to do it like this, or should we just add a Note column to the Vendors
table and let it be null?
And does it matter if we add many columns to a table without using them?
Please give me some advice/suggestions. I need it desperately.
Thanks for any help you can provide
Steve

Databases with hundreds, even thousands, of tables are not that unusual.
Unused columns take small amounts of storage space (space is inexpensive).
The trade-off is storage/retrieval cost vs. development/programming cost. In
general, I think it's a 'non-issue'.
Having Vendors and VendorNotes is quite acceptable, especially if the notes
are subject to frequent change and growth in size. Of course, the db purists
would say NO, all data related directly to the Vendor key *should* be in the
same table. Others would accept this arrangement for performance and
stability reasons.
So, the tried and true response is: It Depends. It depends upon what works
best for your design and the skill sets of those who have to develop and
maintain the applications that use the database.
Arnie Rowland, YACE*
"To be successful, your heart must accompany your knowledge."
*Yet Another certification Exam
"Steve, Putman" <Steve@.noemailcom> wrote in message
news:OY$nP5tlGHA.4144@.TK2MSFTNGP05.phx.gbl...
> Does it matter if we have hugh number of tables vs few tables.
> One example is this.
> We have a table called Vendors where VendorID is the Primary key, and
> another table VendorNotes (VendorID Int, Note varchar(500)) where VendorID
> is the foreign key. This is one to one relation, and all vendors won't
> have notes.
> Is it good practice to do like this, or just add Note column to the
> vendors table, and let it be null.
> And does it matter if we add many columns to a table without using it.
> Please give me some advices/suggestions. I need it desperately.
> Thanks for any help you can provide
> Steve
>
>|||Thanks Arnie for your quick reply.
My assumption was that it does not take any storage space if column values
are null.
Steve,
"Arnie Rowland" <arnie@.1568.com> wrote in message
news:OXLSCFulGHA.4512@.TK2MSFTNGP04.phx.gbl...
> Databases with hundreds, even thousands, of tables is not that unusual.
> Unused columns take small amounts of storage space (space is inexpensive).
> The trade off is storage/retreival cost vs. development/programming cost.
> In general, I think it's a 'non issue'.
> Having Vendors and VendorNotes is quite acceptable -especially if the
> notes are subject to frequent change and growth in size. Of course, the db
> purists would say NO, all data related directly to the Vendor key *should*
> be in the same table. Others would accept this arrangement for performance
> and stability reasons.
> So, the tried and true response is: It Depends. It depends upon what works
> best for your design and the skill sets of those that have to develop and
> maintian the applications that use the database.
> --
> Arnie Rowland, YACE*
> "To be successful, your heart must accompany your knowledge."
> *Yet Another certification Exam
>
> "Steve, Putman" <Steve@.noemailcom> wrote in message
> news:OY$nP5tlGHA.4144@.TK2MSFTNGP05.phx.gbl...
>|||Oh NULL values take up storage space alright.
"Steve, Putman" <Steve@.noemailcom> wrote in message
news:eo2IHYwlGHA.748@.TK2MSFTNGP02.phx.gbl...
> Thanks Arnie for your quick reply.
> My assumption was that it does not take any storage space if column values
> are null.
> Steve,
> "Arnie Rowland" <arnie@.1568.com> wrote in message
> news:OXLSCFulGHA.4512@.TK2MSFTNGP04.phx.gbl...
>|||NULL values do take up storage space. Not as much as, say, a CHAR(40), but
unfortunately representing NULLs does take up some space. Is space your
primary concern?
"Steve, Putman" <Steve@.noemailcom> wrote in message
news:eo2IHYwlGHA.748@.TK2MSFTNGP02.phx.gbl...
> Thanks Arnie for your quick reply.
> My assumption was that it does not take any storage space if column values
> are null.
> Steve,
> "Arnie Rowland" <arnie@.1568.com> wrote in message
> news:OXLSCFulGHA.4512@.TK2MSFTNGP04.phx.gbl...
>|||I tend to avoid one-to-one joins if I can. I recognize that NULLs can
take up space, and uing a 1-to-1 is a solution for that, but having a
1-to-1 join means that everytime I want to pull back information about
a vendor (including notes), I have to perform a table join. This (in
my opinion) is unnecessary in most scenarios. If an attribute of an
entity exists, then it should belong with that entity.
However, why do your vendors only have one note? This seems like a
great scenario for a 0-to-many join; if you want to add information to
a vendor, INSERT another row in your notes table. If two people are
adding notes about a vendor, then there is minimal opportunity for
concurrency issues.
In sum: if your entity (Vendor) truly has only one instance of an
attribute (VendorNote), then I would include it in the table, and allow
for NULLs (again, a design choice). However, I would first question
why is a VendorNote a singularity.
Stu
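A sketch of the 0-to-many design described above (the NoteID and CreatedAt columns are illustrative additions; only VendorID and Note come from the original example):

```sql
CREATE TABLE VendorNotes (
    NoteID    int IDENTITY(1,1) PRIMARY KEY,
    VendorID  int NOT NULL REFERENCES Vendors (VendorID),
    Note      varchar(500) NOT NULL,
    CreatedAt datetime NOT NULL DEFAULT GETDATE()
);
-- Each vendor can have zero, one, or many notes, and two users
-- adding notes at the same time never touch the same row.
```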
Steve, Putman wrote:
> Does it matter if we have hugh number of tables vs few tables.
> One example is this.
> We have a table called Vendors where VendorID is the Primary key, and
> another table VendorNotes (VendorID Int, Note varchar(500)) where VendorID
> is the foreign key. This is one to one relation, and all vendors won't have
> notes.
> Is it good practice to do like this, or just add Note column to the vendors
> table, and let it be null.
> And does it matter if we add many columns to a table without using it.
> Please give me some advices/suggestions. I need it desperately.
> Thanks for any help you can provide
> Steve|||>> We have a table called Vendors where VendorID is the Primary key, and another table .. <<
The other table is trickier than your pseudo-code:
CREATE TABLE VendorNotes
(vendor_id INTEGER NOT NULL PRIMARY KEY
REFERENCES Vendors(vendor_id)
ON DELETE CASCADE
ON UPDATE CASCADE,
vendor_note VARCHAR(500) NOT NULL);
The **required** uniqueness constraint has overhead. The **required**
DRI actions have overhead. Or you can take the attitude that the
database can fill up with orphans and other crap until it chokes or has
no integrity. And every time you use it, you need an OUTER JOIN. My
favorite was one of these things where a series of identifiers got
re-used and inherited orphans in the unconstrained 1:1 table.
The cost of adding a few NULLs is basically a bit flag to mark a
column as NULL-able, or you can default it to an empty string. That is
not looking so bad now.|||Thanks Guys,
Actually, Vendors-VendorNotes is just an example I gave.
There are actually around 40 columns which we have added to a table and
which are very, very rarely used, or may never be used.
In this scenario, is it OK to keep them in the same table, or is it better to
separate them?
My original question was related to this.
I am still in a learning stage. So I need to follow some good practice.
Steve
"--CELKO--" <jcelko212@.earthlink.net> wrote in message
news:1151201152.819334.35550@.y41g2000cwy.googlegroups.com...
> The other table is tricky than your pseudo-code:
> CREATE TABLE VendorNotes
> (vendor_id INTEGER NOT NULL PRIMARY KEY
> REFERENCES Vendors(vendor_id)
> ON DELETE CASCADE
> ON UPDATE CASCADE,
> vendor_note VARCHAR(500) NOT NULL);
> The **required** uniqueness constraint has overhead. The **required**
> DRI actions have overhead. Or you can take the attitude that the
> database can fill up with orphans and other crap until it chokes or has
> no integrity. And every time you use it, you need an OUTER JOIN. My
> favorite was one of these things where a series of identifiers got
> re-used and inherited orphans in the un-constrainted 1:1 table.
> The cost of adding a few of NULLs is basically a bit flag to mark a
> column as NULL-able, or you can default it to an empty string. That is
> not looking so bad now.
>|||When learning, it's always best to rely on theory. You can develop
"practical" work-arounds later in your career (when theory fails to
perform as well as needed in real-world scenarios). Doing a 1-to-1
join in order to build a complete entity (to avoid the storage of
NULLS) is a practical solution, not a theoretical one.
My .02
Stu
Steve, Putman wrote:
> Thanks Guys,
> Actualy Vendors-VendorNote is just an example I gave.
> Actually there are around 40 columns which we have added to a table, and
> which is very very rarely used, or may never be used.
> In this scenerio should be ok to keep in on a same table or better to
> seperate it.
> My original question was related to this.
> I am still in a learning stage. So I need to follow some good practice.
> Steve
>
> "--CELKO--" <jcelko212@.earthlink.net> wrote in message
> news:1151201152.819334.35550@.y41g2000cwy.googlegroups.com...

Sunday, March 11, 2012

Does JOIN order affect efficiency?

Hi all,

Does JOIN order affect efficiency?

If I have three large tables to join together, should I join the two that I know will cut the number of rows down a lot first and then join the 3rd table, or does it make no difference (if I join the 1st and 3rd, which I know will produce a large result set, and then join the 2nd)?

Thanks in advance,

Chiz.

It should make no difference; the optimizer chooses the join order based on cost. Have a look at the query plan to be sure.
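One way to verify this yourself: run the query as written, then again with the written join order forced, and compare the two plans (table and column names here are made up for illustration; FORCE ORDER is for experimentation, not production):

```sql
-- Let the optimizer choose the join order
SELECT a.col1, b.col2, c.col3
FROM TableA AS a
JOIN TableB AS b ON b.a_id = a.id
JOIN TableC AS c ON c.b_id = b.id;

-- Force the joins to execute in the order written, then compare plans
SELECT a.col1, b.col2, c.col3
FROM TableA AS a
JOIN TableB AS b ON b.a_id = a.id
JOIN TableC AS c ON c.b_id = b.id
OPTION (FORCE ORDER);
```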

Friday, March 9, 2012

Does COUNT(*) use the clustered index?

Hi folks,
I have a large table which has approximately 150 million records. I am
running a SELECT COUNT(*) query to find out the exact number of rows
present in that table. It has been running for more than an hour and has yet to
complete. The estimated execution plan shows that it uses a clustered index
scan, but it does not seem to be using the index. What is the best way to
find out the exact number of records present in that table quickly?
Thanks in advance.
--
*** Sent via Developersdex http://www.examnotes.net ***

COUNT(*) is the best way, but it sounds like you are being blocked. What
does sp_who2 say? And by the way, SQL Server will choose the best index for
a COUNT(*); it does not have to be the clustered index.
Andrew J. Kelly SQL MVP
"Venkat" <nospam_venkat_asp@.yahoo.co.uk> wrote in message
news:evTgGq$zFHA.1028@.TK2MSFTNGP12.phx.gbl...
> Hi folks,
> I have a large table which has approximately 150 million records. I am
> running a SELECT COUNT(*) query to find out the exact number of rows
> present in that table. It is running for more than an hour, yet to
> complete. Estimated execution plan shows that it uses clusted index
> scan. But it does seem it is using the index. What is the best way to
> find out the exact number of records present in that table quickly.
> Thanks in advance.
> --
> *** Sent via Developersdex http://www.examnotes.net ***|||This might help.
http://toponewithties.blogspot.com/...count-them.html
Roji. P. Thomas
Net Asset Management
http://toponewithties.blogspot.com
"Venkat" <nospam_venkat_asp@.yahoo.co.uk> wrote in message
news:evTgGq$zFHA.1028@.TK2MSFTNGP12.phx.gbl...
> [original question quoted above - trimmed]
If you don't need to be up to the second, you can check sysindexes. To be
more accurate, you can issue DBCC UPDATEUSAGE against the table first, but
this will take time just like the scan does...
What's probably happening is that there is activity against the table. So,
another way to make the count return quicker (though, again, it may be out
of date by the time your brain processes it) is to use SELECT COUNT(*) FROM
table WITH (NOLOCK);
"Venkat" <nospam_venkat_asp@.yahoo.co.uk> wrote in message
news:evTgGq$zFHA.1028@.TK2MSFTNGP12.phx.gbl...
> [original question quoted above - trimmed]
Ideally, you would have a nonclustered index on a narrow column, so you would
fit many rows on each index page. SQL Server would then scan that index instead
of scanning the data pages.
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
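Tibor's suggestion sketched out; the table and column names here are illustrative, not from the original post:

```sql
-- A narrow nonclustered index on a small fixed-width column; its pages hold
-- many more rows per page than the data pages, so scanning it for COUNT(*)
-- reads far fewer pages.
CREATE NONCLUSTERED INDEX IX_BigTable_Narrow
    ON dbo.BigTable (SomeIntColumn);

SELECT COUNT(*) FROM dbo.BigTable;  -- optimizer can pick the narrow index
```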
"Venkat" <nospam_venkat_asp@.yahoo.co.uk> wrote in message
news:evTgGq$zFHA.1028@.TK2MSFTNGP12.phx.gbl...
> [original question quoted above - trimmed]
If you don't require an exact answer, it isn't necessary to run a
SELECT COUNT(*) query against the table to get the row
count. SQL Server keeps the row count in sysindexes, and it
can be retrieved from there. The key is to select the correct
record from sysindexes.
Sysindexes is a system table that exists in every database.
SQL Server maintains at least one row in sysindexes for every
user table. A few of the most important columns are:
Column    Data Type    Description
------    ---------    -----------
id        int          ID of the table referred to by this row
indid     int          See the text that follows...
rowcnt    bigint       Number of rows in the index
The indid column tells us what part of the table structure this
row of sysindexes is referring to:
indid value    Description
-----------    -----------
0              Table data when there is no clustered index
1              Refers to the clustered index
2 - 254        Non-clustered indexes
255            Text or Image data pages
A table will only have an entry in sysindexes with an indid value
of 0 or 1, never both. That's the entry that we're
interested in because its rowcnt field gives us the number of
rows in the table. Here's a query that shows the table, index
and indid from the pubs database:
/-- Copy From Below this line --
USE pubs
GO
SELECT so.[name] as [table name]
, CASE WHEN si.indid between 1 and 254
THEN si.[name] ELSE NULL END
AS [Index Name]
, si.indid
FROM sysindexes si
INNER JOIN sysobjects so
ON si.id = so.id
WHERE si.indid < 2
AND so.type = 'U' -- Only User Tables
AND so.[name] != 'dtproperties'
ORDER BY so.[name]
\-- Stop copying above this line --/
(Results)
table name     Index Name           indid
-----------    ------------------   -----
authors        UPKCL_auidind        1
discounts      NULL                 0
employee       employee_ind         1
jobs           PK__jobs__117F9D94   1
pub_info       UPKCL_pubinfo        1
publishers     UPKCL_pubind         1
roysched       NULL                 0
sales          UPKCL_sales          1
stores         UPK_storeid          1
titleauthor    UPKCL_taind          1
titles         UPKCL_titleidind     1
As you can see from the results, most of the indexes are
clustered (indid=1) but a few tables such as discounts
don't have a clustered index (indid=0).
I started this newsletter with "If you don't need an exact
answer..." That's because there are times when rowcnt is
not the exact number of records in the table. This can
be corrected by updating the usage counts on the table with:
DBCC UPDATEUSAGE (0)  -- 0 means the current database
GO
Here's the CREATE FUNCTION script for udf_Tbl_RowCOUNT:
/-- Copy From Below this line --
CREATE FUNCTION dbo.udf_Tbl_RowCOUNT (
@sTableName sysname -- Table to retrieve Row Count
)
RETURNS INT -- Row count of the table, NULL if not found.
/*
* Returns the row count for a table by examining sysindexes.
* This function must be run in the same database as the table.
*
* Common Usage:
SELECT dbo.udf_Tbl_RowCOUNT ('')
* Test
PRINT 'Test 1 Bad table ' + CASE WHEN
dbo.udf_Tbl_RowCOUNT ('foobar') IS NULL
THEN 'Worked' ELSE 'Error' END
* ) Copyright 2002 Andrew Novick http://www.NovickSoftware.com
* You may use this function in any of your SQL Server databases
* including databases that you sell, so long as they contain
* other unrelated database objects. You may not publish this
* UDF either in print or electronically.
***************************************************************/
AS BEGIN
DECLARE @nRowCount INT -- the row count
DECLARE @nObjectID int -- Object ID
SET @nObjectID = OBJECT_ID(@sTableName)
-- Object might not be found
IF @nObjectID IS NULL RETURN NULL
SELECT TOP 1 @nRowCount = rows
FROM sysindexes
WHERE id = @nObjectID AND indid < 2
RETURN @nRowCount
END
GO
GRANT EXECUTE ON [dbo].[udf_Tbl_RowCOUNT] TO PUBLIC
GO
\-- Stop copying above this line --/
Let's use it:
/-- Copy From Below this line --
use pubs -- assuming the UDF was created in pubs
go
SELECT [name]
, dbo.udf_Tbl_RowCOUNT ([name]) as [Row Count]
FROM sysobjects
WHERE type='U' and name != 'dtproperties'
ORDER BY [name]
GO
\-- Stop copying above this line --/
(Results)
name          Row Count
-----------   ---------
authors       24
discounts     3
employee      43
jobs          14
pub_info      8
publishers    8
roysched      86
sales         21
stores        6
titleauthor   25
titles        18
That's all there is to it.
--
thanks,
Jose de Jesus Jr. Mcp,Mcdba
Data Architect
Sykes Asia (Manila philippines)
MCP #2324787
"Venkat" wrote:

> [original question quoted above - trimmed]
Wednesday, March 7, 2012

Does chart support more than 800 points?

I'm using SQL Server 2000 Reporting Services. I want to create a curve using a
chart for a series of data.
If the number of points (x axis) is small (fewer than 800), the curve is
displayed correctly.
But if the number of points is more than 800, the curve does not appear;
after about 1 minute, in the "Event Viewer" of the server, I find a system
warning:
"A process serving application pool 'DefaultAppPool' suffered a fatal
communication error with the World Wide Web Publishing Service. The process
id was '22656'. The data field contains the error number(8007006d). "
Could anyone give me a solution to fix it? Thanks

In the log file of Reporting Services, I find an Out of Memory exception:
w3wp!processing!2afc!2006-6-30-15:38:29:: e ERROR:
System.OutOfMemoryException: Out of memory.
at System.Drawing.Graphics.CheckErrorStatus(Int32 status)
at System.Drawing.Graphics.DrawCurve(Pen pen, PointF[] points, Int32
offset, Int32 numberOfSegments, Single tension)
at Dundas.Charting.WebControl.GdiGraphics.DrawCurve(Pen pen, PointF[]
points, Int32 offset, Int32 numberOfSegments, Single tension)
at Dundas.Charting.WebControl.ChartTypes.LineChart.DrawLine(ChartGraphics
graph, CommonElements common, DataPoint point, Series series, PointF[]
points, Int32 pointIndex, Single tension)
at
Dundas.Charting.WebControl.ChartTypes.LineChart.ProcessChartType(Boolean
selection, ChartGraphics graph, CommonElements common, ChartArea area, Series
seriesToDraw)
at Dundas.Charting.WebControl.ChartTypes.LineChart.Paint(ChartGraphics
graph, CommonElements common, ChartArea area, Series seriesToDraw)
at Dundas.Charting.WebControl.ChartArea.Paint(ChartGraphics graph)
at Dundas.Charting.WebControl.ChartPicture.Paint(Graphics graph, Boolean
cursorsOnly, Boolean svgMode, XmlTextWriter svgTextWriter, String
documentTitle, Boolean resizable, Boolean preserveAspectRatio)
at Dundas.Charting.WebControl.ChartImage.GetImage()
at Dundas.Charting.WebControl.Chart.Save(Stream imageStream)
at
Microsoft.ReportingServices.ReportRendering.DundasChart.GetImage(ImageType
type, Boolean& hasImageMap)
at
Microsoft.ReportingServices.ReportRendering.DundasChart.GetImage(ImageType
type, ChartInstanceInfo instanceInfo, Boolean& hasImageMap)
at Microsoft.ReportingServices.ReportRendering.Chart.GetImage(ImageType
type, Boolean& hasImageMap)
w3wp!processing!2afc!2006-6-30-15:38:29:: e ERROR: Throwing
Microsoft.ReportingServices.ReportProcessing.ReportProcessingException: An
error has occurred during rendering of chart chart1. Details: Out of memory.,
;
Info:
Microsoft.ReportingServices.ReportProcessing.ReportProcessingException: An
error has occurred during rendering of chart chart1. Details: Out of memory.
--> System.OutOfMemoryException: Out of memory.
at System.Drawing.Graphics.CheckErrorStatus(Int32 status)
at System.Drawing.Graphics.DrawCurve(Pen pen, PointF[] points, Int32
offset, Int32 numberOfSegments, Single tension)
at Dundas.Charting.WebControl.GdiGraphics.DrawCurve(Pen pen, PointF[]
points, Int32 offset, Int32 numberOfSegments, Single tension)
at Dundas.Charting.WebControl.ChartTypes.LineChart.DrawLine(ChartGraphics
graph, CommonElements common, DataPoint point, Series series, PointF[]
points, Int32 pointIndex, Single tension)
at
Dundas.Charting.WebControl.ChartTypes.LineChart.ProcessChartType(Boolean
selection, ChartGraphics graph, CommonElements common, ChartArea area, Series
seriesToDraw)
at Dundas.Charting.WebControl.ChartTypes.LineChart.Paint(ChartGraphics
graph, CommonElements common, ChartArea area, Series seriesToDraw)
at Dundas.Charting.WebControl.ChartArea.Paint(ChartGraphics graph)
at Dundas.Charting.WebControl.ChartPicture.Paint(Graphics graph, Boolean
cursorsOnly, Boolean svgMode, XmlTextWriter svgTextWriter, String
documentTitle, Boolean resizable, Boolean preserveAspectRatio)
at Dundas.Charting.WebControl.ChartImage.GetImage()
at Dundas.Charting.WebControl.Chart.Save(Stream imageStream)
at
Microsoft.ReportingServices.ReportRendering.DundasChart.GetImage(ImageType
type, Boolean& hasImageMap)
at
Microsoft.ReportingServices.ReportRendering.DundasChart.GetImage(ImageType
type, ChartInstanceInfo instanceInfo, Boolean& hasImageMap)
at Microsoft.ReportingServices.ReportRendering.Chart.GetImage(ImageType
type, Boolean& hasImageMap)
-- End of inner exception stack trace --
w3wp!reportrendering!2afc!06/30/2006-15:38:34:: e ERROR: Throwing
Microsoft.ReportingServices.ReportProcessing.WrapperReportRenderingException:
Exception of type
Microsoft.ReportingServices.ReportRendering.ReportRenderingException was
thrown., ;
Info:
Microsoft.ReportingServices.ReportProcessing.WrapperReportRenderingException:
Exception of type
Microsoft.ReportingServices.ReportRendering.ReportRenderingException was
thrown. -->
Microsoft.ReportingServices.ReportRendering.ReportRenderingException:
Exception of type
Microsoft.ReportingServices.ReportRendering.ReportRenderingException was
thrown. -->
Microsoft.ReportingServices.ReportProcessing.ReportProcessingException: An
error has occurred during rendering of chart chart1. Details: Out of memory.
--> System.OutOfMemoryException: Out of memory.
-- End of inner exception stack trace --
at
Microsoft.ReportingServices.Rendering.HtmlRenderer.RenderingExtensionBase.Render(Report
report, NameValueCollection reportServerParameters, NameValueCollection
deviceInfo, NameValueCollection clientCapabilities,
EvaluateHeaderFooterExpressions evaluateHeaderFooterExpressions,
CreateAndRegisterStream createAndRegisterStream)
at
Microsoft.ReportingServices.ReportProcessing.ReportProcessing.a(DateTime A_0,
GetReportChunk A_1, ProcessingContext A_2, RenderingContext A_3,
CreateReportChunk A_4, Boolean& A_5)
-- End of inner exception stack trace --
w3wp!library!2afc!06/30/2006-15:38:34:: i INFO: Initializing
EnableExecutionLogging to 'True' as specified in Server system properties.
w3wp!webserver!2afc!06/30/2006-15:38:49:: e ERROR: Reporting Services error
Microsoft.ReportingServices.Diagnostics.Utilities.RSException: Exception of
type Microsoft.ReportingServices.ReportRendering.ReportRenderingException was
thrown. -->
Microsoft.ReportingServices.ReportProcessing.WrapperReportRenderingException:
Exception of type
Microsoft.ReportingServices.ReportRendering.ReportRenderingException was
thrown. -->
Microsoft.ReportingServices.ReportRendering.ReportRenderingException:
Exception of type
Microsoft.ReportingServices.ReportRendering.ReportRenderingException was
thrown. -->
Microsoft.ReportingServices.ReportProcessing.ReportProcessingException: An
error has occurred during rendering of chart chart1. Details: Out of memory.
--> System.OutOfMemoryException: Out of memory.
w3wp!library!7c40!6/30/2006-15:38:51:: i INFO: Cleaned 0 batch records, 0
policies, 0 sessions, 0 cache entries, 0 snapshots, 0 chunks, 0 running jobs
w3wp!library!2afc!06/30/2006-15:39:09:: e ERROR: Found
System.OutOfMemoryException exception:
Microsoft.ReportingServices.Diagnostics.Utilities.RSException: Exception of
type Microsoft.ReportingServices.ReportRendering.ReportRenderingException was
thrown. -->
Microsoft.ReportingServices.ReportProcessing.WrapperReportRenderingException:
Exception of type
Microsoft.ReportingServices.ReportRendering.ReportRenderingException was
thrown. -->
Microsoft.ReportingServices.ReportRendering.ReportRenderingException:
Exception of type
Microsoft.ReportingServices.ReportRendering.ReportRenderingException was
thrown. -->
Microsoft.ReportingServices.ReportProcessing.ReportProcessingException: An
error has occurred during rendering of chart chart1. Details: Out of memory.
--> System.OutOfMemoryException: Out of memory.
w3wp!library!2afc!06/30/2006-15:39:09:: e ERROR: Terminating worker process
How can I avoid the out-of-memory error? Thanks
"Jun Yuan" wrote:
> [original question quoted above - trimmed]
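The thread doesn't give a resolution, but one common workaround (an assumption on my part, not something stated in the posts) is to aggregate the dataset query so the chart receives fewer points. Table and column names below are illustrative:

```sql
-- Downsample to one averaged point per 10-minute bucket before charting,
-- so the chart draws a few hundred points instead of many thousands.
SELECT DATEADD(minute, (DATEDIFF(minute, 0, ReadingTime) / 10) * 10, 0) AS Bucket,
       AVG(ReadingValue) AS AvgValue
FROM dbo.Readings
GROUP BY DATEADD(minute, (DATEDIFF(minute, 0, ReadingTime) / 10) * 10, 0)
ORDER BY Bucket;
```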

Sunday, February 19, 2012

Documenting large numbers of SPs

Hi All,

I have a large number of SPs that I would like to be able to document and provide this documentation to prospective clients. That is, provide them enough information without giving them the source code for the procedures.

I have found that all the parameters are in the sys.parameters table.

But I was wondering. Are the fields that are sent back out of an SP captured and recorded somewhere in the SQL Server catalog?

Is there an easy way to find out what fields are coming out of an SP?

Thanks in Advance

Peter

You may be better served exploring one of the third party documentation tools.

I like ApexSQL's Doc tool.

You can download a fully functional eval version.


Hi Arnie,

thanks for the tip...I'll take a look..

Peter
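One era-appropriate way to see a procedure's output columns without executing it is SET FMTONLY, which makes SQL Server return metadata-only result sets. It has caveats (procedures that build temp tables or branch conditionally may fail), and the procedure and parameter names below are illustrative:

```sql
SET FMTONLY ON;   -- return column metadata only; no rows are processed
EXEC dbo.usp_GetOrders @CustomerID = NULL;  -- illustrative procedure
SET FMTONLY OFF;
```

The column names and types of each result set come back without the procedure's data-modifying logic actually running against real rows.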

Tuesday, February 14, 2012

Do While Skip in Selecting...

Hello:

I have one table that contains a column named ID Number and a column named Date. I have a Do While statement that runs a SQL select statement a few times, based on the number of records with the same ID Number. During each loop iteration the information is copied into another table and deleted from the old table. Looking at the results, I see that on the second iteration the data was not selected and the Select statement did not run, so the old value of varValue is used again. Any reasons why?

Here is a code snippet of what is going on:


Do While varCount < varRecordCount
    conSqlConnect.Open()
    cmdSelect = New SqlCommand("Select * From temp_records_1 where [id number]=@idnumber and date<@date", conSqlConnect)
    cmdSelect.Parameters.Add("@idnumber", "10000") ' name must match the SQL text
    cmdSelect.Parameters.Add("@date", dtnow)
    dtrdatareader = cmdSelect.ExecuteReader()
    While dtrdatareader.Read()
        If IsDBNull(dtrdatareader("value")) = False Then
            varValue = dtrdatareader("value")
        End If
    End While
    dtrdatareader.Close()
    conSqlConnect.Close()

    '#####The information above is copied to another table here
    '#####The record where the information was received is deleted.

    varCount = varCount + 1
Loop

Any ideas?

After playing around with this for a while, I found that the select statement is incorrect. The part where it says date<@date selects more than one row, instead of just one row.

I read articles that discuss using SELECT MAX(column), but that will only return one column... how can I select a row based on a column with the maximum value?

OK, great, I got it working now... just posting the answer here for future reference: I went ahead and used Select Top 1 * instead of Select *, and it solved everything.
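One caution on the accepted fix: without an ORDER BY, TOP 1 returns an arbitrary qualifying row. A sketch of the query with a deterministic ordering, assuming the most recent qualifying row is the one wanted:

```sql
SELECT TOP 1 *
FROM temp_records_1
WHERE [id number] = @idnumber
  AND [date] < @date
ORDER BY [date] DESC;  -- newest qualifying row first
```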