
Tuesday, March 27, 2012

History Tables

Can someone please give me a quick explanation of using history tables in an application? The usage I am looking at is a Date of Birth field on an order form, to be used later for reporting purposes.|||Maybe I'm wrong, but here it is: history tables are used to record changes of some data elements through time. In your example, there are customers who fill in order forms and choose one or more items every time. It is then possible to write a report showing all the items a customer ordered in the past. This makes sense to me.

But what kind of a history element is a date of birth? Everyone has only one date of birth, and this information is always valid - 7 years ago, today, next year and in 500 years ...|||Perhaps I didn't explain it correctly.
I need to make a DOB field on an existing order form a required field, so that the order can be pulled at a later date to report how many people over a certain age ordered the product. I thought it might be wise to create a history table so that running the query for the report would not put a drain on the application. Am I wrong?|||If you are building a data warehouse, then it would probably make sense. Otherwise, storing a "date of birth" column along with "orders" would be an example of denormalized design. Unless there is a good reason to do so, I'd rather see "date of birth" stored with the customer data (name, address, ...); orders should contain a foreign key column to join an "order" with its customer.
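The normalized design suggested in the last reply can be sketched as follows; the table and column names here are my own assumptions for illustration, not anything from the thread:

```sql
-- Hypothetical schema: DOB lives with the customer, not the order.
CREATE TABLE Customer (
    CustomerID  int IDENTITY(1,1) PRIMARY KEY,
    Name        varchar(100) NOT NULL,
    DateOfBirth datetime     NOT NULL  -- required, so the age report is always possible
);

CREATE TABLE Orders (
    OrderID    int IDENTITY(1,1) PRIMARY KEY,
    CustomerID int      NOT NULL REFERENCES Customer (CustomerID),
    OrderDate  datetime NOT NULL
);

-- Age-based report without any history table.
-- (DATEDIFF(yy, ...) counts year boundaries, so this is an approximate age.)
SELECT COUNT(*) AS OrdersByOlderCustomers
FROM Orders AS o
JOIN Customer AS c ON c.CustomerID = o.CustomerID
WHERE DATEDIFF(yy, c.DateOfBirth, o.OrderDate) >= 65;
```

With this layout the report query joins two narrow tables, so there is no obvious need for a separate history table for reporting load.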

Friday, March 23, 2012

Highest week of sales/usage

I'm working with Crystal Reports 10 against a SQL Server 2000 database. I've got a table which stores the details of items, including the sale times (a date/time field). I need to calculate the sales figure for the highest week of sales; in other words, I need to find out the highest amount of sales recorded in any given week. How can I do this?|||The quickest way would be to create a group based on the date, and in the group options select "for each week" as the required grouping.
Create Sum (sigma symbol) fields for the data.
Suppress your details section.
In the group footer you will have totals based on weekly aggregates.
In the Report menu, select TopN/Sort Group Expert.
Select the tab for your weekly group (if you have more than one group, that is).
In "For This Group Sort", change All to TopN.
Enter 1 in the "Where N is" field.
Remove the tick from the "Show others" checkbox.
OK that dialog.
Run your report. You should see only the highest week.
You may want to format the date field in your group to show the start and end dates of the week that finished top.

Hope this helps

dave|||Re the above - this is all client-side in Crystal.
It would be quicker if you could achieve it in SQL on the server side, but the above solution is quick-ish, because Crystal's query to SQL Server will include an SQL-compliant "TopN" modifier.

dave
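For reference, the server-side alternative dave mentions might look like this in T-SQL; the Sales table and the SaleTime and Amount columns are assumptions for illustration:

```sql
-- Highest week of sales computed on the server (SQL Server 2000 syntax).
-- DATEPART(wk, ...) numbers weeks within a year, so the year is grouped too.
SELECT TOP 1
       DATEPART(yyyy, SaleTime) AS SaleYear,
       DATEPART(wk,   SaleTime) AS SaleWeek,
       SUM(Amount)              AS WeeklyTotal
FROM Sales
GROUP BY DATEPART(yyyy, SaleTime), DATEPART(wk, SaleTime)
ORDER BY WeeklyTotal DESC;
```

Note that DATEPART's week numbering depends on the server's DATEFIRST setting, so week boundaries may not match Crystal's "for each week" grouping exactly.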

High usage of RAM on SQL2k

Hi

I have a SQL Server with very high usage. The web site connected to the SQL Server has approximately 1000 unique visitors at any given time.
The database is optimized and the site runs really fast now.

But a problem we've had since the beginning is that SQL Server constantly uses 1.8 GB of RAM. Never less, never more.

Do you have any idea how to decrease the usage of RAM? I am not sure that this high usage is necessary.|||Check your logs; if they have grown too large, truncate them.
Maybe it helps you.|||But aren't the log files truncated during the backup?|||Logs?

run sp_configure and tell me what the results are for min server memory.|||min server memory (MB): minimum 0, maximum 2147483647, config_value 0, run_value 0
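If the goal is to stop SQL Server from holding 1.8 GB, the usual lever is max server memory rather than min server memory; a sketch (the 1536 MB cap is only an example value):

```sql
-- Cap the SQL Server buffer pool; 'max server memory (MB)' is an advanced option,
-- so advanced options must be shown first.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 1536;
RECONFIGURE;
```

The setting takes effect without a restart, though SQL Server releases memory gradually rather than immediately.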

High PF Usage

The system is a clustered x64 SQL Server 2005 SP2 Standard with 32 GB of RAM. Task Manager shows the PF Usage as high as 27 GB. Any ideas why the usage would be so high?

Did you give the "Lock Pages in Memory" right to the SQL Server Service account? This is extremely important for 64-bit SQL Server 2005. Otherwise, the OS controls paging instead of SQL Server. I have some more details below:

http://glennberrysqlperformance.spaces.live.com/blog/cns!45041418ECCAA960!115.entry

Wednesday, March 21, 2012

High Memory Usage by SQL 2000

Hi,
Recently I've moved some of our databases from a Windows 2000 Server with 512 MB of physical memory to a newer Windows 2003 Server with 1024 MB of physical memory.
The new server functions OK, but its memory usage is very high in comparison to the old server: the old sqlservr.exe process used around 400 MB of memory, while the new one is using nearly 850 MB after only 2 weeks (while at the beginning it used something like 600 MB).
Is such behaviour normal, and can it be explained by the new server having a larger amount of physical memory available?
In any case, are there any tools that help determine which application consumes the largest amount of memory used by SQL?
I think such large usage for a relatively not-so-hard-working server is strange.
Any help/tips will be highly appreciated!!!
Thanks in advance,
Barak|||Barak,
SQL Server 2000 will grab and hold on to as much memory as it needs. SQL Server 2000 will not, by default, release memory back to the operating system after using it. This is normal and expected behavior. The basic reason is that Microsoft expects SQL Server 2000 to be installed as a stand-alone application on a server. Thus not only is nothing gained by SQL Server 2000 releasing memory back to the OS but, in fact, performance is hurt if SQL Server 2000 is continually requesting and releasing memory ... the request for and release of memory is fairly obviously an activity that requires some resources, at the very least CPU time, that could otherwise be utilized.
Additional information is available in the SQL Server 2000 Books Online in the sections "Memory Architecture" and "Dynamically Managing Memory on Windows NT and Windows 2000".
Chief Tenaya
"Barak Turovsky" <baraktur@.mail.ru> wrote in message
news:uslssSIHEHA.1432@.TK2MSFTNGP12.phx.gbl...
> [quoted text snipped]|||Barak,
BTW, within limits, the more memory available to SQL Server 2000, the better the performance one can expect from it. Additional memory allows SQL Server 2000 to create and use a larger memory cache, thus reducing physical I/O ... and physical I/O is more often than not what drags down performance in almost any database.
Chief Tenaya
"Barak Turovsky" <baraktur@.mail.ru> wrote in message
news:uslssSIHEHA.1432@.TK2MSFTNGP12.phx.gbl...
> [quoted text snipped]
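Chief Tenaya's point about SQL Server 2000 holding on to memory can be checked from the server itself; a sketch using master..sysperfinfo, which mirrors the PerfMon counters:

```sql
-- Compare how much memory SQL Server has committed (Total Server Memory) with
-- how much it wants (Target Server Memory). LIKE is used because counter-name
-- spacing varies slightly between builds.
SELECT object_name, counter_name, cntr_value
FROM master..sysperfinfo
WHERE object_name LIKE '%Memory Manager%'
  AND counter_name LIKE '%Server Memory%';
```

If Total is close to Target, SQL Server has as much memory as it is configured to want, which matches the "grab and hold" behavior described above.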


|||Chief Tenaya is correct and gave you some good references. You should also take a look at KB 321363, "INF: SQL Server Memory Usage": http://support.microsoft.com/?id=321363.
Cindy Gross, MCDBA, MCSE
http://cindygross.tripod.com
This posting is provided "AS IS" with no warranties, and confers no rights.


high memory usage

Hi, I have a SQL Server machine that reported high memory usage (2.5 GB), which caused some workstations to fail to save data back to SQL Server (they use an ADO connection and a client-side recordset; the connection failed).
I think a reboot will fix the problem, but I want to know what caused the problem and how to check.
Please advise.
Well, you can:
1. Buy more RAM. That's not really that much RAM.
2. Set a maximum memory size for SQL Server so it doesn't use too much RAM.
Here's an article that should help you:
http://msdn.microsoft.com/library/de...onfig_9zfy.asp
"js" <js@.someone.com> wrote in message
news:OwlAQcyUFHA.3076@.TK2MSFTNGP12.phx.gbl...
> [quoted text snipped]
|||Is it true that an application can only use 4 GB of memory?
"Derrick Leggett" <derrickleggett@.yahoo.com> wrote in message
news:uqZZJlyUFHA.3044@.TK2MSFTNGP10.phx.gbl...
> [quoted text snipped]|||Can I see the dropped connections in the SQL log?
"Derrick Leggett" <derrickleggett@.yahoo.com> wrote in message
news:uqZZJlyUFHA.3044@.TK2MSFTNGP10.phx.gbl...
> [quoted text snipped]|||What will happen when memory usage reaches max server memory (2 GB) in SQL Server?
"Derrick Leggett" <derrickleggett@.yahoo.com> wrote in message
news:uqZZJlyUFHA.3044@.TK2MSFTNGP10.phx.gbl...
> [quoted text snipped]
|||If you really have a problem with the SQL Server, then you should see some kind of errors in either the Event Viewer logs or the SQL Server log. Is this a dedicated SQL Server? How much total RAM do you have in the machine? What else is running on it? What versions of Windows and SQL Server do you have?
"js" <js@.someone.com> wrote in message
news:O73EI6yUFHA.2664@.TK2MSFTNGP15.phx.gbl...
> [quoted text snipped]
|||If the server has enough RAM, then nothing will happen. If SQL Server actually needs more RAM, then you will see paging activity, a decrease in performance, more recompiles of code, etc. The only way to fix it, though, if you are on SQL Server Standard Edition and this is a dedicated SQL Server, is to either 1. tune the server using Profiler, looking at execution plans, etc.; or 2. buy Enterprise Edition and more RAM.
You might want to check out http://www.sql-server-performance.com for information on how to tune SQL Server, better manage memory, etc.
"js" <js@.someone.com> wrote in message
news:uP9NkQ0UFHA.3184@.TK2MSFTNGP15.phx.gbl...
> [quoted text snipped]
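On the earlier question of which application is consuming the memory, one rough server-side check in SQL Server 2000 is master..sysprocesses; a sketch:

```sql
-- List the sessions reported as using the most memory (memusage is in pages);
-- hostname and program_name identify the client application behind each session.
SELECT TOP 5 spid, hostname, program_name, memusage
FROM master..sysprocesses
ORDER BY memusage DESC;
```

This will not account for the buffer cache itself, but it can point at a client whose queries are driving the memory use.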


high memory usage

Hi, i have an sql server machine reported high memory usage (2.5 GB), and
cause some workstations failed to save data back to sql server (use ADO
connection and client record set, connection failed).
I think reboot will fix the problem, but I want to know what cause problem,
how to check?
Please advice.Well, you can:
1. Buy more RAM. That's not really that much RAM.
2. Set a maximum memory size for SQL Server so it doesn't use too much RAM.
Here's an article that should help you:
g_9zfy.asp" target="_blank">http://msdn.microsoft.com/library/d... />
g_9zfy.asp
"js" <js@.someone.com> wrote in message
news:OwlAQcyUFHA.3076@.TK2MSFTNGP12.phx.gbl...
> Hi, i have an sql server machine reported high memory usage (2.5 GB), and
> cause some workstations failed to save data back to sql server (use ADO
> connection and client record set, connection failed).
> I think reboot will fix the problem, but I want to know what cause
problem,
> how to check?
> Please advice.
>|||is it an application can only use 4 GB memory?
"Derrick Leggett" <derrickleggett@.yahoo.com> wrote in message
news:uqZZJlyUFHA.3044@.TK2MSFTNGP10.phx.gbl...
> Well, you can:
> 1. Buy more RAM. That's not really that much RAM.
> 2. Set a maximum memory size for SQL Server so it doesn't use too much
> RAM.
> Here's an article that should help you:
> fig_9zfy.asp" target="_blank">http://msdn.microsoft.com/library/d...>
fig_9zfy.asp
>
> "js" <js@.someone.com> wrote in message
> news:OwlAQcyUFHA.3076@.TK2MSFTNGP12.phx.gbl...
> problem,
>|||Can I see the dropped connection in sql log?
"Derrick Leggett" <derrickleggett@.yahoo.com> wrote in message
news:uqZZJlyUFHA.3044@.TK2MSFTNGP10.phx.gbl...
> Well, you can:
> 1. Buy more RAM. That's not really that much RAM.
> 2. Set a maximum memory size for SQL Server so it doesn't use too much
> RAM.
> Here's an article that should help you:
> fig_9zfy.asp" target="_blank">http://msdn.microsoft.com/library/d...>
fig_9zfy.asp
>
> "js" <js@.someone.com> wrote in message
> news:OwlAQcyUFHA.3076@.TK2MSFTNGP12.phx.gbl...
> problem,
>|||What's will happen when memory reach max server memory(2 GB) in SQL server?
"Derrick Leggett" <derrickleggett@.yahoo.com> wrote in message
news:uqZZJlyUFHA.3044@.TK2MSFTNGP10.phx.gbl...
> Well, you can:
> 1. Buy more RAM. That's not really that much RAM.
> 2. Set a maximum memory size for SQL Server so it doesn't use too much
> RAM.
> Here's an article that should help you:
> fig_9zfy.asp" target="_blank">http://msdn.microsoft.com/library/d...>
fig_9zfy.asp
>
> "js" <js@.someone.com> wrote in message
> news:OwlAQcyUFHA.3076@.TK2MSFTNGP12.phx.gbl...
> problem,
>|||If you really have a problem with the SQL Server, then you should see some
kind of errors in either the event viewer logs or the SQL Server log. Is
this a dedicated SQL Server? How much total RAM do you have in the machine?
What else is running on it? What version do you have of Windows and SQL
Server?
"js" <js@.someone.com> wrote in message
news:O73EI6yUFHA.2664@.TK2MSFTNGP15.phx.gbl...
> Can I see the dropped connection in sql log?
> "Derrick Leggett" <derrickleggett@.yahoo.com> wrote in message
> news:uqZZJlyUFHA.3044@.TK2MSFTNGP10.phx.gbl...
http://msdn.microsoft.com/library/d...config_9zfy.asp
>|||If the server has enough RAM, then nothing will happen. If the SQL Server
actually needs more RAM, then you will see paging activity, a decrease in
performance, more recompiles of code, etc. The only way to fix it though if
you are on SQL Server standard edition and this is a dedicated SQL Server is
to either 1. tune the server using Profiler, looking at execution plans,
etc; or 2. Buy Enterprise Edition and more RAM.
You might want to check out http://www.sql-server-performance.com for
information on how to tune SQL Server, better manage memory, etc.
"js" <js@.someone.com> wrote in message
news:uP9NkQ0UFHA.3184@.TK2MSFTNGP15.phx.gbl...
> What's will happen when memory reach max server memory(2 GB) in SQL
server?
> "Derrick Leggett" <derrickleggett@.yahoo.com> wrote in message
> news:uqZZJlyUFHA.3044@.TK2MSFTNGP10.phx.gbl...
http://msdn.microsoft.com/library/d...config_9zfy.asp
>
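For reference, Derrick's option 2 (capping SQL Server's memory) can be done with sp_configure; a minimal sketch, where the value is in MB and 2048 is only an example figure:

```sql
-- Cap SQL Server's memory use at 2 GB (the value is in MB).
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory', 2048;
RECONFIGURE;
```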

Monday, March 19, 2012

High CPU utilization after SQL Server reboot.

Hi,
The production database server indexes are rebuilt, usage updated and
the server is rebooted once a month. For about two days after this
maintenance window, performance isn't as good as it used to be and I've
noticed CPU utilization increases by 15-25%. I realize that all SPs
have to be recompiled after a SQL Server restart and this consumes
additional resources. But is it normal to see such marked performance
degradation after a reboot? Is there anything I can do to avoid this
short-term performance dip?
Thanks,
Smitha|||Hi Smitha,
Check the SQL Server Errorlog, in Query Analyser you can run
master..xp_readerrorlog and check if any databases are still doing recovery.
SP's etc... will only get recompiled when they are called.
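Tony's check can be run straight from Query Analyzer:

```sql
-- Read the current SQL Server error log and look for databases
-- still in recovery after the restart.
EXEC master..xp_readerrorlog;
```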
You really don't need to reboot servers any more; I'd suggest you keep the
server up and running and only reboot when implementing hot fixes. This will
keep the SQL cache balanced for your load, connection pooling primed, etc.
If you want to diagnose the problem, then on your next reboot: before
rebooting, set the MSSQLServer service to start manually; once the server has
rebooted, wait until everything else has loaded, get SQL Profiler up and
ready, then quickly fire off a trace as soon as you have restarted the
MSSQLServer service. You could also set up a trace using procs and set it to
start automatically.
Hope that helps.
Tony.
Tony Rogerson
SQL Server MVP
http://sqlserverfaq.com - free video tutorials
<smithabreddy@.gmail.com> wrote in message
news:1146575204.390850.296000@.e56g2000cwe.googlegroups.com...
> Hi,
> The production database server indexes are rebuilt, usage updated and
> the server is rebooted once a month. For about two days after this
> maintenance window, performance isn't as good as it used to be and I've
> noticed CPU utilization increases by 15-25%. I realize that all SPs
> have to be recompiled after a SQL Server restart and this consumes
> additional resources. But is it normal to see such marked performance
> degradation after a reboot? Is there anything I can do to avoid this
> short-term performance dip?
> Thanks,
> Smitha
>|||I am just making a couple of guesses here regarding what would explain this,
but here goes...
When the server first comes up, a couple things are different:
1. Execution plans are no longer cached, so there is a little overhead as
every new SQL statement executes and is cached.
Generating execution plans I would expect to take a very small amount of
overhead and account for only a small amount of what you are seeing.
Nonetheless, some of the extra CPU would be explained by this.
2. Data is no longer cached, so everything needs to be read from disk,
rather than from memory.
Reading from disk data that is normally cached in memory I would expect to
take much more resources. All queries will run slower until the data is
cached in memory, and this means more concurrent processes, thus more CPU.
It is possible that running a couple queries that will read a lot of this
data from disk and place it in cache may make a difference. The more memory
you have, the more I would expect this to make a difference. Of course,
figuring out what data to query for this purpose is a challenge in itself,
assuming that this is really part of the issue.
A possible test would be to try duplicating this on a test machine. If you
can mimic the load well enough to duplicate this after a reboot, you can try
purging the data cache and the execution plans separately then measure the
results of each. I'm not sure of the commands for each of these functions
(or if they can be done separately) but it seems it should confirm or rule
out each of these as an issue.
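For what it's worth, the two caches can be purged separately with documented DBCC commands - on a test server only, since both will hurt performance until the caches warm back up:

```sql
-- Empty the plan cache: every statement is recompiled on next execution.
DBCC FREEPROCCACHE;

-- Empty the data (buffer) cache: subsequent reads come from disk.
DBCC DROPCLEANBUFFERS;
```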
<smithabreddy@.gmail.com> wrote in message
news:1146575204.390850.296000@.e56g2000cwe.googlegroups.com...
> Hi,
> The production database server indexes are rebuilt, usage updated and
> the server is rebooted once a month. For about two days after this
> maintenance window, performance isn't as good as it used to be and I've
> noticed CPU utilization increases by 15-25%. I realize that all SPs
> have to be recompiled after a SQL Server restart and this consumes
> additional resources. But is it normal to see such marked performance
> degradation after a reboot? Is there anything I can do to avoid this
> short-term performance dip?
> Thanks,
> Smitha
>|||Tony and Jim,
Thank you for the responses.
I will start a trace next time the server is rebooted to determine what
causes the CPU load and then perhaps see if we can get away with not
rebooting the server. If not, then perhaps compile a list of oft used
SPs (the ones which perform data lookups) and fire them after a reboot
and see if that will help.|||Please post your findings back on the newsgroup. I am interested in knowing
the results, and others might have the same issue.
<smithabreddy@.gmail.com> wrote in message
news:1146590994.348548.277780@.e56g2000cwe.googlegroups.com...
> Tony and Jim,
> Thank you for the responses.
> I will start a trace next time the server is rebooted to determine what
> causes the CPU load and then perhaps see if we can get away with not
> rebooting the server. If not, then perhaps compile a list of oft used
> SPs (the ones which perform data lookups) and fire them after a reboot
> and see if that will help.
>|||once a month would correspond to when MS released patches :)
"Tony Rogerson" <tonyrogerson@.sqlserverfaq.com> wrote in message
news:uYq6vCfbGHA.4716@.TK2MSFTNGP03.phx.gbl...
> Hi Smitha,
> Check the SQL Server Errorlog, in Query Analyser you can run
> master..xp_readerrorlog and check if any databases are still doing
> recovery.
> SP's etc... will only get recompiled when they are called.
> You really don't need to reboot servers anymore, i'd suggest you keep it
> up and running and only reboot on implementing hot fixes, this will keep
> the SQL cache balanced for your load, connection pooling primed etc...
> If you want to diagnose the problem then on your next reboot, before
> rebooting set the MSSQLServer service to manually start, then on the
> server rebooting wait until everything else has loaded, get SQL profiler
> up and ready then quickly fire off a trace as soon as you have restarted
> the MSSSQLSErver service, you could set up a trace using proc's and set to
> auto start.
> Hope that helps.
> Tony.
> --
> Tony Rogerson
> SQL Server MVP
> http://sqlserverfaq.com - free video tutorials
>
> <smithabreddy@.gmail.com> wrote in message
> news:1146575204.390850.296000@.e56g2000cwe.googlegroups.com...
>|||I did not reboot the server this time but instead only performed DBCC
DBREINDEX on all tables, updated all stats and updated usage as part of
monthly maintenance.
Also added 3 additional data files for TempDB to use (this machine has
4 processors) so the number of data files for TempDb = number of
physical processors on the box.
After everything, went into the application and clicked through some of
the most common pages - there was a delay of a second or two the first
time - but everything ran fine the second time around.
Performance seems fine so far (better than normal which I would expect
after performing maintenance on the DB) - no erratic CPU usage spikes.
Because recompiles are forced only after a SQL Server restart - perhaps
this was causing the high CPU utilization. Sorry - did not run the
trace. I realize that traces help identify issues, but it makes me
nervous to add additional load on the production DB server when the
issue is heavy load on the server. (In the past, I've had the trace
run on a second server and stored results on this second server but
still saw a small performance decline in the prod db server which held
steady during the duration of this trace.)
Thanks again for all of your input.
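A rough sketch of the maintenance steps described above (sp_MSforeachtable is undocumented but widely used; the tempdb file name, path, and size below are placeholders, not the poster's actual values):

```sql
-- Rebuild all indexes, refresh statistics, and correct usage counts
-- (SQL Server 2000/2005-era syntax).
EXEC sp_MSforeachtable 'DBCC DBREINDEX (''?'')';
EXEC sp_updatestats;
DBCC UPDATEUSAGE (0);

-- Add tempdb data files until the count matches the number of physical
-- processors (placeholder name, path, and size).
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2, FILENAME = 'T:\tempdb2.ndf', SIZE = 512MB);
```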
Jim Underwood wrote:
> Please post your findings back on the newsgroup. I am interested in knowing
> the results, and others might have the same issue.
> <smithabreddy@.gmail.com> wrote in message
> news:1146590994.348548.277780@.e56g2000cwe.googlegroups.com...

High CPU Usage SSIS Package

Anyone aware of anything I can look at to improve performance of my SSIS packages? I basically have two master packages that call about 15 child package tasks each. All the package tasks are called sequentially, and all they do is load flat files into temporary tables and execute a simple stored proc. Each package task checks to see if a file is there; if so, it processes it, else it just ends. The packages are scheduled to run every minute to check for files. If I watch the CPU during execution I can see the two instances of DTExec.exe running and consuming about 25-30% of the CPU. The box that this runs on is a quad processor with 4 GB of RAM.

Any ideas on how to improve the performance of this? I have tried playing with the transaction level, but that didn't seem to do much performance wise.

Thanks for the help!

I thoroughly recommend you digest this SSIS Performance Whitepaper: http://www.microsoft.com/technet/prodtechnol/sql/2005/ssisperf.mspx. It's a really interesting read and has some great tips on performance tuning.

Understanding how SSIS processes data is half the battle and this paper really helps towards this.

What performance problems are you having? As ever, performance tuning is all about identifying where your bottlenecks are.

-Jamie

|||

Yeah I browsed through that whitepaper and didn't see anything that triggered me for this situation.

Basically my setup is the following...

There is a directory that contains data that an AS/400 sends over via FTP. So for a customer change there is a file called customer.txt. Then there is an SSIS package that checks if that file exists; if so it processes it, if not it just ends (using a precedence constraint). There are probably 30 different files that get put into the FTP directory. So there are two master packages, and each one calls about 15 child packages that work just like the package I described above. Most of the time there aren't very many files there, so most of the packages fire up, see there isn't a file there, and end. Seems like the bottleneck is in the packages spinning up, as once it gets going it seems to go pretty fast. I just don't understand how checking for a file can take up that much CPU. Are there other strategies for implementing the kind of architecture we have?

Thanks!

|||

OK, well you're right that there is an overhead to executing packages. Witness: http://blogs.conchango.com/jamiethomson/archive/2005/03/01/1100.aspx

(But caveat those observations with what I've said here: http://blogs.conchango.com/jamiethomson/archive/2005/03/02/1107.aspx)

Basically, if you can reduce the number of packages that have to be executed that would be very beneficial. That would be my strategy anyway. Is there a reason that everything can't happen within 1 package?

Interested to know how you progress with this.

-Jamie

|||

Those are some interesting findings that you have there.

I could implement most of these in one package and skip the child packages. The reason for having them is for versioning and concurrent development. Now that we are pretty much done with developing these I could probably transform this into one package that doesn't use any child packages.

So any thoughts on the best way to implement this in one package then? Wondering if I can have a config file that has all the file names to check for and, based on that, perform the appropriate transformation, which is nothing more than trimming off the spaces, loading it into a temp table and executing a stored proc that updates the master table.

Seems kind of ridiculous that loading these child packages takes that much overhead, but I guess I can see that.

|||

Yeah, the overhead of spinning up the packages is only really a problem if its proportion of total package execution time is large. In other words, if you ain't doing much, don't try and put everything into separate packages.

I would recommend using sequence containers to logically separate your processing in the control flow. So, where you had a child package before, you now have a sequence container. These can of course run in parallel. And also, only execute the stuff for which a file exists - I'm sure you've already gone down that road anyway.

-Jamie

|||Maybe what I should try first is to put the check if the file exists in the master package and not in the child package. That way it will have to spin up a lot fewer packages.|||

Stark77 wrote:

Maybe what I should try first is to put the check if the file exists in the master package and not in the child package. That way it will have to spin up a lot fewer packages.

Yep, another good idea. Although if you do that your child packages will be (correct me if I'm wrong) only loading up a file and executing a sproc on the data. Such a small amount of work doesn't really justify moving it out into another package.

-Jamie

|||

Ha...finally getting around to doing this. You have any best practices on doing something like this?

Basically here is what I am doing. We get about 30-40 different files in an ftp directory and when they arrive we need to process them. Each file has a different layout. Basically we grab the file, load it to a temp table that matches the file layout and then fire a stored procedure to process the records in the temp table. At the end we delete the file if it was processed successfully. We process the files in a specific order (orderheader, then orderdetail, shipmentheader, shipmentdetail, etc). The only other task we do is to trim all the string variables.

Basically I am wondering the best way to lay this out or if there is a best practice for processing something like this. I thought about looping through all the .txt files in the directory, but that won't process them in the order I want.

Thanks for the help!

|||

I use a table in a management db that contains the files. This way you have utmost control. In addition you can use it to log failures against the files, processing history etc.

You can improve the overhead time a package has by delaying validation on tasks/components. This way the validation only occurs when needed.
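For what it's worth, a hypothetical shape for such a control table - the names and columns below are made up for illustration, not the poster's actual schema:

```sql
-- Hypothetical control table: one row per expected feed file, with an
-- explicit sequence to force processing order (orderheader before
-- orderdetail, etc.) and columns for logging outcomes.
CREATE TABLE dbo.FeedFile (
    FeedFileID   int IDENTITY(1,1) PRIMARY KEY,
    FileName     varchar(260) NOT NULL,
    ProcessOrder int NOT NULL,
    LastStatus   varchar(20) NULL,
    LastRunAt    datetime NULL
);

-- A Foreach loop in the master package could then read the list in order:
SELECT FileName
FROM dbo.FeedFile
ORDER BY ProcessOrder;
```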

|||

Any chance you can script out the table you use so I can brainstorm off of it? Not really following what you are suggesting. How does the package flow out then? Do you have a Foreach loop that reads that table? Would be cool to send me over that table and a screenshot of the package, or the package itself.

Thanks for the input. I am definitely going to throw an article together once I get this one done.

High CPU usage on 64bit (8 Way dual core) machine

Hi
I'm running SQL Server 2005 (64-bit) on Windows Server 2003 (64-bit) in a
clustered environment (both machines in the cluster are identical), and the
machines have 12 GB of RAM each and 8 dual-core CPUs (3 GHz) (DL580 G4's),
presenting 16 CPUs to the OS. We moved this instance from an 8-way Xeon
cluster with 4 GB of memory, which pottered along quite happily at 50-70% CPU.
However the new cluster - which we would have expected to manage the workload
much better, especially in terms of CPU - is constantly at about 80-90% CPU on
all processors, and frequently in the high 90's. I've seen a couple of KB
articles relating to problems like this, however none seems to quite fit our
situation.
Anyone got any ideas?|||I suspect that the lower CPU usage was due to slower disks and less memory.
In that case, the CPU has to wait for data. If it's not in cache, then it
has to go to disk. During this time, the CPU waits, and CPU usage goes to
0.
In your newer system, you have more cache, which reduces the need to go to
disk as often. Because the data is already available, then the CPU can be
put to work right away to service the query.
Has query performance suffered or are you just concerned that you see higher
CPU usage?
--
Tom
----
Thomas A. Moreau, BSc, PhD, MCSE, MCDBA, MCITP, MCTS
SQL Server MVP
Toronto, ON Canada
https://mvp.support.microsoft.com/profile/Tom.Moreau
"Rabbers" <Rabbers@.discussions.microsoft.com> wrote in message
news:ACEBF5A1-099B-4C13-A0CF-9677E48C2F4C@.microsoft.com...
Hi
I'm running SQL server 2005 (64 bit) on windows 2003 server (64 bit) in a
clustered environment (both machines in the cluster are identical) and the
machines have 12GB or RAM each and 8 dual core CPU's (3.Ghz) (DL580 G4's) -
presenting 16 cpus to the OS . We moved this instance from an 8 way xeon
cluster with 4GB memory, which pottered along quite happily at 50-70% CPU.
However the new cluster - which would have expected to manage the workload
much better epsecially in terms of CPU is constantly at about 80-90% cpu on
all processors, and frequntly in the high 90's - I've seen a couple of KB
articles relating to problems like this - however none seems to quite fit
out
situation.
Anyone got any ideas ?|||Rabbers
Can you tell us what you have checked so far? Long-running queries?
Blocking/Locking?
select top 50
    sum(qs.total_worker_time) as total_cpu_time,
    sum(qs.execution_count) as total_execution_count,
    count(*) as number_of_statements,
    qs.plan_handle
from
    sys.dm_exec_query_stats qs
group by qs.plan_handle
order by sum(qs.total_worker_time) desc
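To turn those plan handles into readable statements, the handles can be cross-applied to sys.dm_exec_sql_text (SQL Server 2005 and later):

```sql
-- Show the statement text behind the heaviest plans by CPU time.
SELECT TOP 10
    qs.total_worker_time,
    qs.execution_count,
    st.text
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.plan_handle) st
ORDER BY qs.total_worker_time DESC;
```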
"Rabbers" <Rabbers@.discussions.microsoft.com> wrote in message
news:ACEBF5A1-099B-4C13-A0CF-9677E48C2F4C@.microsoft.com...
> Hi
> I'm running SQL server 2005 (64 bit) on windows 2003 server (64 bit) in a
> clustered environment (both machines in the cluster are identical) and the
> machines have 12GB or RAM each and 8 dual core CPU's (3.Ghz) (DL580
> G4's) -
> presenting 16 cpus to the OS . We moved this instance from an 8 way xeon
> cluster with 4GB memory, which pottered along quite happily at 50-70% CPU.
> However the new cluster - which would have expected to manage the workload
> much better epsecially in terms of CPU is constantly at about 80-90% cpu
> on
> all processors, and frequntly in the high 90's - I've seen a couple of KB
> articles relating to problems like this - however none seems to quite fit
> out
> situation.
> Anyone got any ideas ?
>|||I agree with Tom in that you most likely had other bottlenecks that
prevented your CPU's from being constantly higher in the last system. High
CPU usage usually means you are getting a lot of work done and is not
necessarily a bad thing. But it can also indicate you have poorly optimized
queries and are getting a lot of compiles or recompiles.
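One way to check the compile rate Andrew mentions, using SQL Server 2005's performance-counter DMV (these are raw cumulative counters, so compare two samples over time to get a per-second rate):

```sql
-- Compilation counters from the SQL Statistics performance object.
SELECT counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%SQL Statistics%'
  AND counter_name IN ('SQL Compilations/sec', 'SQL Re-Compilations/sec');
```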
--
Andrew J. Kelly SQL MVP
"Rabbers" <Rabbers@.discussions.microsoft.com> wrote in message
news:ACEBF5A1-099B-4C13-A0CF-9677E48C2F4C@.microsoft.com...
> Hi
> I'm running SQL server 2005 (64 bit) on windows 2003 server (64 bit) in a
> clustered environment (both machines in the cluster are identical) and the
> machines have 12GB or RAM each and 8 dual core CPU's (3.Ghz) (DL580
> G4's) -
> presenting 16 cpus to the OS . We moved this instance from an 8 way xeon
> cluster with 4GB memory, which pottered along quite happily at 50-70% CPU.
> However the new cluster - which would have expected to manage the workload
> much better epsecially in terms of CPU is constantly at about 80-90% cpu
> on
> all processors, and frequntly in the high 90's - I've seen a couple of KB
> articles relating to problems like this - however none seems to quite fit
> out
> situation.
> Anyone got any ideas ?
Interesting - I had not thought of it in those terms, but my team are also
telling me that query performance has suffered. Incidentally, the disk
subsystem is the same as on the old system: an EMC CX300 SAN, dual-pathed in
an active/active configuration. There does not appear to be any IO waiting
going on, so I'm not sure this is or was an issue. Workload is unchanged.
No blocking or locking going on, although there are some big numbers from Uri's query.
total_cpu_time total_execution_count number_of_statements plan_handle
14131421676 9670 32 0x0500050053D8DE7840039DC5000000000000000000000000
8347158363 30999 9 0x050005009014281C40C3ADB5000000000000000000000000
2376266762 1270 7 0x05000B00A39BCF6240C3AEE7000000000000000000000000
2202179405 4480 32 0x050005001AB4EA7740C3EE07010000000000000000000000
1997588436 2379 8 0x05000B00F82EF35F4003F5A9020000000000000000000000
1597112198 30677 3 0x05000500A3463C3F40A307AC000000000000000000000000
1456992451 19956 3 0x05000500541E7401406392BC000000000000000000000000
984281776 43805 13 0x05000500052FD03740A3A1C6000000000000000000000000
962083727 112113 18 0x05000700942C7D3E406339EA000000000000000000000000
843149798 577 7 0x05000B003153E7604023B6E9000000000000000000000000
752809408 5511 1 0x0500050088E47D5740631FC1000000000000000000000000
730983354 594 7 0x050005005AC2F3344083D8D7000000000000000000000000
538219354 3533651 5 0x050005009C23030F400304CF000000000000000000000000
405080101 463791 2 0x050005002ADB1A0D40233ADC000000000000000000000000
404464827 8912 3 0x05000500DAA5E13A406343BC000000000000000000000000
354858971 12888 9 0x050005006B2FB9524083CBDB000000000000000000000000
354788856 20840 2 0x0500070022E4943C40C3B2AF000000000000000000000000
311644988 223882 2 0x05000500D5F2636C40234B8A000000000000000000000000
278456532 1001 2 0x05000500354FCD53406346E1000000000000000000000000
270333901 837513 1 0x05000500286B5F2B4003AFAB000000000000000000000000
219689259 548657 3 0x050007008240FE2340C3F5DE000000000000000000000000
209456887 313 36 0x05000500C520C77A40639DBF010000000000000000000000
203146055 70 6 0x05000B006A77DB614003F289010000000000000000000000
187539837 8882 3 0x05000B0002B2A34140A38900010000000000000000000000
183248652 158 1 0x060005006B51F52940439A48020000000000000000000000
151190159 4904 12 0x050005003EA8575C40437BDD000000000000000000000000
148130918 2873 1 0x05000B0029EA182840A3DFE1020000000000000000000000
129011430 591 10 0x05000500FBC78E0540A3AB0A020000000000000000000000
122411767 9145 38 0x05000500A9B1977E4023BCD7000000000000000000000000
103452720 1947 4 0x05000500FC0D240E40A34CD7000000000000000000000000
99285280 433203 1 0x05000500162A4D6D404339BE000000000000000000000000
97455900 77 1 0x060005003E8B77394063E000030000000000000000000000
96981407 90 1 0x0600050061E19A1B40A3C087000000000000000000000000
94693286 9 9 0x05000600309B09704003E311030000000000000000000000
89323421 74 1 0x060005009BAE130440A3B7D8020000000000000000000000
75662449 1117974 2 0x05000500D547F70F40837AD0000000000000000000000000
74451891 84 4 0x06000B00CFFD2D3840C3FBF9000000000000000000000000
72410847 80 4 0x06000B0047B9FB1940A3FB6F010000000000000000000000
72244371 58 1 0x0600050005018F074083E895020000000000000000000000
70044701 5036752 1 0x05000B00B0B71B6A40A374E7000000000000000000000000
68605351 42 7 0x060005003488F60C4003CEE5020000000000000000000000
65495930 158 1 0x06000500FD821E3240233246020000000000000000000000
65255754 7922 8 0x05000500F9E6D05040C3D2D9000000000000000000000000
62199510 107 1 0x06000500A41D2C1E40E34E1A020000000000000000000000
60560005 34 2 0x06000B00D37CFA2240E3FAA7000000000000000000000000
56735759 127554 9 0x05000500FB72FB614043C1AB000000000000000000000000
54106604 110763 6 0x05000500735F631740E3A1C6000000000000000000000000
53255770 6 6 0x06000500C5224C2640833421020000000000000000000000
50681958 202 8 0x05000500B7F7095F40A337BA000000000000000000000000
50605253 1222 9 0x05000B00C3949C6940C30FC2000000000000000000000000
"Tom Moreau" wrote:
> I suspect that the lower CPU usage was due to slower disks and less memory.
> In that case, the CPU has to wait for data. If it's not in cache, then it
> has to go to disk. During this time, the CPU waits, and CPU usage goes to
> 0.
> In your newer system, you have more cache, which reduces the need to go to
> disk as often. Because the data is already available, then the CPU can be
> put to work right away to service the query.
> Has query performance suffered or are you just concerned that you see higher
> CPU usage?
> --
> Tom
> ----
> Thomas A. Moreau, BSc, PhD, MCSE, MCDBA, MCITP, MCTS
> SQL Server MVP
> Toronto, ON Canada
> https://mvp.support.microsoft.com/profile/Tom.Moreau
>
> "Rabbers" <Rabbers@.discussions.microsoft.com> wrote in message
> news:ACEBF5A1-099B-4C13-A0CF-9677E48C2F4C@.microsoft.com...
> Hi
> I'm running SQL server 2005 (64 bit) on windows 2003 server (64 bit) in a
> clustered environment (both machines in the cluster are identical) and the
> machines have 12GB or RAM each and 8 dual core CPU's (3.Ghz) (DL580 G4's) -
> presenting 16 cpus to the OS . We moved this instance from an 8 way xeon
> cluster with 4GB memory, which pottered along quite happily at 50-70% CPU.
> However the new cluster - which would have expected to manage the workload
> much better epsecially in terms of CPU is constantly at about 80-90% cpu on
> all processors, and frequntly in the high 90's - I've seen a couple of KB
> articles relating to problems like this - however none seems to quite fit
> out
> situation.
> Anyone got any ideas ?
>
>|||You may want to start profiling for long-running queries and/or those
queries that are using a lot of CPU.
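A sketch of such a check against the SQL 2005 DMVs (the same source as the numbers you posted - the hex values are sql_handles) might look like this; the TOP 20 cutoff and column choice are just illustrative:

```sql
-- Top CPU consumers currently in the plan cache (SQL Server 2005+).
-- sys.dm_exec_sql_text() decodes a sql_handle into the statement text.
SELECT TOP 20
    qs.total_worker_time,                                   -- cumulative CPU time
    qs.execution_count,
    qs.total_worker_time / qs.execution_count AS avg_cpu,   -- CPU per execution
    st.text AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC;
```

A Profiler trace filtered on CPU or Duration will catch the same offenders at the event level.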
BTW what service pack are you running?
--
Tom
----
Thomas A. Moreau, BSc, PhD, MCSE, MCDBA, MCITP, MCTS
SQL Server MVP
Toronto, ON Canada
https://mvp.support.microsoft.com/profile/Tom.Moreau
"Rabbers" <Rabbers@.discussions.microsoft.com> wrote in message
news:105535AF-20A9-41A4-A320-AD35C4CF28CA@.microsoft.com...
Interesting - I had not thought of it in those terms, but my team are also
telling me that query performance has suffered. Incidentally, the disk
subsystem is the same as on the old system - an EMC CX300 SAN, dual path
in active/active configuration - and there does not appear to be any IO waiting
going on, so I'm not sure this is or was an issue. Workload is unchanged.
No blocking or locking going on. Although some big numbers from Uri's query.
"Tom Moreau" wrote:
> I suspect that the lower CPU usage was due to slower disks and less
> memory.
> In that case, the CPU has to wait for data. If it's not in cache, then it
> has to go to disk. During this time, the CPU waits, and CPU usage goes to
> 0.
> In your newer system, you have more cache, which reduces the need to go to
> disk as often. Because the data is already available, then the CPU can be
> put to work right away to service the query.
> Has query performance suffered or are you just concerned that you see
> higher
> CPU usage?
> --
> Tom
> ----
> Thomas A. Moreau, BSc, PhD, MCSE, MCDBA, MCITP, MCTS
> SQL Server MVP
> Toronto, ON Canada
> https://mvp.support.microsoft.com/profile/Tom.Moreau
>
> "Rabbers" <Rabbers@.discussions.microsoft.com> wrote in message
> news:ACEBF5A1-099B-4C13-A0CF-9677E48C2F4C@.microsoft.com...
> Hi
> I'm running SQL server 2005 (64 bit) on windows 2003 server (64 bit) in a
> clustered environment (both machines in the cluster are identical) and the
> machines have 12GB or RAM each and 8 dual core CPU's (3.Ghz) (DL580
> G4's) -
> presenting 16 cpus to the OS . We moved this instance from an 8 way xeon
> cluster with 4GB memory, which pottered along quite happily at 50-70% CPU.
> However the new cluster - which would have expected to manage the workload
> much better epsecially in terms of CPU is constantly at about 80-90% cpu
> on
> all processors, and frequntly in the high 90's - I've seen a couple of KB
> articles relating to problems like this - however none seems to quite fit
> out
> situation.
> Anyone got any ideas ?
>
>|||Have already looked at the query side of things - what surprises me is that
CPU usage was much lower on the former server with the same workload.
Version is 9.00.3050.00, which is SP2.
"Tom Moreau" wrote:
> You may want to start profiling for long-running queries and/or those
> queries that are using a lot of CPU.
> BTW what service pack are you running?
> --
> Tom
> ----
> Thomas A. Moreau, BSc, PhD, MCSE, MCDBA, MCITP, MCTS
> SQL Server MVP
> Toronto, ON Canada
> https://mvp.support.microsoft.com/profile/Tom.Moreau
>
> "Rabbers" <Rabbers@.discussions.microsoft.com> wrote in message
> news:105535AF-20A9-41A4-A320-AD35C4CF28CA@.microsoft.com...
> Interesting - I had not thought of it in those terms, but my team are also
> telling me that query performance has suffered. Incidently the disk sub
> system is the same as on the old sub system it is an EMC CX300 SAN dual path
> in active/active configuration, There does not appear to be any IO waiting
> going on so I'm not sure this is oir was an issue. Workload is unchanged.
> No blocking or locking going on. Although some big numbers from Uri's query.
> "Tom Moreau" wrote:
> > I suspect that the lower CPU usage was due to slower disks and less
> > memory.
> > In that case, the CPU has to wait for data. If it's not in cache, then it
> > has to go to disk. During this time, the CPU waits, and CPU usage goes to
> > 0.
> >
> > In your newer system, you have more cache, which reduces the need to go to
> > disk as often. Because the data is already available, then the CPU can be
> > put to work right away to service the query.
> >
> > Has query performance suffered or are you just concerned that you see
> > higher
> > CPU usage?
> >
> > --
> > Tom
> >
> > ----
> > Thomas A. Moreau, BSc, PhD, MCSE, MCDBA, MCITP, MCTS
> > SQL Server MVP
> > Toronto, ON Canada
> > https://mvp.support.microsoft.com/profile/Tom.Moreau
> >
> >
> > "Rabbers" <Rabbers@.discussions.microsoft.com> wrote in message
> > news:ACEBF5A1-099B-4C13-A0CF-9677E48C2F4C@.microsoft.com...
> > Hi
> >
> > I'm running SQL server 2005 (64 bit) on windows 2003 server (64 bit) in a
> > clustered environment (both machines in the cluster are identical) and the
> > machines have 12GB or RAM each and 8 dual core CPU's (3.Ghz) (DL580
> > G4's) -
> > presenting 16 cpus to the OS . We moved this instance from an 8 way xeon
> > cluster with 4GB memory, which pottered along quite happily at 50-70% CPU.
> >
> > However the new cluster - which would have expected to manage the workload
> > much better epsecially in terms of CPU is constantly at about 80-90% cpu
> > on
> > all processors, and frequntly in the high 90's - I've seen a couple of KB
> > articles relating to problems like this - however none seems to quite fit
> > out
> > situation.
> >
> > Anyone got any ideas ?
> >
> >
> >
>|||OK - I'll have a look at the query side of things...
"Andrew J. Kelly" wrote:
> I agree with Tom in that you most likely had other bottlenecks that
> prevented your CPU's from being constantly higher in the last system. High
> CPU usage usually means you are getting a lot of work done and is not
> necessarily a bad thing. But it can also indicate you have poorly optimized
> queries and are getting a lot of compiles or recompiles.
> --
> Andrew J. Kelly SQL MVP
> "Rabbers" <Rabbers@.discussions.microsoft.com> wrote in message
> news:ACEBF5A1-099B-4C13-A0CF-9677E48C2F4C@.microsoft.com...
> > Hi
> >
> > I'm running SQL server 2005 (64 bit) on windows 2003 server (64 bit) in a
> > clustered environment (both machines in the cluster are identical) and the
> > machines have 12GB or RAM each and 8 dual core CPU's (3.Ghz) (DL580
> > G4's) -
> > presenting 16 cpus to the OS . We moved this instance from an 8 way xeon
> > cluster with 4GB memory, which pottered along quite happily at 50-70% CPU.
> >
> > However the new cluster - which would have expected to manage the workload
> > much better epsecially in terms of CPU is constantly at about 80-90% cpu
> > on
> > all processors, and frequntly in the high 90's - I've seen a couple of KB
> > articles relating to problems like this - however none seems to quite fit
> > out
> > situation.
> >
> > Anyone got any ideas ?
> >
> >
>
>
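On the compiles/recompiles point above, one way to check is via the SQL 2005 performance-counter DMV (the same counters appear in perfmon under SQLServer:SQL Statistics) - a sketch:

```sql
-- Compilation pressure check: compare compilations and re-compilations
-- against batch requests. Note these "/sec" counters are stored as
-- cumulative totals, so sample twice and take the difference.
SELECT counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%SQL Statistics%'
  AND counter_name IN ('Batch Requests/sec',
                       'SQL Compilations/sec',
                       'SQL Re-Compilations/sec');
```

A high ratio of compilations to batch requests is often cited as a sign of poor plan reuse, which burns CPU without showing up as blocking or IO waits.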

High CPU usage help

I'm at my wits' end. The SQL Server (SQL 2000, using the 2005 Studio to view)
activity log is showing several process IDs that say "AWAITING COMMAND", and it
also says the status is SLEEPING. However, the CPU# is 235789, the physical
I/O is 36345 and the memory is 912.
So if the status is SLEEPING and AWAITING COMMAND, why are these numbers not
0? If it's waiting, it shouldn't be doing much of anything. Our SQL server will
just randomly start crawling.
JP
.NET Software Developer
The information you are seeing is cumulative for that spid. It isn't what it
is using at that point in time; those statistics are the resources used
for the life of that spid.
AndyP,
Sr. Database Administrator,
MCDBA 2003 &
Sybase Certified Pro DBA (AA115, SD115, AA12, AP12)
"JP" wrote:

> Im at my wits end. The SQL server (SQL2000 using 2005 Studio to view )
> activity log is showing several ProcessIDs that say "AWAITING COMMAND" and it
> also says the Status is SLEEPING, However the CPU# is 235789 and the Phyical
> I/O is 36345 and Memory is 912
> So if the status is Sleeping and AWAITNG COMMAND, why are these numbers not
> 0? If its waiting it shouldnt be doing much of anything. Our SQL server will
> just randomly start crawling
> --
> JP
> .NET Software Developer
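Since those columns are lifetime totals, the way to see what a spid is doing *now* is to snapshot sysprocesses twice and diff. A minimal SQL 2000 sketch (the spid > 50 filter to skip system spids is a common heuristic, not a rule):

```sql
-- Snapshot the cumulative counters; run again a minute later and
-- subtract - spids whose cpu/physical_io deltas are ~0 really are idle.
SELECT spid, status, cpu, physical_io, memusage, last_batch
FROM master.dbo.sysprocesses
WHERE spid > 50
ORDER BY cpu DESC;
```

The last_batch column also helps: a sleeping spid whose last_batch is hours old is just an idle connection, however large its totals.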

High Cpu Usage During Logon

My problem is somehow strange to me, as I am not an expert...

We use an ERP program on a SQL 7 database.... When users log on and try to work, I have 100% CPU usage and the server is not responding until I kill the app. If the user has administrator rights, everything works smoothly... administrator rights for the database...

What I managed to find out is that what is running during the server locks is sp_cursoropen.

Any suggestions?.....Howdy

Sounds like you possibly have the wrong level of access for the app - try adding it to the db_datareader & db_datawriter database roles & see what happens. If that doesn't make any difference, remove it from all roles ( except public ), then add it only to the db_owner database role & see what happens.
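In T-SQL those role changes look like this ('appuser' is a placeholder for the application's database user):

```sql
-- Grant read/write access through the fixed database roles.
EXEC sp_addrolemember 'db_datareader', 'appuser';
EXEC sp_addrolemember 'db_datawriter', 'appuser';

-- Or, as the fallback suggested, make the app user database owner:
-- EXEC sp_addrolemember 'db_owner', 'appuser';
```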

Cheers,

SG.|||nothing.......

some new facts on the matter are that the users eventually log on and can work, but the speed is very slow when they try to change menu options.....the problem occurred after 3 years of smooth operation and that is the strange thing....also when I say admin rights I mean from the app's options, since on the DB I have only one user (sa) who is used by the app......|||Howdy

Well, sa will give you all the access you need.

How are you for disk space - especially for the TEMPDB database & the production db? Also, sounds possible you may be paging memory madly out to disk, hence the slow speed. Have you run perfmon to check the efficiency of the server?

Sounds odd that after 3 years of smooth operation all goes to custard....

What else has changed recently? Any service pack updates etc?

Cheers,

SG|||nothing except a shrink and a reorganize.....the second was running on a schedule for some time now......
as for space everything is ok....
performance is not the issue either, I believe; it is something else....something about the app... since the only user that logs on to SQL is sa, no matter if I give him rights using the app or not....|||Howdy

Common sense tells me SOMETHING has to have changed. If you haven't run out of disk space and the server isn't underpowered, then someone, somewhere has to have altered something.

Try running Profiler and see what activity is occurring in the database just as it starts to slow down. Is there a particular sequence of events that happens as this problem starts, or is it an ongoing problem? Has the number of users gone up? Has new functionality been added to the system?

It wouldn't be the first time someone has altered a system but not told anyone....

Cheers,

SG|||at first I would like to say thanks for your help....

I managed to fix the problem, but I don't know yet what caused it....
The solution was to reindex the tables that are involved in security for the app and to manually update stats for the production db.....
Now I have a small delay, which is acceptable, and everybody is working ok...

thanks again...|||Howdy,

....well without sounding a little comical...that would have been one of my next suggestions......

Check the settings for the database - they should be ( by default ) set to update stats automatically. It's a bit scary that this was turned off... You can create a separate Maintenance Plan to do this - we do, and sample 25% of the database. Also a reindex of each index in the database is useful - we do it weekly & it keeps the index fragmentation right down and the speed up.

You can also turn off autoshrink for the database, which will lift the speed a bit more, but you will need to schedule a job to do this at least weekly, as this will keep the databases from getting too big & slowing down.

Also, turn off autoclose for the database - this will remove a small overhead on performance.

Cheers,

SG.
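For reference, the fix described in this thread (reindex plus a manual statistics update) and the suggested option changes can be sketched like this - table and database names here are placeholders, and the ALTER DATABASE SET syntax is SQL 2000 (on SQL 7.0 the equivalent is sp_dboption):

```sql
-- Rebuild all indexes on one of the app's security tables
-- and refresh its statistics.
DBCC DBREINDEX ('dbo.SomeSecurityTable');
UPDATE STATISTICS dbo.SomeSecurityTable WITH FULLSCAN;

-- Or refresh out-of-date statistics across the whole database:
EXEC sp_updatestats;

-- Option settings discussed above:
ALTER DATABASE MyErpDb SET AUTO_UPDATE_STATISTICS ON;
ALTER DATABASE MyErpDb SET AUTO_SHRINK OFF;
ALTER DATABASE MyErpDb SET AUTO_CLOSE OFF;
```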

High CPU usage after backup

We've noticed on our server that the CPU usage goes from its normal 60% to
90% during a backup job, and stays at 90% even after the job is complete.
The backup job takes 30 seconds to a minute to complete, but CPU usage stays
at 90% (of which the "sqlservr.exe" process is taking 70%). The backup job
updates the statistics and backs up the transaction log.
The "really" strange thing I've found is that just running Enterprise
Manager and opening the Properties dialog of the server (I don't need to
toggle any settings) causes CPU usage to go back down to its normal
60%. This works whether I'm doing it on the server via Remote
Desktop or using my local Enterprise Manager to connect to the server
remotely.
Has anyone observed this occurring before?
We're running SQL Server 2000 Standard Edition SP3 (I know we need to update
this) on Windows Server 2003. The server is a Xeon 2.8 with 2GB of RAM.
In terms of database size, it's at 1GB with 4GB allocated; the transaction log
is at 2GB with 7GB allocated.

Hi
I have not experienced this, but if this was on a SAN my guess would be a
replication issue/hardware bottleneck. Check out the perfmon counters and SAN
stats to see if there are problems.
Moving to the latest service pack/patch level may be useful.
John
"Ken" wrote:
> We've noticed on our server that the CPU usage goes from its normal 60% to
> 90% during a backup job, and stays at 90% even after the job is complete.
> The backup job takes 30 seconds to a minute to complete, but CPU usage stays
> at 90% (of which the "sqlservr.exe" process is taking 70%). The backup job
> updates the statistics and backs up the transaction log.
> The "really" strange thing I've found, is that by just running Enterprise
> Manager and accessing the Properties dialog of the server (I don't need to
> toggle any settings), will cause CPU usage to go back down to it's normal
> 60%. This seems to work whether I'm doing this on the server via Remote
> Desktop, or using my local Enterprise Manager to connect to the server
> remotely.
> Has anyone observed this occurring before?
> We're running SQL Server 2000 Standard Edition SP3 (I know we need to update
> this), on Windows Server 2003. Server is a Xeon 2.8 with 2gb ram.
> In terms of database size, its at 1gb, with 4gb allocated. Transaction log
> is at 2gb with 7gb allocated.
>
>
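A job of the shape Ken describes (a statistics update followed by a transaction-log backup) might look roughly like this - the database name and backup path are hypothetical:

```sql
-- Scheduled job body: refresh statistics, then back up the transaction log.
USE MyDb;
EXEC sp_updatestats;

BACKUP LOG MyDb TO DISK = 'E:\SQLBackups\MyDb_log.bak';
```

If the sustained 90% CPU coincides with the sp_updatestats step rather than the backup itself, running the statistics update less often (or sampling rather than scanning) would be the first thing to try.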