We are running the June CTP.
When some users go to http://theserver/Reports, all they see is the
word 'Home' and a line across the page.
This happens even though I have added these users as both
Administrators and Users.
I imagine that this has something to do with the domain, but I don't
see what arrangement would allow them to get to a partially completed
page.
Any ideas?
Jim|||Make sure the RS services are running under a domain login and that the
login has permissions on the Tempfiles directory under the MSSQL.3
directory.
--
Wayne Snyder, MCDBA, SQL Server MVP
Mariner, Charlotte, NC
www.mariner-usa.com
(Please respond only to the newsgroups.)
I support the Professional Association of SQL Server (PASS) and its
community of SQL Server professionals.
www.sqlpass.org
<jhcorey@.yahoo.com> wrote in message
news:1125062702.187423.141040@.g14g2000cwa.googlegroups.com...
> We are running the June CTP.
> When some users go to http://theserver/Reports, all they see is the
> word 'Home' and a line across the page.
> This happens even though I have added these users as both
> Administrators and Users.
> I imagine that this has something to do with the domain, but I don't
> see what arrangement would allow them to get to a partially completed
> page.
> Any ideas?
> Jim
>|||The services are running under a login that is an admin in the domain,
as well as an admin on the machine.
We notice that there is a user group on the machine called
SQLServer2005ReportServerUser$MSSQLServer. Users that we've added to
this can use RS normally. But as I noted, users that we've added from
the web site cannot. These are all users in the same domain.|||My guess here is that they have not been set up in a role. The Windows groups
are for validation of who they are, not what they can do. You need to assign
users or groups to a role (in Report Manager). Read up in Books Online about
roles. What I do is have a local group; to that local group I add domain
groups and individual users. I then assign the local group to the Browser
role.
Bruce Loehle-Conger
MVP SQL Server Reporting Services
<jhcorey@.yahoo.com> wrote in message
news:1125330865.896229.33860@.f14g2000cwb.googlegroups.com...
> The services are running under a login that is an admin in the domain,
> as well as an admin on the machine.
> We notice that there is a user group on the machine called
> SQLServer2005ReportServerUser$MSSQLServer. Users that we've added to
> this can use RS normally. But as I noted, users that we've added from
> the web site cannot. These are all users in the same domain.
>|||Ah yes, it was simple of course once one knew where to look -- in the
properties for the home page.
I was thinking that System security would take care of everything.
Tuesday, March 27, 2012
How do I embed a Word document into an RDL or RDLC?
Hello,
I'm trying to embed some text from an MS Word document into my SQL Reporting
Services report. This is actually working in a report that I had
converted from Access. Within Access the MS Word portion is referred to as
an OLE object.
Thanks|||You are going to need to create an assembly to get this information and
return it into the "Code.<Function>" call.
=-Chris
"Rob Dob" <robdob20012002@.yahoo.com> wrote in message
news:eEQFxLr$GHA.3396@.TK2MSFTNGP02.phx.gbl...
> Hello,
> I'm trying to embed some text from an MS Word document into my SQL Reporting
> Services report. This is actually working in a report that I had
> converted from Access. Within Access the MS Word portion is referred to as
> an OLE object.
> Thanks
>|||Hi,
> You are going to need to create an assembly to get this information and
> return it into the "Code.<Function>" call.
>
you mean I can't just drag and drop...? Wow, very aggravating... Do you
have an example of how I would do this? All I am trying to do is embed the
terms and conditions of a contract, prepared in MS Word, within
the body of my Reporting Services report.
thanks,
Monday, March 12, 2012
High CPU
I have a client that is experiencing high CPU. In their world,
everything was OK until a couple of weeks ago. CPU was around 50% and
then it jumped to 100%. Around that point, they had:
• released a new version of the product
• upgraded the server to Cumulative Update 5
• made a change in the way that they are doing connection pooling
• they are also getting new customers daily
Scenario
They have a hosted system that collects data from wireless devices
(through MSMQ) and Windows clients that monitor the state by
refreshing the screen every 30 sec - 5 min. From 9 to 5 they are not able
to cope with the load.
At the time the problem occurred they had around 700 customers and
corresponding databases. Now they have more (around 900).
SQL Server 2005 EE 32-bit, 4 (2x2) CPUs, 12 GB RAM
Investigating Changes
We have tried to investigate changes that they made.
I have reviewed changes that they made in the database and there is
nothing that draws attention. We have tried to disable some features
but we didn't see much impact.
They have rolled back Cumulative Update 5. Again with no visible
impact.
Originally their db connections were going directly to the customer
databases. In the new release, they are executing everything through the
master database:
Exec DB0234.dbo.spBlaBla
They have rolled back this change - still CPU is pegged 9-5.
Investigating
Compilation
We have looked into this. There are around 600 batches (sps) executed
per second. We have < 10% compilations; I've read somewhere that
that is acceptable.
Also, our cache hit ratio for SQL Server objects is over 90%.
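For reference, one way to sanity-check those numbers from T-SQL. This is only
a sketch: sys.dm_os_performance_counters exposes these as cumulative counters,
so it samples twice over a 10-second window.
-- Batch and compilation rates over a 10-second sample (SQL 2005 and later)
DECLARE @batches1 BIGINT, @compiles1 BIGINT, @batches2 BIGINT, @compiles2 BIGINT
SELECT @batches1 = cntr_value FROM sys.dm_os_performance_counters
 WHERE counter_name = 'Batch Requests/sec'
SELECT @compiles1 = cntr_value FROM sys.dm_os_performance_counters
 WHERE counter_name = 'SQL Compilations/sec'
WAITFOR DELAY '00:00:10'
SELECT @batches2 = cntr_value FROM sys.dm_os_performance_counters
 WHERE counter_name = 'Batch Requests/sec'
SELECT @compiles2 = cntr_value FROM sys.dm_os_performance_counters
 WHERE counter_name = 'SQL Compilations/sec'
SELECT (@batches2 - @batches1) / 10 AS batches_per_sec,
       (@compiles2 - @compiles1) / 10 AS compiles_per_sec,
       100.0 * (@compiles2 - @compiles1)
             / NULLIF(@batches2 - @batches1, 0) AS compile_pct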
Re-compilation
We looked into this. Very low.
IO optimizations
We spent a lot of time on this. There were sps that we were able to
optimize dramatically, typically by rewriting queries or by replacing
indexes with better ones.
Locking and blocking
We looked into this. We are under the impression that this is not a
problem. We do not see excessive blocking, and on top of that all
readers are running with the NOLOCK hint anyway.
CPU Pressure
We do believe that we would do better with more CPUs.
signal_wait_time_ms  %signal (cpu) waits  resource_wait_time_ms  %resource waits
-------------------  -------------------  ---------------------  ---------------
14801338             37.32                24863340               62.68
"In conclusion, if Signal Waits are a significant percentage of total
waits, you have CPU pressure which may be alleviated by faster or more
CPUs."
The top wait type that we are seeing is SOS_SCHEDULER_YIELD, which is
also an indication of CPU pressure.
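To list the top waits yourself, something like this works (the excluded
idle-time wait types are a minimal assumption, not an exhaustive list):
SELECT TOP 10 wait_type, wait_time_ms, signal_wait_time_ms, waiting_tasks_count
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN ('LAZYWRITER_SLEEP', 'SQLTRACE_BUFFER_FLUSH',
                        'BROKER_TASK_STOP', 'SLEEP_TASK', 'WAITFOR')
ORDER BY wait_time_ms DESC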
It is relatively simple to put this on a faster server. In fact they are
planning to do that this weekend. Instead of 4 (2x2) CPU (32-bit) we
will have 8 (2x4) CPUs (64-bit). However, we are concerned that it may
not be enough. Or that we may have the same problem in two months. We
also cannot explain why it jumped from around 50% to 100%.
What are we missing? Any ideas?
Dejan265|||I would try to find out the queries that are the top CPU consumers. If you are
running SP2 or later, the Performance Dashboard is quick and dirty for this.
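For anyone without the Dashboard installed, a sketch that pulls the top CPU
consumers straight from the plan-cache DMVs (SQL 2005 and later):
SELECT TOP 20
    qs.total_worker_time / 1000 AS total_cpu_ms,  -- the DMV reports microseconds
    qs.execution_count,
    qs.total_worker_time / qs.execution_count / 1000 AS avg_cpu_ms,
    SUBSTRING(st.text, qs.statement_start_offset / 2 + 1,
        (CASE qs.statement_end_offset WHEN -1 THEN DATALENGTH(st.text)
              ELSE qs.statement_end_offset END
         - qs.statement_start_offset) / 2 + 1) AS statement_text
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
ORDER BY qs.total_worker_time DESC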
Linchi
"dejan265@.gmail.com" wrote:
> I have a client that is experiencing high CPU. In their world
> everything was OK until a couple of weeks ago. It was around 50% and
> then it jumped to 100%. At that point, they have:
> • released a new version of the product
> • upgraded server to Cumulative Update 5
> • made a change on the way that they are doing connection pooling
> • They are also getting (daily) new customers
> Scenario
> They have hosted system that collects data from wireless devices
> (through MSMQ) and Windows clients that are monitoring the state by
> refreshing screen every 30 sec - 5 min. From 9 - 5 they are not able
> to cope with the load.
> At the time the problem occurred they had around 700 customers and
> corresponding databases. Now they have more (around 900).
> SQL Server 2005 EE 32-bit, 4 (2x2) CPUs, 12 GB RAM
> Investigating Changes
> We have tried to investigate changes that they made.
> I have reviewed changes that they made in the database and there is
> nothing that draws attention. We have tried to disable some features
> but we didn't see much impact.
> They have rolled back Cumulative Update 5. Again with no visible
> impact.
> Originally their db connections were going directly to customer
> databases. In new release, they are executing everything though master
> database:
> Exec DB0234.dbo.spBlaBla
> They have rolled back this change - still CPU is pegged 9-5.
>
> Investigating
> Compilation
> We have looked into this. There is around 600 batches (sps) executed
> per sec. We have < 10 % of compilations. I've found somewhere that
> that is acceptable.
> Also our cache hit ratio for SQL Server objects is over 90%.
> Re-compilation
> We looked into this. Very low.
> IO optimizations
> We spent lot of time on this. There were sps that we were able to
> optimize dramatically. Typically by rewriting queries or by replacing
> indexes by better ones.
> Locking and blocking
> We looked into this. We are under impression that this is not a
> problem. We do not see excessive blocking and on top of them all
> readers are running with nolocks hint anyway.
> CPU Pressure
> We do believe that we would do better with more CPUs.
> signal_wait_time_ms  %signal (cpu) waits  resource_wait_time_ms  %resource waits
> -------------------  -------------------  ---------------------  ---------------
> 14801338             37.32                24863340               62.68
> "In conclusion, if Signal Waits are a significant percentage of total
> waits, you have CPU pressure which may be alleviated by faster or more
> CPUs."
> The top wait type that we are seeing is SOS_SCHEDULER_YIELD, which is
> also an indication of CPU pressure.
> It is relatively simple to put this on faster server. In fact they are
> planning to do that this weekend. Instead of 4 (2x2) CPU (32-bit) we
> will have 8 (2x4) CPUs (64-bit). However, we are concerned that it may
> not be enough. Or that we may have the same problem in two months. We
> also cannot explain why it jumped from around 50% to 100%.
> What are we missing? Any ideas?
> Dejan265
>|||> Or that we may have the same problem in two months. We
> also cannot explain why it jumped from around 50% to 100%.
> What are we missing? Any ideas?
I have been in a similar case once, and I have two ideas that were issues in
my case.
1. Fragmented indexes. Are you regularly reorganizing/rebuilding your
indices? If not, it could be because of indexes being very fragmented. You
can take a look at sys.dm_db_index_physical_stats to see the general state
of your index fragmentation. I use this rule of thumb: if the index has more
than 500 pages in it (the page_count column), and the physical
fragmentation is above 20%, then I rebuild the index. I found an index on one
of our very hot tables that was 99% fragmented. After rebuilding that
index, the CPU instantly dropped to normal.
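For reference, the rebuild itself is a one-liner (the table and index names
below are hypothetical):
-- REBUILD recreates the index from scratch
ALTER INDEX IX_Orders_CustomerId ON dbo.Orders REBUILD
-- REORGANIZE is lighter-weight, but defragments less thoroughly
ALTER INDEX IX_Orders_CustomerId ON dbo.Orders REORGANIZE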
2. I have also been in situations where the query optimizer unfortunately
compiles an ineffective execution plan, making some queries take 100 ms
instead of the 0.1 ms they used to. If such a query is called 10-20 times
per second, then it COULD make the CPU jump to 100%. To see if this is the
case, you can take a look at the plan cache DMV (I cannot remember the exact
name, but it should be possible to find via Google). The plan cache DMV has
a plan_handle that can be used in another DMF to look up the
SQL text, which will give you the exact SQL statement of the query. And
the last DMV to look at has the statistics of plan use. It will show stats
about when the plan was created, how many times it has been used since
creation, and what the avg CPU time, duration, logical reads and so on are.
Then you can look at those queries taking up the most CPU time in total, or
those queries that are called e.g. 100 times per second.
In my case we had an sp with an avg CPU time of 150 ms that should only
take 2-3 ms. I recompiled the execution plan for that sp via "sp_recompile
myStoredProcedure", and the CPU load dropped instantly.
I have a query that can be used to join these dmv's, but it's on my other
computer. I will find it later today, and post the code here.
The reason for the CPU load could be a lot of other things. These two things
are just what I have experienced, which made the CPU load jump.
Linchi Shea suggested looking at the Performance Dashboard, if you are on
SQL 2005 SP2. If you are, this custom report COULD give you some of the
answers.
/Sjang|||> I have a query that can be used to join these dmv's, but it's on my other
> computer. I will find it later today, and post the code here.
I came to think of a thing.
IF the problem is because of a bad execution plan, and IF you are willing to
do a very dirty trick, AND if you can live with the system being even more
slow for a few minutes, THEN you could flush the whole procedure cache,
making them all recompile. You can do this with "DBCC FREEPROCCACHE". BUT BUT
BUT!! Be aware that this makes every stored procedure recompile once,
giving an overhead for a few minutes - until all procedures can again be
served from the cache.
It's a dirty trick, I know, but IF the CPU problem is related to this issue,
THEN it could be an easy fix.
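The command, for reference:
-- WARNING: drops every cached plan; expect a compilation spike for a few minutes
DBCC FREEPROCCACHE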
/Sjang|||> I have a query that can be used to join these dmv's, but it's on my other
> computer. I will find it later today, and post the code here.
Here is the code I have used. Tweak it as you like:
WITH queryCTE
AS
(
    SELECT
        t1.plan_handle,
        MAX(st.[text]) AS sql_text,
        MIN(creation_time) AS plan_creation_date,
        MAX(last_execution_time) AS last_execution_date,
        MIN(execution_count) AS number_of_executions,
        -- The DMV returns times in microseconds, hence /1000.0 to get ms.
        SUM(last_worker_time) / 1000.0 AS last_cpu_time_in_ms,
        -- Some SQL steps are executed several times in a batch, e.g. in a cursor loop.
        SUM(total_worker_time) / MIN(execution_count) / 1000.0 AS avg_cpu_time_in_ms,
        SUM(total_worker_time) / 1000.0 AS total_cpu_time_in_ms,
        SUM(last_elapsed_time) / 1000.0 AS last_duration_in_ms,
        SUM(total_elapsed_time) / MIN(execution_count) / 1000.0 AS avg_duration_in_ms,
        SUM(total_elapsed_time) / 1000.0 AS total_duration_in_ms,
        SUM(last_logical_reads) AS last_logical_reads,
        SUM(total_logical_reads) / MIN(execution_count) AS avg_logical_reads,
        SUM(total_logical_reads) AS total_logical_reads
    FROM sys.dm_exec_cached_plans t1
    LEFT JOIN sys.dm_exec_query_stats t2 ON t1.plan_handle = t2.plan_handle
    OUTER APPLY sys.dm_exec_sql_text(sql_handle) st -- This finds the actual SQL text.
    GROUP BY t1.plan_handle -- Only the top-level execution/batch, not the individual steps.
)
SELECT
    plan_handle,
    query_plan,
    sql_text,
    t3.name AS object_name,
    plan_creation_date,
    last_execution_date,
    number_of_executions,
    -- This calculates the number of executions per second.
    CASE WHEN plan_creation_date IS NOT NULL
              AND DATEDIFF(ss, plan_creation_date, last_execution_date) <> 0
         THEN (number_of_executions * 1.0)
              / DATEDIFF(ss, plan_creation_date, last_execution_date)
         ELSE NULL
    END AS executions_pr_sec,
    last_cpu_time_in_ms,
    avg_cpu_time_in_ms,
    total_cpu_time_in_ms,
    last_duration_in_ms,
    avg_duration_in_ms,
    total_duration_in_ms,
    last_logical_reads,
    avg_logical_reads,
    total_logical_reads,
    -- This calculates how many ms of execution time the query takes on average
    -- each second. Be aware that multi-core CPUs can have more than a thousand
    -- ms of execution time per second; e.g. an 8-core machine would be maxed
    -- out on CPU if the sum of avg_cpu_time_in_ms_pr_sec is 8000.
    CASE WHEN plan_creation_date IS NOT NULL
              AND DATEDIFF(ss, plan_creation_date, GETDATE()) <> 0
         THEN (total_cpu_time_in_ms * 1.0)
              / DATEDIFF(ss, plan_creation_date, GETDATE())
         ELSE NULL
    END AS avg_cpu_time_in_ms_pr_sec
INTO #execution_stats -- We save the result in a temporary table, to do ad hoc queries.
FROM queryCTE t1
OUTER APPLY sys.dm_exec_query_plan(plan_handle) t2 -- The result set contains a column
                                                   -- with the actual execution plan as XML.
LEFT JOIN sys.objects t3 ON t2.objectid = t3.object_id -- This gives us the object name,
                                                       -- e.g. the stored procedure name.

-- Now take a look at the expensive queries. We filter out our own
-- queryCTE queries, since they are pretty heavy themselves.
SELECT TOP 50 *
FROM #execution_stats
WHERE sql_text NOT LIKE '%#execution_stats%'
ORDER BY avg_cpu_time_in_ms_pr_sec DESC

SELECT TOP 50 *
FROM #execution_stats
WHERE sql_text NOT LIKE '%#execution_stats%'
ORDER BY avg_cpu_time_in_ms DESC

SELECT TOP 50 *
FROM #execution_stats
WHERE sql_text NOT LIKE '%#execution_stats%'
ORDER BY avg_duration_in_ms DESC

SELECT TOP 50 *
FROM #execution_stats
WHERE sql_text NOT LIKE '%#execution_stats%'
ORDER BY avg_logical_reads DESC

-- Drop the temporary table
DROP TABLE #execution_stats
/Sjang|||This is an amazing query, thank you.
I did something similar myself.
Instead of going against queries in memory, I was recording them with
Profiler.
I stripped out parameters and database names, so I got a list of the
most expensive queries.
Then we optimized some/many of them.
Unfortunately, CPU is still 100%.
Dejan265|||> Unfortunately, CPU is still 100%.
Have you looked at the index fragmentation?
Try this:
SELECT *
FROM sys.dm_db_index_physical_stats(null, null, null, null, null)
WHERE
page_count > 500
AND avg_fragmentation_in_percent > 20
ORDER BY avg_fragmentation_in_percent DESC
If you only have one (or a few) databases on your server, you can use this
instead, per database. It gives you the name of the table and index, ordered
by fragmentation:
USE YourDatabaseName
GO
SELECT
sys.objects.name as ObjectName,
sys.indexes.name as IndexName,
t1.*
FROM sys.dm_db_index_physical_stats(DB_ID('YourDatabaseName'), null, null,
null, NULL) t1
INNER JOIN sys.objects ON t1.object_id = sys.objects.object_id
INNER JOIN sys.indexes ON t1.object_id = sys.indexes.object_id and
t1.index_id = sys.indexes.index_id
WHERE
page_count > 500
AND avg_fragmentation_in_percent > 20
ORDER BY avg_fragmentation_in_percent DESC
/Sjang|||On May 2, 11:21=A0am, "Sjang" <sjang_remo...@.sjang.dk> wrote:
> > Unfortunately, CPU is still 100%.
> Have you looked at the index fragmentation?
We are rebuilding indexes every night.
I assume that this is not a problem.
Dejan265|||>We are rebuilding indexes every night.
>I assume that this is not a problem.
Ok.
If the jump from 50% to 100% came suddenly, then I would expect something to
be wrong somewhere. But if the change came over a short time - perhaps it
could be as simple as a generally bigger workload? You say there are new
customers all the time, and I guess also more data, so the pressure on the
server should rise. Perhaps nothing special is wrong, except a too slow
server for the workload that is being thrown at it now.
/Sjang|||> If the jump from 50% to 100% came suddenly then I would expect something to
> be wrong somewhere.
They claim that it came suddenly.
Dejan265|||I tried the query as well.
I got an error that there is not enough memory to run it.
(We have 12 GB on the server, but I assume that only 3 GB can be used
at one time since it's 32-bit.)
Dejan265|||You can use up to 32 GB (64 with DataCenter). In BOOT.INI, use /PAE and
reboot. Then run sp_configure 'awe enabled', 1 to turn on AWE (it takes
effect after a service restart). Be sure to set the max server memory with
sp_configure.
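A sketch of the T-SQL side (the 10240 MB cap is an assumption for a 12 GB box;
size it to leave the OS some headroom):
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
EXEC sp_configure 'awe enabled', 1  -- takes effect after a SQL Server service restart
RECONFIGURE
EXEC sp_configure 'max server memory (MB)', 10240  -- leave headroom for the OS
RECONFIGURE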
--
Tom
----
Thomas A. Moreau, BSc, PhD, MCSE, MCDBA
SQL Server MVP
Toronto, ON Canada
<dejan265@.gmail.com> wrote in message
news:75e9277e-461e-4801-b604-0034590dac75@.l64g2000hse.googlegroups.com...
I tried the query as well.
I got an error that there is not enough memory to run it.
(We have 12 GB on the server, but I assume that only 3 GB can be used
at one time since it's 32-bit.)
Dejan265|||> (We have 12 gb on the server, but I assume that only 3GB can be used
> at one time since it's 32-bit.)
I don't know why you think that. Unless your server is running 32-bit
Vista.
A|||On May 2, 11:31=A0am, "Aaron Bertrand [SQL Server]"
<ten...@.dnartreb.noraa> wrote:
> > (We have 12 gb on the server, but I assume that only 3GB can be used
> > at one time since it's 32-bit.)
> I don't know why you think that. Unless your server is running 32-bit
> Vista.
> A
Another thought - check max degree of parallelism. If it is set to 0
- try setting it to 2 and see if that improves performance.|||It is really sounding to me like you simply need to hire a consultant for a
quick performance review. Actually you should have done this long ago -
from your OP this situation has been ongoing for "a couple of weeks". I
guarantee you that any competent perf guru could isolate one or more causes
of the observed behavior in a matter of hours if not minutes.
--
Kevin G. Boles
Indicium Resources, Inc.
SQL Server MVP
kgboles a earthlink dt net
<dejan265@.gmail.com> wrote in message
news:75e9277e-461e-4801-b604-0034590dac75@.l64g2000hse.googlegroups.com...
>I tried query as well.
> I got an error that there is not enough memory to run it.
> (We have 12 gb on the server, but I assume that only 3GB can be used
> at one time since it's 32-bit.)
> Dejan265|||We are using AWE.
I just mean that SQL Server has to swap chunks in and out.
Dejan265|||We are using AWE.
However, it is just swapping chunks in and out from 3GB.
Dejan265|||> We are using AWE.
> However, it is just swapping chunks in and out from 3GB.
How do you know that? Are you verifying using Task Manager? Stop doing
that and use Performance Monitor. Task Manager isn't always going to be
telling the truth, even if you have correctly enabled 3GB and AWE.|||On May 2, 11:58 pm, "TheSQLGuru" <kgbo...@.earthlink.net> wrote:
> It is really sounding to me like you simply need to hire a consultant for a
> quick performance review.
We were able to identify a couple dozen procedures that we
optimized.
We made some design changes - denormalized a couple of tables, added some
indexes, dropped some indexes, normalized some tables...
We reconfigured a couple of things, dropped CU6,...
In some cases we were able to optimize things by an order or two of
magnitude.
And over the weekend we moved everything to a new server (the old one was
years old).
We are able to process everything again.
I am sure that there are more things that we can optimize.
One question is whether management is going to see the benefit in what we did
or consider everything simply as a result of the server upgrade.
Dejan265|||> One question is will management going to see benefit in what we did or
> consider everything simply as result of server upgrade.
If you did everything in one swoop and did not compare individual changes
independently, then no, there is no way for you to prove or disprove where
the performance gains came from.