Tuesday, March 27, 2012
History/Data Change File Approach
Who, When, What Field and New Value of Fields
When changes occur to an existing record.
The purpose is for users to occasionally view the changes. They'll want to be able to see the history of the record - who changed what and when.
I figured I'd add the needed code to the stored procedure that's doing the update for the record.
When the stored procedure is called to do the update, the PK and parameters are sent.
The SP could first capture the current state of the record from disk,
then do the update, then "spin" thru the fields comparing the record state prior to the update and after. Differences could be parsed to a "Changes string" and in the end, this string is saved in a history record along with a few other fields:
Name, DateTime, Changes
FK to Changed Record: some int value
Name: Joe Blow
Date: 1/1/05 12:02pm
Changes: Severity: 23 Project: Everest Assigned Lab: 204
How does the above approach sound?
Is there a better way you'd suggest?
Any sample code for a system that spins thru the fields comparing 1 temporary record with another looking for changes?
Thanks,
Peter
Have you considered using a trigger? You can use the inserted and deleted tables to compare your data without having to save it manually. Then insert your data into the history table as you suggested.
We often just save the before image to the history table along with the type of operation performed, the id that was used to perform it and a time stamp. Inserted records are not recorded (because all their data is already recorded on the live table) but deleted ones are. The differences for updated records can be determined at any time by comparing the before image to the next or previous stored image or the current actual record. We rarely actually look at this sort of history however unless data disappears or the customer tells us that there is something else wrong with the data and we need to trace what happened to it.|||ejustuss - thanks for the thoughts.
Good idea about simply saving the before image prior to the actual save of the new one.
Our users are used to systems where they can click a button and see essentially the change history of the record. This particular system is a Work Order system. The status of the WO changes over time, etc.
In other non-SQL Server systems I've developed I have a routine that prior to actual save:
1. saves "before save" copy of record
2. updates the record with new values into the DB
3. peels off an "After save" copy of record
4. runs a routine that compares side by side each field. Any changes are noted in a text variable (field name, new value).
5. Once all the fields are spun thru (compared), if there is any information in the text variable, a "change table" record is created with FK to the parent record, Who Changed it, When changed, and a single text field describing all the changes.
Weeks later when a user is viewing the particular record, they can press a button and have a query run against the change table to bring up a simple list of what changed when.
One wrinkle is that a Work Order has a huge text area. Once a WO is accepted by a lab, this text area becomes "frozen". So if we simply peel off a before save copy each time a user specifies an update - I wouldn't want to needlessly include this particular field due to space considerations.
Bottom line - I was assuming someone might have a canned routine for spinning thru a record comparing all field values against an identically laid-out record. I figured there might be a system function or 2 to:
1. Identify how many fields are in a table
2. identify the content of a field - something like @@Field(i)
where i=1 to number of fields in the table.
Peter|||I don't know of any function like that although I am sure you can write one.
In triggers you can use IF UPDATE(column) to test if a column is updated, but you still have to use the column names, which means writing a different trigger for each case. The data in text fields will probably not be accessible within the trigger.
You don't have to save all the fields if you do a before image either - just the fields you want to compare in case they are updated.
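Putting the two replies together, here is a minimal sketch of an UPDATE trigger that builds the "changes string" from the inserted and deleted tables. The WorkOrder/WorkOrderHistory tables and the Severity and AssignedLab columns are assumptions for illustration only:
CREATE TRIGGER trg_WorkOrder_History ON dbo.WorkOrder
FOR UPDATE
AS
BEGIN
    INSERT INTO dbo.WorkOrderHistory (WorkOrderId, ChangedBy, ChangedAt, Changes)
    SELECT i.WorkOrderId,
           SUSER_SNAME(),
           GETDATE(),
           -- build the change string only from columns that actually differ
           CASE WHEN i.Severity <> d.Severity
                THEN 'Severity: ' + CAST(i.Severity AS varchar(10)) + ' ' ELSE '' END
         + CASE WHEN i.AssignedLab <> d.AssignedLab
                THEN 'Assigned Lab: ' + CAST(i.AssignedLab AS varchar(10)) + ' ' ELSE '' END
    FROM inserted i
    JOIN deleted d ON d.WorkOrderId = i.WorkOrderId
    WHERE i.Severity <> d.Severity
       OR i.AssignedLab <> d.AssignedLab
END
Only rows where at least one of the compared columns changed get a history record, and the frozen text area is simply never included in the comparison.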
History/archive database
Anybody have any favorite approach to recommend regarding archiving?
We have an OLTP database that we'd like to keep small in the interest
of speed, and on the other hand need to keep virtually all records
around for seven years for auditing and tax purposes. So, the obvious
thing is to have a process that removes closed or otherwise obsolete
logical entities from the OLTP database, and some other process that
moves them to the archive, before or after being removed from OLTP.
(I have done this before and understand the many problems involved in
trying to declare something "closed or otherwise obsolete", but am
looking for discussion, latest ideas, best practices, etc)
We've had suggestions on one side to keep only the hottest records on
the OLTP side, and then try to make it invisible when a query needs to
access something "historical" (like yesterday). I'm more accustomed
to end-of-year processes to clean stuff out. But, we may also want to
duplicate (or replicate) stuff out on a daily or intra-day basis, in
order to run as many reports as possible on the archive instead of the
OLTP, potentially even current YTD reports.
So, many questions, like should the archive database share the OLTP
schema, or be a DW-style star, or something else? Thinking seems to
be that for now, yes we'll keep the archive on the same schema, and
sometime in the next year or three we'll make the transition to a
star. ETL should be custom-written DTS/SQL code rather than, say,
merge replication. Latency should be about 24 hours from OLTP to
archive for duplication, and about once a year for removal from OLTP.
Sound reasonable, or does anything here sound like a mistake?
Thanks.
Josh
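For the removal/move step itself, one common shape is an INSERT ... SELECT into the archive followed by a DELETE of the same rows inside a transaction, run from a scheduled job. The table names and the Closed/ClosedDate columns here are assumptions, just to show the pattern:
BEGIN TRAN
    INSERT INTO ArchiveDB.dbo.Orders (OrderId, CustomerId, ClosedDate)
    SELECT OrderId, CustomerId, ClosedDate
    FROM dbo.Orders
    WHERE Closed = 1
      AND ClosedDate < DATEADD(year, -1, GETDATE())

    DELETE FROM dbo.Orders
    WHERE Closed = 1
      AND ClosedDate < DATEADD(year, -1, GETDATE())
COMMIT TRAN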
Josh,
It's pretty tough to give enough info in a newsgroup post to warrant the
best advice for something like this. There are many things to consider and
knowing more about how your environment is set up and works on a daily basis
will play a big role in how to do this. Some things to keep in mind though.
Smaller tables do not necessarily mean faster. With proper indexing size
does not matter as much for a true OLTP system. Maintenance gets trickier as
the size increases in general. One thing you may want to look at is SQL2005
and the partitioning features. You don't say how the rows are determined to
be "Closed" but partitioning may give you some options that will change your
mind.
Andrew J. Kelly SQL MVP
"jxstern" <jxstern@.nowhere.xyz> wrote in message
news:hgf7o1pasmbsi3um88a6cmq6511d447dd6@.4ax.com...
> Anybody have any favorite approach to recommend regarding archiving?
> We have an OLTP database that we'd like to keep small in the interest
> of speed, and on the other hand need to keep virtually all records
> around for seven years for auditing and tax purposes. So, the obvious
> thing is to have a process that removes closed or otherwise obsolete
> logical entities from the OLTP database, and some other process that
> moves them to the archive, before or after being removed from OLTP.
> (I have done this before and understand the many problems involved in
> trying to declare something "closed or otherwise obsolete", but am
> looking for discussion, latest ideas, best practices, etc)
> We've had suggestions on one side to keep only the hottest records on
> the OLTP side, and then try to make it invisible when a query needs to
> access something "historical" (like yesterday). I'm more accustomed
> to end-of-year processes to clean stuff out. But, we may also want to
> duplicate (or replicate) stuff out on a daily or intra-day basis, in
> order to run as many reports as possible on the archive instead of the
> OLTP, potentially even current YTD reports.
> So, many questions, like should the archive database share the OLTP
> schema, or be a DW-style star, or something else? Thinking seems to
> be that for now, yes we'll keep the archive on the same schema, and
> sometime in the next year or three we'll make the transition to a
> star. ETL should be custom-written DTS/SQL code rather than, say,
> merge replication. Latency should be about 24 hours from OLTP to
> archive for duplication, and about once a year for removal from OLTP.
> Sound reasonable, or does anything here sound like a mistake?
> Thanks.
> Josh
>
|||On Wed, 23 Nov 2005 09:56:07 -0500, "Andrew J. Kelly"
<sqlmvpnooospam@.shadhawk.com> wrote:
>It's pretty tough to give enough info in a newsgroup post to warrant the
>best advise for something like this. There are many things to consider and
>knowing more about how your environment is set up and works on a daily basis
>will play a big role in how to do this. Some things to keep in mind though.
>Smaller tables do not necessarily mean faster. With proper indexing size
>does not matter as much for a true OLTP system. Maintenance gets trickier as
>the size increases in general. One thing you may want to look at is SQL2005
>and the partitioning features. You don't say how the rows are determined to
>be "Closed" but partitioning may give you some options that will change your
>mind.
Andrew,
Thanks for the comments.
I'm busy trying to identify all the info that I can in our
environment, so couldn't even give it all yet in any case.
On size, I agree that a well-designed database should be relatively
insensitive to size, and I sure hope that's true, because in this our
new application we've been running it against about a 3gb database,
but as we're ramping up production it should start growing about
1gb/month now, and 5gb/month soon, so if we're going to be able to
even store one year's data in the main OLTP database, it better scale
gracefully from 3gb to about 60gb! My history concerns are in this
context, that we need to start planning now on how to hold multi-60gb
years of data. Certainly partitioning is going to be called for,
views right now in SQL2K, the new partitioned tables (I suppose) when
we get to SQL2005.
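For reference, the SQL 2000 partitioned-view route mentioned above is usually just a UNION ALL view over the current and archive tables, with CHECK constraints on the partitioning column so the optimizer can skip the table it doesn't need. Names and dates here are illustrative:
-- dbo.Orders carries CHECK (OrderDate >= '20050101')
-- dbo.OrdersArchive carries CHECK (OrderDate < '20050101')
CREATE VIEW dbo.AllOrders
AS
SELECT * FROM dbo.Orders
UNION ALL
SELECT * FROM dbo.OrdersArchive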
Should we keep it all in something as similar as possible to the OLTP
schema, or in a DW schema, or in a special archive schema? Questions
like that.
Just looking for odd hints and war stories.
Happy Thanksgiving to all!
Josh
history, subscription links on manager throwing below error
When I click on the Scheduling, History, or New Subscription hyperlinks, the below error message pops up, so I couldn't schedule a report. Please help me in this regard with what needs to be done.
EXECUTE permission denied on object 'xp_sqlagent_notify', database 'master', owner 'dbo'.
Thanks
Hmm... on the database server, do both the Report Server NT service account
and the ASP.Net service account have the RSExec role on the MSDB database?
-Lukasz
"Subba" <Subba@.discussions.microsoft.com> wrote in message
news:ACD31262-144F-4EE9-8B29-B15AF7B520D2@.microsoft.com...
> Hi,
> When I click on Scheduling, History, new subscription hyperlinks the below
> error message is popping up, so I couldn't schedule a report. please help
> me in this regard what needs to be done.
> EXECUTE permission denied on object 'xp_sqlagent_notify', database
> 'master', owner 'dbo'.
> Thanks|||Use enterprise manager to connect to the SQL Server instance that hosts the
report server database. Navigate to the security folder in the tree and
find roles - it should give you a list of roles in the system and which
users have those roles granted to them.
Whatever account ASP.Net is running as (look in the Machine.config file for
ASP.Net) and whatever account you chose during setup for the ReportServer
service (look in MMC), need to have the RSExec role on the following
databases - ReportServer, ReportServerTempDB, MSDB.
-Lukasz
"Subba" <Subba@.discussions.microsoft.com> wrote in message
news:F8339FD7-7CFE-4527-9D99-780C4280B9A9@.microsoft.com...
> Iam not able to understand what exactly needs to be done, can you be
> please more specific what exactly needs to be done.
> "Subba" wrote:
>> Hi,
>> When I click on Scheduling, History, new subscription hyperlinks the
>> below error message is popping up, so I couldn't schedule a report.
>> please help me in this regard what needs to be done.
>> EXECUTE permission denied on object 'xp_sqlagent_notify', database
>> 'master', owner 'dbo'.
>> Thanks|||Additional information on this:
Service accounts (in my case I have 3: NT Authority\Local Service, NT Authority\Network Service, NT Authority\System) need to have execute permission for 'xp_sqlagent_notify', which is in Master | System Stored Procedures. The normal way to do this is to have these accounts be within the RSExec role and make sure that RSExec has execute permissions on the above stored procedure. When I tried this, it didn't fix my problem. When I gave execute permissions to all of the above service accounts (instead of relying on them getting the permissions through their membership in RSExec) the problem was fixed.
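For reference, the permission the error message complains about boils down to something like the following. This is only a sketch - it assumes the RSExec role (or users for the service accounts) already exists in the master database, which is not necessarily the case on a default installation:
USE master
GO
-- let the report server role run the SQL Agent notification proc
GRANT EXECUTE ON xp_sqlagent_notify TO RSExec
GO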
Update to my previous post - there must be something wrong with my SQL Server installation 'cause I kept getting permission errors. For some objects (tables, stored procedures) giving explicit permissions to the service accounts was not enough. I had to give Execute or Select permissions to the Public group (not real good for security, but it was the only way to get my installation to work).
Each time I would add another "Public" permission I'd need to attempt to add a subscription, get the error message, look in the log file for the next object that needed permission.
Oh well, at least I can add subscriptions now
History Tables
But, what kind of a history element is a date of birth? Everyone has only one date of birth and this information is always valid - 7 years ago, today, next year and in 500 years ...|||Perhaps I didn't explain it correctly.
I need to make a DOB field on an existing order form a required field so that this order may be pulled at a later date to record how many people over a certain age ordered the product. I thought it might be wise to create a history table so that running the query for the report would not cause a drain on the application. Am I wrong?|||If you are building a data warehouse, then it would probably make sense. Otherwise, storing a "date of birth" column along with "orders" would be an example of denormalized design. Unless there is a good reason to do so, I'd rather see "date of birth" along with customer data (name, address, ...); orders should contain a foreign key column to join "order" with a customer.
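A minimal sketch of the layout suggested in the reply - date of birth kept with the customer, and orders joining back through a foreign key (all names here are illustrative):
CREATE TABLE Customers (
    CustomerId  INT NOT NULL PRIMARY KEY,
    Name        NVARCHAR(200) NOT NULL,
    DateOfBirth DATETIME NOT NULL
)
CREATE TABLE Orders (
    OrderId    INT NOT NULL PRIMARY KEY,
    CustomerId INT NOT NULL REFERENCES Customers (CustomerId),
    OrderDate  DATETIME NOT NULL
)
-- e.g. how many people over 40 ordered
SELECT COUNT(*)
FROM Orders o
JOIN Customers c ON c.CustomerId = o.CustomerId
WHERE DATEDIFF(year, c.DateOfBirth, o.OrderDate) > 40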
History Table Design Issue
Hi all,
this is more of a design issue for a History table.
Suppose I have a transaction table and, based on the transactions, I want to keep a history of those - do I need to define a Primary Key and Foreign Key for the history table?
Regards,
General Problem
As a general rule of thumb, EVERY table should have a primary key.
I'm sure that there are exceptions (it's a scary world out there!), but the design assumption should be "A primary key is required!" until proven otherwise.
As for foreign keys, that's a different issue.
Let's say you have an employee table.
If you set up an FK from your history table to your employee table, you will not be able to delete an employee from the employee table as long as you maintain those history records.
If you intended for the employee table to be defined as "A list of current employees of the business.", you have just changed the definition to "A list of current and past employees of the business."
That may, or may not, be what you want!
Now, if the primary key of the employee is a meaningless number, and you need to know which employee did that transaction, you might be stuck with keeping past employees in the employee table. Or, of course, you could add the employee ssn and name to the history table. If you used a natural key (like their SSN), then someone could always look it up in the paper trail. Lots of options here!
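A sketch of the sort of history table described above - its own primary key, with the employee's natural key and name carried along so the row stands on its own even if the employee row is later deleted (names are made up):
CREATE TABLE EmployeeTransactionHistory (
    HistoryId       INT IDENTITY(1,1) NOT NULL PRIMARY KEY,
    EmployeeSSN     CHAR(9) NOT NULL,        -- natural key kept with the history row
    EmployeeName    NVARCHAR(100) NOT NULL,  -- denormalized, no FK back to the employee table
    TransactionDate DATETIME NOT NULL,
    Details         NVARCHAR(500) NULL
)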
History table
CREATE TRIGGER [Helpdesk_History_Copy] ON [dbo].[Helpdesk]
FOR INSERT, UPDATE AS
INSERT into dbo.Helpdesk_History
(
Helpdesk_History.Computer_Idn,
Helpdesk_History.DeviceName,
Helpdesk_History.InsertDate,
Helpdesk_History.ProblemTitle,
Helpdesk_History.Duration,
Helpdesk_History.ProblemDetails,
Helpdesk_History.ProblemSolution,
Helpdesk_History.CallDispatcher,
Helpdesk_History.Responsible,
Helpdesk_History.Status
)
SELECT
Helpdesk.Computer_Idn,
Helpdesk.DeviceName,
Helpdesk.InsertDate,
Helpdesk.ProblemTitle,
Helpdesk.Duration,
Helpdesk.ProblemDetails,
Helpdesk.ProblemSolution,
Helpdesk.CallDispatcher,
Helpdesk.Responsible,
Helpdesk.Status
from Helpdesk
With my trigger all values will be written to the helpdesk_history table.
But I want only new or changed columns to be inserted into the
history table.
Any idea?
Thx
Wolfgang
I think this is what you want
CREATE TRIGGER [Helpdesk_History_Copy] ON [dbo].[Helpdesk]
FOR INSERT, UPDATE AS
INSERT into dbo.Helpdesk_History
(
Helpdesk_History.Computer_Idn,
Helpdesk_History.DeviceName,
Helpdesk_History.InsertDate,
Helpdesk_History.ProblemTitle,
Helpdesk_History.Duration,
Helpdesk_History.ProblemDetails,
Helpdesk_History.ProblemSolution,
Helpdesk_History.CallDispatcher,
Helpdesk_History.Responsible,
Helpdesk_History.Status
)
SELECT
d.Computer_Idn,
d.DeviceName,
d.InsertDate, -- this would probably be getdate() if you want today's date
d.ProblemTitle,
d.Duration,
d.ProblemDetails,
d.ProblemSolution,
d.CallDispatcher,
d.Responsible,
d.Status
from deleted d
http://sqlservercode.blogspot.com/|||Hi, in the case of an update this works perfectly,
but if I insert a new record the complete table helpdesk will be copied to
the helpdesk_history table.
Any idea?
thx
Wolfgang
"SQL" wrote:
> I think this is what you want
> CREATE TRIGGER [Helpdesk_History_Copy] ON [dbo].[Helpdesk]
> FOR INSERT, UPDATE AS
> INSERT into dbo.Helpdesk_History
>
> (
> Helpdesk_History.Computer_Idn,
> Helpdesk_History.DeviceName,
> Helpdesk_History.InsertDate,
> Helpdesk_History.ProblemTitle,
> Helpdesk_History.Duration,
> Helpdesk_History.ProblemDetails,
> Helpdesk_History.ProblemSolution,
> Helpdesk_History.CallDispatcher,
> Helpdesk_History.Responsible,
> Helpdesk_History.Status
> )
> SELECT
>
> d.Computer_Idn,
> d.DeviceName,
> d.InsertDate, -- this would probably be getdate() if you want
> today's date
> d.ProblemTitle,
> d.Duration,
> d.ProblemDetails,
> d.ProblemSolution,
> d.CallDispatcher,
> d.Responsible,
> d.Status
>
> from deleted d
>
> http://sqlservercode.blogspot.com/
When you do an insert, the deleted table will be empty, so nothing should be copied.
Is it possible there is another trigger on the table that updates after
inserting?
http://sqlservercode.blogspot.com/|||Hi Wolfgang !
That would mean building a block of code that compares each column to the
new one, keeps track of which values were changed, and then issues
something like a dynamic SQL statement to store only the
information that changed.
HTH, Jens Suessmeyer.|||Hi,
yes there is another trigger:
CREATE TRIGGER [Computer_idn] ON dbo.Helpdesk
FOR INSERT
AS
UPDATE Helpdesk SET Computer_idn =
(Select Computer.Computer_idn
from Computer
where
computer.devicename = helpdesk.devicename
and computer.SWLastScanDate =
(SELECT MAX(computer.SWLastScanDate) FROM computer))
This trigger updates the Computer_Idn ...and with this trigger it updates
the whole table. How can I change this trigger so that only the Computer_Idn of
the newly inserted row will be updated?
Thanks
Wolfgang
"SQL" wrote:
> When you do an insert the deleted table should not be available
> Is there possible another trigger on the table that updates after
> inserting?
> http://sqlservercode.blogspot.com/
>|||On Wed, 9 Nov 2005 00:10:11 -0800, Wolfgang Dausend wrote:
>Hi,
>yes there is another trigger:
>
>CREATE TRIGGER [Computer_idn] ON dbo.Helpdesk
>FOR INSERT
>AS
>UPDATE Helpdesk SET Computer_idn =
>(Select Computer.Computer_idn
>from Computer
>where
>computer.devicename = helpdesk.devicename
>and computer.SWLastScanDate =
>(SELECT MAX(computer.SWLastScanDate) FROM computer))
>
>This trigger updates the Computer_Idn ...and with this trigger it updates
>the whole table. How can I change this trigger that only the Computer_Idn of
>the new inserted value will be updated ?
Hi Wolfgang,
Add
WHERE EXISTS
(SELECT *
FROM inserted
WHERE inserted.WhateverYourKeyIs = Helpdesk.WhateverYourKeyIs)
at the end of the UPDATE statement.
Best, Hugo
--
(Remove _NO_ and _SPAM_ to get my e-mail address)
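For completeness, with Hugo's WHERE EXISTS added the Computer_idn trigger would look roughly like this; the HelpdeskId column is a placeholder, since the thread never shows what the Helpdesk table's key actually is:
CREATE TRIGGER [Computer_idn] ON dbo.Helpdesk
FOR INSERT
AS
UPDATE Helpdesk SET Computer_idn =
    (SELECT Computer.Computer_idn
     FROM Computer
     WHERE Computer.devicename = Helpdesk.devicename
       AND Computer.SWLastScanDate =
           (SELECT MAX(c.SWLastScanDate) FROM Computer c))
WHERE EXISTS
    (SELECT *
     FROM inserted
     WHERE inserted.HelpdeskId = Helpdesk.HelpdeskId)
Because only the newly inserted rows are updated, the history trigger on Helpdesk no longer copies the whole table on every insert.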
History rows
Perhaps those other jobs execute so frequently that you get to 1000 rows between executions of
this job?
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
Blog: http://solidqualitylearning.com/blogs/tibor/
"mecn" <mecn2002@.yahoo.com> wrote in message news:e1lCD$EGGHA.2652@.tk2msftngp13.phx.gbl...
>I have job history for other jobs in the same server
>
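One quick way to see where the history rows are going is to count them per job against the standard msdb tables:
SELECT j.name, COUNT(*) AS history_rows
FROM msdb.dbo.sysjobhistory h
JOIN msdb.dbo.sysjobs j ON j.job_id = h.job_id
GROUP BY j.name
ORDER BY history_rows DESC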
History of Report Definition File for Published Reports...
I would like to know if there is a way to maintain the history of changes to the reports that have been published to the report server?
I know that the report definitions get saved onto the ReportServer database. But let's say a user makes a change to the published report and then saves it back to the server. And that the latest change was incorrect and I have to revert to the previous version of the published report. Is there a way to do that? Does the report server maintain a history of previous versions?
There is a history for each report and I think that corresponds to the history of report executions (output data). But I am talking about the history of actual report definition.
Thanks for your help.
The report definition is stored as XML in the ReportServer database.
There is no concept of source management in Report Server. How would the user be making a change to the published report?
If they are accessing the report through Visual Studio, you would perform all source control operations there.
Otherwise you may be able to setup a trigger to copy the report definition to another table, though it would mean some hacking around with SQL profiler to determine what fields are required.
You could also setup a job that runs the RS Scripter utility to export definitions on a regular basis.
cheers,
Andrew
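A rough shape for the scheduled-job idea: the ReportServer database stores each published item in its Catalog table (Type = 2 is a report), so a job could copy those rows into a history table of your own. ReportDefinitionHistory below is a table you would create yourself:
INSERT INTO dbo.ReportDefinitionHistory (ItemID, Name, Path, Content, CapturedAt)
SELECT ItemID, Name, Path, Content, GETDATE()
FROM ReportServer.dbo.Catalog
WHERE Type = 2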
|||I should have said "for reports created using the report builder". These reports are published to the report server. Once published, the users can open the report again from the report builder and then make changes to the report and then publish it again. In such a case, is there a way to track the history of changes in that published report? Because if some user edits a report and messes it up, how would I go back to the previous version of the report?
Thanks for your response.
history of Process info
in SQL Server Enterprise Manager -- Management -- Current Activity --
Process Info I can see the current info for SPIDs and users and all sorts of
great info.
How can I view that same info from yesterday at a specific time?
thanks a million.
You could use a log reader tool of which there are many to choose eg
Lumigent's LogExplorer.
Cheers,
Paul Ibison SQL Server MVP, www.replicationanswers.com .
|||"Paul Ibison" wrote:
> You could use a log reader tool of which there are many to choose eg
> Lumigent's LogExplorer.
> Cheers,
> Paul Ibison SQL Server MVP, www.replicationanswers.com .
>
> Lumigent wants $995
|||Ok there's a new low-cost ($195) alternative from Redgate:
http://www.red-gate.com/products/SQL_Log_Rescue/index.htm.
BTW I'm assuming that you actually have the log as a backup, or have been
using full recovery mode. if you've used simple then this strategy won't
work for the particular transaction you're looking for.
Cheers,
Paul Ibison SQL Server MVP, www.replicationanswers.com .
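Since that screen only shows point-in-time data, another low-tech option besides a log reader is a SQL Agent job that snapshots sysprocesses into a table of your own every few minutes; ProcessInfoHistory here is a table you would create:
INSERT INTO dbo.ProcessInfoHistory
    (CaptureTime, spid, loginame, hostname, program_name, cmd, status, blocked)
SELECT GETDATE(), spid, loginame, hostname, program_name, cmd, status, blocked
FROM master.dbo.sysprocesses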
History of logins or Userids
Does SQL Server 2000 keep track of all the logins/userids that were
deleted over the last year? For example, If I deleted a login "Joe"
from a SQL 2000 server, (and of course the corresponding userid "Joe"
got deleted from the database) would that be recorded somewhere in the
system or the production database? i.e. the information such as the
login "joe" deleted on such and such date from such and such database?
Long Live SOX :)
Thanks,
Raziq.
*** Sent via Developersdex http://www.developersdex.com ***
Raziq Shekha (raziq_shekha@.anadarko.com) writes:
> Does SQL Sever 2000 keep track of all the logins/userids that were
> deleted over the last year? For example, If I deleted a login "Joe"
> from a SQL 2000 server, (and of course the corresponding userid "Joe"
> got deleted from the database) would that be recorded somewhere in the
> system or the production database? i.e. the information such as the
> login "joe" deleted on such and such date from such and such database?
The only place for this is in the transaction log. So if you have your
master in full recovery - and simple is the default - and you never have
truncated the transaction log, you have the information about the login.
If you then get a log reader - see www.lumigent.com or www.logpi.com -
you can get hold of the information.
Same applies to the production database, except that in that case I am quite
sure that you have truncated the log. But of course, if you have all
full backups and log backups since last year saved... :-)
It is possible to set up SQL Server to audit such information with the
C2 setting and a trace. But that has to be thought of in advance.
--
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se
Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techin.../2000/books.asp|||Are there any third party tools that will do this?
Raziq.
*** Sent via Developersdex http://www.developersdex.com ***|||Raziq Shekha (raziq_shekha@.anadarko.com) writes:
> Are there any third party tools that will do this?
Have a look at Lumigent's Entegra, http://www.lumigent.com. I don't know
if they have what they are asking for, but I would expect it.
--
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se
Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pr...oads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodin...ions/books.mspx
History of data for documents
I have the following situation:
CREATE TABLE [dbo].[Address] (
[Id] uniqueidentifier ROWGUIDCOL NOT NULL ,
[CityId] [uniqueidentifier] NOT NULL ,
[CountryId] [uniqueidentifier] NOT NULL ,
[StreetName] [nvarchar] (100) NOT NULL ,
[StreetNo] [nvarchar] (10) NOT NULL ,
[LocalNo] [nvarchar] (10) NOT NULL ,
[PostalCode] [nvarchar] (20) NOT NULL
) ON [PRIMARY]
GO
CREATE TABLE [dbo].[Contractor] (
[Id] uniqueidentifier ROWGUIDCOL NOT NULL ,
[AddressId] [uniqueidentifier] NOT NULL ,
[Symbol] [nvarchar] (50) NOT NULL ,
[Name] [nvarchar] (200) NOT NULL
) ON [PRIMARY]
GO
CREATE TABLE [dbo].[Order] (
[Id] uniqueidentifier ROWGUIDCOL NOT NULL ,
[OwnerContractorId] [uniqueidentifier] NOT NULL ,
[TargetContractorId] [uniqueidentifier] NOT NULL ,
[Symbol] [nvarchar] (50) NOT NULL
) ON [PRIMARY]
GO
CREATE TABLE [dbo].[OrderProduct] (
[Id] [uniqueidentifier] ROWGUIDCOL NOT NULL ,
[OrderId] [uniqueidentifier] NOT NULL ,
[ProductId] [uniqueidentifier] NOT NULL ,
[Quantity] [float] NOT NULL
) ON [PRIMARY]
GO
CREATE TABLE [dbo].[Product] (
[Id] uniqueidentifier ROWGUIDCOL NOT NULL ,
[Symbol] [nvarchar] (50) NOT NULL
) ON [PRIMARY]
GO
CREATE TABLE [dbo].[ProductName] (
[Id] uniqueidentifier ROWGUIDCOL NOT NULL ,
[ProductId] [uniqueidentifier] NOT NULL ,
[CultureName] [nvarchar] (20) NOT NULL ,
[Name] [nvarchar] (200) NOT NULL
) ON [PRIMARY]
GO
The Problem is:
Order - needs information about contractor and his address (address is
related to the contractor by one-to-many
relation). Furthermore, the order needs information about products (its
names, which are stored in
a separate table - each product can have several names, depending on language).
If a name of a product would change, it will be changed in all orders. But
order is the document and I need to
keep it as it was at the moment of creation (contractor data - names,
address etc., product data - names etc.).
To do this, I have to design some kind of history for the data connected to each
record. It is a bad situation for me:
when I change the address of some contractor, all orders will have changed
addresses and the proper data connections
will collapse.
My idea is to copy all related data for the record which is edited. For
example, if an address for a contractor is
changed, a new contractor record is created with a new id (guid) and the
same symbol (which is unique for all active
records); the old one gets a "history" flag, for example, and is deactivated, and the
new address is related to this new contractor record. The same story applies
when a name of a product is edited.
This mechanism lets me store correct data for each order in the database -
all documents (orders) stay connected to the same data as at the moment of
their creation.
Reports in this case are performed by queries operating on the "Symbol" of
the contractor - this gives the
possibility to find all orders of one contractor - each having correct
address for the moment of creation of
order.
My question is: is this design corresponding to the "rules of art" somehow?
What are your ideas for solving such problems?
I will be thankful for your opinions
Adam Rozycki & friends
>> My question is: is this design corresponding to the "rules of art"
No
I am not sure where you got the idea of using a globally unique identifier
type for every key in your tables.
We all work within existing computational constraints; so no business
segment requires a machine generated global identifier for a table unless
you are researching on such values. Other than the hype, there is nothing simple, pragmatic or meaningful about using a
uniqueidentifier column as a key for Orders or Products table as in your
situation.
One approach to your problem, based on your narratives, can be like:
CREATE TABLE Orders (
Order_nbr INT NOT NULL PRIMARY KEY,
Product_id INT NOT NULL
REFERENCES Products ( Product_id ),
Contractor_id INT NOT NULL,
Address_id INT NOT NULL,
FOREIGN KEY ( Contractor_id, Address_id )
REFERENCES Contractors ( Contractor_id, Address_id )
.. ) ;
CREATE TABLE Contractors (
Contractor_id INT NOT NULL,
Address_id INT NOT NULL
REFERENCES Addresses ( Address_id ),
Name...
PRIMARY KEY ( Contractor_id, Address_id )
) ;
CREATE TABLE Addresses (
Address_id INT NOT NULL PRIMARY KEY,
Address VARCHAR( 40 ) NOT NULL,
City_state_zip VARCHAR( 40 ) NOT NULL,
UNIQUE ( Address, City_state_zip )
.. ) ;
CREATE TABLE Products (
Product_id INT NOT NULL PRIMARY KEY,
Product_name ...
) ;
To keep track of history of change in Product names use another table with
temporal datatypes like:
CREATE TABLE ProductHistory (
Product_id INT NOT NULL,
Product_name VARCHAR ( 100 ) NOT NULL,
Assigned_date DATETIME NOT NULL,
Withdrawn_date DATETIME NOT NULL,
..
PRIMARY KEY ( Product_id, Assigned_date ),
CHECK ( Withdrawn_date >= Assigned_date )
);
The same approach can be used for changes in other attributes like symbols
which you mentioned as well.
Anith|||
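To pull the name that was in effect when a particular order was created, the history table above can be joined by date range - a sketch, assuming the Orders table also carries an Order_date column:
SELECT O.Order_nbr, PH.Product_name
FROM Orders O
JOIN ProductHistory PH
  ON PH.Product_id = O.Product_id
 AND O.Order_date >= PH.Assigned_date
 AND O.Order_date < PH.Withdrawn_date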
"Anith Sen" wrote:
> I am not sure where you got the idea of using a globally unique identifier
> type for every key in your tables.
My idea was to use single keys instead of complex ones.
> We all work within existing computational constraints; so no business
> segment requires a machine generated global identifier for a table unless
> you are researching on such values.
The reason why I used GUIDs is that this database has to be replicable and
data exchangeable with several separate databases. There is to be one master
DB and several slave DBs. Since data can be added in some independent places
- I think I need to use the global identifiers.
My problem is that I have to have data corresponding to an Order, just the
same as it was at the moment of creating the Order. I cannot have a situation
when changed contractor data changes data in all orders.
The database has to store names for products in several languages; because of
that I cannot put all information about products in one table.
I would like to achieve some sort of document revision rather than a history
of actions on documents.
Adam|||>> My idea was to single keys instead of complex ones.
Good. Simple keys are a recommended consideration for a primary key.
However, having a uniqueidentifier in a table as the only key in your table
does nothing for entity identification, which is the main purpose of a key
in the first place.
Not necessarily. You could opt for any arbitrary namespace to determine
independent databases distributed over different servers. While
uniqueidentifier type guarantees the value to be unique globally, it cannot
guarantee the uniqueness of the corresponding entity.
For instance, a customer by name Adam in a table in database A cannot be
distinguished from another customer by same name Adam in similar table in a
replicable database B or even in the same table in the same database, just
by virtue of arbitrary GUIDs alone. All you'll have is duplicated entries of
Adam in the table with different GUIDs associated with them. How do you
identify the row corresponding to Adam? How will you enforce entity
integrity? How do you track down an alleged error, for instance in data
entry? Can you use GUIDs for referencing keys reliably without cascading
changes?
It is mostly hard to provide any specific meaningful suggestions here. While
you are familiar with your business model regarding the orders, customers,
languages, symbols, revisions etc., others in this newsgroup have no clue on
what they are or how they are related. You did provide a set of CREATE TABLE
statements without any keys, constraints, references etc. however databases
cannot be designed based on such. Esp. when only a couple of lines of
narrative are provided, there is a high chance that the overall business
model and rules are miscommunicated, misrepresented and/or misunderstood.
As a general suggestion, your approach to use GUIDs all over the table as
primary keys with no identifying attribute seems inherently flawed. However,
if you have made provisions for entity identification using UNIQUE NOT NULL
constraints, perhaps you might be able to work it out to some extent.
Consider using temporal datatypes for tracking historical information unless
you are using them already.
Also a few general design rules of thumb, if it helps:
* When you have a one-to-one relationship between two entity types, unless
there are any non-dependency preserving relationships, you may represent
them in a single table.
* When you have a many-to-one relationship between two entity types, you
should use a referential integrity constraint ( FK ) between the tables
representing these entity types
* When you have a many-to-many relationship between two or more entity
types, you should introduce an "association" table which reduces the schema
to two or more many-to-one relationships on each table representing these
entity types.
Anith|||
"Anith Sen" wrote:
> Good. Simple keys are a recommended consideration for a primary key.
> However, having a uniqueidentifier in a table as the only key in your table
> does nothing for entity identification, which is the main purpose of a key
> in the first place.
The rule is: the application and database identify entities by GUID (Id) and
users identify entities by Symbol.
> All you'll have is duplicated entries of Adam in the table with different GUIDs
> associated with them. How do you identify the row corresponding to Adam? How
> will you enforce entity integrity?
Such data will be input by aware users only - some special roles in the
application. It depends on requirements whether the database should be able to
store duplicated records or not. I think it should do so for history purposes.
> Can you use GUIDs for referencing keys reliably without cascading
> changes?
Data entities are represented as objects in the application. Identifiers are read
from these objects - the Business Logic takes these identifiers (GUID) and does
with them whatever is needed (search, modify, delete etc.). The Business Logic
will take care of all the changes.
> others in this newsgroup have no clue on what they are or how they are related.
In my first post I put the creation of tables - for a general view of my DB
structure (a small part of it in fact). Here are the relations added to it:
CREATE TABLE [dbo].[Address] (
[Id] uniqueidentifier ROWGUIDCOL NOT NULL ,
[CityId] uniqueidentifier NOT NULL ,
[CountryId] uniqueidentifier NOT NULL ,
[StreetName] nvarchar (100) NOT NULL ,
[StreetNo] nvarchar (10) NOT NULL ,
[LocalNo] nvarchar (10) NOT NULL ,
[PostalCode] nvarchar (20) NOT NULL ,
[ChangeStamp] timestamp NOT NULL
) ON [PRIMARY]
GO
CREATE TABLE [dbo].[Contractor] (
[Id] uniqueidentifier ROWGUIDCOL NOT NULL ,
[AddressId] uniqueidentifier NOT NULL ,
[Symbol] nvarchar (50) NOT NULL ,
[Name] nvarchar (200) NOT NULL ,
[ChangeStamp] timestamp NOT NULL
) ON [PRIMARY]
GO
CREATE TABLE [dbo].[Order] (
[Id] uniqueidentifier ROWGUIDCOL NOT NULL ,
[ContractorId] uniqueidentifier NOT NULL ,
[Symbol] nvarchar (50) NOT NULL ,
[ChangeStamp] timestamp NOT NULL
) ON [PRIMARY]
GO
CREATE TABLE [dbo].[OrderProduct] (
[Id] uniqueidentifier ROWGUIDCOL NOT NULL ,
[OrderId] uniqueidentifier NOT NULL ,
[ProductId] uniqueidentifier NOT NULL ,
[Quantity] float NOT NULL ,
[ChangeStamp] timestamp NOT NULL
) ON [PRIMARY]
GO
CREATE TABLE [dbo].[Product] (
[Id] uniqueidentifier ROWGUIDCOL NOT NULL ,
[Symbol] nvarchar (50) NOT NULL ,
[ChangeStamp] timestamp NOT NULL
) ON [PRIMARY]
GO
CREATE TABLE [dbo].[ProductName] (
[Id] uniqueidentifier ROWGUIDCOL NOT NULL ,
[ProductId] uniqueidentifier NOT NULL ,
[CultureName] nvarchar (20) NOT NULL ,
[Name] nvarchar (200) NOT NULL ,
[ChangeStamp] timestamp NOT NULL
) ON [PRIMARY]
GO
ALTER TABLE [dbo].[Address] ADD
CONSTRAINT [DF_Address_Id] DEFAULT (newid()) FOR [Id],
CONSTRAINT [PK_Address] PRIMARY KEY CLUSTERED
(
[Id]
) ON [PRIMARY]
GO
ALTER TABLE [dbo].[Contractor] ADD
CONSTRAINT [DF_Contractor_Id] DEFAULT (newid()) FOR [Id],
CONSTRAINT [PK_Contractor] PRIMARY KEY CLUSTERED
(
[Id]
) ON [PRIMARY]
GO
ALTER TABLE [dbo].[Order] ADD
CONSTRAINT [DF_Order_Id] DEFAULT (newid()) FOR [Id],
CONSTRAINT [PK_Order] PRIMARY KEY CLUSTERED
(
[Id]
) ON [PRIMARY]
GO
ALTER TABLE [dbo].[OrderProduct] ADD
CONSTRAINT [DF_OrderProduct_Id] DEFAULT (newid()) FOR [Id],
CONSTRAINT [PK_OrderProduct] PRIMARY KEY CLUSTERED
(
[Id]
) ON [PRIMARY]
GO
ALTER TABLE [dbo].[Product] ADD
CONSTRAINT [DF_Product_Id] DEFAULT (newid()) FOR [Id],
CONSTRAINT [PK_Product] PRIMARY KEY CLUSTERED
(
[Id]
) ON [PRIMARY]
GO
ALTER TABLE [dbo].[ProductName] ADD
CONSTRAINT [DF_ProductName_Id] DEFAULT (newid()) FOR [Id],
CONSTRAINT [PK_ProductName] PRIMARY KEY CLUSTERED
(
[Id]
) ON [PRIMARY]
GO
ALTER TABLE [dbo].[Contractor] ADD
CONSTRAINT [FK_Contractor_Address] FOREIGN KEY
(
[AddressId]
) REFERENCES [dbo].[Address] (
[Id]
)
GO
ALTER TABLE [dbo].[Order] ADD
CONSTRAINT [FK_Order_Contractor] FOREIGN KEY
(
[ContractorId]
) REFERENCES [dbo].[Contractor] (
[Id]
)
GO
ALTER TABLE [dbo].[OrderProduct] ADD
CONSTRAINT [FK_OrderProduct_Order] FOREIGN KEY
(
[OrderId]
) REFERENCES [dbo].[Order] (
[Id]
),
CONSTRAINT [FK_OrderProduct_Product] FOREIGN KEY
(
[ProductId]
) REFERENCES [dbo].[Product] (
[Id]
)
GO
ALTER TABLE [dbo].[ProductName] ADD
CONSTRAINT [FK_ProductName_Product] FOREIGN KEY
(
[ProductId]
) REFERENCES [dbo].[Product] (
[Id]
)
GO
Can you now provide some judgement of my record-history idea, or suggest
other ways it could be done correctly?
Adam
Monday, March 26, 2012
History of all SQL Executed
Is there a way to know in SQLSERVER - the history of all the SQL that has
been executed since the last re-start of the Server.
If Profiler is the only way of doing it - what is the performance impact of
running Profiler ( when Profiler writes to a file ) in a production System
Thanks> If Profiler is the only way of doing it - what are the performance impact
> of
> running Profiler ( when Profiler writes to a file ) in a production System
Profiler can have a negative performance impact if you have a lot of
qualifying events. I suggest that you instead create server-side SQL Trace
to log to a file. You can use Profiler to generate the SQL Trace script.
--
Hope this helps.
Dan Guzman
SQL Server MVP
"Sumit Pal" <Sumit Pal@.discussions.microsoft.com> wrote in message
news:4C7A8C06-671D-4DE4-AA2F-23B368191A69@.microsoft.com...
> Is there a way to know in SQLSERVER - the history of all the SQL that has
> been executed since the last re-start of the Server.
> If Profiler is the only way of doing it - what are the performance impact
> of
> running Profiler ( when Profiler writes to a file ) in a production System
> Thanks
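A rough sketch of the server-side trace Dan describes, built with the standard sp_trace_* procedures and logging SQL:BatchCompleted text, login and start time to a rollover file; the path and file size are placeholders:
DECLARE @TraceId int, @maxfilesize bigint
SET @maxfilesize = 100  -- MB per file, placeholder

-- option 2 = file rollover
EXEC sp_trace_create @TraceId OUTPUT, 2, N'D:\Traces\AllSql', @maxfilesize, NULL

-- event 12 = SQL:BatchCompleted; columns 1 = TextData, 11 = LoginName, 14 = StartTime
EXEC sp_trace_setevent @TraceId, 12, 1, 1
EXEC sp_trace_setevent @TraceId, 12, 11, 1
EXEC sp_trace_setevent @TraceId, 12, 14, 1

-- start the trace
EXEC sp_trace_setstatus @TraceId, 1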
History of a SQL Job Deletion
Does anyone know if there is a record kept of a deleted SQL Agent Job? I
need to track down why a job is going away but I haven't been able to
figure out how to do it.
Thanks!
--
Posted via http://dbforums.com
The SQL Jobs are all stored within the msdb database. If you have been
backing up msdb you should be able to get the job back
--
HTH
Ryan Waight, MCDBA, MCSE
"MrSanity" <member40366@.dbforums.com> wrote in message
news:3390116.1063975524@.dbforums.com...
> Does anyone know if there is a record kept of a deleted SQL Agent Job? I
> need to track down why a job is going away but I haven't been able to
> figure out how to do it.
>
> Thanks!
>
> --
> Posted via http://dbforums.com|||Are you saying that some of your jobs just Disappear ?
If yes then have a look at the job properties. On the Notifications
tab there is a "Automatically delete this job - When it
fails/Succeeds/completes"
Do you have that checked ?
Allan Mitchell (Microsoft SQL Server MVP)
MCSE,MCDBA
www.SQLDTS.com
I support PASS - the definitive, global community
for SQL Server professionals - http://www.sqlpass.org|||There is no record kept of deleted jobs in SQL Server. If your msdb database
uses either the full or bulk-logged recovery model and you have backups of
the transaction logs, you can use LogExplorer from www.lumigent.com to do
research.
If a job disappears on a regular basis, make sure that you don't have the
'automatically delete job' property set. You can find it on the
Notifications tab of the job properties in Enterprise Manager.
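That setting can also be checked from T-SQL: in msdb the sysjobs table carries a delete_level column, where 0 means the job is never auto-deleted:
SELECT name, delete_level
FROM msdb.dbo.sysjobs
ORDER BY name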
--
Jacco Schalkwijk MCDBA, MCSD, MCSE
Database Administrator
Eurostop Ltd.
"MrSanity" <member40366@.dbforums.com> wrote in message
news:3390116.1063975524@.dbforums.com...
> Does anyone know if there is a record kept of a deleted SQL Agent Job? I
> need to track down why a job is going away but I haven't been able to
> figure out how to do it.
>
> Thanks!
>
> --
> Posted via http://dbforums.com