
mom: some details about mom 2005 summary reporting

get this… since the day summary reporting was rolled out, the dts package has never successfully completed. i’m talking 6 months. so, for 6 months, mom has been force-fed data from the systemcenterreporting database.

for those of you who may not know, summary reporting is an add-on that aggregates data points in the reporting warehouse database. by using it, you can effectively reduce the size of your warehouse. this is by no means a comprehensive guide, just some things discovered along the way; more information is in the guide that accompanies the download.

to get started, here are the stored procedures you can execute to change the behavior of the summary reporting pack (a combined example follows the list):
  • exec p_setweekstartday [value]
    • [value] is 1-7 (sunday through saturday, respectively)
  • exec p_setcutoffdate [yyyy-mm-dd]
  • exec p_setsamplegroomdays [value]
    • [value] must be greater than 7
  • exec p_setalertgroomdays [value]
    • [value] must be greater than 7
  • exec p_setlowpercentilevalue [value]
    • [value] 1 through 49
  • exec p_sethighpercentilevalue [value]
    • [value] 51 through 99
  • exec p_setgroombatchsize [value]
    • [value] must be greater than 10
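
for illustration only, a first-time setup pass might look something like this in query analyzer. the values (and the quoted date format) are hypothetical choices, not recommendations — check the guide for what fits your environment:

exec p_setweekstartday 1            -- hypothetical: weeks start on sunday
exec p_setcutoffdate '2005-06-01'   -- hypothetical: aggregation starts here (more on this below)
exec p_setsamplegroomdays 30        -- hypothetical: must be greater than 7
exec p_setalertgroomdays 30         -- hypothetical: must be greater than 7
exec p_setgroombatchsize 1000       -- hypothetical: must be greater than 10
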
most of these are pretty self-explanatory and are explained in much better detail in the guide. i put them up here for my own reference, so whenever i need to execute one of these things i don’t have to look up pages 12-15. :)

there is one in particular i want to talk about: p_setcutoffdate. pay particular attention to this one when you set up. if you set this value too far in the past, the dts job may never complete, depending on the amount of data you have. the reason is that this value dictates where aggregation starts. in other words, where do you want the aggregation job to start looking at data points? do you want to start from 3 months in the past? expect it to fail. at any rate, don’t worry about not starting far back enough; just make sure the date you start on matches the p_setweekstartday value. let me explain! the previous stuff, the stuff prior to the p_setcutoffdate value, can be brought in manually. you can read in a week, a month, etc. here are the commands (watch for word wrap), with a filled-in example after the list:
  • daily samples
    dtsrun.exe /F "[MOM 2005 Reporting installation directory]\Reporting\BuildDailySampleAggregations_AnyDays.dts" /W True /A "ServerName":"8"="YourServer" /A "DatabaseName":"8"="SystemCenterReporting" /A "StartDate":"8"="YYYY-MM-DD" /A "EndDate":"8"="YYYY-MM-DD"
  • weekly samples
    dtsrun.exe /F "[MOM 2005 Reporting installation directory]\Reporting\BuildWeeklySampleAggregations_AnyDays.dts" /W True /A "ServerName":"8"="YourServer" /A "DatabaseName":"8"="SystemCenterReporting" /A "StartDate":"8"="YYYY-MM-DD" /A "EndDate":"8"="YYYY-MM-DD"
  • alerts
    dtsrun.exe /F "[MOM 2005 Reporting installation directory]\Reporting\BuildAlertAggregations_AnyDays.dts" /W True /A "ServerName":"8"="YourServer" /A "DatabaseName":"8"="SystemCenterReporting" /A "StartDate":"8"="YYYY-MM-DD" /A "EndDate":"8"="YYYY-MM-DD"
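
as a filled-in example, backfilling daily samples for the first week of june 2005 against a hypothetical server named MOMREPORT01 would look like this (the bracketed installation directory still needs to be your own):

dtsrun.exe /F "[MOM 2005 Reporting installation directory]\Reporting\BuildDailySampleAggregations_AnyDays.dts" /W True /A "ServerName":"8"="MOMREPORT01" /A "DatabaseName":"8"="SystemCenterReporting" /A "StartDate":"8"="2005-06-01" /A "EndDate":"8"="2005-06-07"
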
now that we have that out of the way, why else would the dts job fail? here are the reasons:
  1. the dts job is extremely cpu- and space-intensive (tempdb).
  2. the uploadaggregations table may not have a recent timestamp.

if you look in the dts job itself, the steps are executed in parallel instead of in serial. this may not be a big deal, since ordinarily summary reporting shouldn’t take very long to run. so a small spike in cpu, in the dead of night… who cares? now factor in tons of data to look through, and you’ve got some issues. we modified the dts job to execute in serial initially, thinking this was what was causing all the problems. not quite… refer back to #2.
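
if you suspect tempdb is part of the problem, these standard sql 2000-era queries give a rough view of its allocation; they only read, nothing more:

exec tempdb.dbo.sp_spaceused                 -- database size and unallocated space
select name, size from tempdb.dbo.sysfiles   -- file sizes, in 8kb pages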

this uploadaggregations table. hmmm. it’s not in the accompanying guide. odd... turns out this table holds the timestamps that indicate from which date forward the summary reporting pack should look at data points. remember, the p_setcutoffdate parameter only tells the dts job from which date to start, and that only matters for the maiden voyage. to see what i’m talking about, issue these queries in query analyzer. they won’t break anything… no, really… they won’t.

-- current markers: where the dts job's next run will start
select LastDate, LastWeek, AlertLastDate, AlertLastWeek
from dbo.uploadaggregations

-- newest daily sample aggregation actually in the warehouse
select top 1 dategenerated_pk as LastDate
from sc_daily_counterdatafact_table
order by LastDate desc

-- newest weekly sample aggregation
select top 1 weekgenerated_pk as LastWeek
from sc_weekly_counterdatafact_table
order by LastWeek desc

-- newest daily alert aggregation
select top 1 dategenerated_pk as AlertLastDate
from sc_daily_alertfact_table
order by AlertLastDate desc

-- newest weekly alert aggregation
select top 1 weekgenerated_pk as AlertLastWeek
from sc_weekly_alertfact_table
order by AlertLastWeek desc

the first query lists the values in the table that need your focus. those fields are markers indicating where the dts job’s current run begins looking at data points. if you have a value way back in the past, it’s going to go back and look. the other four queries pull the newest aggregation dates actually present in the fact tables, so you have something to compare against.
 
to correct this, just pair up the names and dates. if the timestamps in the first result set are older, raise them to match their pair. for example, if lastdate in the first result set is older than the lastdate returned by the second query, change lastdate in uploadaggregations to match. by the way, if you have multiple rows in the first result set, that means you’re sending more than one management group to your warehouse. :) a sketch of the update follows.
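
for illustration only, here’s a minimal sketch of the kind of update this implies, assuming a single management group row in uploadaggregations. the table and column names come from the queries above, but back up systemcenterreporting before touching this table:

-- hedged sketch: raise LastDate to the newest daily sample aggregation date.
-- assumes one row (one management group); repeat for the other three columns.
update dbo.uploadaggregations
set LastDate = (select top 1 dategenerated_pk
                from sc_daily_counterdatafact_table
                order by dategenerated_pk desc)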
 
hope this helps.

Comments

  1. nice post....very good info.

  2. Great info! I have future dates in the UploadAggregations table! Nice.. wonder how that happened?

  3. a reader sent this in today:

    Marcus,

    Not sure if you'll even remember writing it over 2 years ago, but wanted to thank you for your blog posting on MOM Summary Reporting. We had an issue where a server inadvertently had a date set in the year 2038, and the data then ended up in our MOM Reporting DB wreaking havoc. Using your queries as a starting spot, I was able to track down the data.

    Really appreciate it!
