
opalis: working around limitations with workflow objects and link operators

UPDATE: pete zerger was kind enough to point out that sometimes i don't make sense, and i missed a very obvious point in the documentation.  since the post itself is still useful, i didn't just scrap it.  :)  instead, i added an addendum.

if you've been working with opalis long enough, you will find that there are moments when hacks are required to get you from one point to the next.  i've been experimenting a lot with nested workflows.  it's like evolving from inline scripting to scripting with functions and/or subs.

i discovered that when using trigger policy to run a nested workflow, a bizarre thing happens.  even if the nested workflow executes with an error, the status returned by the calling trigger policy object is "success".  it didn't make sense at first until i realized that by all accounts, the trigger policy did execute successfully.


well, there's a problem with this.  if it comes back as success, even though something failed, the policy will continue on down the path unless you tell it otherwise.  enough of that.  let's talk specifics about my scenario.

the workflow i created was designed to do one thing: usher alerts from opsmgr into tickets in remedy.  since remedy is divided into many different operating queues, i had to work out how to create tickets in the correct queues.  i decided to try it based on computer group membership.

in order to get the group membership, i had to query the opsmgr database.  i decided to push that into a nested workflow so that it could be reused in other workflows at some later point.  the information retrieved from the nested workflow would be written to a text file.  the master workflow could then search that text file to cross-check names.
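to make the data flow concrete, here's a quick python sketch of what the nested workflow is doing.  everything here is illustrative: the server names and file path are hypothetical stand-ins for whatever your database query actually returns.

```python
# illustrative sketch only: the query results and file path are
# hypothetical stand-ins for what the nested workflow produces.

def write_server_list(query_results, path):
    # mimic publishing the group membership to a text file that
    # the master workflow can search through for cross-checking
    with open(path, "w") as f:
        for name in query_results:
            f.write(name + "\n")

# pretend these came back from the opsmgr database query
servers = ["server01", "server02", "server03"]
write_server_list(servers, "group_members.txt")
```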

now what happens if the database query fails, and the associated text file never fills with any data?  if you're cross-checking the alerts against an empty text file, chances are you will never have a match and, as such, no tickets get generated.
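in python terms, the failure mode looks something like this sketch (the file name is made up; the point is that an empty file silently produces no matches):

```python
def alert_matches(alert_computer, server_list_path):
    # cross-check the alert's computer name against the text file
    # produced by the nested workflow.  an empty file means no
    # alert can ever match, so tickets silently stop being cut.
    with open(server_list_path) as f:
        servers = {line.strip() for line in f if line.strip()}
    return alert_computer in servers

# if the query failed and the file is empty...
open("group_members.txt", "w").close()
print(alert_matches("server01", "group_members.txt"))  # False -- no ticket
```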

but if opalis is returning a success on the nested workflow, how do you know the query is failing?  that seems simple.  if the published data returned from the nested workflow is empty, then obviously the query failed.  too bad the link operators don't have any filters for stuff like "is empty" or "is not empty".


all isn't lost though.  to get the effect that we want, we simply have to know what to look for.  going to the nested workflow, we can use the query database object status as our criteria to branch appropriately.  if successful, the publish policy data object writes the expected server list.  if it runs into a warning or failure, we publish static text to a different publish policy data object in the form of "FAILED".
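the branch logic boils down to something like this sketch.  the "success" status and the "FAILED" sentinel mirror the setup above; the function itself is just an illustration of the two publish policy data paths, not anything opalis runs:

```python
def publish_from_nested(query_status, query_results):
    # mirror the nested workflow's branch: on success, publish
    # the server list; on warning or failure, publish the static
    # sentinel "FAILED" instead of silently returning nothing.
    if query_status == "success":
        return "\n".join(query_results)
    return "FAILED"
```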


back in the master workflow, we can now use the link operator to cull out anything that tries to come through with "FAILED".  if it matches the include filter, the policy processing stops.
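the include filter in the master workflow effectively performs this check (a sketch only; in opalis this lives in the link operator, not in code):

```python
def stop_policy(published_data):
    # the include filter matches the "FAILED" sentinel; anything
    # that matches gets culled so the rest of the policy never
    # runs against a bad server list.
    return published_data == "FAILED"
```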


 

addendum:

keep in mind that link operators do not have an "AND" operation.  instead the filters are evaluated as "OR" expressions.  however, the include/exclude tabs are separate so mixing and matching is a possibility, assuming you have the right content coming through. 
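a rough model of how i understand the filter evaluation: multiple filters on a tab are OR'd, and the include and exclude tabs are applied separately.  this is my reading of the behavior, not official semantics, and the helper below is purely hypothetical:

```python
def link_passes(data, includes, excludes):
    # includes: data must match at least one (an empty include
    # list passes everything); excludes: data must match none.
    included = not includes or any(f(data) for f in includes)
    excluded = any(f(data) for f in excludes)
    return included and not excluded

# either include filter matching is enough to pass the link
filters_in = [lambda d: d == "FAILED", lambda d: d == ""]
print(link_passes("FAILED", filters_in, []))  # True
```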

in the opalis client user guide, the trigger policy object section has a table with this description for the child policy status: "the status that was returned by the child policy."  it's important to note that, by default, a link operator coming off the trigger policy object filters on the status of the trigger policy object itself, checking it against "success", not on the child policy status.


if you're looking for the status coming from the child policy, you should change the link operator filter to look for something like this:

