URLGenie Management Pack for SCOM – An Easy, Powerful Solution for Bulk Website Monitoring



Management pack and Guide available for download here:

URLGenie Management Pack for SCOM
Size: 6.25 MB
Version: 2.0.0.61
Published: February 9, 2024

Let me start by first saying that I was inspired to start this project after dissecting a very cool solution by Kristopher Bash. Without this excellent example, I would have never set out on this authoring journey.

A few years ago I was working for a large eCommerce company and I was responsible for monitoring a very large number of sites. I was tired of using the slow and cumbersome (although complex and powerful) built-in Web Application Transaction Monitoring wizard and wanted something faster and easier. My MCS buddy, Boklyn, suggested WebMon. Although that solution is definitely streamlined, I needed to be able to post SOAP messages to web services using authentication in addition to testing forms-based logins. I decided to build my own solution. This MP has been a work in progress for over 2 years (when this post was originally written in 2015). I have been slowly chipping away at my lengthy “to do” list for this pack only when my schedule would allow. The hardest part was finding the time to build a lab in Azure, followed by load testing, followed by extensive documentation with tutorials. After no small amount of testing, I finally feel like it’s ready for the community to use. See the MP guide for tutorials and load testing results. I hope this management pack makes your life easier. Enjoy.

Overview

The URL Genie Management Pack provides a fast and easy way to implement monitoring for a large number of URL instances, from only a few up to many thousands! There are also special features for monitoring sites that require client certificates, as well as pages that use forms-based authentication. With URLGenie you can easily configure monitoring for thousands of standard URL instances in less than a minute.

The URL instances and their respective monitoring criteria are instantiated on any number of “watcher” nodes from one or more XML configuration files. Any managed Windows computer can be activated as a watcher node with a simple Console task, during which the node is configured with the path where it should look for configuration files. There can be any number of configuration files, each with any number of requests defined within. Typically the configuration files will be centrally located in a single shared network folder. A good place for the shared configuration folder is on a management server or the data warehouse server, with all watcher nodes configured with the same shared folder path. This is the simplest and most scalable configuration.

*Please make note of the supported limit of URLs here.

There are standard monitors which target the HTTP and HTTPS class types. Each individual monitor will alert with plenty of alert-specific context information. This is significantly different from the Operations Manager standard Web Availability or Synthetic Transaction monitoring, which will only alert on the rollup and contains no specific or useful alert context information.

The standard monitors all support the various types of HTTP authentication: None, Basic, NTLM, Digest, and Negotiate. In addition, there are special monitor types which can be enabled for URLs that require a client certificate, or even for websites that use forms-based authentication, a first for SCOM (to my knowledge)!

Setup Overview: (from the URLGenie Management Pack Guide)

  1. Import Management Pack
  2. Decide where you want to store your configuration files. URL instances get discovered from one or more configuration files. You can store files locally on each watcher node, but my suggestion is to create a shared folder on your management server, data warehouse server, or other file server where you can read/write your configuration files. This way, any URL instances that you define in your configuration files can be discovered by any/all watcher nodes depending on how you configure the <watchers> tags. See the Parameters and Instance Properties section of the MP Guide, and the folder-creation sketch after this list.
  3. Activate 1 or more Watcher nodes. See “URLGenie EnableWatcherNode” task section of the MP Guide. Use the path from step 2 above.
  4. Create one or more configuration files. See the Configuration File Examples section of the MP Guide. Once you activate 1 or more Watcher nodes, the instance discovery (http) will run on the activated Watcher node(s). The discovery will attempt to gather instance info from the config files (if the <watchers> tag matches the server name), then create the URL instances, which become automatically monitored within a few minutes after the group population workflows complete. HTTPS discovery will occur once the initial HTTP instances have been discovered.
  5. Profits.
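
For step 2, a minimal PowerShell sketch of creating the shared configuration folder is shown below. The folder path, share name, and group names are examples only (they are not from the MP Guide); use whatever location and permissions fit your environment, and make sure the action/RunAs account on each watcher node can read the share.

    # Example only: create a folder and share it for URLGenie configuration files.
    # 'D:\URLGenieConfig' and the group names below are hypothetical; substitute your own.
    New-Item -Path 'D:\URLGenieConfig' -ItemType Directory -Force | Out-Null
    New-SmbShare -Name 'URLGenieConfig' -Path 'D:\URLGenieConfig' `
        -ReadAccess 'CONTOSO\SCOM Action Accounts' `
        -ChangeAccess 'CONTOSO\SCOM Admins'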

Monitors

URLGenie DNS Resolution Failure Monitor
URLGenie Reachable Monitor
URLGenie Certificate Expires Soon Monitor
URLGenie Error Code Monitor
URLGenie Certificate Expired Monitor
URLGenie Status Code Monitor
URLGenie IE Login Scripted Monitor
URLGenie Scripted Monitor
URLGenie Content Monitor
URLGenie Certificate Invalid Monitor
URLGenie CA Untrusted Monitor
URLGenie WatcherNode ConfigFile Path Test Monitor
URLGenie Response Time Monitor

URLGenie Watcher Request Dependency Monitor
URLGenie Aggregate Health Monitor

Rules

URLGenie Scripted Request ResponseTime Collection Scripted PerfRule
URLGenie HTTP ContentSize Collection PerfRule
URLGenie HTTP Scripted IE Login ResponseTime Collection Scripted PerfRule
URLGenie HTTP DNS Resolution Time Collection PerfRule
URLGenie HTTP Days to Certificate Expiration PerfRule
URLGenie HTTP Time To Last Byte Collection PerfRule
URLGenie HTTP Download Time Collection PerfRule
URLGenie HTTP Time To First Byte Collection PerfRule
URLGenie HTTP ResponseTime Collection PerfRule

Screenshots

Load test with 6,000 URL instances on one watcher node (a management server). Configured in minutes. (See the MP guide for more detail.)



Enable Watcher Node

1)  Execute EnableWatcherNode task



2)  Watcher node activation success

The output is verbose but we are looking for the success and verification of the config file path as shown below.



3) Watcher node is discovered.



Generate Configuration File

1) Start with a basic text file of URLs/addresses



2)  Run the task to generate the config file from this basic list.



3)  Config file is created successfully



4)  The configuration file is created with default parameters. Feel free to modify the settings as needed.
Notice the <watchers> tag contains “MS02”. Any watcher nodes that contain “ms02” (not case sensitive) in their name (FQDN) will be able to discover these instances.
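
The matching rule described above can be illustrated with a quick PowerShell check. This is not the MP’s actual discovery code, just a sketch of the case-insensitive substring comparison it performs:

    $watchersValue = 'MS02'               # value from the <watchers> tag in the config file
    $nodeFqdn      = 'ms02.contoso.com'   # FQDN of a candidate watcher node
    if ($nodeFqdn -like "*$watchersValue*") {   # -like is case-insensitive by default
        "This watcher node would discover the instances defined in this config file."
    }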



URL Instances



 Health Explorer



Critical Context Info



Alert View



Get Certificate Info Task (for https instances)



Forms Login Test

The Scripted IE Login monitor is able to simulate a login, even a 2-page login sequence (1:username, 2:password) like the example below.

Overview: To configure the IE Login Scripted Monitor

You will need to find the HTML IDs of the elements designated below. You can find the element IDs by enabling “developer mode” (Internet Explorer: F12, Chrome: CTRL+SHIFT+I) and selecting the objects to find the section of source code that defines the element properties. See the management pack guide for the full tutorial.

Developer Mode (Internet Explorer)

Once you identify the form fields and buttons, you can override the monitor with the necessary values.
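
To make the override values concrete, here is a simplified, hypothetical sketch of the IE COM automation approach the monitor’s script is built on. This is not the shipped script; the element IDs and the account name are placeholders for whatever you found in developer mode.

    # Simplified sketch only; element IDs and the account are placeholders.
    $ObjIE = New-Object -ComObject 'InternetExplorer.Application'
    $ObjIE.Visible = $false
    $ObjIE.Navigate('https://login.live.com/login.srf')
    while ($ObjIE.Busy -or $ObjIE.ReadyState -ne 4) { Start-Sleep -Milliseconds 250 }
    $ObjIE.Document.getElementByID('usernameBoxId').Value = 'monitor.account@yourdomain.com'
    $ObjIE.Document.getElementByID('nextButtonId').Click()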

Below is an example of a login test with a TWO URL sequence:
https://login.live.com/login.srf^https://account.microsoft.com/devices

The first URL will open the login page (the first of two login pages; 1 for username, 2 for password).
The second URL is the actual target page that will be monitored for a content match: “find my device”.


Example of a forced content match error:



IE Login Test ContentMatch Failure Alert

The alert description indicates that the login test was successful but there was a content match error.



IE Login Monitor Healthy

After the ContentMatch field is set to a value that is expected to appear on the target page, the monitor returns to healthy and you can see the expected site text in the context area of Health Explorer.



 Example Email Notification (See this blog post for more details)

Severity: Critical Error

Alert Description: Content Validation Error

Monitor Settings:
URL: https://www.qnap.com
ContentMatch String: h
GroupID: URLGenie_Default
DNSResolutionTime: 0
Interval: 300
RetryCount: 1
Wiki: No link provided
Description: URL address to monitor.

******* Request Headers *******
GET / HTTP/1.1
User-Agent: Mozilla/5.0 (compatible; MSIE 9.0; Windows NT; Windows NT 6.1; en-US)
Content-Type: text/xml;charset=utf-8
Accept-Language: en-US
Accept-Charset: utf-8
From: SCOM@yourdomain.com
Connection: Keep-Alive

******* End Request Headers *******

******* Response Headers *******
ResponseHeaders: HTTP/1.1 200 OK
Connection: keep-alive
Date: Sat, 09 May 2015 00:09:25 GMT
Content-Length: 1996
Content-Type: text/html; charset=UTF-8
Last-Modified: Tue, 21 Apr 2015 13:25:30 GMT
ETag: "4236f2-7cc-5143bfafc0680"
Server: Apache

******* End Response Headers *******

Command Channel: MyDev
Source: https://www.qnap.com
Path: MS02.Contoso.com;\\Db01.contoso.com\scom_do_not_touch\URLGenie
Principal Name: MS02.Contoso.com
Alert Name: HTTP Request Error: Content Validation Error
Alert Resolution State: New (0)
Alert Monitor Name: URLGenie Content Monitor
Alert Monitor Description: None
Time Raised: 5/8/2015 5:09:25 PM
Alert ID: b1980341-a92d-453a-8fa7-75bc8e8fc003
SCOM Operations Console Info: Operations Console Login Info
SCOM Web Console link: Web Console
Research It: Bing It!

ADMIN DIAGNOSTIC INFO
--------------------
**Use this command to view the full details of this alert in SCOM Powershell console:
get-SCOMalert -Id "b1980341-a92d-453a-8fa7-75bc8e8fc003" | format-list *

This email sent from: MS01.Contoso.com

CF1: Alert.NetBIOSName: MS02
CF2: Alert.NetBIOSDomain Name: CONTOSO
CF3: Alert.PrincipalName: MS02.Contoso.com
CF4: SubscriberList: Subscribers:TEST Subscriber;
CF5: Management Pack Name: URLGenie Management Pack
CF6: Alert Class Name: URLGenie.HttpRequest
CF7: Alert.Category: AvailabilityHealth
CF8: Workflow Type: MONITOR
CF9: RESERVED
CF10: Helpful POSH Command: get-SCOMAlert -ID "b1980341-a92d-453a-8fa7-75bc8e8fc003" | format-list *

Context:
Note: This context data is only relevant to the moment/time at which this alert was sent.

type : Microsoft.SystemCenter.WebApplication.WebApplicationData
time : 2015-05-08T17:09:25.0005666-07:00
sourceHealthServiceId : 120875E8-C79E-91CA-E3AC-EA09A391F4BD
RequestResults : RequestResults
TransactionResponseTime : 0.0024875
TransactionResponseTimeEvalResult : 0
CollectPerformanceData : CollectPerformanceData

Knowledge Article:


Summary

The context information for these alerts is not always helpful. To see more detailed information about this alert, log into the console for the management group.

Wiki


Note:

The context information provided in this notification is limited and is not always helpful. To see more detailed information about this alert, log into the appropriate SCOM console for the applicable datacenter (SCOM management group) and use the Health Explorer to find more details about the state change event for the object.




Please let me know if you have any suggestions for improvements. Contact info is in the MP guide or use the Contact Us page.

Version History:

  • 2024.08.16: v2.0.0.61
    Fixed counter name for rule: URLGenie HTTP Time To Last Byte Collection PerfRule. Changed ‘ms’ to ‘Sec’.
    Fixed counter name for rule: URLGenie HTTP ContentSize Collection PerfRule. Changed ‘bytes’ to ‘Bytes’.
    (Thanks, Nicolae!)
  • 2024.02.09: v2.0.0.58
    Get-WebsiteCertificate_PB.ps1, Get-WebsiteCertificate_Task.ps1:
    Added region agnostic calculation for expiration. Added UserAgent param.
    Get-WebsiteCertificate.ps1:
    Added region agnostic calculation for expiration. Added UserAgent, RequestTimeoutSeconds params. Improved formatting of output.
  • 2021.07.20: v2.0.0.47
    Updated the Scripted unit monitor – Added support for -UseDefaultCredentials and -UseBasicParsing.
  • 2020.11.19: v2.0.0.46
    Updated the alert view to include all except Closed.
    Added link to task in all https unit monitors.
  • 2020.05.05: Version 2.0.0.44.
    Added DisplayName and Description to MP properties.
    MP Guide minor updates.
    Improved script ‘EnableURLGenieWatcher.ps1’ with more logging, better logic and output for user.
Added missing overrides for the additional performance collection rules.
    Improved ‘ScriptedIELoginContentMatch.ps1’. Added Trim() to some vars that decided to cause problems.
  • Full history available in MP guide.

Page History:

  • 2018.10.23: Updated screenshots and content to reflect the version 2 updates.
  • 2017.3.16.1330: Added updated MP version info.
  • 2015.11.6: Fixed a typo.

42 Replies to “URLGenie Management Pack for SCOM – An Easy, Powerful Solution for Bulk Website Monitoring”

  1. I created my watcher node, imported my 4 URL files, and it created my 4 configuration XML files. All 4 files have the same settings; the only things that differ are the URLs. For some reason, it is not creating the web monitors from 3 of the 4 files. Unfortunately, the Operations Manager event log on the system has nothing to go on. Does URLGenie write any logs I could look at to figure out why it’s not importing some of my config files?

    1. Update: I have removed all but 1 web monitor in each configuration XML file. They have now been created. I am now slowly re-adding them to the files. Is there possibly some content or strings that URLGenie does not like and, as a result, refuses to import the entire file?

      1. Hi Chris,
        The discovery (URLGenie.HttpRequest.Properties.Discovery.1) is pretty good about not choking on the fields IF you wrap risky text in CDATA tags. Have a look at the example Requests file and you will see an example of how to wrap your text in CDATA. Also, you can override the discovery with WriteToEventLog=true. This should produce some info in the Operations Manager event log to help identify any potential issues.

  2. Brilliant MP, Tyson Paul! MS support directed me to this because they could not answer my question regarding a SCOM Web Application Transaction Monitoring test. Sadly, I’m thinking this will not either. I’m looking to insert a pause between requests to allow server think time. When running in a browser, the client actually polls a webservice every 2 seconds and waits for a return value of ‘2’ before submitting the next request. Of course this IF THEN logic is too much for the native WATM player. I suppose I could just wait a certain amount of time before submitting my next request. I’ve played around with different combinations of SystemCenter.WebApplication.UrlProbe flags; the most promising seemed to be 5, but that seems to have absolutely no effect no matter what value I put in it. Also fooled around with 5 and 60 with no luck.

    Update: It seems the comment form removed the XML tags; it should say “most promising seemed to be <ThinkTime>5</ThinkTime>” and “Also fooled around with <RetryCount>5</RetryCount>, and <RetryCount>60</RetryCount>, with no luck”.

    1. @Adam,
      I have no idea what this means:
      “When running in a browser, the client actually polls a webservice every 2 seconds, and waits for a return value of ‘2’ before submitting the next request. Of course this IF THEN logic is too much for the native WATM player. I supposed I could just wait a certain amount of time before submitting my next request.”

      When running what in a browser? What client? What is this return value of ‘2’ to which you are referring? Are you referring to some other kind of test that you are doing?

      I’m trying to understand your scenario; you didn’t give me much to go on so I’ll make some assumptions:
      1) You are using the WATM template
      2) You have configured multiple Requests
      3) You wish for the URLProbe to pause X seconds between testing of the Requests
      4) You have tried modifying the XML in the PA (Microsoft.SystemCenter.WebApplication.UrlProbe) directly; specifically ThinkTime and RepeatCount but you are not convinced that those parameters have any effect.

      If the assumptions are correct…
      In theory, the ThinkTime parameter should cause the probe to pause between making the request and collecting the body from that request, not pausing after finishing one request and beginning another, at least according to the documentation here: https://docs.microsoft.com/en-us/system-center/scom/url-probe-schema?view=sc-om-2019
      ThinkTime [Integer] Amount of time to wait between the request and collection of the body.

      Anyway, since ‘5’ and ’60’ had no noticeable effect, did you consider that perhaps this parameter is in milliseconds? Try 10000 and see if you get the desired effect. Post back here to let me/us know your findings. Good luck.

  3. Just went through the blog… Brilliant job, Tyson Paul! Really hoping that this will help all the SCOM technicians who are looking to configure hundreds of URLs for monitoring in one go. Will import the MP, perform some testing, and post any questions I may have.

    Thank you so much !

  4. Thank you very much for this blog. This is a very powerful tool!

    Internet Explorer’s scheduled End of Life is August 17, 2021. Does URLGenie work with Microsoft Edge? We have a total of 44 websites that will require scripting (presenting username and password) before we can access the certificate for monitoring. Before we invested our time and effort writing these scripts, we just wanted to know URLGenie’s life expectancy beyond August 17, 2021 since it’s heavily integrated with IE. Thanks.

      1. Any update to the IE issue? We have two application URLs where the client is not ready to enable them on IE as it’s going out of support, and here SCOM and URLGenie are using IE, so I am stuck. Any suggestions on how to monitor availability and transactions for URLs that are not enabled on IE, but on Edge, Chrome, Firefox, etc.?
        We do have another tool, SolarWinds, in our environment. Would that help here?

        1. @Madhuri,
          I’ve read a little bit about WebDriver and Selenium for Edge so there might be some hope down the road. However, I don’t have a ton of time to poke at this right now. This and pretty much everything else provided on this blog are “as free time permits”. :-/

  5. I am getting an error when enabling a watcher: New-Guid is not a recognized command.

    The task is looking for PowerShell version 3, but it seems that this command was introduced in version 5.

    I will be upgrading the powershell version on these machines but wanted to point this out.

    New-Guid : The term 'New-Guid' is not recognized as the name of a cmdlet,
    function, script file, or operable program. Check the spelling of the name, or
    if a path was included, verify that the path is correct and try again.
    At line:33 char:28
    + $ThisScriptInstanceGUID = (New-Guid).Guid.Substring(((New-Guid).Guid.Length) -6 …
    + ~~~~~~~~
    + CategoryInfo : ObjectNotFound: (New-Guid:String) [], CommandNotFoundException
    + FullyQualifiedErrorId : CommandNotFoundException

    1. @Stephen,
      Thanks for the info. I use the GUID fragment to uniquely identify the instance of the script that is running. I can change that line to use a more version-friendly approach to creating a unique string. I’ll include it in the next update, whenever that is.
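
      For reference, a version-friendly equivalent that works on PowerShell 2.0 and later might look something like the sketch below (the shipped fix may differ):

      $guid = [System.Guid]::NewGuid().ToString()
      # take the last 6 characters as the unique script-instance suffix
      $ThisScriptInstanceGUID = $guid.Substring($guid.Length - 6)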

  6. Can someone help me remove this MP please? There is a dependency on the SecureOverrides MP, yet there are no Run As Accounts associated with any of the URLGenie profiles in the console. The Secure Overrides MP still contains these lines:

    00C21B6FE41C2B3F893EEB31AF24AFC48FDFF1855F00000000000000000000000000000000000000

    0083485CBCF7B1EC6E109F2D17D192835AE8F8235200000000000000000000000000000000000000

    0083485CBCF7B1EC6E109F2D17D192835AE8F8235200000000000000000000000000000000000000

    Should I remove these lines manually? Is that safe?

    1. OdgeUK,
      There are plenty of examples/tutorials online for removing obsolete references. It sounds like this is almost the same scenario. Read some tutorials on this topic first.
      Here’s a rough outline of how to clean up the unsealed SecRef MP:
      Back up the SecureReference MP. Make sure you have a copy stored safely somewhere just in case.
      In the SecureReference xml, remove the elements that contain a reference to the URLGenie alias.
      Remove the element that contains the URLGenie alias in the Manifest.
      Save the SecRef MP.
      Import the SecRef MP.
      Once it is digested by the Console there should no longer be a dependency between the URLGenie MP and the SecRef MP.
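
      For the backup/export and re-import steps, a minimal PowerShell sketch is below. The MP name is assumed to be the default Microsoft.SystemCenter.SecureReferenceOverride and the paths are examples; verify both in your environment.

      Get-SCOMManagementPack -Name 'Microsoft.SystemCenter.SecureReferenceOverride' |
          Export-SCOMManagementPack -Path 'C:\Temp'
      # ...edit the exported XML as described above, then re-import it:
      Import-SCOMManagementPack -FullName 'C:\Temp\Microsoft.SystemCenter.SecureReferenceOverride.xml'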

  7. We’re moving to ADFS as our SSO solution across the board. The ADFS authentication page presents two-stage authentication, which is so common these days, via forms-based authentication. URLGenie appears to be our best hope since Microsoft appears to have long abandoned support of the built-in website monitoring MPs and underlying HTTP engine. Any tips for getting there?

    1. @Walt,
      If you can provide an example of how to do this reliably with PowerShell then I can likely bake it into the management pack.

  8. Hello! I’ve run into an issue with the ability to monitor pages that require logging in. I’ve narrowed it down to either a focus issue when the login boxes have placeholder text, or using the wrong method:

    On login boxes that have placeholder text:

    If you don’t .focus() the box, the browser does not clear the placeholder text; instead, when you run the .innertext = data, it replaces the placeholder text and the form will fail to log in as it doesn’t believe any valid data has been put into the box (due to the required field). To get around this, you need to focus all input boxes before putting data in them so that it clears any placeholder text:

    $ObjIE.Document.getElementByID('password').focus()
    $ObjIE.Document.getElementByID('password').focus()

    OR

    You need to use .Value instead of .InnerText

    $ObjIE.Document.getElementByID($UsernameElementID[$a]).Value = $Username
    $ObjIE.Document.getElementByID($PasswordElementID[$a]).Value = $Password

    Note that your verbose output for password success says you use .Value but you actually use

    $ObjIE.Document.getElementByID($PasswordElementID[$a]).InnerText = $Password

    which leads to the focus issue above.

    1. I can second the comment on .Value. I’m currently trying to sign into a web page, and by using the developer tools in Chrome I can see that only by using document.getElementById("username").value is the submitted username actually written into the text box.

      @Jacob: .focus() is only used with elements but not for input. By changing the value of an input box the old content will be overwritten anyway.

      @Paul: is that something you can look into, i.e. using .value instead of .innerText?

      1. I’ve done some further investigation and added the following code into the “If ($UsernameElementID[$a])” section. Interestingly enough I now can see the correct values set in the event log (did the same for the password)

        if ($ObjIE.Document.getElementByID($UsernameElementID[$a]).InnerText -ne $Username) {
        $ObjIE.Document.getElementByID($UsernameElementID[$a]).value = $Username
        }

        For some strange reason it seems as if the form is submitted right after pasting the username. At least I can see the error message that the website returns when incorrect credentials are provided, and that is even before the password was pasted. So I looked further through the code and found the following command commented out:

        # $ObjIE.Document.getElementsByName($ClickButtonName[$a])[0].Item().Click()
        That is in the “click button” section of the code. Not sure if that is correct. But I couldn’t figure out why it would submit immediately after entering the username. Looks like this particular website is hard to log in to programmatically.

  9. Hi,

    I ran into an issue with the scripted login.
    The monitor returns this error:
    InnerText Navigation to the webpage was canceled What you can try: Refresh the page.Refresh the page.

    I’ve followed the guide exactly and tested everything manually with the configured RunAs account for the PowerShell script, and that works fine. The credentials for the Basic authentication are correct as well.
    Finally, all IDs and the logout page URL have been identified and configured in the override.
    I’ve done a few tests with different login websites (internal to the company) and all end up with the same error.

    Any ideas?

    Much obliged.

  10. Great Management Pack. I’ve got an enhancement request: it would be great if you could specify exclusion times for which monitoring is not performed. For example, a URL should only be monitored within the business hours, weekdays from 6:00am to 6:00pm. Something like that, so that you can define when monitoring should happen or maybe even excluded (like regular nightly webserver restarts 3:00am-4:00am).

    1. @Holger,
      That feature is already present. It’s called ‘Maintenance Mode’ and you can schedule maintenance windows as you requested!

  11. Hi Tyson, I tried to upgrade a URLGenie install that was previously configured for a management server (“MS”) that no longer exists (the new MS is SCOM 2019 and the old one was SCOM 2016; I thought I could add a 2nd MS to the network after reading that 2016 & 2019 could run side-by-side, which they cannot).

    Anyway, after importing the latest MP, I can’t configure the watcher node; none of the Tasks appear and those that do are all greyed out. Can I fix this by fiddling with the registry?

    1. @Al,
      I’ve read your comment over and over and I can’t make much sense of it. You can enable (or ‘reconfigure’) a Watcher Node while one or more Windows Computer objects are selected. The task, “URLGenie EnableWatcherNode” will appear in the tasks.
      It sounds like your management group may have larger problems unrelated to this management pack.

      1. @Tyson, I wanted to give you some context for my dev environment. The bottom line is that when I look at the “Watcher Nodes State” entry, the Tasks pane only shows the standard State Actions. All three are greyed out. None of the UrlGenie specific actions appear. Is there a way to get them back?

        1. I can’t explain what you are seeing. Perhaps you’ve got orphaned objects. I DO know that this MP works perfectly with a healthy SCOM 2016 and 2019 environment.

  12. Hi,
    First of all, I would like to say thank you for this great MP.
    We have 3 watcher nodes, and recently we made a transition of configuration files from one watcher to another.
    For some reason, instances that were monitored by the first watcher are still displayed in the Console as if they were still monitored under it, even though we actually moved the configuration file to another server and duplicate instances were created.
    We tried to clean the SCOM agent cache and reboot the server, but this is not helping.
    I will be happy to get your help!

    1. @O.b.D,
      This symptom is by design so that your discovered instances don’t become undiscovered upon failure to read the configuration folder/file(s). The solution is best used with a single configuration folder, a shared folder, so that ALL watcher nodes can feed from the single location. It would eliminate this kind of thing.

      To solve the issue, make sure the discovery is still enabled. Restore the original configuration folder (and access rights of the RunAs account, if applicable) so that the discovery is able to complete/exit gracefully. The original Request file doesn’t need to exist, but the agent must be able to scan the original folder to effectively UNdiscover the original URL instances, because the configuration info will no longer exist in the folder and/or any of the files within. Once the discovery workflow runs successfully and detects that the original instance data no longer exists, it will UNdiscover the instances.

      1. Thank you very much for your reply.
        What do you mean when you wrote “Restore the original configuration folder”? That I need to somehow get the watcher server to re-identify the configuration file? By running the “disable watcher node” task, or by removing the permission to the shared folder? And after one of those actions, restore the situation to its former state and see the result?

        1. Typically instances get removed/deleted from SCOM by one of the following ways:
          1) The hosting object is removed.
          Example: A Microsoft.Windows.LogicalDisk is hosted by Microsoft.Windows.Computer. (graph located here) If you delete the agent for the computer from SCOM, the MWC object is automatically deleted and everything that lived (was hosted) on it, including the logical disk is deleted from SCOM.

          2) Discovery runs and no longer discovers the instance. The instance becomes UNdiscovered (deleted from inventory because it must not exist if the discovery cannot detect it).
          Pretty much every object in SCOM becomes discovered (comes to exist in inventory) by a discovery workflow. The workflow might use a registry probe, WMI query, or maybe a PowerShell script to return discovery data for specific types of class instances. One exception might be the Microsoft.Windows.Computer type because it is automatically created upon agent installation.

          The URLGenie MP uses a discovery that relies on a PowerShell script to read URL configuration information from one or more XML files in the designated folder. The designated folder path is set as part of the Enable Watcher Node task. That path gets discovered as a property of the Watcher Node.

          So first, the Configure Watcher Node task is run, targeting a Windows Computer. The config folder path you provide is stuffed into the registry on the watcher node. It’s just a string value.
          The Watcher Node discovery automatically runs on every Windows Computer. If that special registry value exists, the Watcher Node gets discovered and added to inventory; the ConfigFilesPath is a property of the new WN instance. Once the Watcher Node becomes discovered, a target exists for the URL discovery (which targets the new Watcher Node class instance), and that discovery uses the ConfigFilesPath of the WN to know where to look for Request.xml files. So basically, all workflows have a target class type. Once an instance of that target type exists, the workflow(s) will run on behalf of that instance because now the workflows have a reason to live.

          Now, more about the URL instance discovery, there are technically 3 discoveries, but you need only concern yourself with the first one at this point: URLGenie.HttpRequest.Properties.Discovery.1, this is the only one that is enabled and relevant for this explanation. As I mentioned above, the discovery uses a PowerShell script to read URL configuration information from one or more XML files in the designated folder (using the ConfigFilesPath of the Watcher Node property). The designated folder must exist and the RunAs account (whichever security context is running the posh script) must have Read access to the contents of the folder. If the posh script is unable to read the contents of the designated folder/files for whatever reason, the discovery will NOT complete gracefully. In other words it will NOT return proper discovery data and the existing URL instances will not be affected. Nothing will change.

          However, if the folder is readable but no files exist and/or no applicable elements exist for the watcher in any config files, the discovery should exit gracefully with proper discovery data, but the data will be empty. This will effectively tell the mgmt server that discovery happened successfully but NO instances exist. If any previously discovered instances exist, they will become UNdiscovered. This is only true for the same discovery workflow which targets the same instance. This is why you can simply remove an element from one of the config files and that request will become undiscovered at the next discovery interval, but only if the discovery workflow completes gracefully.
          This is all by design. Why? Consider a scenario where the folder or config files are not accessible for some reason (network issue, NTFS permission change, RunAs account lockout, etc), you wouldn’t want all of your instances to disappear leaving you with no monitoring.

          Now, onto your issue. You said that you moved your config files from one Watcher Node to another.

          “We have 3 watcher nodes, and recently we made a transition of configuration files from one watcher to another.
          For some reason, instances that were monitored by the first watcher are still displayed in the Console as if they were still monitored under it, even though we actually moved the configuration file to another server and duplicate instances were created.”

          For the original instances to become UNdiscovered here are a couple options:
          1) Remove/disable the original Watcher Node. You can do this by launching the URLGenie DisableWatcherNode agent task for the original WN. This will remove the special registry value. When the WN discovery runs at the next interval (every 10 mins by default), it will UNdiscover ALL URL instances hosted by that WN. If the WN is removed, so is everything that is hosted by it, including the URLs (see this graph for reference.)

          2) If the original Watcher Node is still needed/wanted then you simply need to make sure ConfigFilesPath location is still accessible by the original Watcher Node. You can verify this access by using the URLGenie Test Folder Path agent task on the Watcher Node. If the folder contents are readable, then URL discovery should run gracefully. If no Request data exists, the existing instances for the WN should become UNdiscovered.

          I hope this helps.
          -Tyson

  13. Hi Tyson,

    If I search for ContentMatch: "status": "UP", it’s not working.
    Words without double quotation marks work fine. But if there is a double quotation mark, it shows me an error.
    But I need to match a line that has double quotation marks.
    How can I do this?

    Regards
    Alican

    1. Override the ContentMatchRegexOperator parameter, use MatchesRegularExpression for the override value.
      Then use this for the regex: "status":\s*"UP"

      There’s a good chance that will work for your purposes.
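
      You can sanity-check the pattern at a PowerShell prompt before creating the override; the sample body below is just an illustration:

      '{"status": "UP"}' -match '"status":\s*"UP"'   # returns True when the pattern matches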

  14. Can you make URLGenie compatible with the Edge Browser pleaseee? ☹
    It has problems with Internet Explorer ☹ When I open a web page with the Edge browser, it works. But when I open it with IE, it has a security risk and generates a critical alert in SCOM.

    And logging in to a Login form doesn’t work (URLGenie IE Login Scripted Monitor). Can you please fix this function and write a more detailed guide? I did everything according to your guide and checked 10 times to make sure I did everything right. But it always shows up as an error.

    1. @Hazal,
      There’s not really a good way to automate interaction with the Edge browser through SCOM (PowerShell). To be honest, using PowerShell to automate Internet Explorer was never very good either. A better approach would be to develop your own script, maybe leveraging the Selenium WebDriver with PowerShell. Run your tests and then dump the results to the Windows event log, then use an event log rule or monitor to grab perf/health data. Trying to write a “one size fits all” script, as I did previously, was a nightmare. Collecting data from a Data field of the OpsMan event log is easy. I don’t foresee myself making any updates to the scripted browser workflow anytime soon unless a customer requests it through a support contract (so I can justify the labor), at which point I might be convinced to look deeper at feasibility.
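
      As a rough sketch of that pattern (the source name, event ID, and message format below are made up; pick your own):

      # One-time, elevated: register a custom event source for your test script.
      New-EventLog -LogName 'Operations Manager' -Source 'CustomUrlTests' -ErrorAction SilentlyContinue
      # After your Selenium/Edge test completes, write the result so a SCOM event rule or monitor can pick it up.
      Write-EventLog -LogName 'Operations Manager' -Source 'CustomUrlTests' -EventId 9100 `
          -EntryType Information -Message 'LoginTest=Success;ResponseTimeMs=1234'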

  15. Hey, Tyson,

    The most recent version is a .mpb, but the only thing in the bundle is the .mp. There don’t seem to be any resources in it. Was this done for a reason?

    Thanks!

    1. @Scott,
      I’ve gotten into the habit of building all my recent management packs into MPBs. Most of them actually have additional resources and/or the posh scripts are flagged as resources so they can be included and unsealed whole (as .ps1 files). That being said, in this case there’s nothing missing and it shouldn’t affect functionality. Perhaps next time I make a new build I will flag all the scripts as resources just as a “nice to have”. I hope you are enjoying the solution.
