Hey guys, I have a question about a possible workaround, or maybe a misconfiguration on my part.
I am trying to monitor a URL and associate it with a Host.
However the URL is calling the page from another host.
Example:
Host IP is 10.211.1.2 (the host I am trying to associate the service with)
URL is 10.35.1.2/….
I tried what you suggested and Nagios threw an output of:
check_http: You must specify a server address or host name
Usage: check_http -H <vhost> | -I <IP-address> [-u <uri>] [-p <port>]
       [-w <warn time>] [-c <critical time>] [-t <timeout>] [-L]
       [-a auth] [-f <ok | warn | critcal | follow | sticky | stickyport>]
       [-e <expect>] [-s string] [-l] [-r <regex> | -R <case-insensitive regex>]
       [-P string] [-m <min_pg_size>:<max_pg_size>] [-4|-6] [-N] [-M <age>]
       [-A string] [-k string] [-S] [-C <age>] [-T <content-type>] [-j method]
So I changed the -H to -I and there was still a 404 error:
HTTP WARNING: HTTP/1.1 404 Not Found - 1799 bytes in 0.150 second response time |time=0.149694s;15.000000;20.000000;0.000000 size=1799B;;;0
I tried removing the expect string (-s), just to test basic URL availability, and still got the above message.
It seems that Nagios just refuses to monitor the URL if the Host or IP does not match the URL.
This monitor does work if I put the URL's IP as the Host (-H) or IP (-I).
What I was thinking of doing is defining each host twice: once with the correct IP address for system monitoring, and once with the URL's IP for HTTP content monitoring. But that would be redundant and would crowd the Dashboard. Also, I have 100+ servers to configure like this, each with 70+ content check monitors, so I have to figure this out before I populate Nagios.
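For the record, to avoid that duplication I was also looking at applying one service definition to a whole hostgroup instead of cloning host objects. A rough sketch in Nagios object syntax — the group name, host names, and URI here are made up for illustration, and it assumes a pass-through command defined roughly as `$USER1$/check_http $ARG1$`:

```cfg
# Hypothetical: one service applied to many hosts via a hostgroup,
# instead of defining each host twice. All names and the URI are placeholders.
define hostgroup {
    hostgroup_name  soap-monitored
    members         appserver01,appserver02     ; ...the rest of the 100+ servers
}

define service {
    hostgroup_name      soap-monitored
    service_description Example content check
    check_command       check_http!-I 10.35.1.2 -u /example/uri -s PASS
    use                 generic-service
}
```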
Ah yes Luca, they are private addresses used for internal monitoring.
Here's how it's set up: the URL is served by a custom SOAP monitoring application server (10.35.1.2), as specified in the URL.
That server then calls the application's SOAP URL on the monitored server (10.211.1.2) and, instead of returning the extremely long SOAP results, applies its own logic to compare them and returns either a "PASS" or "FAIL" response. This is how we do most of our content checks. We configure our monitors in our primary monitoring tool (EM7) to look for the string "PASS".
So to answer your question: YES. If I paste the URL in a browser it works fine. It comes up with a PASS.
Should it be possible to monitor this kind of URL?
I tried to make it as basic as possible to see if Nagios would even return an OK status for the URL. No luck.
Like I said… it works, but only if the Host definition is set to the 10.35.1.2 address, and not the 10.211.1.2 address that the host actually has.
I contacted one of our Linux guys here and he helped me to solve the issue.
After some troubleshooting, he told me to use both the -H and -I options.
The -H to define the actual host name of the server I am monitoring, and the -I option to define the SOAP monitoring server. See below.
HTTP OK: HTTP/1.1 200 OK - 2554 bytes in 2.006 second response time |time=2.005849s;15.000000;20.000000;0.000000 size=2554B;;;0
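In case it helps anyone else, the shape of the working check looks roughly like this, wrapped in a command definition. The command name and the URI passed as $ARG1$ are placeholders, not our real ones:

```cfg
# Sketch of the working check. -H carries the monitored host's name
# (the server the service is associated with), while -I makes check_http
# actually connect to the SOAP monitoring server.
# command_name and the $ARG1$ URI are placeholders for illustration.
define command {
    command_name    check_soap_passfail
    command_line    $USER1$/check_http -H $HOSTNAME$ -I 10.35.1.2 -u $ARG1$ -s "PASS"
}
```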
To answer your last question. Our setup here is a unique one. We use that custom application server (SOAP) to check the application on our systems instead of going directly to the server. From the web page (10.35.1.2/liteclient2) there is a list of all of our servers, each with 70+ content check links. We just click the specific link for the server we want to check, in this case (ESRI_00-08PopGrwth_US_2D), it goes out to that server and returns a Pass or Fail.
I wanted to figure this out so that on the Nagios GUI, I can have all of the content checks listed under each server.
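Concretely, attaching each content check as a service on the real host object is what gets it listed under that host in the GUI. A minimal sketch — "check_soap_content" stands in for whatever wrapper command around check_http is defined, and the host name and URI are placeholders:

```cfg
# Hypothetical service attached to the real host (10.211.1.2), so the check
# appears under that host in the web interface even though the HTTP
# connection itself goes to the SOAP server (10.35.1.2).
define service {
    host_name           appserver01                 ; the monitored server
    service_description ESRI_00-08PopGrwth_US_2D
    check_command       check_soap_content!/example/uri
    use                 generic-service
}
```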
I appreciate your help. You really helped me to get my creative juices flowing, which helped the Linux guy figure it out.