Suppressing Service Alerts when Host is down


On average, I have 15-20 services per host. Whenever a host goes down (I don’t control all reboots, so notification suppression won’t apply), I get 15-20 service alerts plus the host alert. That is too many. Is there a way to suppress the service alerts when the host is down?

I already achieve this with the parents directive when an entire site goes down.



What max_check_attempts have you set up for your service and host checks? If it is 1, Nagios sends a notification immediately after noting that a service check failed. Because Nagios runs the host check on its own schedule, the host check is typically executed only after all the service checks have failed, so you get a notification for every service and only then one for the host. Setting max_check_attempts to 3 for service checks and 1 or 2 for the host check should give the desired behaviour: the host check has time to run and confirm the host is DOWN before the services exhaust their retries, so the service notifications are suppressed.
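A minimal sketch of what that might look like in the object configuration, assuming hypothetical host and service names (`web01`, `HTTP`) and the stock `generic-host`/`generic-service` templates and check commands:

```
# Host is confirmed DOWN quickly (2 attempts), while services retry
# longer (3 attempts), so the host check finishes first and Nagios
# can suppress the service notifications once the host is DOWN.
define host {
    use                 generic-host       ; assumed template
    host_name           web01              ; hypothetical host
    address             192.0.2.10
    check_command       check-host-alive
    max_check_attempts  2                  ; host confirmed DOWN after 2 failures
}

define service {
    use                 generic-service    ; assumed template
    host_name           web01
    service_description HTTP               ; hypothetical service
    check_command       check_http
    max_check_attempts  3                  ; services retry 3 times before notifying
}
```

The key point is only the relation between the two values: the host must reach its hard DOWN state before any service reaches its hard CRITICAL state.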


OK, so looking at the example configs, they show max_check_attempts as 10 for hosts and 3 or 4 for services.

So, following your recommendation, I should lower my hosts to 1 or 2.

Is that what most people are using (in general)?

Thanks for your help on this!


That could be the problem: if the services are checked 3 or 4 times and Nagios sees that they are all down, it sends notifications for them. Meanwhile the host check still has 6 or 7 attempts left, and only when those are done does the host notification arrive. After that you shouldn’t get any more service notifications.
Yes, try lowering max_check_attempts for hosts below max_check_attempts for services and see what happens. If it solves the problem, then we can say we found it and fixed it :slight_smile:
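Relative to the example configs mentioned above, the change would be just the host directive, sketched here with a hypothetical host name:

```
define host {
    use                 generic-host   ; assumed template from the example configs
    host_name           web01          ; hypothetical host
    max_check_attempts  2              ; lowered from the example value of 10,
                                       ; now below the services' 3 or 4
}
```

After editing, it is worth validating the configuration (e.g. with `nagios -v` against your main config file, wherever it lives on your install) before restarting Nagios.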