Setting up and using nsca


There is so much “this is how I did it” stuff out there that it just confuses people… Does anyone have a clean how-to for setting up nsca and running it successfully?


The README that comes with nsca gives a full, complete, precise explanation of EVERY step that is needed to use nsca.


Read that… great info…
But I’m looking for info on how the nagios server accepts the data and what it does with it after that, as far as displaying the data in the interface.


That readme explains how to set up the central nagios server. But if your question is “how does it do it”, there is no documentation on how nagios does what it does. You would have to look at the source and figure that out. I’m no programmer, so I’d be totally clueless as to “how” it does it. It just does. It reads the .cmd file in the rw folder, parses the info, and displays it. How it does it? Dunno.


The readme does not explain how to complete the installation on the nagios server; it gives detail on how to get it running. I’m looking for how the nsca daemon on the nagios server receives the data from the remote host, and how it gets configured to read that info so it shows up in the host detail. Basically, how do I tell it to parse the data being sent over?


The README that comes with NSCA is a complete description of how to set up the central server. I did nothing more, nothing less, than what is in that file:
make all

The binaries will be located in the src/ directory after you
run ‘make all’ and will have to be installed manually.

In other words, take the nsca file and put it someplace like /usr/local/nagios/bin/

Add a line to your /etc/services file as follows (modify the port
number as you see fit)

    nsca            5667/tcp        # NSCA

***** XINETD *****
If your system uses xinetd instead of inetd, you’ll probably
want to create a file called ‘nsca’ in your /etc/xinetd.d
directory that contains the following entries (a sample config
file called nsca.xinetd should be created in the root folder of
the distribution after you run the configure script):

    # default: on
    # description: NSCA
    service nsca
    {
            flags           = REUSE
            socket_type     = stream
            wait            = no
            user            = <user>
            group           = <group>
            server          = <nscabin>
            server_args     = -c <nscacfg> --inetd
            log_on_failure  += USERID
            disable         = no
            only_from       = <ipaddress1> <ipaddress2> ...
    }

Then restart xinetd:

    /etc/rc.d/init.d/xinetd restart

Add entries to your /etc/hosts.allow and /etc/hosts.deny
file to enable TCP wrapper protection for the nsca service.
This is optional, although highly recommended.

So, now that you have nsca defined in xinetd it will be started when you restart xinetd. It will then be listening on some port. When data is sent to nsca, it then passes the info to nagios. The format of that data is <host_name>[tab]<svc_description>[tab]<return_code>[tab]<plugin_output>[newline]
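As a concrete illustration, here is a minimal shell sketch of building one result line in that format. The host name, service description, and plugin output are hypothetical (borrowed from examples later in this thread):

```shell
# build one passive check result in the tab-delimited format nsca parses:
# <host_name>[tab]<svc_description>[tab]<return_code>[tab]<plugin_output>[newline]
printf '%s\t%s\t%s\t%s\n' "TDN-NDT" "CPU-Load" "0" "OK - load average: 0.10"

# in practice you would pipe that line to send_nsca instead of printing it:
#   printf ... | ./send_nsca -H <central_server> -c send_nsca.cfg
```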

If the data format matches something on the central server, then the host/service information is displayed. But of course, if you have no host/service description that matches, then there is no way it’s going to display anything. Check your nagios.log file for errors.


I’m past that step, and have been since before I started this thread… I’m sorry, but you’re not understanding what I’m asking.
I have nsca running on the central nagios server already (running with no problems). What I’m confused on is how to display the data being sent from the remote client running send_nsca.

I already have my define host set up in my config. How should I write the define service to display the data being sent over from the remote host?

check_command submit_check_result!% ???

In my checkcommands.cfg I have the command defined as so:

define command{
command_name submit_check_result
command_line $USER1$/eventhandlers/submit_check_result $HOSTNAME$ ‘$SERVICEDESC$’ $SERVICESTATE$ ‘$OUTPUT|$PERFDATA$’
}

Is this setup properly?
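For reference, the submit_check_result helper that this command_line invokes is typically just a small shell script that reformats its arguments and pipes them to send_nsca. Here is a minimal sketch, written as a function for illustration — the paths and the central server name are hypothetical, and the real sample script ships with the nagios distribution:

```shell
# minimal sketch of a submit_check_result helper; paths and the central
# server name in the comments below are hypothetical
submit_check_result() {
    host="$1"
    svc="$2"
    state="$3"
    output="$4"

    # map the $SERVICESTATE$ text to the numeric return code nsca expects
    case "$state" in
        OK)       code=0 ;;
        WARNING)  code=1 ;;
        CRITICAL) code=2 ;;
        *)        code=3 ;;   # UNKNOWN or anything unrecognized
    esac

    # emit the tab-delimited line in the format nsca parses
    printf '%s\t%s\t%s\t%s\n' "$host" "$svc" "$code" "$output"
    # in the real script, pipe the line to send_nsca instead of printing it:
    #   ... | /usr/local/nagios/bin/send_nsca -H <central_server> \
    #             -c /usr/local/nagios/etc/send_nsca.cfg
}
```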


The check command DESCRIPTION on the central nagios server must match.

My remote host is running a mini nagios (distributed server setup).
Its checkcommands.cfg is:
define command{
command_name submit_check_result
command_line $USER1$/eventhandlers/submit_check_result $HOSTNAME$ ‘$SERVICEDESC$’ $SERVICESTATE$ ‘$OUTPUT$|$PERFDATA$’
}

Its services.cfg is:
define service{
use generic-service ; Name of service template to use
host_name TDN-NDT
service_description CPU-Load
check_command check_local_load!15,10,5!30,25,20
}
So on the central nagios server, I must have a service/host that is almost identical.

Central nagios server services.cfg is
define service{
use TDN-CCI-service ; Name of service template to use
host_name TDN-NDT
passive_checks_enabled 1
freshness_threshold 1200
check_command service_is_stale
active_checks_enabled 0
check_freshness 1
service_description CPU-Load
}

Notice how the central server check command is not the same. That’s because the central server will NOT be actively checking; it only runs a check if the freshness threshold is reached. If it is reached, it runs the active check called “service_is_stale”, which simply shows a “service is stale” warning.

But also notice that the check description is IDENTICAL to what is on the remote host. That is explained in the docs if you wish to compare.


Hi all,

when I run send_nsca it says

[root@ bin]# ./send_nsca -H -c /opt/nagios/nsca/etc/send_nsca.cfg

Error: Timeout after 10 seconds

what could be the problem? anybody got that before?

thanks and regards,

[Edited Tue Jul 26 2005, 10:31AM]


I can see the PASV icon beside the service, but Status information says “Service is not scheduled to be checked…”
and Status is Pending…

on the central server
Warning: The results of service ‘http’ on host ‘host02’ are stale by 1 seconds (threshold=420 seconds). I’m forcing an immediate check of the service.
but nothing happened!

any clue? thanks.

[Edited Tue Jul 26 2005, 02:40PM]


It sounds like you didn’t follow the nsca readme. Perform those instructions on the remote and local nagios servers. It involves an xinetd restart also.

The results are stale because you have freshness checking enabled and the time limit has been reached. At that point, it will perform an active check on the service. So you tell us what the active check is, and perhaps we can tell you why “nothing happened”. Usually, people use a passive check on the central nagios server because they CANT execute active checks for that host/service. It may be behind a firewall, etc. So the only thing they can do, is passive. So that’s why the active check on the central server is usually just “service_is_stale”. Which is nothing more that a shell script that echo’s “serivce is stale” for an output. It doesn’t check anything at all.