Let's say you schedule regular host checks, and their results are cached. Now a service goes critical or a dependent host becomes unreachable. Nagios can consult the cached host state, and if it's recent enough, it may not have to run another "on demand" check, which improves performance.
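The horizon for "recent enough" is controlled in nagios.cfg. A minimal sketch (the value shown is just an assumption; tune it for your environment):

```
# nagios.cfg
# Reuse a host-check result up to 15 seconds old instead of running a new
# on-demand check. Set to 0 to disable cached host checks entirely.
cached_host_check_horizon=15
```

A larger horizon means fewer on-demand checks but staler reachability data, so there's a trade-off between performance and accuracy.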
I can tell you what I do, which seems to work well. For every host that I actually want to monitor services on, I always create a separate "ping" service definition with a one-minute interval. I do this so that when I look at performance graphs I always have a measure of latency. For hosts that are merely in the network path, I just define the host and make sure my dependencies are set so I will know where the issue is (see the following paragraph).
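A sketch of such a ping service definition (the host name, template, and thresholds here are placeholders, not from the original setup):

```
# Hypothetical example; adjust host_name, template, and thresholds.
define service {
    use                  generic-service
    host_name            web01
    service_description  PING
    check_command        check_ping!100.0,20%!500.0,60%
    check_interval       1     ; check every minute
    process_perf_data    1     ; keep RTA/loss data for latency graphs
}
```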
I don't schedule regular host checks at all. If your dependencies are set up correctly, Nagios will execute an on-demand host check when a host's services fail or a dependent host is down or unreachable.
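The key piece for distinguishing DOWN from UNREACHABLE is the `parents` directive on each host definition, which tells Nagios the network path. A minimal sketch (names and address are placeholders):

```
# Hypothetical example: web01 sits behind core-router. If core-router is
# down, Nagios marks web01 UNREACHABLE instead of DOWN and can suppress
# its notifications, so you see where the real problem is.
define host {
    use        generic-host
    host_name  web01
    alias      Web server 01
    address    192.168.1.10
    parents    core-router
}
```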
Of course there are about 1000 ways to skin this cat, but my goals are to get prompt alerts, suppress unnecessary notifications, and record performance data for important hosts and services. I feel this type of configuration strikes a nice balance between performance and detail: it's very efficient, and I can scan hundreds of hosts and almost 1000 services using a tiny fraction of the CPU on a dual-core Atom system.
BTW, one of the other things I do to improve efficiency is to put spool/checkresults and cache/nagios on tmpfs, so they are written to and read from memory rather than disk.
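That can be done with fstab entries like the following (a sketch; the paths assume a typical install under /var/nagios and the sizes are guesses, so adjust both to your layout):

```
# /etc/fstab
# Keep Nagios check-result spool and cache on tmpfs (RAM-backed).
# Note: contents are lost on reboot, which is fine for transient data.
tmpfs  /var/nagios/spool/checkresults  tmpfs  defaults,size=64m   0 0
tmpfs  /var/nagios/cache               tmpfs  defaults,size=64m   0 0
```

Make sure the mount points exist and are owned by the nagios user after mounting, since tmpfs starts empty on every boot.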