Either way, remember that bots/spiders don't always respect your meta tags or your robots.txt. Most legitimate crawlers, such as the Google, MSN, and Yahoo search indexers, will obey them, but less honest ones will often ignore your restrictions entirely unless they are backed up by a hard limit such as .htaccess rules or some kind of request rate limiter.
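As a concrete example of such a hard limit, here's a minimal .htaccess sketch, assuming Apache with mod_rewrite enabled, that refuses a crawler's requests outright no matter what robots.txt says. The "BadBot" user-agent string is just a placeholder for whatever misbehaving client shows up in your logs.

Code:
RewriteEngine On
# Return 403 Forbidden to any client whose User-Agent contains "BadBot"
RewriteCond %{HTTP_USER_AGENT} BadBot [NC]
RewriteRule .* - [F,L]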
Blocking crawlers can be beneficial for certain kinds of content, such as protecting an image gallery or file host from having its content leeched to death by hotlinking once it gets indexed. The same methods are worth looking into if you run, for instance, a forum with relatively personal content that doesn't need to be easy for strangers to find through a search engine. I actually use this approach myself to keep certain files on my site out of search engines: they are special-purpose scripts that hook into other APIs, so although they are public facing, they never need to be indexed, as only their corresponding APIs should ever access them.
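For the hotlinking case specifically, a rough .htaccess sketch along these lines serves images only when the Referer header is empty or points at your own site. It again assumes Apache with mod_rewrite, and example.com is a placeholder for your actual domain.

Code:
RewriteEngine On
# Allow requests with no Referer (direct visits; some proxies strip it)
RewriteCond %{HTTP_REFERER} !^$
# Block image requests referred from anywhere other than this site
RewriteCond %{HTTP_REFERER} !^https?://(www\.)?example\.com/ [NC]
RewriteRule \.(jpe?g|png|gif)$ - [F,NC,L]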
Here's a piece of one of my robots.txt files. It blocks all clients that respect robots.txt from accessing the named folders, as well as a named PHP file containing code I'm developing for later use in my billing system. Note that Disallow paths must begin with a slash to match anything.
Code:
# Applies to every crawler that honors robots.txt
User-agent: *
Disallow: /cgi-bin/
Disallow: /tmp/
# The path must start with "/" or the rule matches nothing
Disallow: /source.php