Or, x10 hosting has a clustered server arrangement they use to pool the required resources. With that they could probably do the whole job with only one, maybe two, such clusters, but they would still run into the same software limitations.
I'm curious about this myself, since you have a valid point: with Linux being open source, surely someone out there has a fix for the UID limitation, especially if some hosts really do have thousands of users on a single machine.
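For what it's worth, the ceiling probably isn't in the kernel itself: modern Linux stores UIDs as 32-bit unsigned integers, so the hard limit is over four billion. The practical limits come from userland defaults. A quick sketch (the login.defs check assumes a shadow-utils based distro; the file may not exist elsewhere):

```shell
# Kernel-level ceiling: UIDs are 32-bit unsigned, so the highest
# representable UID is 2^32 - 1 (a few values near the top are reserved,
# e.g. 65535 and 4294967295 as "nobody"/overflow UIDs).
echo $(( (1 << 32) - 1 ))    # prints 4294967295

# Userland defaults for ordinary accounts live in /etc/login.defs
# (assumption: a distro using shadow-utils; fall back gracefully if not).
grep -E '^UID_(MIN|MAX)' /etc/login.defs 2>/dev/null || echo "no login.defs here"
```

So if a host is hitting a wall at a few thousand accounts, it's more likely control-panel or tooling assumptions than the UID space itself.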
But in practice, by the time you actually have thousands of users in a production environment, you should be making enough money off them to afford a fleet of servers and the staff to manage them. On the bench it might be a different story, but on the bench you can safely ignore overselling effects, because none of the resources are actually being used.
I think the best way to find out is to just get in there and do it: develop a script to automate the setup and teardown so you can experiment with different stress levels and see what happens. I'd love to hear the results. Even though I'm never going to oversell my own stuff, there may come a time when I have hardware large enough to actually put thousands of people on the same operating system install, whether that's clustered hardware or one really big machine with enough resources to go around.
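A setup/teardown script like that could be very simple. Here's a hypothetical sketch (account names, the count, and the dry-run switch are all made up; useradd/userdel need root, so it defaults to just printing what it would do):

```shell
#!/bin/sh
# Hypothetical stress-test helper: create and remove N throwaway accounts.
# DRY_RUN=1 (the default) prints the commands instead of executing them,
# since useradd/userdel require root.
N=${N:-5}
DRY_RUN=${DRY_RUN:-1}

run() {
    if [ "$DRY_RUN" = 1 ]; then echo "$@"; else "$@"; fi
}

setup() {
    i=1
    while [ "$i" -le "$N" ]; do
        run useradd -m "stresstest$i"
        i=$((i + 1))
    done
}

teardown() {
    i=1
    while [ "$i" -le "$N" ]; do
        run userdel -r "stresstest$i"
        i=$((i + 1))
    done
}

setup
teardown
```

Bump N up by a few thousand, drop DRY_RUN, and time each phase; the interesting part is watching where logins, cron, and the control panel start to groan.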
Also, I can see how having such huge numbers of clients can make the server difficult to manage. Almost every daemon has to re-read its configuration any time it is started or restarted, and Apache in particular is likely to take a long time, because each account adds its own entry (a VirtualHost block) to the config. MySQL is in a similar spot with each account's users and grants, though those live in its privilege tables rather than a flat config file.
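To see why reloads get slow, picture one VirtualHost stanza per account: the config Apache must re-parse grows linearly with the customer count. A hypothetical illustration (domain and path naming here is made up):

```shell
# Emit one minimal Apache VirtualHost stanza per account, to show how
# the config file scales with the number of customers.
gen_vhosts() {
    n=$1
    i=1
    while [ "$i" -le "$n" ]; do
        cat <<EOF
<VirtualHost *:80>
    ServerName user$i.example.com
    DocumentRoot /home/user$i/public_html
</VirtualHost>
EOF
        i=$((i + 1))
    done
}

# 3 accounts -> 3 stanzas; 5000 accounts -> 20000 lines to re-parse
# on every graceful restart.
gen_vhosts 3
```

At a few thousand accounts that's tens of thousands of config lines, plus a directory lookup for each DocumentRoot, every time you restart.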
Plus, if your server runs PHP as CGI (or uses some other method of running PHP as the owning user), you'll find very quickly that you run out of process IDs, because each account with even the slightest traffic will try to launch and keep alive its own PHP worker process.
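The hard cap there is the kernel's pid_max, which historically defaults to 32768. Assuming something like suexec'd CGI or per-user FastCGI keeping a worker or two alive per busy account, a few thousand accounts plus Apache children, mail, and cron jobs can get uncomfortably close to it:

```shell
# Current PID ceiling on a Linux box (historical default: 32768).
# It can be raised, e.g. sysctl -w kernel.pid_max=4194304 on 64-bit kernels.
cat /proc/sys/kernel/pid_max 2>/dev/null || echo "not Linux: no /proc here"
```

So even before RAM or CPU becomes the problem, plain process-table bookkeeping can be the thing that falls over first.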