We have clients who have similar requirements and have HA in place. We also don't do service upgrades in the middle of their busy time.
We have both, and we manage several systems, not just one, from our NOC.
Are you suggesting the stock exchanges' networks are more apt to go down than, say, United's? Oh wait.
I mean, I don't know what kinds of systems you manage, so I can't really say, but take an example. Say that in one of the applications you manage, somebody submits a helpdesk ticket saying that their user profile page isn't loading. How long might it be before anybody looks at the ticket at all? If the page in fact is not loading, how long before somebody diagnoses the problem? How long, typically, before a fix lands on the system?
Now, if instead of a user's page not loading, the bug meant somebody's trade wasn't being recorded, that could leave the NYSE user losing $600,000 when the stock goes up 1% before they realize the trade was never actually recorded. A bug of the same severity as the one that broke the profile page above could cause a trade not to be recorded. But where in the profile-page scenario you count the time that bug is live as uptime, and the bug may be live for weeks, months, or even years, the NYSE would immediately shut down and would not come back up until they had found and fixed the bug and corrected any data it had corrupted.
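For concreteness, here's the arithmetic implied by that $600,000 figure. The position size is my inference; the comment only gives the loss and the 1% move:

```python
# Hypothetical back-of-envelope: if a 1% move costs $600,000,
# the position at stake must be $60M.
move = 0.01          # 1% price move
loss = 600_000       # dollars lost before the missing trade is noticed
position = loss / move
print(position)      # prints 60000000.0
```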
Generally speaking, places that shoot for 99.999% uptime are places where users can tolerate a few bugs and where the system behaving slowly for brief periods still counts as being up. There are not a lot of environments that I'm aware of with 99.999% uptime where "up" means zero bugs and near-instantaneous response times. And I can't think of any that hit that kind of zero-bug, instantaneous-response uptime while running a massive worldwide distribution of networks, servers, and all-custom software doing extremely complex things.
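For a sense of scale, here's the downtime budget those availability targets actually allow (a quick sketch; none of these numbers come from the thread, just the standard arithmetic):

```python
# Minutes of allowed downtime per year at a given availability target.
def downtime_minutes_per_year(availability):
    minutes_per_year = 365 * 24 * 60  # 525,600 minutes in a (non-leap) year
    return minutes_per_year * (1 - availability)

for label, a in [("99.9%", 0.999), ("99.99%", 0.9999), ("99.999%", 0.99999)]:
    print(f"{label} up -> about {downtime_minutes_per_year(a):.1f} minutes down per year")
```

Five nines leaves barely five minutes a year, which is part of why "up" almost always has to tolerate bugs and slow moments to be achievable at all.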
For example, Google's search engine is massive, equally complex, and hits 99.999% uptime. But it does it by being very fault tolerant. If you run the same search 100 times, you won't get quite the same results each time, because different databases get fed new data at different times: this one won't be quite up to date yet, that one will, and so on. Many of the stats they give you, for example search-query counts, come back in the form of "approximately 500 - 540," because they often use more scalable, faster techniques when they don't need 100% confidence in 100% accuracy. Other large, complex systems hit 99.999% by being slow: most ecommerce platforms have no problem making a user wait even 5 or 10 seconds while they process things. The NYSE can't do either of those things, and I can't think of another equally complex system that can't do either of those things and still hits 99.999% with no bugs.
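That accuracy-for-speed trade can be sketched in a few lines: estimate a count from a random sample and report a range instead of an exact number. The dataset and "match" rule here are made up, and this is obviously not Google's actual technique, just the general idea:

```python
import random

# Sketch: instead of scanning all records for an exact count,
# scan a random sample and extrapolate -- much faster, but the
# honest answer is a range, like "approximately 500 - 540".
random.seed(1)
records = [random.randint(0, 99) for _ in range(100_000)]  # fake dataset

sample = random.sample(records, 5_000)
match_rate = sum(1 for r in sample if r < 5) / len(sample)  # ~5% match
estimate = match_rate * len(records)

# Report a rough range rather than a single exact number.
print(f"approximately {estimate * 0.95:.0f} - {estimate * 1.05:.0f}")
```

The exact scan would touch every record; the sample touches 5% of them and still lands within a few percent of the true count.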