
Trading halted on NYSE floor

Interesting real-time cyber attack site:

Norse

Interesting...

Lots of attacks aimed at Kirksville, St. Louis, and Seattle in the US at 7:50 PM CST. What's so significant about those three cities that hackers from China are so interested in them? Over 420 attacks coming from mainland China currently. Amazingly persistent buggers.
 
Interesting...

Lots of attacks aimed at Kirksville, St. Louis, and Seattle in the US at 7:50 PM CST. What's so significant about those three cities that hackers from China are so interested in them? Over 420 attacks coming from mainland China currently. Amazingly persistent buggers.

Apparently the way the company works is that they set up monitoring software on their clients' systems and aggregate data about attacks on those systems, and on a number of honeypot servers they set up. That's what they show on the map: attacks on those systems. They are headquartered in St. Louis, so that's probably just where most of their honeypots are set up. Probably they just happen to have more customers in Kirksville and Seattle than most places.

If you have ever run a webserver and you look through the logs, you'll see at least dozens of attempts a day to access a weird port or grab some suspicious file that doesn't exist. Maybe in a period of 10 seconds, you'll see 100 attempts in a row from the same IP to access a list of files like /admin, /admin.jsp, /admin.asp, /admin.html, /admin.htm, and so on. Or you'll see a string of connection attempts at 150 commonly used ports that you don't happen to have anything running on. They're just running scripts that scan all over the place for well-known vulnerabilities.

These aren't necessarily "attacks" in the sense of some hacker in the basement of a military complex plotting out an evil plan. Presumably nearly all of them are dudes sitting at home at their computers running scripts they downloaded from the Internet, who don't really intend to do anything other than see if they can find a place to store some porn or whatever. I'm thinking this company just aggregates that kind of activity from the servers its software is installed on. So what we're really seeing is probably more an indication of how many customers they have and where they are.
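If you want to see what I mean on your own box, here's a rough sketch of the kind of tally I'm talking about (the log path and the list of suspicious paths are just made-up examples, so adjust for your own setup):

```python
import re
from collections import Counter

# Hypothetical log location; point this at your own access log.
LOG_FILE = "/var/log/nginx/access.log"

# A handful of paths that vulnerability scanners commonly probe for.
SUSPICIOUS = re.compile(r"/(admin|phpmyadmin|wp-login\.php|\.env|setup\.php)", re.IGNORECASE)

hits = Counter()
with open(LOG_FILE) as log:
    for line in log:
        parts = line.split()
        if len(parts) < 7:
            continue
        ip, path = parts[0], parts[6]  # common log format: client IP is the first field, request path the seventh
        if SUSPICIOUS.search(path):
            hits[ip] += 1

# IPs with dozens of probes per day are almost always automated scanners, not targeted attackers.
for ip, count in hits.most_common(10):
    print(f"{ip}\t{count} suspicious requests")
```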

All that said, though, it is really cool anyway :) It'd be a lot cooler if a statistically representative, geographically distributed set of servers around the world were being monitored.
 
Interesting...

Lots of attacks aimed at Kirksville, St. Louis, and Seattle in the US at 7:50 PM CST. What's so significant about those three cities that hackers from China are so interested in them? Over 420 attacks coming from mainland China currently. Amazingly persistent buggers.

I wonder if server farms in those cities are affecting the rest of the country?
 
So.

We are to believe that United, the NYSE, and the WSJ all had glitches on the same day? I think getting eaten by a shark is more likely....

Plus, Anonymous seemed to know something was up prior to it happening. Weird, that.
 
So.
We are to believe that United, the NYSE, and the WSJ all had glitches on the same day? I think getting eaten by a shark is more likely....
Plus, Anonymous seemed to know something was up prior to it happening. Weird, that.
How often do you suppose that three different computer systems have an issue on the very same day?
 
How often do you suppose that three different computer systems have an issue on the very same day?


For systems like that? If the IT department wasn't maintaining 99.999% uptime, they would be fired. It's the industry standard.
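For reference, that five-nines figure works out like this (just the arithmetic, nothing specific to any of these companies):

```python
# "Five nines" availability leaves a very small downtime budget.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

for availability, label in [(0.999, "three nines"), (0.9999, "four nines"), (0.99999, "five nines")]:
    allowed = MINUTES_PER_YEAR * (1 - availability)
    print(f"{label} ({availability:.3%}): about {allowed:.1f} minutes of downtime per year")

# Five nines is roughly 5.3 minutes per year, or about 26 seconds per month.
# A single 3-hour outage blows through roughly 34 years' worth of that budget.
```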
 
How often do you suppose that three different computer systems have an issue on the very same day?

For systems like that? If the IT department wasn't maintaining 99.999% uptime, they would be fired. It's the industry standard.

Given the tens of thousands of systems of that scale that are operational these days, it's very likely. I'd also point out that it was only NYSE's internal trading platform that was affected by yesterday's outage. Anyone trading NYSE securities on any other platform was trading without issue.

Furthermore, NYSE said today that it was brought down by a botched software update: New York Stock Exchange Explains Wednesday's Computer Glitch - Fortune
 
Given the tens of thousands of systems of that scale that are operational these days, it's very likely. I'd also point out that it was only NYSE's internal trading platform that was affected by yesterday's outage. Anyone trading NYSE securities on any other platform was trading without issue.

Furthermore, NYSE said today that it was brought down by a botched software update: New York Stock Exchange Explains Wednesday's Computer Glitch - Fortune


What do you think United's operational uptime percentage is? The NYSE's? Why would the NYSE engage in an upgrade mid-week? Why did that Anonymous tweet seem to be in the know?

Yeah, it was only the dog-and-pony trading floor, but it's far too coincidental to believe the reports.

If my system went down for 3 hours, people would be fired and providers would be sued. We operate above that 99.999% threshold. Do you think the NYSE, or United, or the WSJ would operate any less efficiently?
 
Furthermore, NYSE said today that it was brought down by a botched software update: New York Stock Exchange Explains Wednesday's Computer Glitch - Fortune


From your link.


"On Tuesday evening, the NYSE began the rollout of a software release in preparation for the July 11 industry test of the upcoming SIP timestamp requirement. As is standard NYSE practice, the initial release was deployed on one trading unit. As customers began connecting after 7am on Wednesday morning, there were communication issues between customer gateways and the trading unit with the new release. It was determined that the NYSE and NYSE MKT customer gateways were not loaded with the proper configuration compatible with the new release."


We do this on a Tuesday?

What happened to the redundancy? Why could they not roll back in seconds?

Are they saying that they overlooked the OTHER HALF of a systems upgrade, i.e., the clients and connectivity?
 
What do you think United's operational uptime percentage is? The NYSE's? Why would the NYSE engage in an upgrade mid-week? Why did that Anonymous tweet seem to be in the know?

Yeah, it was only the dog-and-pony trading floor, but it's far too coincidental to believe the reports.

If my system went down for 3 hours, people would be fired and providers would be sued. We operate above that 99.999% threshold. Do you think the NYSE, or United, or the WSJ would operate any less efficiently?

The NYSE's trading is halted all the damn time due to technical glitches. It's happened at least 4 times this year that I can remember.
 
Furthermore, NYSE said today that it was brought down by a botched software update: New York Stock Exchange Explains Wednesday's Computer Glitch - Fortune


Wait, this is a hoot....


"NYSE and NYSE MKT began the process of canceling all open orders, working with customers to reconcile orders and trades, restarting all customer gateways and failing over to back-up trading units located in our Mahwah, NJ datacenter so trading could be resumed in a normal state"


Mahwah is the backup now? (it's not, SHHHHH)

And it took them 3 ****ing hours to fail over?

Many of my customers have fault tolerance and redundancy; "failover" ranges from near instant to about 10 minutes.
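For what it's worth, here is the basic shape of the health-check loop behind that kind of failover (the hostnames are made up, and real setups use proper HA tooling like VRRP, load balancers, or cluster managers, but the detection idea is the same):

```python
import socket
import time

# Hypothetical primary/standby pair; substitute real hosts in an actual deployment.
PRIMARY = ("primary.example.com", 443)
STANDBY = ("standby.example.com", 443)
CHECK_INTERVAL = 2   # seconds between health checks
MAX_FAILURES = 3     # consecutive failed checks before failing over

def is_alive(host_port, timeout=1.0):
    """Return True if a TCP connection to the host succeeds within the timeout."""
    try:
        with socket.create_connection(host_port, timeout=timeout):
            return True
    except OSError:
        return False

active = PRIMARY
failures = 0
while True:
    if is_alive(active):
        failures = 0
    elif active == PRIMARY:
        failures += 1
        if failures >= MAX_FAILURES:
            # With these settings, a dead primary is detected and traffic is
            # redirected within roughly 10 seconds, not hours.
            print("Primary unresponsive; failing over to standby")
            active, failures = STANDBY, 0
    time.sleep(CHECK_INTERVAL)
```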
 
What do you think United's operational uptime percentage is? The NYSE's? Why would the NYSE engage in an upgrade mid-week? Why did that Anonymous tweet seem to be in the know?

Yeah, it was only the dog-and-pony trading floor, but it's far too coincidental to believe the reports.

If my system went down for 3 hours, people would be fired and providers would be sued. We operate above that 99.999% threshold. Do you think the NYSE, or United, or the WSJ would operate any less efficiently?

Emergency release to fix something else that it was decided couldn't wait until the weekend. I don't know that that's what happened, but it's a common enough occurrence.
Or how about an upgrade over the weekend that didn't actually exercise the problem code path until yesterday?

How do you define your 99.999% uptime? I ask because it's impossible to make it meaningful in the real world for any moderately complex system.
 
Wait, this is a hoot....


"NYSE and NYSE MKT began the process of canceling all open orders, working with customers to reconcile orders and trades, restarting all customer gateways and failing over to back-up trading units located in our Mahwah, NJ datacenter so trading could be resumed in a normal state"


Mahwah is the backup now? (it's not, SHHHHH)

And it took them 3 ****ing hours to fail over?

Many of my customers have fault tolerance and redundancy; "failover" ranges from near instant to about 10 minutes.

If it's a software problem, all the hardware in the world won't help you.
 
Emergency release to fix something else that it was decided couldn't wait until the weekend. I don't know that that's what happened, but it's a common enough occurrence.

They didn't call it an "emergency"; they seemed to suggest it's how they operate. I don't believe it.


How do you define your 99.999% uptime? I ask because it's impossible to make it meaningful in the real world for any moderately complex system.

In layman's terms: that the **** works.

They seem to think it's important:
NYSE, New York Stock Exchange > About Us > News & Events > News Releases > Press Release 10-23-2013
NYSE, New York Stock Exchange > About Us > News & Events > News Releases > Press Release 02-02-2012
NYSE, New York Stock Exchange > About Us > News & Events > News Releases > Press Release 11-05-2012


So do you really think the HA, redundancy, and fault tolerance they seem to be VERY big on go out the window for the floor?

Do you think United is as haphazard about their NOC as well? Never mind that it was the same day....

An Inside Look: United Airlines' Mission Control Center - Forbes
 
If my system went down for 3 hours, people would be fired and providers would be sued. We operate above that 99.999% threshold.

That may well be true, but without knowing what sort of system you manage, I can't say whether it has the kind of complexity, traffic, and very narrow fault tolerances that something like the NYSE does. If one trade in a million fails to complete for a technical reason, that's a huge, show-stopping problem for the NYSE. Even if certain transactions just take 100 milliseconds longer to complete than they should, alarms start going off and people panic. They can't rely on the usual techniques of letting things fail over after timeouts or using not-quite-synchronous data and whatnot.

Also, the system you manage is probably more similar to a lot of other systems, so you have to deal with bugs in the slice of the environment that is custom-built code for that particular purpose, but most of the code you're using is things like Apache, Linux, Windows, Oracle, etc., which have been tested and fixed up for many years and which serve millions of users. At the NYSE, the odds are nearly all of their code is custom. Even most of their hardware and network protocols are custom.

So, I have no idea. It could have been Anonymous for all I know. But I wouldn't go around comparing uptime you see in a typical IT situation to uptime at something like the NYSE. A lot of what you would count as "up" would count as "down" for the NYSE, and they're working in a much more complex, bespoke environment.
 
They didn't call it an "emergency"; they seemed to suggest it's how they operate. I don't believe it.




In layman's terms: that the **** works.

They seem to think it's important:
NYSE, New York Stock Exchange > About Us > News & Events > News Releases > Press Release 10-23-2013
NYSE, New York Stock Exchange > About Us > News & Events > News Releases > Press Release 02-02-2012
NYSE, New York Stock Exchange > About Us > News & Events > News Releases > Press Release 11-05-2012


So do you really think the HA, redundancy, and fault tolerance they seem to be VERY big on go out the window for the floor?

Do you think United is as haphazard about their NOC as well? Never mind that it was the same day....

An Inside Look: United Airlines' Mission Control Center - Forbes

You have a system that 20,000 people connect to and that has 15 functions. If one person cannot execute one function but everything else works, is the system up or down? How does that failure get factored into uptime calculations?
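To make that concrete, here's a toy calculation (the numbers are invented, not anything from a real system) showing how the same incident looks under two different definitions of "up":

```python
# Toy scenario: 20,000 users, 15 functions, one function broken for one user for a whole 30-day month.
USERS = 20_000
FUNCTIONS = 15
MONTH_MINUTES = 30 * 24 * 60    # 43,200
OUTAGE_MINUTES = MONTH_MINUTES  # the broken function was unusable for that one user all month

# Definition 1: "the system as a whole was reachable" -> the incident is invisible.
system_uptime = 1.0

# Definition 2: availability of every (user, function) pair, weighted by time.
total_pair_minutes = USERS * FUNCTIONS * MONTH_MINUTES
lost_pair_minutes = 1 * 1 * OUTAGE_MINUTES
pair_uptime = 1 - lost_pair_minutes / total_pair_minutes

print(f"whole-system uptime:      {system_uptime:.5%}")  # 100.00000%
print(f"per-user/function uptime: {pair_uptime:.5%}")    # 99.99967%
```

Both numbers still read as "five nines or better," even though one person was broken for an entire month, which is exactly why the raw percentage doesn't tell you much on its own.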

I'll refrain from commenting on anything specific to NYSE since I work for them, though not in the area affected yesterday. I will say we don't, as a general rule, make routine production changes during the week. That doesn't mean we never make them should circumstances warrant. It also doesn't mean anything that the release didn't use the word "emergency."
 
That may well be true, but without knowing what sort of system you manage, I can't say whether it has the kind of complexity, traffic, and very narrow fault tolerances that something like the NYSE does. If one trade in a million fails to complete for a technical reason, that's a huge, show-stopping problem for the NYSE. Even if certain transactions just take 100 milliseconds longer to complete than they should, alarms start going off and people panic. They can't rely on the usual techniques of letting things fail over after timeouts or using not-quite-synchronous data and whatnot.

We have clients who have similar requirements and have HA in place. We also don't do service upgrades in the middle of their busy time.


Also, the system you manage is probably more similar to a lot of other systems, so you have to deal with bugs in the slice of the environment that is custom-built code for that particular purpose, but most of the code you're using is things like Apache, Linux, Windows, Oracle, etc., which have been tested and fixed up for many years and which serve millions of users. At the NYSE, the odds are nearly all of their code is custom. Even most of their hardware and network protocols are custom.

We have both, and we manage several systems, not just one, from our NOC.


So, I have no idea. It could have been Anonymous for all I know. But I wouldn't go around comparing uptime you see in a typical IT situation to uptime at something like the NYSE. A lot of what you would count as "up" would count as "down" for the NYSE, and they're working in a much more complex, bespoke environment.


Are you suggesting the stock exchange's networks are more apt to go down than, say, United's? Oh wait. ;)
 
We have clients who have similar requirements and have HA in place. We also don't do service upgrades in the middle of their busy time.

We have both, and we manage several systems, not just one, from our NOC.

Are you suggesting the stock exchange's networks are more apt to go down than, say, United's? Oh wait. ;)

I mean, I don't know what kinds of systems you manage, so I can't really say, but take an example. Say that in one of the applications you manage, somebody submits a helpdesk ticket saying that their user profile page isn't loading. How long might it be before anybody looks into the ticket at all? If the page in fact is not loading, how long before somebody diagnoses the problem? How long before a fix typically makes it onto the system?

Now, if instead of a user's page not loading, that was a situation where somebody's trade wasn't being recorded, that could leave the NYSE user in a situation where they lose $600,000 when the stock goes up 1% before they realize that the trade wasn't actually recorded. A bug of similar severity to the one that keeps that user's profile page from loading might cause a trade not to be recorded. But where in the profile-page scenario you count the time that bug is in place as uptime, and the bug may be in place for weeks, months, or even years, the NYSE would immediately shut down and would not come back up until they had found and fixed the bug and corrected any data that had been corrupted by it.

Generally speaking, places that shoot for 99.999% uptime are places where users can deal with a few bugs and where the system behaving slowly for brief periods still counts as it being up. There are not a lot of environments that I'm aware of with 99.999% uptime where "up" means zero bugs and nearly instantaneous response times. And I can't think of any that hit that kind of zero-bug, instantaneous-response uptime with a massive worldwide distribution of networks, servers, and software doing extremely complex things, all running custom code.

For example, Google's search engine is massive and equally complex, and it hits 99.999% uptime. But it does it by being very fault tolerant. If you do the same search 100 times, you won't actually get quite the same results each time, because different databases are fed new data at different times: one replica won't be quite up to date on this, another won't be on that, and so on. Many of the stats they give you, for example about search queries, come back in the form of "approximately 500 - 540" and whatnot, because they're often using more scalable, faster techniques when they don't need to be 100% confident in 100% accuracy. Other large, complex systems often hit 99.999% by being slow. For example, most e-commerce platforms have no problem making a user wait even 5 or 10 seconds while they process things. The NYSE can't do either of those things, and I can't think of another equally complex system that can't do either of those things and still hits 99.999% with no bugs.
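Just to illustrate the distinction (this is a made-up toy, not how Google or the NYSE actually work): a typical web-scale service copes by retrying and accepting possibly-stale answers, all of which still counts as "up," while an exchange-style operation has to be exact and on time or it's an incident:

```python
import random
import time

def replica_read(key):
    """Simulated replica read: usually fast, occasionally slow or slightly stale."""
    time.sleep(random.choice([0.01, 0.01, 0.01, 0.5]))      # mostly quick, sometimes slow
    return {"value": 42, "stale": random.random() < 0.05}   # sometimes a little out of date

def tolerant_read(key, retries=3, budget=0.2):
    # Typical web-scale approach: retry on slowness, shrug off staleness.
    # Every one of these calls "succeeds," so they all count toward uptime.
    result = None
    for _ in range(retries):
        start = time.monotonic()
        result = replica_read(key)
        if time.monotonic() - start <= budget:
            break                      # fast enough; stale or not, good enough
    return result["value"]             # last answer wins, still "up"

def exchange_style_read(key, deadline=0.1):
    # Exchange-style requirement: exact data within a hard latency bound.
    start = time.monotonic()
    result = replica_read(key)
    if result["stale"] or time.monotonic() - start > deadline:
        raise RuntimeError("incident: stale data or missed deadline")  # this counts as "down"
    return result["value"]

print(tolerant_read("AAPL"))
try:
    print(exchange_style_read("AAPL"))
except RuntimeError as exc:
    print(exc)
```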
 