Korea Discussion Board
Subject: ROKN Patrol Corvette sunk by DPRK torpedo boat
YelliChink    3/26/2010 12:10:07 PM
Just happened at 2150 Korean local time. Chinese reports say it was a DPRK torpedo boat. The ROKN corvette sunk is probably a 1200t PCC; I can't read Korean, so I am not sure which one exactly. At this moment, 59 out of 104 crew have been saved. Best wishes to the still-missing ones and condolences to the families of the lost sailors.
 
Quote    Reply

heavy       5/14/2010 3:16:29 PM
@gf: I believe V2 was referring to himself as the cherry picker. There is no technical disagreement at all, just an estimation from me that they are much more likely to primarily rely on containerized, modular data centers (again, trivial to build out) than a swarm of clients, though as you've pointed out the two are definitely not mutually exclusive.

@ham: I understand that segmentation, obfuscation, and encryption can all lend the nodes a high degree of trustworthiness, but it remains a network. The individual nodes and their lines of transmission are necessarily exposed in a way that a local cluster is not, and simply analyzing the delta between what a handful of compromised nodes were processing (or simply relaying) yesterday vs today can be useful. Fuzzy as they may be, the outlines are there in a way that they simply aren't for a discrete system, i.e. even if we can't peer under the tarps of their trailers (or make sense of the payload) we can observe the traffic patterns.
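
A rough sketch of the kind of day-over-day delta analysis meant here, with made-up node names and byte counts rather than real telemetry:

import statistics  # stdlib only

# bytes relayed per observed node, yesterday vs today (hypothetical figures)
yesterday = {"node-a": 2.1e9, "node-b": 1.9e9, "node-c": 2.0e9}
today     = {"node-a": 8.7e9, "node-b": 1.9e9, "node-c": 2.0e9}

deltas = {n: today[n] - yesterday[n] for n in yesterday}
mean   = statistics.mean(deltas.values())
stdev  = statistics.pstdev(deltas.values())

# nodes whose traffic jumps well above the group mean hint that a new
# workload is being pushed through part of the network, even if the
# payload itself stays opaque
for node, d in deltas.items():
    if stdev and (d - mean) / stdev > 1.0:
        print(node, "traffic up by", round(d / 1e9, 1), "GB")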

And security concerns are still secondary to the cost/watt problem.

I would also just add that the Americans, having written all of this stuff in the first place, have been playing this game for much, much longer. The high-end capability and technical ability are unmatched, but it's the same sad tale of institutional myopia and bureaucratic inertia that has the PRC building out a hardened client network while, say, State is navel-gazing over what procedure it should use to approve Firefox for department use. Sigh.
 
Quote    Reply

VelocityVector       5/14/2010 7:05:14 PM
I believe V2 was referring to himself as the cherry picker.
 
Indeed.  I was raised in the "heart of the fruit belt" (West Michigan) and used to actually pick cherries and apples and peaches and berries and string grape vines etc. ;>)  So most definitely I was referring to me, not to GF or others.
 
Many valid points, and you have my respect esp. Oz defense experts.
 
v^2
 
 
Quote    Reply

Reactive       5/14/2010 7:09:49 PM
Try for a second to imagine the following:
 
We are calculating the motion of one hundred 20mm irregularly shaped objects which have a surface resolution (in terms of the simulation) of one nanometre. We are going to fire these at high velocity and calculate their motion and interaction as they hit an irregular surface, bounce, and collide with each other, and we're going to calculate this at a nanosecond frame rate (one billion frames per second). We're also going to calculate a fluid simulation for the air (or medium) that they are moving through, to see the effect these particles will have aerodynamically.
 
The problem you have is that, to get an accurate model (especially the fluid simulation) of our chosen scenario, you need each node that is 'crunching' to have a simulation that takes account of the "global picture" of the previous frame. Imagine the problems we are going to have if each node in the network requires, say, 200GB of data per frame in order to be working on the correct "global" picture. With a supercomputer you can store this data in a central databank which can then be loaded dynamically by the local processing nodes; with the very highest-bandwidth, lowest-latency linkages between processing nodes this can be done in milliseconds, each node can load data dynamically as it requires it, and the central nodes can also allocate a greater workshare to chunks of the simulation that are especially complex. None of this can you do on a distributed network, because the amount of data per node is too great and, more importantly, the bandwidth required to synchronise each frame of the calculation would make the simulation take thousands of years even if the nodes themselves are more numerous and individually powerful.
 
With a distributed grid, each unit is essentially "going solo", each model discrete, which is why these are especially useful for tasks that involve discrete data ranges, and not for creating one vast, sequential calculation like the one described above. There are SO many inherent limitations to distributed number crunching for global simulations: they're good for establishing a mean, or for dealing with discrete (non-interacting) chunks of data (BOINC: SETI, climate prediction, etc.).
 
This is the problem in the applications we are discussing...
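
As a back-of-envelope illustration of the synchronisation point above (the 200GB-per-frame figure comes from the scenario; the link speed and frame count below are round-number assumptions for illustration only):

# back-of-envelope only; every number below is an assumption
frame_state_bytes = 200e9        # 200 GB of shared "global picture" per frame (figure from above)
frames            = 1_000_000    # an arbitrary run length of one million frames
uplink_bytes_sec  = 1.25e6       # ~10 Mbit/s consumer broadband, per remote node

seconds_per_frame = frame_state_bytes / uplink_bytes_sec        # 160,000 s, i.e. ~1.9 days
years_total       = seconds_per_frame * frames / (86400 * 365)  # roughly 5,000 years

print(f"one frame of state onto one remote node: {seconds_per_frame / 86400:.1f} days")
print(f"whole run, transfer time alone:          {years_total:,.0f} years")

Even ignoring the computation itself, merely shipping the shared state dominates; the supercomputer avoids this because its nodes pull only the slices they need over a shared high-bandwidth interconnect.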

R
 
 
 
Quote    Reply

heavy    back to politics   5/15/2010 9:00:41 PM
 
Quote    Reply


gf0012-aust       5/15/2010 10:02:13 PM
The problem you have is that, to get an accurate model (especially the fluid simulation) of our chosen scenario, you need each node that is 'crunching' to have a simulation that takes account of the "global picture" of the previous frame. Imagine the problems we are going to have if each node in the network requires, say, 200GB of data per frame in order to be working on the correct "global" picture. With a supercomputer you can store this data in a central databank which can then be loaded dynamically by the local processing nodes; with the very highest-bandwidth, lowest-latency linkages between processing nodes this can be done in milliseconds, each node can load data dynamically as it requires it, and the central nodes can also allocate a greater workshare to chunks of the simulation that are especially complex. None of this can you do on a distributed network, because the amount of data per node is too great and, more importantly, the bandwidth required to synchronise each frame of the calculation would make the simulation take thousands of years even if the nodes themselves are more numerous and individually powerful.

generally speaking ....
 
That's on the low side of the data chunks that can be manipulated.

A typical multi-layered tactical picture pulling in data from various apps (e.g. FalconView, gazetteers, FFT/BFT feeds, terrain/WAC charts) would be 40 gig just for a localised picture - not a theatre picture, and definitely not a global picture. A lot of that is being done on distributed assets - there's no way to feed multi-layers into tight spots without layering the feeds or distributing the feeds. Time is money. Time is life.

Getting that data into the relevant platforms is not done by supers, and even the crunching of that layered feed is likely to be done by Unix and Apple "black boxes" - Windows boxes are not in the race, not at the data management level anyway. The SCs might get used for analysis, but sending the layers and pictures so that they form a single "picture" is the province of the black boxes.
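
A toy sketch of the layered-feed idea, with invented layer names and sizes (chosen only so they add up to the 40 gig localised picture mentioned above, not taken from any real feed):

# hypothetical layers making up one localised tactical picture
layers = {
    "base_chart": {"source": "WAC",        "size_gb": 12.0},
    "terrain":    {"source": "elevation",  "size_gb": 15.0},
    "bft_tracks": {"source": "FFT/BFT",    "size_gb": 0.5},
    "overlays":   {"source": "FalconView", "size_gb": 12.5},
}

def feed_for(platform_layers):
    """Ship only the layers a given platform subscribes to,
    instead of pushing the whole localised picture everywhere."""
    chosen = {k: v for k, v in layers.items() if k in platform_layers}
    return chosen, sum(v["size_gb"] for v in chosen.values())

# a bandwidth-starved node in a "tight spot" might only take tracks and overlays
_, gb = feed_for({"bft_tracks", "overlays"})
print(f"subset feed: {gb:.1f} GB vs {sum(v['size_gb'] for v in layers.values()):.1f} GB full picture")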
 
Quote    Reply

gf0012-aust       5/15/2010 10:11:47 PM
Many valid points, and you have my respect esp. Oz defense experts.

v^2

 
Well, kind comments are appreciated, but I'm nowhere near being an Oz defense expert. The real ones never go near public forums, or when they do, they speak in soft generalities.
Although I'd like to think that I do bring some insight into various bits and pieces.....

 
Quote    Reply

cwDeici       5/16/2010 3:50:53 PM
You may also overestimate the time and data share issues, Heavy. Zombies can handle gruntwork calculation that is masked as something else.
 
H.
 
Lol, this thread is so hard to understand this far out. Funny thing is I read about the PS3 on Sankaku Complex (an anime and manga site).
 
Quote    Reply

Reactive       5/16/2010 6:20:22 PM


The problem you have is that, to get an accurate model (especially the fluid simulation) of our chosen scenario, you need each node that is 'crunching' to have a simulation that takes account of the "global picture" of the previous frame. Imagine the problems we are going to have if each node in the network requires, say, 200GB of data per frame in order to be working on the correct "global" picture. With a supercomputer you can store this data in a central databank which can then be loaded dynamically by the local processing nodes; with the very highest-bandwidth, lowest-latency linkages between processing nodes this can be done in milliseconds, each node can load data dynamically as it requires it, and the central nodes can also allocate a greater workshare to chunks of the simulation that are especially complex. None of this can you do on a distributed network, because the amount of data per node is too great and, more importantly, the bandwidth required to synchronise each frame of the calculation would make the simulation take thousands of years even if the nodes themselves are more numerous and individually powerful.
 
generally speaking ....
 
That's on the low side of the data chunks that can be manipulated.
 
A typical multi-layered tactical picture pulling in data from various apps (e.g. FalconView, gazetteers, FFT/BFT feeds, terrain/WAC charts) would be 40 gig just for a localised picture - not a theatre picture, and definitely not a global picture. A lot of that is being done on distributed assets - there's no way to feed multi-layers into tight spots without layering the feeds or distributing the feeds. Time is money. Time is life.
 
Getting that data into the relevant platforms is not done by supers, and even the crunching of that layered feed is likely to be done by Unix and Apple "black boxes" - Windows boxes are not in the race, not at the data management level anyway. The SCs might get used for analysis, but sending the layers and pictures so that they form a single "picture" is the province of the black boxes.
 
Whilst you are absolutely correct, when I say "global" I mean in simulation terms.
 
What I was trying to explain, probably in an overly convoluted manner, is that it's often impossible to break a large simulation down ("global" being the entirety of that simulation) into many parallel chunks for crunching without the transfer of incredible amounts of data. It again depends on the amount of bandwidth needed, and for many types of computer simulation/modelling you simply can't distribute the processing to non-local assets, even if they're high-bandwidth server farms. 200GB is indeed conservative; for atomic reactions you could be talking hundreds or thousands of terabytes, which EVERY processing node needs to be able to access interactively...
 
The particular bottlenecks are different for parallel and sequential simulations respectively...
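
To make the parallel-versus-sequential distinction concrete, a toy contrast under purely illustrative assumptions (neither function models any real workload):

import random

# 1) "BOINC-style": work units are independent, so a distributed grid fits;
#    nodes never need each other's results mid-run
def independent_unit(seed):
    rng = random.Random(seed)
    return sum(rng.random() for _ in range(1000)) / 1000   # e.g. one Monte Carlo sample

results = [independent_unit(s) for s in range(8)]           # trivially farmed out

# 2) tightly coupled: a 1-D diffusion step where every cell needs its
#    neighbours from the PREVIOUS frame, so every node must see the shared
#    state every single step
def diffusion_step(state, alpha=0.1):
    return [
        state[i] + alpha * (state[i - 1] - 2 * state[i] + state[(i + 1) % len(state)])
        for i in range(len(state))
    ]

state = [0.0] * 63 + [1.0]      # a single hot cell
for _ in range(100):            # each step depends on the whole previous frame
    state = diffusion_step(state)

print(f"{len(results)} independent units done, mean {sum(results) / len(results):.3f}")
print(f"peak after 100 coupled steps: {max(state):.3f}")

The first workload farms out cleanly to a grid; the second needs the whole previous frame on every node at every step, which is the synchronisation cost described above.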
 
R

 

 
Quote    Reply

Nichevo       5/16/2010 6:58:52 PM
I regret to inform you that Cisco has seen fit to open factories in China, enabling 1) Huawei to clone all their stuff and undersell them, 2) China to have access (I would presume) to at least the outside of all Cisco's fine technologies, including such grid-computing interconnect tech as Topspin and InfiniBand, and 3) possibly counterfeit/sabotaged circuitry to go into Cisco gear, which of course is relied on by all the majors, including DoD.
 
So it's not as if they can't crank out all the clustering they want.
 
Quote    Reply


