Korea Discussion Board
Subject: ROKN Patrol Corvette sunk by DPRK torpedo boat
YelliChink    3/26/2010 12:10:07 PM
Just happened 2150 Korean local time. Chinese reports say that it was DPRK torpedo boat. The ROKN corvette sunk is probably a 1200t PCC. I can't read Korean so I am not sure which one exactly. At this moment, 59 out of 104 crew have been saved so far. Best wishes to the still missing ones and condolence to families of lost sailors.
 
Quote    Reply

gf0012-aust    heavy   5/13/2010 4:16:20 PM
Are you referring to China or NorK computing capability? If the latter, I agree; if the former, then we already know that's what they are doing and that they have an extensive distributed SETI-type capability.

As for supercomputing, they have a fondness for big power plants near mountains... funny about that...

 
Quote    Reply

VelocityVector    Cherry Picking Me   5/13/2010 4:56:52 PM

they have an extensive distributed SETI type capability

Respectfully, grid presents limitations and vulnerabilities, including those pointed out by Heavy, that you avoid with supercomputing, which is why the US Government still buys Cray. Every node, for example, has its own error tendencies; collectively these produce bad data that you must check for or allow for, and they pose unique issues for analysis, including timeliness. High-resource-consumption simulations such as nuclear modeling are best served by a supercomputer, unless of course your lack of access to supercomputers leaves no better choice than grid or re-engineering high-throughput COTS hardware. But, no, we do not know the true extent of non-public indigenous PRC computing resources or what precisely the PRC might be doing with all those cheap electrons from power plants situated near mountains and hydro. My $0.02.
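
To make that checking burden concrete, here is a minimal Python sketch (the work unit and node outputs are hypothetical, standing in for real grid middleware): issue the same work unit to several untrusted nodes and accept the result only on quorum agreement.

from collections import Counter

def accept_result(replica_results, quorum=2):
    # Accept a grid work-unit result only if at least `quorum`
    # independent nodes returned byte-identical output.
    value, votes = Counter(replica_results).most_common(1)[0]
    if votes >= quorum:
        return value
    raise ValueError("no quorum: node outputs disagree, re-issue the work unit")

# Hypothetical outputs of one work unit from three untrusted nodes;
# the third node has corrupted a byte somewhere.
print(accept_result([b"\x01\x02\x03", b"\x01\x02\x03", b"\x01\xff\x03"]))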

v^2

 
Quote    Reply

Reactive       5/13/2010 5:31:22 PM

they have an extensive distributed SETI type capability

[...] High-resource-consumption simulations such as nuclear modeling are best served by a supercomputer, unless of course your lack of access to supercomputers leaves no better choice than grid or re-engineering high-throughput COTS hardware. [...]

v^2



It's also important to note that the ability to run extensive distributed crunching networks depends on being able to give each computer (node) a separate, parallel work unit that neither sends nor receives live feedback from the grid, but instead computes in data chunks. For certain types of simulation this is ideal: each computer runs its own small element of a larger model (SETI@home, the various climate sims, etc.) and does not need to interact with the main node except for updates and reports.
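
As a toy illustration of that chunked, feedback-free style (a sketch only; the per-chunk task is made up and stands in for a real SETI-style work unit), each worker crunches its slice independently and only reports a result back:

from multiprocessing import Pool

def crunch_chunk(chunk):
    # Stand-in for a self-contained work unit: no talk with other
    # nodes mid-computation, just a result reported back at the end.
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    chunks = [data[i:i + 100_000] for i in range(0, len(data), 100_000)]
    with Pool() as pool:
        partials = pool.map(crunch_chunk, chunks)  # chunks are independent
    print(sum(partials))  # the coordinator only merges final reports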
 
Some programs (fission/fusion simulation, mechanical deformation, blast-wave propagation, for example) are resource- and bandwidth-intensive and cannot be reduced to nodal elements; for those projects a supercomputer, with its inherently low-latency, high-bandwidth interconnect, is essential. That is, if you need the model to be interactive, and to propagate processes that are themselves derived from the crunched data in real time, you need a cluster that is a closed system in physical proximity. In my view, this is why supercomputers, even with a fraction of the power of a distributed grid, can run programmes of greater complexity.
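
A crude picture of why such models resist chunking (a pure-Python toy, not a real hydrocode): in a 1-D diffusion step, every new cell value needs its neighbours' values from the previous step, so splitting the array across machines would force a boundary exchange on every single timestep.

def diffusion_step(u, alpha=0.1):
    # Each new interior value depends on its left and right neighbours,
    # so two halves of `u` on separate machines would have to swap
    # boundary cells before every step -- that's the latency trap.
    return [u[i] + alpha * (u[i - 1] - 2 * u[i] + u[i + 1])
            for i in range(1, len(u) - 1)]

u = [0.0] * 50
u[25] = 1.0  # initial hot spot
for _ in range(100):
    u = [u[0]] + diffusion_step(u) + [u[-1]]  # fixed boundary cells
print(max(u))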

It's a case of parallel computation versus sequential computation; each setup has its own merits, not the least of which is vulnerability to attack. And, much like a hash-check, there are ways and means of quickly verifying a model's accuracy.
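
In that hash-check spirit, a minimal sketch (hypothetical model state, standard library only): fingerprint the serialized state of two independent runs and compare digests instead of the full data.

import hashlib, json

def state_digest(state):
    # Cheap fingerprint of a model state; comparing short digests is far
    # quicker than comparing full result arrays node by node.
    return hashlib.sha256(json.dumps(state, sort_keys=True).encode()).hexdigest()

run_a = {"step": 100, "field": [0.0, 0.25, 0.5]}
run_b = {"step": 100, "field": [0.0, 0.25, 0.5]}
print(state_digest(run_a) == state_digest(run_b))  # True -> the runs agree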
 
R
 
Quote    Reply

VelocityVector    Reactive   5/13/2010 5:40:25 PM

Illinois Biogrid; Argonne; et al here.  Damned glad to meet ya, R ;>)

v^2

 
Quote    Reply

gf0012-aust    cherry picking me   5/13/2010 5:43:22 PM
Not my intent, but an attempt to point out what we are aware of.

Their SETI model is not based on home computers, but on the pooling of govt agency computers so that they can also function as a virtual entity.

Rightly or wrongly, we know that's one of the reasons they favour Unix-derivative OSes, and especially Linux, as it fits the sanctioned govt-office OS model and allows them to monitor and create virtual processing footprints "at will".

It's obviously not the only way they do business, but when you look at some of the nodes in their power station grid, they don't make sense relative to the "consumer base" within the loss/leakage footprint.

i.e., it doesn't make a whole lot of sense unless they are chewing up big chunks of power out of view of the visible "footprint".

I'd suggest that some of those mountain ridges have more than rocks inside.
 
 
 
Quote    Reply

Reactive       5/13/2010 7:06:14 PM

Illinois Biogrid; Argonne; et al here.  Damned glad to meet ya, R ;>)


v^2



And likewise, v^2; a fascinating area you are involved in!

R
 
Quote    Reply

heavy    @gf   5/13/2010 11:39:40 PM

Are you referring to China or NorK computing capability? If the latter, I agree; if the former, then we already know that's what they are doing and that they have an extensive distributed SETI-type capability.

As for supercomputing, they have a fondness for big power plants near mountains... funny about that...

In that post I was referring to China, but I should clarify my opening sentence:

I would hazard that [the role their] dedicated computing power plays [in simulating nuclear explosions] is underestimated and their distributed computing power is overestimated [for that role].

That guess is based on the premises that:
a) supercomputing is cheap (Moore's law; see the sketch after this list)
b) distributed computing is inherently exposed
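
As a rough illustration of premise (a), with the doubling period assumed at ~2 years purely for illustration:

def price_performance_gain(years, doubling_period_years=2.0):
    # Assumed Moore's-law doubling every ~2 years (illustrative, not measured):
    # factor by which compute per dollar grows over `years`.
    return 2 ** (years / doubling_period_years)

print(round(price_performance_gain(10)))  # ~32x more compute per dollar in a decade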

Exposure here need not mean an open vulnerability to pollution/corruption, but it does mean that as the number of nodes grows, so do the opportunities for surveillance, statistical tracking, etc.

Now, that is not to say that the PRC isn't pushing Red Flag Linux as hard as it can in order to exploit the high degree of network-centric management inherent in Linux. This includes the possibility of tacking on all sorts of goodies at the whim of the package maintainer: surveillance and filtering being the most obvious priorities, and also distributed computing.

These boxes are reliable, robust, relatively trustworthy (secure), and likely also doing actual client-facing work that is deemed meaningful in some way, whether sitting on a party drone's desk or in an urban classroom. But over-deploying them as a means to expand a distributed computing network and soak up surplus wattage strikes me as backwards. Distributed computing is a way to exploit an existing surplus of cycles, not a reason to go out and create such a surplus. If you wanted more cycles, you would build more data centers, not deploy more web kiosks; that gives you many, many more cycles per watt.
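
To put that cycles-per-watt point in rough numbers (every figure below is a hypothetical placeholder, chosen only to show the shape of the argument):

# Hypothetical figures, for illustration only.
kiosk_gflops, kiosk_watts = 10.0, 150.0      # commodity desktop web kiosk
server_gflops, server_watts = 500.0, 400.0   # dense data-center node

print(kiosk_gflops / kiosk_watts)    # ~0.07 GFLOPS per watt
print(server_gflops / server_watts)  # 1.25 GFLOPS per watt, vastly better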

So while I'm sure it's true that the PRC is focused on distributed computing as well as on a surreptitious surplus of electric power, I don't think the two are directly linked; i.e., if this power is driving cycles, it's probably doing so in Google-style containerized data centers and not in a dozen extra terminals at the local library.

Then there are the so-called zombies (effectively synonymous with hijacked Windows clients) that represent the vast majority of the potential client population but simply are not trustworthy. You're not going to see sensitive data of any kind assigned to these machines; they are unreliable, volatile, and most importantly they are extremely vulnerable to a vast ecosystem of other interested parties, not least of which are transnational criminal gangs. Their limited lifespans will be devoted to relaying phishing/spam, cracking encryption keys, and DDoS-style network attacks.

(As an aside, I would say that if you are still using Windows as your primary OS by personal choice, you are the proverbial last person in the room to get the joke, and the joke's on you.)

Unlike the PRC, NorK simply doesn't have the infrastructure to support a large client/zombie base of its own, but it can certainly procure hardware and generate electricity to drive a data center/supercomputer. IMO its most significant computing function consists of acting as network mercenaries for the PRC, with whom it is functionally coordinated and extremely active, having wholly embraced network attacks as a so-called "asymmetric" capability. NorK is, if nothing else, a useful node to spoof attacks through. Significantly, they spend a lot of time working to compromise machines south of the DMZ (successfully, I might add) and exploit the exceptional network resources there.
 
Quote    Reply

gf0012-aust    heavy and V2   5/14/2010 1:07:34 AM
comms break somewhere here.
 
I thought the "cherry picking me" comment came from v^2 and was trying to explain my position to him.
 
not trying to pick a technical fight with anyone in here...
 
Quote    Reply

Hamilcar       5/14/2010 4:11:59 AM
 
You may also overestimate the time and data-sharing issues, Heavy. Zombies can handle grunt-work calculation that is masked as something else.
 
H. 
 
Quote    Reply

heavy       5/14/2010 3:13:45 PM
 
Quote    Reply


