http://www.thebulletin.org/web-edition/op-eds/new-way-to-detect-secret-nuclear-tests-gps

A new way to detect secret nuclear tests: GPS

By Jihye Park, Dorota A. Grejner-Brzezinska, and Ralph von Frese | 18 August 2011

Article Highlights

- North Korea’s second known nuclear bomb test was conducted deep underground and in extreme secrecy.

- The May 2009 explosion disturbed the ionosphere in a way that could be detected in GPS signals at 11 receivers in the region.

- GPS could complement other nuclear test detection methods and give the US more reason to ratify the Comprehensive Test Ban Treaty.

As argued above, the “must test regularly” crowd would be more believable were they to advocate for improving the delivery systems.

The JASON group has said the existing pits will be OK for a century.

The CTBT allows testing for supreme national interests, in any case.

Want warheads more reliable than the 99%-reliable ones we have? Try uranium gun-type weapons.

I don’t know why you think I am missing your point. What did I write that is inconsistent with what you are saying? The point you are missing is that it might actually be more expensive to improve delivery system reliability: your argument is valid for a given type of system, but it does not take into account the differences between warheads and delivery systems. The latter are far more complex systems, which is very probably why they are less reliable.

Improving a component’s reliability from 85% to 86% translates to a 7% reduction in the probability of failure; improving a component’s reliability from 97% to 98% translates to a 33% reduction in the probability of failure. Improving the reliability of the 85%-reliable component is the low-hanging fruit.
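The percentages above can be checked numerically. A minimal Python sketch (the function name is mine, for illustration only):

```python
def failure_reduction(r_old, r_new):
    """Fractional reduction in failure probability when reliability
    improves from r_old to r_new."""
    return ((1 - r_old) - (1 - r_new)) / (1 - r_old)

# 85% -> 86% reliability: failure probability drops from 15% to 14%.
low = failure_reduction(0.85, 0.86)
# 97% -> 98% reliability: failure probability drops from 3% to 2%.
high = failure_reduction(0.97, 0.98)

print(f"85% -> 86%: {low:.0%} reduction in failure probability")
print(f"97% -> 98%: {high:.0%} reduction in failure probability")
```

The same one-point gain in reliability cuts the failure probability by about 7% in the first case and by a third in the second.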

My math isn’t wrong, you just didn’t understand it.

Yes, if an outcome depends on two independent events both occurring, the probability of that outcome is the product of the probabilities of the two independent events.

Thus the probability of a *successful* mission for a nuclear weapon can be factored as the product of the probability that the delivery system works and the probability that the warhead works.

The probability of a *failure* is a bit more complicated because either of two events (failure of the warhead or of the delivery system) can cause failure. If both are rare events, the probability of system failure is approximately the sum of the two independent probabilities of component failure, as I explained.

If a is the probability of warhead failure, then 1-a is the reliability of the warhead. Similarly, if b is the probability of delivery system failure, 1-b is the delivery system reliability. Then c is the probability of joint system failure, and 1-c is the joint system reliability. The equation I wrote

1-c = (1-a)(1-b)

is correct, and the rest is algebra.

Note that the overall probability of system failure is not exactly the sum of the two component failure probabilities, but if those are small (which they are) then the product term is very small and is also opposite in sign, i.e. the product term actually *increases* the reliability.
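This can be seen in a short Python sketch (the failure probabilities below are illustrative values of my choosing, not figures from the thread):

```python
# Two independent components with small failure probabilities a and b.
a, b = 0.02, 0.03  # illustrative, assumed values

exact_reliability = (1 - a) * (1 - b)
c_exact = 1 - exact_reliability   # equals a + b - a*b
c_approx = a + b                  # additive approximation, drops a*b

print(f"exact failure probability: {c_exact:.4f}")   # 0.0494
print(f"additive approximation:    {c_approx:.4f}")  # 0.0500
```

As the note above says, the neglected product term a*b makes the exact failure probability slightly smaller than the sum, i.e. the true system is slightly more reliable than the additive approximation suggests.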

Frankly, I did not check Mark’s or your math because it is rather simple, as I explained before:

“The reason engineers look at the least reliable components to improve is that the fractional change in going from, say, a 30% to 60% reliable component is bigger than in going from 98.4% to 98.7%.”

And as I said before, the warheads’ pits will be OK for the next century or so, according to the JASON group. And if the SSP finds anything worth testing, we can invoke the escape clause of supreme national interest and test away, even if the CTBT is ratified.

And, by the way, testing tells you very little about the reliability of the next bomb of the same flavor, so it is not relevant for the purpose you advocate.

There is no issue: the warheads are fine, will be for decades if not a century, and are being monitored. The delivery systems are in worse shape.

These facts will allow you to make a judgement about what components of the weapon system ought to be addressed and what components we need not worry about for decades.

In any case, the discussion is altogether irrelevant for whether or not we should ratify the CTBT.

Apologies if this seems condescending; it is probably just an artifact of frustration that you repeatedly fail to offer a credible defense of any relevant argument.

You seem to be saying “I want the option of testing because it makes me feel good,” when the CTBT does in fact offer the option of testing for “supreme national interests.”

I am not sure what point you are repeatedly trying to make. Thus my frustration and apparent condescension.

You are clearly a smart guy. What I find surprising is not that people, smart and dumb alike, make mistakes; we all do. But what’s with the pretty shocking attitude (suggesting that I “consult any elementary statistics textbook”)?! Especially because I’m pretty sure that **you’re the one making the stats error here.**

You write:

“If we assume, somewhat simplistically, that the success of a weapon is a binary value, i.e. either it kills the target or not, and then that the probability density of this value is close to unity and is the product of two factors both close to unity, then we can write

(1-a)(1-b)=(1-c),

where (1-a) and (1-b) are the two factors (say, warhead reliability and delivery system reliability) and (1-c) is the resulting system reliability. Expanding, we get

1-a-b+ab = 1-c.

If a and b are both small, e.g. 10%-15%, then ab will be very small, e.g. 1%, and can be neglected. In that case, we have

c = a+b,

i.e. the “unreliability” of the system is the sum of the “unreliabilities” (failure probabilities) of the two parts. Not the product.”

Actually, it’s the product.

Imagine in your example that the first “factor” in the system’s reliability is 0.4, so a = 0.6. The second factor has a reliability of 0.3, so b = 0.7. In that case, solving your equation gives reliability = 1 - 0.6 - 0.7 + 0.42. The total, of course, is 0.12, or 12%.

But that shouldn’t be surprising, because one could have reached the same answer simply by **MULTIPLYING**, exactly as I argued: 0.4 * 0.3 = 0.12. We all know that the joint probability of two independent events is the product of the two probabilities.

The reason your calculation led you to think it was additive rather than multiplicative is that you picked the special case in which both reliabilities are very high. In that regime, multiplication still produces the right answer, while your shortcut (discarding the ab term and simply adding the failure probabilities) merely comes close to it.
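Both routes to the 12% figure in the example above can be verified numerically with a minimal Python sketch:

```python
# Worked example from the thread: reliabilities 0.4 and 0.3,
# so failure probabilities a = 0.6 and b = 0.7.
a, b = 0.6, 0.7

via_expansion = 1 - a - b + a * b   # the expanded form 1 - a - b + ab
via_product   = (1 - a) * (1 - b)   # direct product of reliabilities

# Round to suppress floating-point noise before displaying.
print(round(via_expansion, 10), round(via_product, 10))
```

The two expressions agree because the expansion is just the product multiplied out; the additive shortcut c ≈ a + b is only safe when a and b are both small, which they are not in this example.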

OK, maybe there’s some obscure reason that I’m wrong, and if so I trust you’ll point it out to me and the group. (Though I’d bet my expensive mountain bike that I’m right, and that one calculates joint probabilities of roughly independent events the way I described above.) But even if that happens, let’s agree that, in the future, we’ll disagree professionally, i.e., without telling people to go consult elementary textbooks. (And frankly, you were no worse in this matter than FSB and Yousaf.)

I’m sure I’ve been condescending before too. This board has smart folks on it and we can all learn from each other if we try.

(I’m not dodging the other points in your post, but I think the list has exhausted its interest in this thread. If anyone, including you, is actually interested in my response to those points, I’d be happy to post it or email it privately.)

Best,

Daryl