Alan Jones’ “Fair Review”

- August 21, 2007 -



      Darrell Harrington and I submitted “A study of earthquake prediction by atmosphere precursors” (1) to Susan Hough, Chief Editor of Seismological Research Letters (SRL), in Oct. 2002. She sent it to Alan Jones, a retired International Business Machines Corporation (IBM) employee and adjunct professor at the State University of New York, for review. He told us his rules on and after Jan. 30, 2003: he adopted the largest magnitude reported for a quake by the United States Geological Survey (USGS) as the magnitude standard (2); he assumed the USGS data to be error-free when dividing a prediction into a hit or a miss, a rule he called “Peer on” (3); he adopted the manuscript by him and his brother (4) as the standard method, replacing the Brelsford–Jones score method (5) by adding an “aftershock” probability, with himself as the authority to judge what counts as an “aftershock”; he adopted a likelihood of 5% or less as the threshold of statistical significance for a set of predictions; and he used “dependence” to delete overlapping predictions (6).


        Jones calculated the Normalized Score x = 0.854, or the Likelihood (Integrative Probability) p = 27%, to reject our paper (6). Unfortunately, there were many artificial errors in his calculation, and if those errors had been corrected, the likelihood would have dropped far below 5%. We appealed the review on this basis, but the appeal was rejected without further review; instead, the journal offered to publish the paper “apart from prediction” (7). We declined, since we believed the successful predictions were the core evidence for the validity of the prediction method. On Oct. 30, 2006, Jones put his review out to attack me (8). I asked him for permission to show his review to the public, and he said, “Yes” (9). He was at a loss for words about his mistakes (10), but I did not show his review.


      In May 2007, Ara, a Japanese predictor, wrote me that Roger Hunter, a former employee of the USGS, had been attacking me with Jones’ permission (11). Hunter asked me why I had not shown Jones’ review despite the permission (12). I asked Jones if he knew who had told Hunter about the permission. He admitted, “I did,” on Jun. 3, 2007 (13). I asked him if he wanted me to show his review: “No, I never… But, if you want, you may” (14). I persuaded him by replying to Hunter’s question (15). To the contrary, he claimed, “I believe I gave your paper a fair review and I stand by it” (16). Thus, I have to show his “fair review” to the public and request scientists and the public to judge whether it is fair.


Score Correction 


I requested that Jones show his scores in Excel for detail, but he refused (17). Thus, I have made the Excel Correction (18), whose Columns A~C and F~I are the same as Columns ‘No.’, ‘Shou Prob.’, ‘Hit?’, ‘Jones Prob. A-S’, ‘Jones Hist. Prob’, ‘Jones total Prob’ and ‘Jones hit?’ of Jones’ Spreadsheet respectively (19). Columns B, F, G and H give Shou’s probability and Jones’ aftershock probability, general probability and total probability respectively. The signs ‘1’ and ‘0’ mark hit and miss in Columns C, D and I, independence and dependence in Columns E and J, and Jones’ not-aftershock and aftershock in Column K respectively.


Columns L and M give Shou’s score s = (b - c)ln(b(1 - b)) and variation v = b(1 - b)(ln(b(1 - b)))², where b is Shou’s probability in Column B and c is Shou’s hit or miss in Column C. The total score is S = Σs = 19.22 (L56 of 18), the total variation is V = √Σv = 4.87 (M57), and the Normalized Score z (Jones’ x) = S/V = 3.94 (L59), whose Integrative Probability is p = 0.000044 (L60) according to the Normal Distribution Table (20). All the above formulas are from (4). This example shows concisely how to calculate the total probability by the score method.
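To make the arithmetic reproducible, the score method above can be sketched in a few lines of Python. The function names and the example probabilities are mine, not Jones’, and the normal upper tail stands in for the Normal Distribution Table (20):

```python
import math

def score(b, c):
    # Brelsford-Jones score for one prediction:
    # b = predicted probability, c = outcome (1 for a hit, 0 for a miss)
    return (b - c) * math.log(b * (1 - b))

def variation(b):
    # per-prediction variation v = b(1-b)(ln(b(1-b)))^2
    return b * (1 - b) * math.log(b * (1 - b)) ** 2

def normalized_score(preds):
    # preds: list of (b, c) pairs; z = S/V, with S the summed scores
    # and V the square root of the summed variations
    S = sum(score(b, c) for b, c in preds)
    V = math.sqrt(sum(variation(b) for b, _ in preds))
    return S / V

def integrative_probability(z):
    # one-sided upper-tail probability of the standard normal distribution
    return 0.5 * math.erfc(z / math.sqrt(2))
```

A hit (c = 1) always yields a positive score and a miss (c = 0) a negative one, so correcting a wrongly marked miss can only raise z and lower p; integrative_probability(3.94) comes out near 0.00004, in line with the table value quoted above, and integrative_probability(0.854) near 0.196.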


In contrast, Jones calculated x = 0.854, or p = 0.27 (although if x = 0.854, then p = 0.196 according to 20). Let’s focus on the calculation in Column P ‘Jones sco H~K’, which adopts all his rules, such as “Peer on”, dependence, aftershock probability and his calculated probabilities, and yields z = 0 (P59). It is a puzzle how he gets x (i.e. z) = 0.854. Since he does not show further details, my correction has to begin from Column P, and I will now show where he went wrong.


Five highlighted misses   


Jones highlighted 5 misses, No. 19, 23, 26, 32 and 45 of my predictions, in red on Oct. 30, 2006 (19), perhaps to expose my “guilt”, so I must first explain that I did not know him until he wrote us on Jan. 30, 2003. Before that, we had created our own rules. The USGS divided its data into five ranks, A, B, C, D and out-of-rank, with errors of 0.1, 0.2, 0.5, 1.0 and more respectively. To determine whether a prediction is a hit or a miss, we adopted the minimum error of 0.1 as a uniform rule for latitude, longitude and magnitude, although the real errors were much bigger. Moreover, the USGS offered 1~4 magnitudes for one quake, and we adopted their average, as Jones noted (2). The above rules are stated in our paper (1). Magnitude error affects a prediction twice: once when I predict a magnitude by comparing the size of a cloud with those of former clouds, whose quake magnitudes had an error of at least 0.1, and again when the USGS reports the predicted quake, whose magnitude also has an error of at least 0.1. Together the errors allow a tolerance of 0.2 for magnitude. By these rules, the 5 predictions are all hits (the data for No. 23 were from Bogazici Univ., Turkey (21)), so I am not “guilty”.
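The hit/miss rule described above can be stated precisely. This is a minimal sketch under the 0.1-degree position tolerance and the doubled 0.2 magnitude tolerance; the window and quake values are hypothetical, not taken from the paper:

```python
def within(value, lo, hi, tol):
    # a reported value counts as inside a predicted window if it falls
    # within the window widened by the measurement tolerance on each side
    return lo - tol <= value <= hi + tol

def is_hit(quake, window, pos_tol=0.1, mag_tol=0.2):
    # pos_tol: the 0.1-degree minimum USGS error, for latitude and longitude;
    # mag_tol: 0.2 for magnitude, because the 0.1 error enters twice
    # (once in the cloud-based estimate, once in the reported magnitude)
    return (within(quake['lat'], *window['lat'], pos_tol)
            and within(quake['lon'], *window['lon'], pos_tol)
            and within(quake['mag'], *window['mag'], mag_tol))
```

Under this rule, a quake lying 0.01 degree outside the predicted longitude window, as in No. 32, is comfortably a hit.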


Jones highlighted the 5 misses, but only “changed 23 and 32 to misses”. I wondered about this puzzle. He replied, “I gave you hits on the three events you mention above” (22), so his “fair review” exaggerated the number of misses from 2 to 5 to attract attention. The following table shows the data under his largest-magnitude rule and utopian “Peer on”.













[Table: the five predictions (No. 19, 23, 26, 32 and 45) under Jones’ largest-magnitude and “Peer on” rules, with columns Region, Prediction Window, LT, Mag., Lat. and Lon. Only fragments of the table survive: the regions “N China >35N” and “Turkey & Med. ≥15E”, the window “25~41, 53~105”, and the local times 4/5 23:46, 4/5 20:36, 4/6 4:36, 2/4 14:33, 4/9 10:48, 6/1 12:54, 6/30 21:48 and 1/27 10:44.]

Note: LT: local time of the west coast. Mag.: the largest magnitude according to Jones. Lat.: latitude. Lon.: longitude. No. 19 and 26 are clear hits. No. 32.1 is classified as a miss by a longitude error of 0.01 degree, ten times more precise than the 0.1-degree minimum error of the USGS. However, I tolerate it for his utopian “Peer on”. On the other hand, both the Northern California Earthquake Data Center of the USGS (NCDC) and the Univ. of Nevada offer No. 32.2 (23, 24), proving it a hit. Jones gave No. 45 a hit on Nov. 7, 2006 (22), but I decline it under his utopian “Peer on” rule. After Jones’ review, I checked the data of No. 23.1 again, but they had disappeared. Since I do not know why Bogazici Univ. withdrew the posted data, or whether the USGS lost the data, I tolerate it as a miss by the data of No. 23.2 from the USGS.


No. 26

Jones wrote, “Shou calls a 5.9 a hit but it is not in his mag window.  However, I find a 6.1 so I give him a hit” (19). He had noted our average-magnitude rule (2), but replaced his largest magnitude by our average to make No. 26 a miss. He found the 6.1 and even claimed, “I give him a hit”, yet marked No. 26 a miss in his Spreadsheet. Column R corrects this mistake (18): its score increases by 1.42 (R56), from -0.58 (P29) to 0.84 (R29), the Normalized Score z increases to 0.33 (R59), and the Integrative Probability p drops to 37.07% (R60).
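Working backward from the quoted scores, flipping one prediction from miss to hit changes its score by exactly -ln(b(1-b)), regardless of the outcome. With Jones’ probability for No. 26 near b ≈ 0.41 (my inference from the -0.58 and 0.84 figures, not a value he published), the numbers agree:

```python
import math

def score(b, c):
    # Brelsford-Jones score: b = probability, c = 1 (hit) or 0 (miss)
    return (b - c) * math.log(b * (1 - b))

b = 0.41  # inferred from the quoted scores, not stated by Jones
miss, hit = score(b, 0), score(b, 1)
delta = hit - miss  # algebraically equal to -ln(b(1-b))
```

Here score(0.41, 0) ≈ -0.58, score(0.41, 1) ≈ 0.84, and the difference ≈ 1.42, matching P29, R29 and R56.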


No. 19

Jones wrote, “Shou claims hit but only 5.8”. He replaced his largest magnitude by our average to make No. 19 a miss, too. Column S corrects this mistake: its score increases by 1.91, from -0.35 (P22) to 1.56 (S22), the Normalized Score z increases to 0.78 (S59), and the Integrative Probability p drops to 21.77% (S60).


No. 32

Jones wrote, “Nos 31 and 32 are not independent.  No hit.  Slightly out of region by 0.01 deg.” First, No. 31 hit a quake on Dec. 12, 1998, while No. 32 began on Dec. 28, 1998 (1); therefore, they are independent. Second, 0.01 degree is 10 times more precise than 0.1 degree, the minimum error of the USGS. Third, both the NCDC and the Univ. of Nevada prove it a hit (23, 24), even if the 0.01 degree would make it a miss. Column T corrects this mistake: its score increases by 2.03, from -1.71 (P35) to 0.32 (T35), the Normalized Score z increases to 1.26 (T59), and the Integrative Probability p drops to 10.38% (T60).


No. 45

Jones gave No. 45 a hit on Nov. 7, 2006 (22), but I would rather decline it under his utopian “Peer on”, although that reduces the score by 1.71, from 1.31 to -0.40 (P48).


No. 23

I do not know why Bogazici Univ. withdrew the posted data, or whether the USGS lost the data. However, I tolerate No. 23 as a miss, which reduces the score by 1.44, from 0.88 to -0.56 (P26). Together, No. 23 and No. 45 lose 3.15 in score.




Jones claimed 8 dependences: No. 20~22 and No. 30~34. They are all hits except No. 32, “Slightly out of region by 0.01 deg.” Then he wrote, “Take out dependent events, aftershocks, and change 23 and 32 to misses” (6, 19). It is interesting to bring No. 32 back as a miss after “taking out dependent events”. He can either give both the 7 “dependent hits” and the “dependent miss” the same score of 0 by dependence, or give the 7 dependent hits 7 positive scores and the “dependent miss” a negative score as a miss; but he cannot give the 7 dependent hits 0 by “dependence” while giving the “dependent miss” a big negative score as a miss, which is a clear bias. Since No. 32 is already corrected, I discuss only the 7 dependent hits.


Seven “dependent” hits

In fact, the 8 “dependent” predictions are all independent. No. 30 is obviously independent, with its time window of 7/24/1998~9/2/1998, while No. 31’s is 11/23/1998~1/9/1999. No. 20 and No. 21 may look dependent, but No. 21 was made after No. 20 had already hit an earthquake. Similarly, No. 21 and No. 22, No. 31 and No. 32, and No. 33 and No. 34 are independent. Because he wrote No. 20 off as a Northridge “aftershock” and No. 32 is already corrected, Column U corrects only the other 6 independent hits. The total score increases by 2.64, from 5.35 (T56) to 7.99 (U56), the Normalized Score z increases to 1.74 (U59), and the Integrative Probability p drops to 4.09% (U60).


Eight dependent events

Jones did not state his independence rule, even to the USGS, until Feb. 14, 2003. As a result, the USGS had signed some overlapping predictions. Thus, the fault is not mine, but his. A reasonable way to solve this problem is to delete all dependent predictions from our paper, and I have already done so. Column E marks the 8 dependent predictions, No. 3, 4, 41, 43, 44, 47, 49 and 50, with a green ‘0’. Column W solves this problem: the Normalized Score z decreases to 1.61 (W59) and the Integrative Probability p increases to 5.37% (W60).
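Deleting overlapping predictions, as Column W does, amounts to keeping a prediction only when its time window does not overlap an earlier kept prediction for the same region. A sketch, using the No. 30 and No. 31 windows quoted above plus a made-up third window for illustration (the region label is also hypothetical):

```python
from datetime import date

def independent(preds):
    # keep a prediction only if its time window does not overlap
    # the window of any already-kept prediction for the same region
    kept = []
    for p in preds:
        if all(p['end'] < q['start'] or p['start'] > q['end']
               for q in kept if q['region'] == p['region']):
            kept.append(p)
    return kept

preds = [
    # No. 30 and No. 31 windows from the text: they do not overlap
    {'region': 'CA', 'start': date(1998, 7, 24), 'end': date(1998, 9, 2)},
    {'region': 'CA', 'start': date(1998, 11, 23), 'end': date(1999, 1, 9)},
    # a hypothetical overlapping prediction, which would be deleted
    {'region': 'CA', 'start': date(1998, 8, 15), 'end': date(1998, 10, 1)},
]
```

independent(preds) keeps the two non-overlapping windows and drops the hypothetical third.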




Jones wrote, “My definition of an aftershock is an event within the region of aftershocks of a main event” on Feb. 1, 2003 (2). It is illogical to use “aftershock” to define “aftershock”. He also wrote, “If there is a M 7.0 followed by a M 7.0, this would be a surprise” on Feb. 9, 2003 (25), but such “surprises” exist widely; I gave him two such examples immediately (26). He claimed the M6.3 off-coast-of-Oregon earthquake at (43.51, -127.42) on Oct. 27, 1994 as an aftershock of the M7.1 off-coast-of-California earthquake at (40.40, -125.68) on Sept. 1, 1994. However, the M6.3 Oregon quake was isolated, 375 km from the M7.1 California quake according to the USGS data. His aftershock model looks like a missile that can shoot any remote, isolated place he likes (27). By this incorrect interpretation, he gave my prediction No. 10 a score of zero. Even if it were an aftershock prediction, its score would not be zero, if only because he gives it a probability of 89.3%, not 100%. He calls No. 1, 7, 10 and 20 “aftershocks”, while the USGS labels them the Newhall, San Fernando, off-coast-of-Oregon and San Fernando earthquakes respectively.
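The 375 km figure is easy to verify from the two epicenters with the haversine great-circle formula (a standard approximation; the function is mine, not from either party):

```python
import math

def distance_km(lat1, lon1, lat2, lon2):
    # great-circle distance between two epicenters via the haversine formula
    R = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2)
    return 2 * R * math.asin(math.sqrt(a))

# M7.1 off the coast of California (Sept. 1, 1994) vs.
# M6.3 off the coast of Oregon (Oct. 27, 1994)
d = distance_km(40.40, -125.68, 43.51, -127.42)
```

d comes out near 375 km, consistent with the isolation argument above.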


Moreover, Jones extended the aftershock probability even to earthquakes he did not claim as “aftershocks”, such as No. 2, 4~6, 8~9, 11~17 and 25, and even to those he claimed as “Not aftershock”, such as No. 18, 21, 22 and 24, artificially making the hits’ positive scores smaller and the misses’ negative scores bigger.


Correcting the extension of “aftershock” probability to non-aftershock predictions

Column Y corrects the use of the aftershock probability for quakes not claimed to be aftershocks. The Normalized Score z increases to 1.96 (Y59) and the Integrative Probability p decreases to 2.50% (Y60), smaller than 5%, Jones’ own significance threshold for publication.


Correcting the so-called “aftershock” predictions No. 1, 7, 10 and 20

Column AA corrects the 4 predictions that were incorrectly labeled aftershocks. The Normalized Score z increases to 2.46 (AA59) and the Integrative Probability p decreases to 0.69% (AA60), much smaller than the 5% publication threshold.


About Prediction 2001/03/20


Jones wrote, “It seems he made a prediction on 2001/03/20 and then send in a new prediction to replace it on 2001/03/21”. Our paper has two tables of predictions: Table 1, by the clouds, and Table 4, by geoeruptions. He forgot to review Table 4, whose No. 7 was the prediction of 2001/03/20.


Miss Analysis


Jones blames all misses on the precursor, but miss analysis shows their causes lie in satellite data problems, earthquake data problems and my lack of experience as a pioneer of this prediction method, as detailed in (28, 29).




Jones exaggerates two misses into five to attract attention. He replaces his own largest-magnitude rule with our average-magnitude rule to make No. 19 and No. 26 misses. He claims No. 26 a hit, but marks it a miss. He misclassifies 8 independent hits as dependent and takes them out; then he puts No. 32 back in as a big negative score based on a longitude error of 0.01 degree, ten times more precise than the 0.1-degree minimum error of the USGS. He extends “aftershock” probability to non-aftershock predictions to make the hits’ positive scores smaller and the misses’ negative scores bigger. He classifies “aftershocks” without a scientific definition; his unproved “aftershock” model looks like a missile that can shoot any remote, isolated place he likes. He calls No. 1, 7, 10 and 20 aftershocks, while the USGS classifies them as the Newhall, San Fernando, off-coast-of-Oregon and San Fernando earthquakes respectively. Furthermore, Jones blames all misses on the precursor, though miss analysis shows they are due to satellite data problems, earthquake data problems and my inexperience as a pioneer. Yet he calls his review “fair”.


By contrast, I adopt his largest magnitude and utopian “Peer on” to divide hits from misses. I decline No. 45 as a hit under his utopian “Peer on”. I tolerate No. 32 as a miss by an error of 0.01 degree under his utopian “Peer on”. I also tolerate No. 23 as a miss, despite the puzzle of why Bogazici Univ. withdrew the data and whether the USGS lost the data. I delete the 8 dependent predictions, although the fault of not stating his independence rule to the USGS is his. I also adopt his individual probabilities to calculate the scores.


Using Jones’ individual probabilities and rules, the likelihood p after correcting his artificial errors reaches 0.69%, far below his 5% threshold, even with the data and experience problems blamed on the precursor. This clearly demonstrates that my work is statistically significant and worthy of publication.



1.  Zhonghao Shou & Darrell Harrington. A study of earthquake prediction by atmosphere precursors. Manuscript for SRL, Oct. 20, 2002.

2.  Alan Jones. Aftershock & average vs. largest magnitude. Email, Feb. 1, 2003.

3.  Alan Jones. Peer on. Email, Jan. 30, 2003.

4.  Richard Jones & Alan Jones. Testing Skill in Earthquake Prediction. Manuscript, 1996.

5.  Brelsford, W.M. & Jones, R.H. Estimating Probabilities. Monthly Weather Review 95, 570-576 (1967).

6.  Alan Jones. 5% significance, independence. Review, Feb. 14, 2003.

7.  Susan Hough. Apart from prediction. Email.

8.  Alan Jones. Put review out. Email, Oct. 30, 2006.

9.  Alan Jones. Permission: Yes. Email, Oct. 30, 2006.

10.  Alan Jones. Up to you. Email, Nov. 9, 2006.

11.  Ara. Roger attack. Email, May 31, 2007.

12.  Roger Hunter. Why did not post. Email, Jun. 2, 2007.

13.  Alan Jones. I did. Email, Jun. 3, 2007.

14.  Alan Jones. You may. Email, Jun. 3, 2007.

15.  Zhonghao Shou. Two reasons. Email, Jun. 11, 2007.

16.  Alan Jones. “Fair review”. Email, Jun. 11, 2007.

17.  Alan Jones. Can’t show in Excel. Email, Nov. 3, 2006.

18.  Zhonghao Shou. Excel Correction. June 6, 2007.

19.  Alan Jones. Spreadsheet. Oct. 30, 2006.

20.  Answers. Normal table.

21.  Bogazici Univ. of Turkey. Data for No. 23.

22.  Alan Jones. 3 hits. Email, Nov. 7, 2006.

23.  The Northern California Earthquake Data Center of the USGS. NCDC data for No. 32.

24.  The Univ. of Nevada. Nevada data for No. 32.

25.  Alan Jones. Surprise. Email, Feb. 9, 2003.

26.  Zhonghao Shou. Two examples. Email, Feb. 10, 2003.

27.  Alan Jones’ missile aftershock model.

28.  Zhonghao Shou. Earthquake Vapor, a reliable precursor. Earthquake Prediction, 21-51 (ed. Mukherjee, Saumitra. Brill Academic Publisher, Leiden-Boston, 2006).

29.  Darrell Harrington & Zhonghao Shou. Bam Earthquake Prediction & Space Technology. Seminars of the United Nations Programme on Space Applications 16, 39-63 (2005).



