[ncdnhc-discuss] [long] The NCDNHC's .org report is numerically inconsistent.

Thomas Roessler roessler at does-not-exist.org
Wed Aug 21 01:25:34 CEST 2002


On 2002-08-20 17:58:35 -0400, Milton Mueller wrote:

>The p. 49 table is not sorted at all, as I said in my last letter.

You mean, like a couple of unrelated columns randomly thrown on  
paper?  I'm sorry, Milton, but I didn't expect _that_ kind of  
nonsense from that source.

>>I don't think so (I hope this makes it to you in reasonably 
>>readable form):

>>ISOC	3	3	5	5	3	5	2

>There's the mistake: the first number is a 2, in the ACTUAL table; 
>i.e. the Excel spreadsheet. You could have figured this out from  
>my last message. 

I finally found it.  It's in that appendix where all the things not  
intended for publication are collected.  One page has data  
apparently not used in the report, one page has garbage, one page  
has the actual data which readers should refer to (p. 48), and then  
there's another page of junk.

I don't blame you for this mess since it's apparently not your  
fault.  But, please, don't pretend that anyone outside your team can 
be expected to know what part of that annex is supposed to be taken  
seriously and what part is crap not destined for publication.

(Maybe you can get someone at ICANN to fix the PDF?)

>Yes, indeed. But ability to mobilize is real. If you cannot get  
>any people, or organizations, to spend half an hour writing a  
>letter and sending it in to ICANN, how much support can you really 
>claim? Try it some time, if you don't agree.

We have no disagreement about this.

>The alternative you implicitly offer is to divine how  
>"representative" of .org registrants the support expressed for  
>each bidder is. Please tell me how to do THAT objectively? 

I'm not saying it's something you can easily or realistically do.

>>Thus, this is, from the very beginning, the worst and most  
>>insignificant input you have.  

>Calm down. We were ASKED for this estimate and indeed it was a  
>part of the RFP that all bidders were asked to provide. If you  
>don't like the fact that the RFP included this criterion you  
>should have complained back in April or May, when it was  
>formulated. And I don't agree that it is insignificant, either. 

>The top tier bidders demonstrated widespread public support, the  
>others did not. There were real and significant differences.

Agreed.  In a very ill-defined sense, you certainly have 
"significant" results.

But as the different approaches and results you guys have produced  
indicate, these differences were hard to quantify - certainly harder 
than the differences in other areas of the report.  Ultimately, you  
don't even know what you are comparing: the effort each applicant  
made, or the success of similar-sized efforts?  The numbers you  
generate that way are necessarily weak - weaker than the ones from  
the other parts of the report.

That's why I maintain that it is an extremely bad idea to combine  
the scores across categories.
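
To make that concrete - with made-up numbers, nothing below is taken 
from your report or the spreadsheet - here is a small Python sketch  
of what happens when a rough estimate is simply added to reliably  
measured scores:

# Illustrative only: "Applicant A"/"Applicant B" and all scores are
# hypothetical.  Categories 1 and 2 are measured reliably; category 3
# (public support) is a rough estimate.
reliable = {"Applicant A": [5, 5], "Applicant B": [5, 4]}
support_estimate = {"Applicant A": 2, "Applicant B": 4}

for name in reliable:
    total = sum(reliable[name]) + support_estimate[name]
    print(name, "total:", total)

# Prints:
#   Applicant A total: 12
#   Applicant B total: 13
# B ends up ahead overall even though A leads or ties in every
# reliably measured category: the combined ranking is decided
# entirely by the one category whose numbers are the weakest.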

>>To make it still worse, you have to _estimate_ the number of  
>>ISOC class B responses on page 23, because you can't reasonably  
>>make the distinction between class As and class Bs for this  
>>application (which may indicate that the distinction was the  
>>wrong approach to solve this particular problem).

>We can and do make a clear distinction between Class A's and B's.

Re-read what I wrote.  I said that you can't make that distinction  
IN THIS CASE.  Of course you have a nice theoretical definition of  
that distinction. But when you don't have the necessary input, it's  
useless.

>What we can't tell from the evidence is whether the people who  
>added their name to the Class B list were ISOC members or not.  Of 
>course, we could have thrown out all the endorsements for ISOC. Or 
>we could have counted all 500 of them.  Do you think either would  
>have been fair or accurate? 

Is it fair or accurate to just make up a number in the absence of  
clear data?  The correct approach would be to keep the scores for  
the different categories separate, and to leave this particular  
score open for this particular application, with an appropriate  
explanation.

-- 
Thomas Roessler                        <roessler at does-not-exist.org>


