drsmooth wrote:
Shore wrote:
Isn't 21 out of 150 14%? They tested 150 people, and 21 tested positive, and 70 skipped the test. Assuming the 70 were users, then 61% of the referred people would have tested positive.
I mean, it sounds like it's a non-issue anyway, and speaks only to the accuracy of "screening". But why misrepresent the data?
Gotta look at the positives over the screened population - 7,600 people. Screen/test/reject positives is, as I understand it, the way the testing protocol works.
So I should have said something like "...if all of those tested positive, the applicant group's positive rate..."
I don't mean you, I mean the article. Just seems weird. If they identified 150 people out of 7,600 to get tested, and 21 of those were positive and 70 skipped, their screening method is pretty damned good. But it's also irrelevant, since that's a low number relative to the applicant population.
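For anyone following along, here's a quick sketch of the arithmetic being tossed around, assuming the figures cited in this thread (7,600 applicants screened, 150 referred for testing, 21 positives, 70 no-shows); the labels are just for illustration, not the article's own terms.

```python
# Rates implied by the figures cited in this thread (assumed, not taken from the article itself):
# 7,600 applicants screened, 150 referred for a drug test, 21 positive, 70 skipped the test.

applicants = 7600   # total applicant pool that went through screening
referred = 150      # flagged by screening and referred for testing
positives = 21      # actually tested positive
no_shows = 70       # skipped the test (worst case: count them as users)

print(f"Positive rate among those referred:  {positives / referred:.1%}")                 # ~14.0%
print(f"...counting no-shows as users:       {(positives + no_shows) / referred:.1%}")    # ~60.7%
print(f"Positive rate over the whole pool:   {positives / applicants:.2%}")               # ~0.28%
print(f"...counting no-shows as users:       {(positives + no_shows) / applicants:.2%}")  # ~1.20%
print(f"Share of applicants referred at all: {referred / applicants:.1%}")                # ~2.0%
```

Either way, the disagreement comes down to which denominator gets used: the 150 referred for testing or the 7,600 who applied.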