Based on Table 1: this method is actually worse than drawing a random number (uniform over 0-100%, independent of the program) and flagging whenever it is below 98.8%. That baseline would achieve a better detection rate without increasing the false positive rate.
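To make the baseline concrete, here is a quick sanity-check simulation (the labels are synthetic and the 0.988 threshold is just the FPR from Table 1, so this only illustrates the arithmetic, not their method):

    import random

    random.seed(0)
    N = 100_000

    def random_baseline(_sample, rate=0.988):
        # Flag with a fixed probability, ignoring the input entirely.
        return random.random() < rate

    # Synthetic ground truth: N positives and N negatives.
    detection_rate = sum(random_baseline(x) for x in range(N)) / N
    false_positive_rate = sum(random_baseline(x) for x in range(N)) / N
    print(f"detection ~= {detection_rate:.3f}, FPR ~= {false_positive_rate:.3f}")
    # Both land near 0.988, so a method with a 98.8% FPR would need a
    # detection rate above 98.8% just to beat input-independent coin flipping.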
It doesn't seem worth trying to follow the math to see whether there is anything interesting underneath.
Interesting direction, but the 98.8% FPR in Table 1 seems like a dealbreaker. Does anyone understand what's going on with the contradiction between the text and the tables?
> Empirically, CTVP attains very good detection rates with reliable false positives
A novel use of the word "reliable"? Jokes aside, either they define FPR as the opposite of what you'd expect, the table isn't representative of their approach, or they're just... really optimistic.