The problem is not so much the 19/26. That one is easy to deal with, because X/n is approximately normally distributed when the sample size is not small.
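To make that concrete, here is a quick sketch of the standard normal-approximation (Wald) interval for the 19/26 arm. The 95% level and the 1.96 multiplier are my own illustrative choices, not from the post:

```python
import math

# Wald (normal-approximation) confidence interval for a proportion.
# With x = 19 survivors out of n = 26, p_hat = x/n and the standard
# error is sqrt(p_hat * (1 - p_hat) / n).
x, n = 19, 26
p_hat = x / n
se = math.sqrt(p_hat * (1 - p_hat) / n)

# 1.96 is the two-sided 95% normal quantile.
lo, hi = p_hat - 1.96 * se, p_hat + 1.96 * se
print(f"estimate {p_hat:.3f}, 95% CI ({lo:.3f}, {hi:.3f})")
# -> estimate 0.731, 95% CI (0.560, 0.901)
```

Note that this same formula is exactly what breaks down at 13/13: p_hat = 1 makes the standard error zero, collapsing the interval to a single point.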
The problem is the 13/13. The estimated survival rate would be 100%, which doesn't leave any room for uncertainty. The trouble with that is that anything other than 100% survival in the other group is totally incompatible with 100% survival. A stupid solution, but one that fits the classical MLE logic, would be something like this: 19/26 is definitely not 100% => p-value = 0; likewise, 25/26 is definitely not 100% => p-value = 0. But that's silly and defies common sense. Although the formula gives a variance of 0 and 100% certainty that survival in the treatment group is 100% no matter what, we know that's utter nonsense.

You can't extract meaning out of the 13/13 unless you can impute some uncertainty about what true population proportion the 13/13 sample might be representing. It could be 100%, sure. But if you ran the trials again, are you 100% sure there would be 100% survival? What if you ran them again and again and again? Tough to tell. We can be pretty sure that the true survival rate after testing a crodzillion patients would not be 5% or 10%; if it were that low, there's no way we'd see 13/13 in our first trial. It could be 75% and we just got lucky in the trials, but more likely it is something like 90% or more.

Quantifying that distribution of probabilities is not a part of classical statistics but is the heart of Bayesian statistics. And there are completely reasonable ways to assign those probabilities and construct an estimator that gives you the same answer as Wolfram Alpha or the logistic regression when x is not 0 or n, but also provides defensible answers when x is 0 or n.
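Here is one minimal sketch of that Bayesian idea for the 13/13 case. I'm assuming a uniform Beta(1, 1) prior on the survival rate purely for illustration (Jeffreys or other priors are equally defensible). The posterior is then Beta(x + 1, n - x + 1), and in the all-survived case x = n the posterior CDF has the closed form F(p) = p^(n+1), so no special libraries are needed:

```python
def posterior_mean(x, n):
    """Posterior mean of the survival rate under a uniform Beta(1,1) prior.

    This is Laplace's rule of succession: (x + 1) / (n + 2), which pulls
    the naive estimate x/n away from the impossible-certainty endpoints.
    """
    return (x + 1) / (n + 2)


def lower_credible_bound(x, n, level=0.95):
    """Lower end of a one-sided credible interval for the all-survived case.

    With x == n the posterior is Beta(n + 1, 1), whose CDF is F(p) = p**(n+1),
    so the bound solves p**(n + 1) = 1 - level in closed form.
    """
    assert x == n, "closed form only valid when every patient survived"
    return (1 - level) ** (1 / (n + 1))


x, n = 13, 13
print(f"point estimate: {posterior_mean(x, n):.3f}")        # ~0.933, not 1.0
print(f"95% lower bound: {lower_credible_bound(x, n):.3f}")  # ~0.807
```

So instead of "100% survival, with certainty," the 13/13 becomes "probably around 93%, and we can be 95% sure it's above roughly 81%," which matches the common-sense reading above: almost certainly not 5-10%, plausibly 75%, most likely 90% or more.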