Aziz et al. comment on the problematic stratification of patients according to their mutant allele and on the use of the Pearson correlation coefficient to assess the relationship between the normal allele and age of onset. The confusion may arise from the apparent contradiction between the results of the Cox regression, in which the normal allele was found to be statistically significant, and the subsequent analyses, in which we claim that the normal allele is not a useful predictor of age of onset.
This was caused by a typographical error; the regression coefficient from the Cox regression for the normal allele is 0.02 (P = .16), not 0.15 (P < .0001). There is also a typographical error for the mutant allele in the Cox regression, where the regression coefficient is 0.20 rather than 0.15.
As we pointed out in our article, we replicated the analyses of Aziz et al. (multiple regression with the natural logarithm of age of onset as the dependent variable). Admittedly, dividing patients into subgroups according to the mutant allele might result in a loss of power to detect the significance of correlations.
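For readers who wish to reproduce this type of analysis, the following is a minimal sketch, not our actual analysis code; the simulated data and the variable names (mutant, normal, ln_onset) are illustrative assumptions only. It fits the multiple regression described above and then shows how stratifying by mutant-allele length shrinks the subgroups available for within-stratum correlations.

```python
# Sketch of the replicated analysis: OLS regression of ln(age of onset)
# on both CAG alleles, followed by within-stratum correlations.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
mutant = rng.integers(40, 55, n)   # hypothetical mutant-allele CAG lengths
normal = rng.integers(15, 30, n)   # hypothetical normal-allele CAG lengths
# Simulated onset: driven mainly by the mutant allele, plus noise.
ln_onset = 5.5 - 0.04 * mutant + 0.001 * normal + rng.normal(0, 0.15, n)

X = sm.add_constant(pd.DataFrame({"mutant": mutant, "normal": normal}))
print(sm.OLS(ln_onset, X).fit().summary())

# Stratifying by mutant-allele length leaves small subgroups, with
# correspondingly little power to detect a normal-allele correlation.
df = pd.DataFrame({"mutant": mutant, "normal": normal, "ln_onset": ln_onset})
for cag, grp in df.groupby("mutant"):
    if len(grp) >= 3:
        r = np.corrcoef(grp["normal"], grp["ln_onset"])[0, 1]
        print(f"CAG={cag}: n={len(grp)}, r={r:.2f}")
```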
Nevertheless, a sample size of about 20 subjects is usually considered sufficient to estimate a bivariate relationship, although not to establish its statistical significance. We pointed out that Table 1 in the original article should be interpreted with caution for this reason.
The issue of sample size would become important only if the point estimates of the correlations were high enough to have any practical implications (say, 0.3 or higher). This, however, is not the case, as can be seen from Table 1 in the original article.
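To make the sample-size point concrete, the following back-of-the-envelope calculation, which is our illustration rather than an analysis from the article, approximates the power of a two-sided test of a true correlation of 0.3 with n = 20 using the standard Fisher z approximation.

```python
# Approximate power of a two-sided test of H0: rho = 0.
import math
from scipy.stats import norm

def correlation_power(r, n, alpha=0.05):
    z_r = math.atanh(r)            # Fisher z-transform of the true correlation
    se = 1.0 / math.sqrt(n - 3)    # standard error of z under the approximation
    z_crit = norm.ppf(1 - alpha / 2)
    # Probability that the observed |z| exceeds the critical value.
    return norm.sf(z_crit - z_r / se) + norm.cdf(-z_crit - z_r / se)

print(correlation_power(0.3, 20))   # roughly 0.25: severely underpowered
print(correlation_power(0.3, 85))   # roughly 0.80: the usual benchmark
```

As the calculation suggests, detecting a true correlation of 0.3 with 80% power requires on the order of 85 subjects, so subgroups of about 20 can at best estimate such a correlation, not confirm its significance.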
The rationale for using Pearson correlations in our article is that CAG repeat length is measured on a continuous scale and is not a rank-order measure, as is assumed by the Spearman correlation. On the other hand, we agree that one implication of the Spearman correlation is that it can accommodate a small amount of nonlinearity, provided that the nonlinearity is monotone.
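A small illustration of this last point, on assumed data rather than data from the article: Pearson measures linear association only, whereas Spearman is invariant under any monotone transformation of the variables.

```python
# Monotone but nonlinear relationship: Pearson is attenuated by the
# curvature, while Spearman stays near 1 because rank order is preserved.
import numpy as np
from scipy.stats import pearsonr, spearmanr

rng = np.random.default_rng(1)
x = rng.uniform(0, 3, 100)
y = np.exp(2 * x) + rng.normal(0, 1.0, 100)   # monotone, strongly nonlinear

print(f"Pearson  r   = {pearsonr(x, y)[0]:.2f}")   # pulled below 1 by curvature
print(f"Spearman rho = {spearmanr(x, y)[0]:.2f}")  # close to 1
```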