Dear Antonio,
Datamonkey reports one of two things:
1) For SLAC: dN and dS (and their ratio) computed relative to the neutral expectation. This is done by reconstructing the evolutionary history of the sample (using an ML method).
2) For FEL and REL: it effectively reports the omega value (in Nielsen-Yang terms), i.e. the ratio of non-synonymous to synonymous substitution rates.
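To make the synonymous/non-synonymous distinction behind these quantities concrete, here is a minimal Python sketch (an illustration only, not Datamonkey's or HyPhy's actual code; the function name is made up) that classifies a single-nucleotide codon change using the standard genetic code:

```python
# Standard genetic code, with codons ordered TTT, TTC, TTA, TTG, TCT, ...
BASES = "TCAG"
AA = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
CODON_TABLE = {b1 + b2 + b3: aa
               for aa, (b1, b2, b3) in zip(
                   AA, ((x, y, z) for x in BASES for y in BASES for z in BASES))}

def substitution_type(codon_from, codon_to):
    """Classify a one-nucleotide codon change (illustrative helper)."""
    assert sum(a != b for a, b in zip(codon_from, codon_to)) == 1
    return ("synonymous"
            if CODON_TABLE[codon_from] == CODON_TABLE[codon_to]
            else "non-synonymous")

print(substitution_type("TTT", "TTC"))  # Phe -> Phe: synonymous
print(substitution_type("TTT", "TTA"))  # Phe -> Leu: non-synonymous
```

Omega is then the rate of the second kind of change relative to the first; values above 1 suggest diversifying selection, below 1 purifying selection.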
For likelihood models, in my opinion, one should use the dN/dS rate ratio to estimate selective pressure. Codon models already report either a global dN/dS (if there is no site-to-site variation) or site-by-site values.
If I recall correctly, the Nei-Gojobori distance measure is only properly defined for two sequences anyway. Take a peek at an old paper by Spencer Muse about some of the issues involved with the classic dN and dS estimators.
Effectively, you can now estimate E[syn subs] and E[non-syn subs] PER CODON, but you want to normalize them by the expected substitution counts if your sequences were evolving neutrally (hence the synonymous and non-synonymous site counts), i.e. if all substitutions out of a codon were equiprobable. There are many ways to do this (some are worse than others), but I am not sure which one you had in mind.
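One simple version of that normalization, just as a sketch (again not Datamonkey's code; it uses the convention of skipping changes to stop codons, which is only one of several options), counts the expected synonymous and non-synonymous sites of a codon under equiprobable one-nucleotide changes:

```python
# Standard genetic code, codons ordered TTT, TTC, TTA, TTG, TCT, ...
BASES = "TCAG"
AA = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
CODON_TABLE = {b1 + b2 + b3: aa
               for aa, (b1, b2, b3) in zip(
                   AA, ((x, y, z) for x in BASES for y in BASES for z in BASES))}

def neutral_site_counts(codon):
    """Expected synonymous / non-synonymous sites for one codon,
    assuming every one-nucleotide change out of it is equiprobable.
    Changes to stop codons are skipped (one common convention)."""
    syn = nonsyn = 0.0
    for pos in range(3):
        for alt in BASES:
            if alt == codon[pos]:
                continue
            mutant = codon[:pos] + alt + codon[pos + 1:]
            if CODON_TABLE[mutant] == "*":
                continue
            if CODON_TABLE[mutant] == CODON_TABLE[codon]:
                syn += 1 / 3   # each position contributes one site total
            else:
                nonsyn += 1 / 3
    return syn, nonsyn

# TTT (Phe): only TTT -> TTC is synonymous, so ~1/3 syn site, ~8/3 non-syn
print(neutral_site_counts("TTT"))
```

Dividing the observed (or expected) substitution counts by these site counts gives per-site rates, whose ratio is the normalized dN/dS.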
Cheers,
Sergei