Julian and I have been asked to put together some unit and functional test coverage figures. We've not actually been given any time in which to do it, though, so I wouldn't hold your breath.
Looks like some work was put into using Emma to provide these figures, but it's not finished.
OTOH, I think you have to be a bit careful with metrics like the ones these coverage tools provide. Ned Batchelder himself points out many of the problems with taking your coverage figures at face value (though Emma's block-level approach fixes some of them).
I'd add one more issue - gaming. People can start to see good coverage figures as an end in themselves. If the coverage isn't at, oh, say, 80%, the tests must be crap; if it's at 100%, they must be great. It's not just that the figures can be meaningless; they can actually lead people astray.
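To see how a perfect coverage figure can mislead, here's a made-up sketch (the `discount` function and its test are entirely hypothetical, not from any real project). The test executes every line, so a line- or statement-coverage tool reports 100% - but it asserts nothing, so the bug survives untouched:

```python
# Hypothetical example: 100% line coverage, zero verification.

def discount(price, percent):
    """Return price reduced by the given percentage."""
    return price * (100 - percent) / 10  # bug: should divide by 100, not 10

def test_discount():
    # This runs every line of discount(), so coverage is "perfect"...
    discount(200, 25)
    # ...but with no assertion, the wrong answer (1500.0 instead of
    # 150.0) is never noticed.

test_discount()  # "passes", coverage reads 100%, bug still there
```

This is exactly the kind of test a mutation-testing tool would flag: mutate the arithmetic and the test still passes, which tells you it isn't really testing anything.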
Now, for an approach to coverage testing that actually checks that your tests verify your code is doing the right thing, check out Jester & Pester.

Posted to Testing by Simon Brunning at November 01, 2007 06:26 PM