At my new client site, everyone has a picture of themselves to stick up on the story wall to show what we are working on. Here's mine:
What do you think?
Given that I'm speaking on Python mocking shortly, I thought I'd better dig through my accumulated bookmarks and see what I'm missing.
PyMock is what I've been using up till now. It's heavily inspired by EasyMock, a mocking style I'm used to. It's pretty good, really. The docs could be better, and the failure messages aren't very helpful, but it does the job.
But then I found Mox. Superficially, it's very similar to PyMock. It's also based on EasyMock. But wherever Mox does differ from PyMock, it's better.
It's much better documented than PyMock, and has more meaningful, helpful failure messages. I think it has more functionality, too, with some nice comparators, side effects and callbacks.
StubOutWithMock() is nice too. But it's possible that PyMock has all of this - given the documentation, it's hard to tell.
Mox will warn you if you call a method on a mock that the mocked class doesn't have, which is handy. It can get confused if your mocked class uses delegation, but you can always fall back to MockAnything(). The mailing list is small but helpful.
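For anyone who's never met the EasyMock style, the record/replay cycle that PyMock and Mox both use can be sketched in a few lines of plain Python. This is a toy of my own devising, not the Mox API - all the names below are invented:

```python
class RecordReplayMock:
    """A toy EasyMock-style mock: record expected calls, then replay them."""

    def __init__(self):
        self._expected = []     # (method, args) pairs recorded before replay
        self._returns = {}      # canned return values per recorded call
        self._replaying = False

    def __getattr__(self, name):
        def call(*args):
            if not self._replaying:
                # record mode: remember the call, allow .and_return() chaining
                self._expected.append((name, args))
                return self
            # replay mode: calls must arrive in the recorded order
            expected = self._expected.pop(0)
            assert (name, args) == expected, \
                "Expected %r, got %r" % (expected, (name, args))
            return self._returns.get((name, args))
        return call

    def and_return(self, value):
        # attach a return value to the most recently recorded call
        self._returns[self._expected[-1]] = value
        return self

    def replay(self):
        self._replaying = True

    def verify(self):
        assert not self._expected, "Unmet expectations: %r" % self._expected

# record, replay, verify
mock = RecordReplayMock()
mock.fetch_story('s1').and_return('A headline')
mock.replay()
assert mock.fetch_story('s1') == 'A headline'
mock.verify()
```

Real libraries add argument comparators, call-count ranges and decent failure messages on top, but the basic shape is just this.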
On the downside, Mox isn't in PyPI, which is a shame. And they missed a trick not calling the documentation MoxDox. ;-)
There are many other Python mocking packages out there, too - notably pMock, a jMock-style mocking library; Mock; and minimock, a mocking library for the doctest unit test library.
In early development there's Mockito For Python, a port of Mockito, which will be worth keeping an eye on if Szczepan ever gets the hang of Python. ;-)
As an aside - I think we need Spamcrest!
Update: Seems there's already a Hamcrest for Python - source here.
Also - I forgot one other problem with Mox - the naming standards. One of my biggest early stumbling blocks was that my brain refused to see the initial capital on the AndReturn() method, and I couldn't get anything working. It's Google's internal naming standard, I gather. Yuck. I feel a patch with PEP 8 style synonyms coming on.
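If that patch ever happens, the aliasing itself is trivial - something like the helper below. AndReturn() is real Mox naming, but the Expectation class here is a made-up stand-in, not a real Mox class:

```python
import re

def add_pep8_synonyms(cls):
    """Give each CamelCase method on cls a snake_case alias, in place."""
    for name, attr in list(vars(cls).items()):
        if callable(attr) and name[:1].isupper():
            # AndReturn -> and_return, StubOutWithMock -> stub_out_with_mock
            snake = re.sub(r'(?<!^)([A-Z])', r'_\1', name).lower()
            setattr(cls, snake, attr)
    return cls

class Expectation:                  # hypothetical stand-in for a Mox expectation
    def AndReturn(self, value):
        self.value = value
        return self

add_pep8_synonyms(Expectation)
```

After which `Expectation().and_return(42)` works just as well as `Expectation().AndReturn(42)` - and my brain gets to keep its PEP 8 blinkers on.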
Revision 40,000 went into our repository yesterday. Surely that's enough? We can stop now, can't we?
Busy week - XTC today and London Python tomorrow.
What if powerful languages and idioms only work for small teams? (via Small teams and big jobs) is interesting. We are running a really large Agile project here at GU; what, 60 odd people? Mention any of the road bumps you hit, and people tell you your team's too large.
That's no good. The team, and the project, are the size they need to be. Splitting them arbitrarily might be possible, but it would carry huge costs of its own. The challenge is to make agile work with a big team. And on the whole, I think we are.
Interestingly, despite the fact that we've made agile methodologies scale up, some people here are still nervous about whether agile languages can do the same. Plus ça change...
Fuzzyman has a new mocking library for Python, which he presents in Mocking, Patching, Stubbing: all that Stuff.
Michael takes issue with Martin Fowler's Mocks Aren't Stubs; specifically where he defines mocks as objects pre-programmed with expectations which form a specification of the calls they are expected to receive. Michael's mocks are not pre-programmed with expectations - his expectations are defined after the test execution.
Now, to me, this is a trivial distinction - the important difference between a stub and a mock is in the existence of expectations. Whether the expectations are defined before or after the test execution is not crucial - it's still a mock.
It does matter in terms of usability, of course. It feels more natural to Michael to define his expectations after the test object is exercised, along with his assertions. For me, I have to say that defining the mock object's behaviors all in one place makes sense, so both the expected method calls and any return values should be defined together - and the return values have to be pre-defined. We are using EasyMock here at GU (along with the Hamcrest constraints library) and I like it just fine. But that's more a matter of taste than anything else.
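Michael's style - record everything, assert afterwards - is easy to sketch too. RecordingFake here is my own invention, not his library's API:

```python
class RecordingFake:
    """Records every call made on it; you assert about the calls afterwards."""

    def __init__(self, returns=None):
        self.calls = []                 # every (method, args) seen, in order
        self._returns = returns or {}   # optional canned return values

    def __getattr__(self, name):
        def method(*args):
            self.calls.append((name, args))
            return self._returns.get(name)
        return method

# exercise first...
feed = RecordingFake(returns={'fetch': '<rss/>'})
assert feed.fetch('http://example.com/feed') == '<rss/>'

# ...then state the expectation afterwards, next to the other assertions
assert ('fetch', ('http://example.com/feed',)) in feed.calls
```

Same expectations, same verification - the only thing that moves is where in the test you write them down.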
All this refactoring is all very well, Martin, but there is a cost. When a piece of code that you are currently working on gets refactored out from under you, you can end up with the merge from hell. I've just spent two hours merging a perfectly sensible set of refactorings into a change I'm making. Methods were renamed and moved, all to better names and places, but the changes were well beyond what Subversion can deal with automatically.
What's the solution to this? I have no idea. ;-) Certainly the refactoring of code that is hard to understand or maintain is necessary if you are to keep your code-base workable.
One piece of advice is to check in little and often. This doesn't make the issue go away, but does limit its impact.
The other thing is to bear in mind the potential cost of the refactoring that you are doing. Renaming that isPlayable() method to isVideoPlayable() might not be such a good idea, even if it is a better name, if it's going to cost someone hours of work.
Brunning's 1st Law of Source Control: He who checks in first, merges least.
"Source control ate my files!" is a superb post. Spot on - 9 times out of 10, when someone complains about Subversion (or whatever) screwing things up, it can be traced back to fear of updates or commits, or to someone blatting someone else's changes with a blind merge. This last, especially, can always be traced - there's no hiding the truth when history is an open book.
The other time, it's someone trying to revert a revision from a dirty working copy. I've never yet seen the software get it wrong.
But for the love of God, Darren, start running a continuous integration server already!
Julian and I have been asked to put together some unit and functional test coverage figures. We've not actually been given any time in which to do it, though, so I wouldn't hold your breath.
Looks like some work was put into using Emma to provide these figures, but it's not finished.
Over in the Python world, there's been some discussion over on c.l.py about Ned Batchelder's coverage.py, which looks like a fairly nifty module.
OTOH, I think you have to be a bit careful with metrics such as those provided by these coverage tools. Ned himself points out many of the problems with taking your coverage figures at face value (though Emma's block-level approach fixes some of them).
I'd add one more issue - gaming. People can start to see good coverage figures as an end in and of themselves. If the coverage isn't at, oh, say 80%, the tests must be crap. If they are at 100%, they must be great. It's not just that the figures can be meaningless, they can actually lead people astray.
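You can demonstrate the gaming problem with nothing but the standard library. Here's a hand-rolled line tracer (sys.settrace, not coverage.py - and is_playable() is an invented example) showing a function reach 100% line coverage from a "test" that asserts nothing at all:

```python
import sys

def is_playable(fmt):                  # invented example function
    if fmt in ('mp4', 'mov'):
        return True
    return False

covered = set()

def tracer(frame, event, arg):
    # note every source line executed inside is_playable()
    if event == 'line' and frame.f_code.co_name == 'is_playable':
        covered.add(frame.f_lineno)
    return tracer

sys.settrace(tracer)
is_playable('mp4')                     # drives the True branch
is_playable('avi')                     # drives the False branch
sys.settrace(None)

# Every executable line of is_playable() has now run - 100% line
# coverage - yet not a single assertion was made about what it returned.
```

A coverage tool would give that "test suite" full marks, and it proves precisely nothing about the code's behaviour.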
Now, for an approach to coverage testing that actually makes sure you are testing that your code does the right thing, check out Jester & Pester.
I took yesterday afternoon off to nip across to Reading for a parents' evening, missing a retrospective. This morning, I find I'm giving a breakfast brown bag on news story packages (a set of interesting if complex user stories that I've been working on).
I don't think "bastards" is too strong a word.
Bugger. Growl doesn't have an option to wake your Mac from a screen saver, even on a per-app or per-event basis - and it isn't going to get one. I have to say, I don't agree that it would be that evil an option, provided that it wasn't the default. People wouldn't have to turn it on if they didn't want to, after all.
I'm quite prepared to believe that it would be a bugger to implement, though.
Why do I want it? Well, I'm currently Build Whip here at GU, and ccmenu is an essential tool in my armory. It would be really nice if I could arrange to be told about a broken build even if my screen saver has kicked in. I can make it play a sound, I suppose, but it's not really the same.
What's a Build Whip? Well, we have a big old team here, twenty-odd pairs, and we are not allowed to check in on red. Someone needs to make sure that the build gets fixed ASAP, and it's my job to make sure that happens.
Still none the wiser? OK, well, we practice something called Continuous Integration. When we are happy with a bit of code we've written, we put it into a single shared place, the code repository. Whenever the code in the repository changes, one of our server machines automatically bursts into life, and compiles all the code (to make sure that it's valid), then runs all our automated tests (hopefully demonstrating that it's bug free).
If this all works, the build is green. If not, it's red, and there's always some way that the entire team is informed of this. At my last site, we had a lava lamp. At GU, we have a big-ass plasma TV showing all sorts of stuff, and ccmenu or ccTray.
If the build is red, it needs to be fixed ASAP. Fixing the build can be enormously complicated by other people putting further changes into the repository while you are working on it, so that's not allowed. Since no one can check in (i.e. put code changes into the repository) while the build is red, it's even more important that the build is fixed ASAP. ASAPer? ASAPest? Whatever. So you have a build whip, whose job it is either to fix the build, or more often to track down the pair whose change was responsible for breaking the build, and make them fix it.
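For the terminally curious, the whole check-in-on-red rule can be modelled in a few lines. All the names here are invented, and a real CI server is rather more involved - this is just the shape of the thing:

```python
class Build:
    """A toy model of a CI build with the no-check-ins-on-red rule."""

    def __init__(self):
        self.status = 'green'
        self.changes = []       # True if a change's tests pass, else False

    def check_in(self, tests_pass):
        if self.status == 'red':
            raise RuntimeError("Build is red - no check-ins until it's fixed!")
        self.changes.append(tests_pass)
        self._run()

    def _run(self):
        # 'compile everything and run all the automated tests'
        self.status = 'green' if all(self.changes) else 'red'

    def fix(self):
        # the guilty pair repairs their change, then the build runs again
        self.changes = [True for _ in self.changes]
        self._run()

build = Build()
build.check_in(True)        # fine, stays green
build.check_in(False)       # breaks the build - red
try:
    build.check_in(True)    # everyone else is blocked...
except RuntimeError:
    pass
build.fix()                 # ...until the build whip gets it fixed
build.check_in(True)        # green again, check-ins flow
```

The point of the rule is visible right there: the longer the build stays red, the more work piles up behind it, which is why the whip's job exists.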
More on this later. Time to go shopping.
Here's one nice practice that we have here at GU - whenever the QAs sign off a story, they take a bell over to the devs who implemented it. The devs ring it, and the room applauds.
It's not just nice for the people ringing the bell - it gives the whole team a feeling of progress.