After all, it's not difficult to run a transparent proxy. Many ISPs have been doing it for years to keep bandwidth costs to a minimum. The early Freeserve network ran on top of a huge transparent caching proxy built on a set of NetApp Filers.
In a well-designed transparent proxy everything is handled at a low level in the network stack, so IP packets traverse the proxy without the source address being changed. Request something that's cached at the proxy, or blocked by it, and the appropriate response and content are delivered to the originating network. Request something that isn't in the cache, and packets traverse the proxy as if it weren't there, adding only a few microseconds of latency. The source IP address is preserved, and data routes back to it as normal.
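On Linux, this kind of interception is usually built from TPROXY rules in the mangle table plus a proxy that understands them. The sketch below is a generic recipe, assuming a Squid instance listening on port 3129 with `http_port 3129 tproxy` configured; it is an illustration of the technique, not anything the IWF or the ISPs have published.

```shell
# Divert port 80 traffic to a local TPROXY-aware proxy without
# rewriting the packets' source addresses (assumes Squid on port 3129).

# Packets belonging to an existing proxy socket get marked and accepted.
iptables -t mangle -N DIVERT
iptables -t mangle -A DIVERT -j MARK --set-mark 1
iptables -t mangle -A DIVERT -j ACCEPT
iptables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT

# New port-80 connections are handed to the proxy via the TPROXY target.
iptables -t mangle -A PREROUTING -p tcp --dport 80 -j TPROXY \
    --tproxy-mark 0x1/0x1 --on-port 3129

# Marked packets are routed to the loopback interface so the local
# proxy can receive them despite the foreign destination address.
ip rule add fwmark 1 lookup 100
ip route add local 0.0.0.0/0 dev lo table 100
```

Because the packets are never NATed, the proxy sees (and can spoof back from) the client's real source address, which is exactly the property the rest of this piece turns on.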
If the IWF proxies had been running transparently then Wikipedia editors would have been able to carry on working on the site, probably getting a 403 or similar error for the blocked image and putting it down to problems somewhere at Wikipedia. No one would have noticed, and if anything there would have been a storm in a teacup rather than a full-blown censorship witch-hunt.
I'm a firm believer in cock-up over conspiracy, and suspect it actually comes down to the folk at the IWF being cash-strapped and having to cobble together their filter out of the Internet engineering equivalent of sticky-backed paper, cardboard, and string. Transparent proxying is more processor-intensive than rewriting the packets, and it costs a fair bit more to implement. If the IWF just put together a cheap solution, then the events of the last few days were inevitable. Somewhere down the line their proxies would have triggered some site's defences.
In this case it was Wikipedia's anti-vandalism tools that caught the proxying - and the folk at the IWF must be feeling very relieved. What if it had been Amazon's anti-fraud software that had triggered, locking most of the UK Internet out of one of the largest ecommerce sites just before Christmas? I have a feeling that scenario could have ended up very, very expensive indeed.
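The failure mode is easy to simulate. A rewriting proxy collapses thousands of distinct users onto one source address, so any per-IP abuse counter on the far side fires. Here is a minimal sketch in Python; the threshold, IP addresses, and function names are my own invention, not Wikipedia's actual tooling.

```python
from collections import Counter

EDITS_PER_IP_LIMIT = 30  # hypothetical per-IP threshold


def flagged_ips(edit_log, limit=EDITS_PER_IP_LIMIT):
    """Return source IPs whose edit count exceeds the per-IP limit."""
    counts = Counter(ip for ip, _user in edit_log)
    return {ip for ip, n in counts.items() if n > limit}


# 1,000 distinct users, each making a single edit.
users = [f"user{i}" for i in range(1000)]

# Transparent proxy: each user's own source IP survives, so per-IP
# counts stay at one and nothing trips.
transparent = [(f"10.0.{i // 256}.{i % 256}", u) for i, u in enumerate(users)]

# Rewriting proxy: every request appears to come from the proxy's
# single IP, so that one address blows through the limit.
rewritten = [("194.72.0.1", u) for u in users]

print(flagged_ips(transparent))  # set() - no address over the limit
print(flagged_ips(rewritten))    # {'194.72.0.1'} - the whole ISP flagged
```

The same logic applies whether the counter is tracking vandalism, fraud, or plain rate limiting: once the proxy rewrites source addresses, the defence can only block the proxy, taking everyone behind it down at once.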
I misread the architecture diagram at the Guardian. The IWF just creates the list of "bad" sites and feeds it to ISPs who then implement the filters.
Even so, my point remains the same. Someone screwed up with their proxy implementation, most likely as a result of having to do things on the cheap. It will be interesting to watch just what happens to the proxies currently in place. I suspect they will very quickly become transparent...