US Healthcare ‘debate’

Having used both the US and the UK healthcare systems, and getting increasingly fed up by the misrepresentations by some in the US about the NHS, I write in favour of universal healthcare.

Apologies as this is quite off topic for this blog: I’m going to justify it under the “travel” tagline as I was unlucky enough to break my wrist while in the USA, and to require surgery.

The US healthcare debate appears to be descending further into farce by the day; the latest, well-covered episode is the laughable assertion that “Stephen Hawking would not be treated under the UK system as the cost-benefit analysis doesn’t stack up”. Aside from the obvious fact that he was, is and will continue to be treated by the NHS (and has since made a statement about his NHS care), this comparison is especially bizarre given that the Obama plan is not modelled on the NHS. The other mistake is the laughable misrepresentation of the role of the National Institute for Health & Clinical Excellence (NICE).

Sarah Palin has been citing “death committees” that will somehow sanction treatment, and NICE was described as one of those. The role of NICE is to assess treatments, new and old, and to recommend whether they should be offered on the NHS. It doesn’t assess things per patient, but it does assess the cost-benefit of introducing a drug. Put simply: a drug that costs 5% more than existing treatments but provides a terminal patient with 30% additional quality-adjusted life tends to be approved, while a new treatment that costs 500% more but provides limited improvement tends to be rejected.
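
That cost-benefit sum is essentially an incremental cost-effectiveness ratio: the extra cost per extra quality-adjusted life year (QALY). A minimal sketch with invented figures mirroring the 5%/30% and 500% examples above (for context, NICE’s threshold has historically been cited as roughly £20,000–£30,000 per QALY):

```python
# Illustrative incremental cost-effectiveness ratio (ICER) calculation.
# All figures below are invented for illustration only.

def icer(new_cost: float, old_cost: float,
         new_qalys: float, old_qalys: float) -> float:
    """Extra cost per extra QALY of the new treatment over the old one."""
    return (new_cost - old_cost) / (new_qalys - old_qalys)

# 5% dearer, but 30% more quality-adjusted life: likely approved.
print(icer(21_000, 20_000, 1.3, 1.0))    # ≈ £3,333 per QALY

# 500% dearer with marginal improvement: likely rejected.
print(icer(120_000, 20_000, 1.05, 1.0))  # ≈ £2,000,000 per QALY
```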

NICE is often in the headlines for refusing new treatments, but rarely when it recommends that older, cheaper drugs shouldn’t be used because newer, more expensive drugs are better. The concept of quality-adjusted life years is not in itself that controversial, as many of the new treatments focus on terminal conditions.

It’s always an emotive topic, but the cost of giving someone one extra month of life with a drug that costs three times the existing one has to be weighed against providing 50 other patients with palliative care. NICE makes tough choices, and in the face of public outcry is overruled more often than I would like. However, this is not a death committee.

The US system already has something much closer to “death committees”: the teams of doctors who scour medical records looking for unreported medical conditions that can be used to rescind insurance – removing coverage from patients with lethal conditions because they omitted to mention a minor, unconnected illness some years ago.

There will always be some form of rationing: in the absence of an infinite supply of money there will always have to be choices. The US healthcare system already has rationing in place, by insurance providers. The opponents of reform claim that “a layer of bureaucracy would be placed between you and your doctor” which seems to be ignoring the fact that your insurance company is already playing this role quite successfully.

Would you rather those choices were made by a medical committee of experts looking at the true value of a given medicine, or by a fixed cap on your insurance policy – meaning that if you get cancer and you’re at 190k of your 200k cap, you can have 10k of treatment? Oh, and finding a new policy will be problematic, as it’s now a pre-existing condition.

Rolling policies are prone to falling foul of the pre-existing problem: if you’re on a 3-month rolling policy then, unless you fall ill at the beginning of the window and are healed by the end of it, you’re going to be in trouble come the next renewal. It’s a new policy, and your chronic condition is pre-existing… These policies are typically taken out by those who are unemployed or without workplace health cover. Even people attempting to do the right thing can still lose out if they are diagnosed with cancer at 2.9 months.

I am not going to say the NHS is perfect: we have a lot of bureaucratic problems that have crept in during the last decade. Our MRSA and C. diff rates are nothing to be proud of at all. We have deaths due to mistakes and malpractice.

But so does the USA. Unlike the USA, however, we don’t have people dying because their cancer treatment is withdrawn partway through due to cost caps, or because they can’t afford co-pays.

In America, the staff at the hospital where I had surgery treated me fabulously: I was scheduled for day surgery and received good care (and many opioid painkillers). I have no complaints. However, a good friend received awful care at the same institution, where they were dismissed after a superficial assessment. The best care in America is amazing, but that care is not universal.

The administrative burden also appears amazing: in the fallout from my broken arm I attempted to get a bill out of one of the hospitals that treated me. As I had a foreign address they couldn’t send it, but they could send me a questionnaire to rate “how well they dealt with my billing enquiry”. Badly, but thanks for asking.

This week I’ve had some of the best primary care I’ve ever had, from the NHS: I popped in for one item, and while there discussed two other things, both of which require some degree of specialist services, and both of which will be undertaken at my local, clean, modern GP complex.

And those services are also available to the less comfortable, less middle-class people who live down the road from me, who don’t have the option of going private.

In the UK I live in a country where everyone has acceptable care, and where those who choose to can pay for better care in the private sector. In some cases that gets you better treatment, but mostly it allows you to jump a waiting list while being seen by a doctor who still does some work for the NHS.

In America you can get exceptional care. I will not deny that the specialist hospitals and surgeons available are among the best in the world. But not everyone gets that: the masses of uninsured or under-insured people go without healthcare, or have to make very tough choices to get basic care that elsewhere in the world is free at the point of use.

Our system is not perfect, but it’s more equitable, and you don’t have to use it if you don’t want to. Go private, go abroad, you’re not stopped.

From what I’ve read of the reforms in the US, you won’t be stopped either.

Saving power with Wake-on-LAN

As I was saying to my friend Nick Taylor, who’s clued up about identity management, I want my ID card integrated with the IT Wake-on-LAN systems.

When I walk in through the turnstiles, my card fires a message to the IT Asset management system, and if I’ve designated a computer, my machine is woken up, and by the time I arrive it’s ready to log on.

It saves me all of 45 seconds, but could well win over at least some of the people who refuse to turn their machines off overnight because “they don’t have time to waste” waiting for a boot.
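
The wake itself is trivial: a Wake-on-LAN “magic packet” is just six bytes of 0xFF followed by the target MAC address repeated 16 times, broadcast over UDP. A minimal sketch in Python (the MAC shown is a placeholder; the turnstile/asset-management integration would simply call this):

```python
import socket

def build_magic_packet(mac: str) -> bytes:
    """A WoL magic packet: 6 bytes of 0xFF followed by the target MAC
    repeated 16 times (102 bytes total)."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("expected a 6-byte MAC address")
    return b"\xff" * 6 + mac_bytes * 16

def send_magic_packet(mac: str, broadcast: str = "255.255.255.255",
                      port: int = 9) -> None:
    """Broadcast the magic packet on the local network segment."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(build_magic_packet(mac), (broadcast, port))

# e.g. send_magic_packet("00:11:22:33:44:55")  # placeholder MAC
```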

Ultimately, though, this doesn’t have much long-term use given that everyone is moving towards laptops and wireless.

On the Yahoo/Bing search deal

Firstly a disclaimer: I know a number of Yahoo!/flickr people, including a few who were previously involved in search.

I think the deal was inevitable, and it starts to define what Yahoo! actually is. Can it defeat Google? I doubt that, but when one player has the market dominance Google has, you really need bigger competitors. In search, Microsoft could be an oil tanker: slow to get moving, but once it really gets up to speed, Bing will start to approach Google as a bigger challenger. I’m not cheer-leading the deal here, just observing that neither party had much option when they were so comprehensively outgunned.

Anyway, much of the negative coverage, in particular this article, shared the same character: the tone. It was all “Yahoo! was”, “Yahoo! should have”, “if I were in charge I would have”.

Yahoo! is where it is. It’s lost market share, advertising revenue and focus. Search is expensive to run, and if you’re in what looks like ongoing decline, a strategic retreat could make sense.

On content consumption and twitter

The rise of the likes of Twitter makes what your friends are doing more relevant to when you watch television – how can broadcasters harness this to increase the incidence of “event TV”?

Since I got back I’ve found myself watching far less television than before I went away; my laptop has replaced the telly as my “ongoing background distraction”. (Radio 4 has also made a welcome return in that role.)

The only things I really have as appointment television are some reality shows like The Apprentice, and some other far crappier programming that, for reasons of reputation, I’ll not divulge – and the thing I’m enjoying is tweeting along with my friends.

Commercial broadcasters must love this, because suddenly I’ve a reason to watch live and take in the adverts. The BBC has the Predictor for The Apprentice, but aside from a Myspace, I’ve not seen things like this from commercial channels.

Anyway, since my friends who watch this show aren’t watching tonight, I’ve no reason to watch live and am timeshifting so I can zip through without the adverts.

Who’s going to be the first broadcaster to put up a suggested #hashtag at the beginning of a show?

Suddenly Home Networking Matters

Years ago your connection to the internet was much slower than your internal network, and you never had to worry about the latter’s performance. Now that we’ve got much quicker broadband speeds, home networking gets trickier because it matters.

Historically, networking was easy: you plugged in your 11Mbps router and all was good. The 0.5Mbps pipe from you to your provider was always so small that the internal network didn’t really matter. You accepted patchy coverage as it was all quite new, and if you had enough cable in place you could just deploy a second base station upstairs to fix it.

Now, though, you can’t really ignore the performance of your internal network. If you’re using WDS to extend your network, have a slow WiFi bridge, or even just an inconveniently placed wall, it turns out to be quite easy to reduce your throughput to the point that new services like BBC iPlayer in HD won’t work. With broadband of 16/24/50 megabits per second readily available, your internal network matters.
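
To put rough numbers on that (all figures below are illustrative assumptions, not measurements): real-world WiFi goodput is typically well under half the headline link rate, and a WDS repeater roughly halves it again, because every frame gets retransmitted on the same channel. A back-of-envelope sketch:

```python
# Rough, illustrative throughput estimate for a chained WiFi path.
# Assumptions: goodput is ~45% of the headline rate, and each WDS
# repeater hop halves it (the radio repeats every frame on one channel).

def effective_mbps(link_rate_mbps: float, wds_hops: int,
                   goodput_factor: float = 0.45) -> float:
    """Estimate usable throughput after protocol overhead and WDS hops."""
    goodput = link_rate_mbps * goodput_factor
    return goodput / (2 ** wds_hops)

# 802.11g (54 Mbps headline) through one WDS repeater:
print(effective_mbps(54, wds_hops=1))  # ≈ 12 Mbps – marginal for HD streaming
```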

I’m going through the pain of trying to get a WiFi network that covers the house, covers the garden, and works in my current room – which is helpfully the only place in the house without decent coverage from the existing network, and precisely where the repeater needs to be for the garden coverage.

Do I bridge with Powerline networking? Do I just run a bit of Cat5 cable which, despite being ugly and low tech, generally works?

While I know there are solutions to this, it does make me wonder: if someone who (mostly) knows the difference between 802.11a/b/g/n, has spare routers he can redeploy, and (almost) has the patience to work through the vagaries of compatibility that still seem to exist with WPA is struggling, what hope do ordinary folk, and the multi-service operators of the world, have of solving this?

Slingbox recommend using Powerline adaptors, and I’m beginning to see why.

Is scientific tear-down fair use?

Ben Goldacre is being asked to take down an extract of a show illustrating woeful misunderstandings of the MMR vaccine, and the risks associated with it.

Ben Goldacre has been asked by the lovely lawyers at Global Radio to take down his 44-minute extract of Jeni Barnett’s piece on MMR. Jeni, who later admitted she was woefully ill-prepared and started an emotive debate on her blog with standard pathos-laden phrases like “as a mother…”, spouted a load of quasi-plausible pseudo-science about how awful vaccines are.

As Goldacre and others have pointed out many times, the Wakefield claims are now totally refuted, withdrawn and dismissed. There is no evidence that immune systems are overloaded by vaccination. There is a plethora of evidence that measles is returning.

I hope he finds some legal representation, because at a time when we’re questioning the impact financial reporting can have on the real-world economy, we should ask the same about science reporting. But “as a mother…” people don’t tend to have opinions about the state of the credit default swaps market.

On Wikipedia filtering

Now the row has died down, a few thoughts regarding the filtering of the album cover from Wikipedia.

A few thoughts on the now defunct UK Wikipedia censorship row:

  1. It’s good that it’s brought the IWF’s presence into the open. It wasn’t really hidden, but many people didn’t know it existed. Though in reality, 95% of people still don’t know or care.
  2. How come not all ISPs implementing the IWF list were affected? Was there some examination of the list (which, from hearsay, I thought was verboten), or do the other ISPs just have more rigid deployment/change-control procedures for updates?
  3. Kudos to Thus/Demon for providing a descriptive error message (to paraphrase: “the IWF told us to block this”), instead of the blank 404 which some other providers presented.
  4. Because of the implementation of the filtering, some ISPs presented all requests to Wikipedia from their outbound proxy IPs. Wikipedians then removed anonymous editing from these IPs due to the possibility of abuse.

Ultimately, the removal of anonymous editing of Wikipedia is not a huge deal. Most users can register, although there are reasons why some people may require or desire to make anonymous edits.

Regardless of the degree of the impact, however, it’s now clear that some implementations of filtering can impact the normal operation of parts of the internet. Deep Packet Inspection could possibly preserve the outbound IP, but at a far higher cost and latency impact than the “selective” transproxying that many ISPs have implemented.

Something for the Australian government/populace to consider.

On Internet Filtering in Australia

I read with dismay this week about the plan to offer all Australian internet users a content filter provided by their ISP. While originally there was to be an opt-out for this, it appears this is actually a switch from a supposed “clean feed” to a core list of illegal material. If the plans go ahead as mooted, Australians will not be able to avoid some form of government mandated internet filtering. (I’m sure there’s a pun here on Great Barrier Reef, Great Barrier of Grief is the best I can think of, please post a comment if you think of a better one).

The incorrect facts and rhetoric I’ve heard peddled got me riled: the Minister responsible says those who don’t want filtering (paraphrased) “want to let people access child porn”. He states that many countries, including the UK, already have such a system in place, though during interviews he doesn’t like the most obvious comparison – China, which has the most notorious system, the “great firewall of China”. In the UK, according to the IWF/Hansard, 95% of broadband connections block the sites listed by the IWF, a list which only concerns images of child abuse.

The idea of the system is pointless for so many reasons, but the following stick out for me:

  • False negatives will mean that the “clean feed” will never be entirely safe. It also can’t protect against many threats, including children being groomed in chatrooms and the sharing of inappropriate personal information.
  • False positives will potentially mean that people can’t access legitimate information, or information hosted on the same server as “objectionable” content.
  • Ineffective, as much of the harmful material they want to limit access to lives on darknets or peer-to-peer services, or is encrypted – so an upstream filtering proxy won’t prevent anyone determined from accessing it.
  • Easily bypassed, as the China experience has shown. Anyone who wants to get past the proxy is able to (using VPNs, Tor, etc.). Given how much more savvy younger users tend to be than their parents, who is more likely to understand these workarounds?
  • Expensive for ISPs to implement another level of trans-proxying and traffic management. Will this be a new barrier to entry for the market?
  • Government logging is made an awful lot easier with servers running government-approved software embedded in ISPs, integrated with the ISPs’ authentication systems – the government could potentially have a complete history of what each connection has browsed, tagged with account details.
  • Performance: reducing the speed of an already sluggish internet hanging on the end of a relatively thin bit of electric string – do users really want more latency added to their browsing?

The internet is a wonderful resource, but it has bad elements on it. Safe internet use requires a broader strategy than a single tool, the first step of which is putting the computer in a room where adults can supervise. Machine-based filtering can help, and can detect activity an upstream proxy can’t, but it can never address everything. The strategy to protect children also needs to empower them: explaining that not everything on the internet is what it appears, and teaching them to think like a geek – don’t click on links in spam, and be slightly paranoid and protective of your personal information. (That said, there’s probably another argument that this is less relevant now: the problem isn’t that your mother’s maiden name is easy to get hold of from Facebook, the problem is that banks and utilities still think it’s a secure question.)

While the goal of preventing access to illegal content is a valid one, and nobody would ever condone the illicit content covered in the core proposal, the idea of government-mandated filters that ultimately won’t even stop all access to the illegal material is worrying. These filters will have knock-on effects for legitimate users in terms of false positives and performance.

It’s especially concerning given that some fringe parties holding casting votes in the Senate have even more “comprehensive” ideas of what should be banned (gambling sites have been mentioned). While that isn’t part of the government’s proposal today, whenever infrastructure and legislation like this are put in place, scope creep takes place – witness the UK’s recent seizure of Iceland’s assets under anti-terror legislation during the banking crisis.

I leave Australia in a few months, I may yet return, but moves like this make me less keen to.

Details of the campaign against this.


On HDMI

HDMI was sold as a next-generation connector; having used it a bit recently, some of the omissions surprise me.

  1. Explicit support for audio and video synchronisation only appeared in version 1.3, the fourth revision of the standard. That’s a pretty big omission for a next-generation audio-video connector; in the meantime, every device seems to have optional delay values to tweak in the setup.
  2. More generally, the audio support is lacking. While you can deliver multiple audio formats, more with each revision, there isn’t (at least in early revisions) a way of sending both surround sound (AC3, DTS or better) and simple 2-channel PCM stereo at the same time. Devices have to elect which to send, and while some form of auto-negotiation is possible, devices like the HD TiVo require you to choose which form to send. And while your amplifier can decode AC3/DTS, your TV potentially can’t. If the standard had just said from the beginning that you always send 2-channel PCM as a fallback/base level, plus any better format if available, no negotiation or configuration would be needed. One workaround is to send stereo audio over the HDMI connector, send the AC3 audio out over an S/PDIF connection, and have the surround-sound amplifier decode that – then adjust various delays to restore lip-sync. This is just faff that could so easily have been avoided by sending both; the connector is not lacking in bandwidth for audio.
  3. The inclusion of HDCP provides the movie studios a misplaced sense of safety that content is protected. In reality, all it does is cause sporadic errors when your source, amp and TV fail the negotiation and require you to power-cycle everything. Meanwhile, in the background, is the threat that some studio somewhere could deem your TV insecure, and the expensive flat panel on your wall would be prevented from showing certain HD content.
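
The fallback scheme suggested in point 2 would be simple to specify. A hypothetical sketch of the selection logic (the format names and capability sets here are invented; real HDMI capability exchange happens via the sink’s EDID block):

```python
# Sketch of "always send stereo PCM, plus the best common surround
# format" as proposed above. Format names are illustrative only.

PREFERENCE = ["DTS-HD", "TrueHD", "DTS", "AC3"]  # best first

def choose_streams(source_formats: set, sink_formats: set) -> list:
    """Always include 2-channel PCM as the base layer, plus the best
    surround format both ends support, if any."""
    streams = ["PCM-2ch"]
    for fmt in PREFERENCE:
        if fmt in source_formats and fmt in sink_formats:
            streams.append(fmt)
            break
    return streams

# A TV that only decodes PCM still gets audio; an AC3-capable amp gets both.
print(choose_streams({"AC3", "DTS"}, {"PCM-2ch"}))         # ['PCM-2ch']
print(choose_streams({"AC3", "DTS"}, {"AC3", "PCM-2ch"}))  # ['PCM-2ch', 'AC3']
```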

Having spent many years trying to get overly complicated SCART setups to work, I hoped that HDMI would be much better. While I’m impressed by the quality of HD, I’m disappointed at the level of user intervention and forethought required when setting equipment up, much of which could have been avoided if some more pragmatic decisions had been taken at the initial design meetings.

Paper saving

Until we really are paperless, a simple idea to save paper when printing out emails.

While we’re meant to be in the era of the paperless office, I still print more than I’d like.

Why don’t Outlook and web browsers have something in the pagination engine that detects when there are fewer than five lines of text on the final page of a printout? When it finds this, it could shrink the text/spacing (by a level most people wouldn’t notice on a multi-page document) and repaginate to avoid that overspill, saving one sheet per printout.

This only really works for plain text and HTML, where there isn’t (usually) explicit pagination, but it would be largely transparent to users. (Word did have a “shrink by 1 page” button – I’m unsure if it still exists – and using it requires user intervention.)
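
The heuristic could be sketched as follows (the function names, the five-line threshold and the simple line-height layout model are all assumptions standing in for a real pagination engine):

```python
# Sketch of the "shrink slightly to avoid a nearly-empty last page" idea.

def lines_per_page(font_size: float, page_height: float = 700.0) -> int:
    """Assume line height scales with font size (1.2x leading)."""
    return int(page_height // (font_size * 1.2))

def choose_font_size(total_lines: int, base_size: float = 11.0,
                     min_size: float = 10.0, step: float = 0.25,
                     threshold: int = 5) -> float:
    """Shrink the font in small steps if the final page would hold
    fewer than `threshold` lines, saving one sheet of paper."""
    size = base_size
    while size > min_size:
        per_page = lines_per_page(size)
        if total_lines <= per_page:       # already fits on one page
            return size
        overspill = total_lines % per_page
        if overspill == 0 or overspill >= threshold:
            return size                   # last page isn't nearly empty
        size -= step
    return min_size
```

A document of 55 lines that would spill two lines onto a second page at 11pt gets nudged down until the overspill disappears or is worth a page.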

No longer would the phrase “please consider the environment before printing this email” languish alone on its own bit of paper.