Surgeon performance data 'misses the mark'

http://www.bbc.co.uk/news/health-30057602

The NHS is trying to give patients more information on the performance of individual surgeons. But Roger Taylor, co-founder of the health data analysis firm Dr Foster, says that, despite good intentions, the reports may not be telling us the whole story.

On Wednesday we get to see the publication of most of this year's consultant-level outcomes data.

They're the latest in a series of annual publications that aim to provide more information about how doctors compare.

The MyNHS website will show comparisons of surgeon outcomes for over 25 operations across 13 clinical specialties. No other health system has this level of transparency about individual doctors.

The NHS's determination to make this the norm was underlined by Sir Bruce Keogh last weekend when he said that consultants who refused to publish their results could face sanctions, such as failing their five-yearly revalidation or losing bonuses.

It has taken a lot of time and effort by those involved to get to this point, and their work is appreciated. Being this open leads to improvements in standards and benefits patients.

The data covers surgical procedures ranging from orthopaedic surgery to neurosurgery and much besides. Reports for nine specialties have been published already, one more is due imminently and the remaining four should be out before the end of the year.

But there is a problem. Or to be more precise, a worrying lack of problems.

The results published to date have one thing in common - almost all of them find no significant differences between individual surgeons.

Only three of the audits find any "outliers" - performance outside the expected range - and in two of these, only a single surgeon is flagged.
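How is "outside the expected range" decided? The exact method varies from audit to audit, but a common approach is the funnel plot: a surgeon is flagged only if their event count falls beyond binomial control limits - typically set at 99.8%, roughly three standard deviations - drawn around the national average. The Python sketch below illustrates that logic; the 2% national rate, the caseloads and the counts are made-up figures for illustration, not real audit data.

    from scipy.stats import binom

    NATIONAL_RATE = 0.02  # assumed national complication rate of 2%

    def is_outlier(events, cases, p=NATIONAL_RATE, level=0.998):
        # Flag a surgeon whose event count lies outside the central
        # `level` probability interval of Binomial(cases, p).
        tail = (1 - level) / 2
        lower = binom.ppf(tail, cases, p)      # lower control limit
        upper = binom.ppf(1 - tail, cases, p)  # upper control limit
        return events < lower or events > upper

    # 12 complications in 150 cases (8%) against a 2% norm is flagged...
    print(is_outlier(12, 150))  # True
    # ...but 6 in 150 (4%) - double the national rate - is not.
    print(is_outlier(6, 150))   # False

Note, too, that with a caseload this small the lower limit is zero, so a surgeon can never be flagged as significantly better than average either: the test is blunt in both directions.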

It would appear that almost all surgeons are the same when it comes to quality of clinical care.

Meaningless?

That might sound like great news. If it were true that all surgeons were equally reliable, it would certainly be good news. The worry is that the published data is only giving us half the story.

The extent to which an audit shows differences between surgeons will depend on many things. It will depend to some degree on the actual differences between surgeons.

But it will also depend on how you choose to do the audit - what outcomes you measure, whether the data is accurate and how you define average performance.

Those decisions make all the difference between an audit that identifies differences between surgeons and one that does not.

The published audits fall down on all three issues. In some cases, the data is weak.

For example, one audit looks at the number of patients who have to return to the operating theatre after their operation because of complications. But the information on whether this happens or not is recorded for fewer than half the patients.

In fact, some published studies comparing clinical databases with other sources have suggested that in some cases they may be missing up to 75% of procedures.

The way in which the data is analysed and the definitions of average performance differ between audits.

In many cases the audits look at events that are so rare that the analysis is unlikely to say anything meaningful.

For example, the audit of hip and knee replacement surgery compares surgeons for their mortality rates.

But death following these operations happens so infrequently that the results tell us very little. An analysis of complication rates would be much more informative.
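A rough simulation makes the point. Suppose, purely for illustration, a national mortality rate of 0.5% after hip or knee replacement, a national complication rate of 5%, a caseload of 250 operations over the audit period, and a surgeon whose rates are double the national average on both measures. Applying the same 99.8% control-limit test sketched earlier:

    import numpy as np
    from scipy.stats import binom

    rng = np.random.default_rng(0)
    CASES, TRIALS = 250, 100_000

    def flag_rate(national_rate, surgeon_rate):
        # Fraction of simulated audits in which a surgeon performing at
        # surgeon_rate exceeds the 99.8% upper control limit derived
        # from national_rate.
        upper = binom.ppf(0.999, CASES, national_rate)
        events = rng.binomial(CASES, surgeon_rate, size=TRIALS)
        return (events > upper).mean()

    # A surgeon running at double the national rate, tested on each outcome:
    print(f"flagged on mortality (0.5% vs 1%):    {flag_rate(0.005, 0.010):.1%}")
    print(f"flagged on complications (5% vs 10%): {flag_rate(0.05, 0.10):.1%}")

With these assumed figures, the mortality test catches the poor performer in only a small fraction of simulated audits, while the complication test catches them more often than not.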

It is not surprising that this has occurred. The task of publishing information about surgeon outcomes has been handed primarily to the surgeons themselves.

A group of surgeons asked to publish information about how they compare with each other is likely to find that the analysis they can all support is the one that shows no differences between them.

There is a conflict of interest here that needs to be addressed. Clinical audits are very expensive to maintain (costing 10 to 60 times as much as other databases, such as hospital administrative data).

They are there to provide greater transparency around variations in quality so that any problems can be addressed promptly.

To make sure that they are used effectively, they must be open to external scrutiny - particularly by researchers who can provide different, sometimes more objective, assessments of what the information says.

In short, Dr Foster believes that the methodology being used by the NHS to monitor surgeon-level outcomes is inadequate. Crucially, it poorly serves not only patients and the public, but also surgeons themselves. We urgently need to find a better way.

If not, we risk fooling ourselves into thinking that all surgeons are the same when the reality may be very different.