A/Prof Lahn Straney’s Post


Epidemiologist at MOA Benchmarking and Associate Professor at Monash University

Next week, I will participate in a panel discussion at the ACQS forum (https://lnkd.in/g-628QVx) on the effectiveness of the Star Rating system. I recognise that simply mentioning the Star Ratings can provoke spirited discussion, and I welcome the chance to engage with providers before, during, and after the session.

I approach this with the genuine belief that transparency is almost always a good thing, and that the program's goals, communicating home quality to consumers and driving quality improvement, are both worthwhile. While many would agree with these aims, the question remains: to what extent are the Star Ratings achieving these objectives?

Much of the recent debate about the Star Ratings seems to centre on the Compliance Star Rating's failure to pass the 'pub test'. It contributes 30% of the overall rating, and there are concerns that the way stars are assigned doesn't sufficiently discriminate between homes that meet standards and those that fail to meet them (the sketch below illustrates how weighting and rounding can mask that difference). In the strictest sense, the Compliance Star Rating functions exactly as designed and intended, despite arguments to the contrary. However, if most stakeholders feel it should help people distinguish between homes that meet all requirements and those that don't, I would suggest we must reconsider the scoring approach. Notably, in the latest data release (Q3 of FY 23/24), no homes received an overall one-star rating for the first time, entirely because no formal regulatory notices qualified homes for that rating.

What interests me most is the goal of driving quality improvement in homes, which hinges on the timely delivery of data. I fully understand the challenges involved: data needs validation, QIs must be risk-adjusted, and information from various sources must be aggregated and scored. But the resulting lag means homes receive their ratings long after the care was delivered, so it seems almost impossible for the indicators to meaningfully drive quality improvement within homes. Acknowledging these challenges, MOA Benchmarking provides reporting at the time of submission, though this 1) requires homes to have a paid membership, 2) cannot apply the same risk-adjustment approach, as it hasn't been published, and 3) is limited to homes participating in the benchmarking (albeit nearly half of all homes).

Other criticisms include the timeliness and meaningfulness of the annual, small-sample experience surveys, which contribute 33% of the overall rating. Questions about the reliability of self-reported data are also common. I think much of this criticism is fair, but criticism should be made with a view to how we make things better. That's why I was pleased to participate in the Department's consultations on reviewing the program, and I hope those consultations demonstrate the Department's commitment to continuous improvement, just as providers are rightly expected to show theirs.
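To make the weighting arithmetic concrete, here is a minimal sketch of a weighted overall rating. The 30% (Compliance) and 33% (Residents' Experience) weights come from the post itself; the Staffing and Quality Measures weights, the simple weighted average, and the round-to-nearest-star step are illustrative assumptions, not the Department's published scoring rules.

```python
# A minimal sketch of a weighted overall star rating. The Compliance (30%)
# and Residents' Experience (33%) weights are from the post; the remaining
# weights and the rounding step are assumptions for illustration only.

WEIGHTS = {
    "residents_experience": 0.33,
    "compliance": 0.30,
    "staffing": 0.22,          # assumed
    "quality_measures": 0.15,  # assumed
}

def overall_star_rating(sub_ratings: dict[str, int]) -> int:
    """Weighted average of 1-5 star sub-ratings, rounded to the nearest star."""
    score = sum(WEIGHTS[k] * sub_ratings[k] for k in WEIGHTS)
    return max(1, min(5, round(score)))

# Two hypothetical homes differing only in Compliance. A one-star drop in
# Compliance moves the weighted score by just 0.30 (4.33 -> 4.03), which
# rounding absorbs entirely: both homes land on the same overall rating.
home_meets = {"residents_experience": 5, "compliance": 4,
              "staffing": 4, "quality_measures": 4}
home_fails = {**home_meets, "compliance": 3}
print(overall_star_rating(home_meets), overall_star_rating(home_fails))  # 4 4
```

Under these assumptions, a whole star of difference in Compliance is worth only 0.3 of an overall star before rounding, which is one way a composite rating can fail to discriminate between homes that meet standards and homes that don't.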

A/Prof Lahn Straney

Epidemiologist at MOA Benchmarking and Associate Professor at Monash University

3mo

David Sanders and Garry Neale will also be there for MOA if you're more interested in talking about something other than statistical discrimination, sampling bias, and risk adjustment.

Kevin McCreton

Author, The Catalyst Report - MD of Catalyst Research & ChefPanel

3mo

There is still considerable scepticism around star ratings. Even amongst those aware of the system, only one in five took star ratings into account when making an aged care home decision last year. We'll revisit that in our November report.

Qing Ling

Informatics Navigator, Process Optimiser, Nurse Experience Advocate

3mo

Well said, A/Prof Lahn Straney. All contributions to the discussion are welcome, as long as they're constructive towards the ultimate outcome we all wish for: better care for older persons in Australia. After all, it could be us in the consumer seat one day.
