5 Common Mistakes Sponsors Make in MSAC Submissions for Diagnostic and Investigative Technologies

Lab technician conducting an MSAC-funded diagnostic test

For sponsors seeking public reimbursement in Australia, submissions for diagnostic and investigative technologies present a fundamentally different challenge to therapeutic services.

Under the Medical Services Advisory Committee (MSAC) framework, diagnostic technologies are defined as services that generate clinically relevant information, but do not themselves improve health outcomes. Any benefit must occur indirectly, through changes in clinical decision-making and downstream management. This distinction shapes how evidence, economics and uncertainty are evaluated.

In this article, the term “diagnostic technologies” is used broadly to refer to investigative technologies assessed by MSAC, including diagnostic, prognostic, predictive and monitoring tests.

Many diagnostic submissions fail not because the technology lacks merit, but because sponsors underestimate how MSAC actually evaluates value. The following five mistakes are among the most common and most consequential.


1. Treating test accuracy as an endpoint instead of a link in the decision pathway

Sensitivity and specificity are foundational metrics for diagnostic tests, but they are not decision endpoints for MSAC.

MSAC does not assess diagnostic value in isolation. It asks whether improvements in test accuracy change clinical decisions in a way that improves outcomes or reduces costs. Accuracy matters only to the extent that it leads to meaningful downstream consequences, such as:

  • different treatment or referral decisions
  • earlier or avoided interventions
  • reductions in harm, delays or unnecessary use of health system resources

For diagnostic and investigative technologies, this assessment challenge is compounded by the fact that tests do not directly deliver health outcomes. Unlike therapeutics, diagnostics:

  • influence care indirectly through clinician interpretation and action
  • are embedded within complex and variable care pathways
  • rarely lend themselves to randomised trials with direct outcome endpoints

MSAC recognises these constraints. As a result, diagnostic submissions are commonly assessed using a linked evidence approach, rather than direct RCT evidence of health outcomes.

Linked evidence typically connects:

  1. test accuracy
  2. change in clinical management
  3. downstream health outcomes

Each link introduces uncertainty. MSAC therefore expects sponsors to:

  • clearly describe each link in the evidence chain
  • explain how evidence was sourced and applied
  • translate sensitivity and specificity into numbers of true positives, true negatives, false positives and false negatives
  • demonstrate how misclassification affects patient experience, resource use and downstream care
  • explore key assumptions through sensitivity or scenario analyses

Submissions that report sensitivity and specificity without explicitly linking accuracy to changes in management and downstream outcomes are typically viewed as incomplete. Treating linked evidence as a box-ticking exercise, rather than the core logic of the submission, is one of the most common reasons diagnostic MSAC applications fail.
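To make the translation step concrete, the sketch below converts sensitivity and specificity into the patient-level counts MSAC expects to see. All inputs (cohort size, prevalence, accuracy) are hypothetical, chosen only to illustrate the arithmetic.

```python
# Illustrative only: translating test accuracy into patient counts.
# Inputs are hypothetical, not values from any MSAC assessment.

def classification_counts(n_tested, prevalence, sensitivity, specificity):
    """Return (TP, FN, TN, FP) counts for a tested cohort."""
    diseased = n_tested * prevalence
    healthy = n_tested - diseased
    tp = diseased * sensitivity   # cases correctly detected
    fn = diseased - tp            # missed cases (delayed diagnosis, potential harm)
    tn = healthy * specificity    # correctly ruled out
    fp = healthy - tn             # false alarms (unnecessary downstream work-up)
    return tp, fn, tn, fp

# Hypothetical cohort: 10,000 tests, 5% prevalence, 90% sensitivity, 95% specificity
tp, fn, tn, fp = classification_counts(10_000, 0.05, 0.90, 0.95)
print(f"TP={tp:.0f}, FN={fn:.0f}, TN={tn:.0f}, FP={fp:.0f}")
```

Framing accuracy this way makes the downstream story explicit: the 50 false negatives and 475 false positives in this example are the patients whose experience, resource use and care pathway the submission must then describe.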

2. Poorly defined diagnostic PICO

One of the most common and consequential weaknesses in diagnostic MSAC submissions is an unclear or poorly specified PICO.

PICO defines the core parameters of the assessment:

  • Population: who is tested and under what clinical criteria
  • Intervention: the diagnostic or investigative test for which funding is sought
  • Comparator: what would occur in current Australian clinical practice if the test were not funded
  • Outcomes: the patient and health system outcomes used to judge value

For diagnostic technologies, PICOs are often more complex than those for therapeutic interventions. Tests may be used at multiple points in the clinical pathway and can replace, triage or add to existing investigations, each with different downstream implications.

A defensible diagnostic PICO must clearly define:

  • who is tested and at what point in the clinical pathway
  • whether the test replaces, triages or adds to existing investigations
  • which clinical decisions are influenced by test results
  • which outcomes ultimately matter to patients and the health system

Vague or aspirational test positioning, or a PICO broader than the supporting evidence, makes it difficult for MSAC to interpret the relevance of the evidence presented. It also commonly creates misalignment between the clinical evidence, economic model and budget impact assessment, even when the underlying evidence base is strong.

3. Relying on stated clinician intent instead of observed Australian practice

Clinician surveys can play a useful role in diagnostic MSAC submissions, but MSAC treats them as supportive context, not as definitive evidence of clinical utility.

For investigative technologies, MSAC requires evidence that test results lead to actual changes in clinical management. Survey responses describing what clinicians might do are inherently indirect and subject to bias. As a result, MSAC places greater weight on evidence that reflects observed or verifiable behaviour within the Australian health system.

Clinician surveys are most persuasive when they:

  • are grounded in realistic Australian clinical scenarios, including current funding and guideline constraints
  • focus on specific decision points within an existing care pathway
  • are consistent with, and supported by, real-world data

In practice, MSAC expects assumptions about clinician behaviour to be justified through triangulation with Australian data sources, such as:

  • AIHW data to support disease prevalence, burden and patient pathways
  • MBS data to demonstrate current testing patterns, sequencing of investigations and downstream service use
  • PBS data to inform treatment initiation, substitution effects and potential cost offsets following diagnosis
  • NDIS data, where relevant, to capture changes in service utilisation and supports for populations with disability or developmental conditions

These data sources help validate who is tested, how clinicians currently manage patients, and what happens downstream following test results. Surveys that stand alone, without linkage to Australian utilisation or observational evidence, are rarely sufficient to demonstrate change in management.

Ultimately, clinician surveys are most effective when used to contextualise and explain real-world evidence, rather than to replace it. Sponsors who integrate surveys within a broader linked evidence framework grounded in Australian data are far more likely to meet MSAC’s expectations than those who rely on stated intent alone.

4. Underestimating the role of economic evaluation in diagnostic submissions

Economic evaluation is not ancillary in diagnostic MSAC submissions. It is a central component of how MSAC assesses value for money, even when clinical benefits are indirect.

Because diagnostic technologies do not directly improve health outcomes, MSAC expects economic models to demonstrate how test performance translates into downstream consequences. In practice, this requires models that:

  • link sensitivity and specificity to changes in clinical management
  • capture downstream costs and outcomes arising from those changes
  • reflect realistic Australian care pathways and funding arrangements
  • transparently characterise uncertainty through sensitivity or scenario analyses

Models that focus narrowly on the cost of the test itself, without incorporating downstream impacts on investigations, treatment, monitoring or adverse events, are frequently judged incomplete. Where assumptions are optimistic or weakly supported, MSAC typically adopts more conservative interpretations, which can materially weaken the perceived value of the proposed service.
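As a numerical sketch of what “linking accuracy to downstream consequences” means in a model, the fragment below compares the expected cost of a proposed test against a less accurate comparator. The accuracy figures and downstream unit costs are hypothetical; this illustrates the model logic only, not a complete economic evaluation or MSAC reference values.

```python
# Minimal linked-cost sketch: test accuracy drives downstream costs.
# All fees, costs and accuracy inputs are hypothetical.

def expected_cost(n_tested, prevalence, sensitivity, specificity,
                  test_fee, cost_tp, cost_fn, cost_fp, cost_tn):
    """Expected total cost = testing cost + downstream management costs."""
    diseased = n_tested * prevalence
    healthy = n_tested - diseased
    tp = diseased * sensitivity
    fn = diseased - tp
    tn = healthy * specificity
    fp = healthy - tn
    downstream = tp * cost_tp + fn * cost_fn + fp * cost_fp + tn * cost_tn
    return n_tested * test_fee + downstream

shared = dict(cost_tp=8_000, cost_fn=25_000, cost_fp=2_000, cost_tn=0)

# Proposed test: more accurate but more expensive per test
new = expected_cost(10_000, 0.05, 0.95, 0.95, test_fee=120, **shared)
# Comparator: cheaper but less accurate, so more missed cases and false alarms
old = expected_cost(10_000, 0.05, 0.80, 0.90, test_fee=60, **shared)

print(f"Incremental cost of proposed test: ${new - old:,.0f}")
```

With these illustrative inputs the dearer test is cost-saving overall, because avoided false negatives and false positives offset the higher fee. That is exactly the kind of transparent, assumption-by-assumption logic MSAC expects a model to expose to sensitivity analysis.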

Sponsors who treat economic evaluation as an integrated part of the diagnostic evidence strategy, rather than a secondary exercise, are far better positioned to meet MSAC’s expectations and support a positive recommendation.

5. Underestimating budget impact and utilisation risk

Even diagnostic tests with a low unit cost can generate significant budget impact if uptake is broad, rapid or difficult to control.

MSAC places close scrutiny on budget impact for diagnostic technologies, with particular focus on:

  • the size and definition of the eligible testing population
  • expected uptake over time, including the risk of use beyond the proposed population
  • whether the test is likely to substitute for existing investigations or be used as an add-on

For diagnostic technologies, utilisation risk is often a greater concern than price. Tests that are easy to order, widely applicable or perceived as low risk can quickly diffuse into routine practice.
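A simple projection shows why utilisation dominates price. The sketch below uses a hypothetical eligible population, fee, substitution offset and uptake curve; the point is the scaling behaviour, not any particular figure.

```python
# Illustrative budget-impact projection. Population, uptake, fee and
# substitution figures are hypothetical.

def budget_impact(eligible_pop, uptake_by_year, test_fee, substitution_offset=0.0):
    """Net cost per year: tests performed x fee, less the cost of any
    existing investigations the new test substitutes (per service)."""
    return [eligible_pop * uptake * (test_fee - substitution_offset)
            for uptake in uptake_by_year]

# 200,000 eligible patients, uptake rising from 10% to 50% over five years,
# an $80 fee, with $30 of existing testing substituted per service
yearly = budget_impact(200_000, [0.10, 0.20, 0.30, 0.40, 0.50],
                       test_fee=80, substitution_offset=30)
for year, cost in enumerate(yearly, start=1):
    print(f"Year {year}: ${cost:,.0f}")
```

Even with a modest fee and partial substitution, net cost grows fivefold over the projection purely through uptake, which is why MSAC probes the eligible population definition and leakage risk so closely.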

Sponsors should therefore be prepared to explain how utilisation will be managed in practice. This may include clearly defined clinical criteria, restrictions on ordering, or other mechanisms to ensure testing remains targeted. Submissions that fail to address utilisation risk explicitly are more likely to attract deferral or negative recommendations, even where the clinical case appears strong.


In summary…

Diagnostic MSAC submissions are inherently complex. They rely on indirect evidence, behavioural assumptions and linked modelling that compound uncertainty.

Sponsors who succeed tend to:

  • translate sensitivity and specificity into real-world consequences
  • justify the use of linked evidence transparently
  • ground assumptions in Australian data sources
  • align economic and budget impact models with practice
  • acknowledge and manage uncertainty rather than overclaim

Applying a therapeutic mindset to a diagnostic submission remains one of the most common causes of failure.


How Pulse Economics supports diagnostic MSAC submissions

Pulse Economics has deep experience supporting diagnostic and investigative technologies through the MSAC process. Two members of our team are former MSAC evaluators, providing first-hand insight into how diagnostic evidence, economic models and uncertainty are interpreted.

We support sponsors with:

  • diagnostic PICO development grounded in Australian clinical pathways
  • interpretation of sensitivity and specificity linked to downstream impact
  • clinician survey design and validation
  • linked evidence strategies using AIHW, MBS and PBS data
  • economic and budget impact modelling that withstands MSAC scrutiny
  • planning for post-listing evidence generation and utilisation management

If you would like to discuss MSAC strategy for a diagnostic technology or pressure-test an upcoming submission, please contact Pulse Economics. We are here to help you navigate the MSAC process with clarity and confidence.

Email: contact@pulse-economics.com.au

Ph: 02 7240 6738
