• Industrial policy, policy strategies, governance

Transformative innovation programs

A discussion of evaluation for policy development and review

Over the past decade, Swedish innovation policy has undergone a transformation. The focus has shifted from stimulating innovation for competitiveness to promoting system innovations to address complex societal challenges. This report examines Transformative Innovation Programs (TIP), an example of this new approach.

TIP has emerged as a system-changing measure within innovation policy. Rather than solely focusing on technical development within individual sectors, TIP aims to mobilize a broad range of actors—from business and academia to government agencies and civil society—to collectively contribute to the transformation of complex systems. These programs are often long-term and resource-intensive. In Sweden, this approach has gradually developed, particularly through Strategic Innovation Programs (SIP), which incorporated transformative elements to varying degrees.

However, the launch of the Impact Innovation 2024 program represents a clearer move toward full-scale implementation of TIP. The EU’s missions are cited as the foundation for Impact Innovation. Other European countries have also begun developing TIP-oriented programs, but their designs vary.

Implications for Evaluation

The report’s purpose is to describe the emergence of TIP in Sweden and to assess the programs from an evaluation perspective. We hope it can contribute to a better understanding of how TIP can be implemented in a Swedish context under conditions favorable to evaluation. The report also reviews the research literature on evaluation methods and examines the SIP programs, which preceded Impact Innovation and had certain TIP characteristics. SIP does not fully meet the TIP criteria as outlined in the report, though there are clear similarities. Based on this analysis, we discuss the implications for evaluation.

A key premise of our analysis is that follow-ups and evaluations aim to provide decision-makers—such as the government and implementing agencies—with the insights needed to refine or reassess programs. Process evaluations help improve existing programs by increasing the likelihood that they achieve their goals effectively and adapt to evolving circumstances. A fundamental element of assessing program effects is a counterfactual outcome: an estimate of what would have happened had no intervention taken place. We rely on the established definitions of program effects used by organizations such as the Swedish National Financial Management Authority (ESV) and the Swedish National Audit Office (Riksrevisionen). Such evaluations are crucial in enabling the government to determine whether initiatives produce the desired effects or whether policies should be reconsidered in favor of more economically beneficial alternatives.

Four Key Conclusions

Uncertainty About TIP’s Ability to Deliver Desired Societal Changes

TIP is an ambitious instrument designed to tackle complex societal challenges, but it remains largely untested. In practice, the evidence supporting its intended effects is weak. Over time, we can expect a broader range of effect evaluations and a stronger knowledge base.

Our review of the SIP programs highlights several evaluation challenges likely to apply to both TIP in general and Impact Innovation in particular. Many of these challenges stem from the program structure itself. Vision-oriented objectives, complex actor networks, long program durations, and the placement of program offices with non-state actors all complicate monitoring, evaluation, and governance. As a result, program effects are difficult to assess. Existing evaluations are not effect evaluations, and there is currently no empirical support confirming that these programs achieve their intended effects.

This uncertainty raises concerns about TIP’s ability to deliver the desired societal transformations. To minimize risks and maximize learning, it is essential to strengthen the knowledge base before full-scale TIP implementation, allowing for program reassessment if necessary.

TIP Research Focuses on Program Design Rather Than Effects

Our review of TIP-related research shows that most literature assumes TIP is an effective policy and therefore advocates for evaluations aimed at improving programs rather than reconsidering them. There is a wealth of academic literature suggesting various evaluation methods, but few studies recommend effect evaluations as a viable approach to assess TIP’s impact.

This distinguishes TIP research from traditional evaluation studies, which typically emphasize effect evaluations. Effect evaluations complement other methods by employing counterfactual analyses, providing insights into whether observed results stem from the intervention itself—whether the policy works.

We argue that TIP should be assessed using multiple evaluation methods, including process evaluations and effect evaluations, but there is a notable shortage of effect evaluations. One reason for this gap is that the necessary conditions for effect evaluations are lacking. However, these conditions can be influenced and improved through how programs are implemented.

The government needs the best possible insights into program effects to determine whether the policy is effective or whether resources could yield greater socioeconomic benefits elsewhere. However, traditional evaluation methods struggle to assess complex programs like TIP, highlighting the need for new evaluation frameworks and methodologies.

Ambitious Evaluations of SIP, but Weak Evidence on Effects

SIP programs have undergone comprehensive interim and final evaluations. As process evaluations, they are well-executed and have proven valuable in adjusting programs and guiding future investments. However, the evidence that SIP programs have delivered the intended effects is weak.

Since the conditions for counterfactual analysis are often not met, SIP evaluations have relied primarily on the subjective opinions of program participants. Despite this weak evidence, the evaluations assert that SIP has achieved its intended effects. As a foundation for policy development and reporting to Parliament, these evaluations lack reliability and risk being misleading.

Developments for Risk Reduction and Flexibility

There is growing national and international interest in TIP, and innovation policy must remain open to different types of interventions and approaches. The field is experimental, meaning some policies will inevitably fail. However, TIP initiatives are complex and require significant resources over long periods, limiting opportunities for alternative innovation policy measures.

Our recommendations aim to strengthen the knowledge base, reduce risks, and ensure the possibility of reassessing and reprioritizing programs. Conclusions and recommendations are detailed in Chapter 5.

Recommendations

To the Government

  • Strengthen independent effect evaluation capabilities: Task implementing agencies with establishing conditions for independent effect evaluations. This includes defining primary and secondary goals, outlining impact logic at both program and initiative levels, and documenting support allocation processes.
  • Provide methodological support for implementation and evaluation: Establish a centralized function to coordinate cross-agency development of impact logic and evaluation frameworks, assisting implementing agencies in ensuring high-quality evaluation conditions.
  • Enable early reassessment: Ensure opportunities to reconsider or terminate TIP programs early through independent ex ante evaluations, pilot projects, and midterm evaluations. Trafikanalys’ ex ante review of Trafikverket’s infrastructure plans offers an instructive example.
  • Delay full-scale implementation: Structure the five Impact Innovation programs as pilot initiatives and postpone expansion until stronger evidence demonstrates their cost-effective contribution to achieving stated goals.
  • Fund evaluation research: Allocate funding to enhance research on impact evaluations of broad transition policies and complex innovation investments like TIP. Tillväxtanalys could be tasked with administering this initiative.
  • Seek broad political support: Secure cross-party backing for long-term TIP initiatives spanning multiple government terms. SIP programs lasted 12 years, and Impact Innovation aims to continue through 2040.

To Government Agencies

  • Integrate effect evaluation considerations in program design and implementation: Consider evaluation needs early in program development. Collaborating with Tillväxtanalys[1] or academia is an option, but a more cost-effective approach is for implementing agencies to develop in-house analysis and evaluation expertise.
  • Ensure access to relevant documentation and data: Develop impact logic models for programs and key initiatives, and collect data related to program goals for monitoring and evaluation. This may also involve documenting selection processes.
  • Provide transparent reporting: Government reports should clearly specify which aspects of a program have been evaluated and how results should be interpreted. Particularly important is clarifying the level of evidence regarding effects—whether evaluations used relevant methodologies or were primarily process-oriented or observational.
  • Address unintended consequences: Ensure a holistic assessment of TIP’s socioeconomic impact by examining costs, such as administrative expenses, competition distortions, technology misalignment risks, inefficiencies, and tax wedge effects.

[1] Swedish National Agency for Growth Analysis.
