
Evaluation of Innovative Startups
An impact evaluation of government support for startups
Government support programs for newly established innovative companies have long been a feature of both international and Swedish innovation policy. This report, Innovative Startups: An Impact Evaluation of a Government Support Program for Newly Established Companies, evaluates the Innovative Startups program, one of Vinnova’s funding schemes targeting small, newly founded companies. The program has been in operation since 2017.
The purpose of this report is twofold: to assess the program’s effects during its initial years, and to contribute knowledge and guidance for future evaluations of similar support initiatives. A particular challenge in this type of analysis is that newly established firms are frequently missing from, or disappear from, available registers – often without any clear indication as to why.
The program’s structured application process and the use of external evaluators to score applications generally provide a good basis for applying robust methods of impact evaluation. This analysis focuses on the first step of the program, in which companies can receive up to SEK 300,000 in grants.
What did we find?
Overall, the results show weak or non-existent average effects across most of the outcomes studied. We find no statistically significant effects on firms’ net sales, number of employees, share of highly educated staff, or physical/real capital assets. However, there is some evidence that the support may have increased a firm’s likelihood of being among the highest-turnover firms, relative to all applicant firms during the period in question.
What method did we use?
We applied a Regression Discontinuity (RD) approach. This method leverages the fact that the allocation of funding is largely determined by how each application’s score, as assigned by external reviewers, compares to a threshold value set by Vinnova. By focusing on firms whose scores were close to this threshold, we can compare companies with otherwise similar characteristics – where some received support and others did not.
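To make the intuition concrete, the sketch below illustrates a sharp RD estimate on simulated data. It is purely illustrative: the data, threshold value, and bandwidth are all hypothetical and are not drawn from our analysis.

```python
# Illustrative sharp RD sketch on simulated data (all values hypothetical).
# A local linear regression on either side of the cutoff estimates the
# jump in the outcome for firms scoring just above versus just below it.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 2000
score = rng.uniform(0, 10, n)              # external reviewers' score
cutoff = 6.0                               # hypothetical funding threshold
funded = (score >= cutoff).astype(float)
# Simulated outcome: smooth in the score, plus a jump of 0.4 at the cutoff.
outcome = 1.0 + 0.3 * score + 0.4 * funded + rng.normal(0, 1, n)

bandwidth = 1.5                            # hypothetical bandwidth choice
near = np.abs(score - cutoff) <= bandwidth
c = score[near] - cutoff
# Local linear model with separate slopes on each side of the threshold.
X = sm.add_constant(np.column_stack([funded[near], c, funded[near] * c]))
fit = sm.OLS(outcome[near], X).fit()
print(f"Estimated jump at the cutoff: {fit.params[1]:.3f}")
```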
The data used in our analysis were made available through collaboration with Vinnova and cover five application rounds between 2017 and 2019. Using linked register data from Tillväxtanalys (the Swedish Agency for Growth Policy Analysis), we tracked the development of the firms through to the end of 2022. The analysis rests on the assumption that firms with scores near the threshold are, on average, comparable in terms of application potential and firm quality. Provided that this assumption holds – which we believe is supported by the available data – the method allows for a causal interpretation of the program’s effects for this group.
Since the allocation of funding was not based solely on the scores, we used what is known as a fuzzy RD estimation in our main analyses. This is a well-established method, though generally less precise than a sharp RD design, which would have required the allocation of funds to strictly follow the external review scores.
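The difference between the two designs can also be sketched in code. In the hypothetical example below, crossing the threshold only raises the probability of receiving funding; the fuzzy RD estimate is then the jump in the outcome at the cutoff divided by the jump in take-up, with threshold crossing acting as an instrument. Again, all data and parameters are invented for illustration.

```python
# Illustrative fuzzy RD sketch (hypothetical data, not the report's code).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 2000
score = rng.uniform(0, 10, n)
cutoff = 6.0
above = score >= cutoff
# Imperfect compliance: crossing the cutoff raises the funding
# probability from 15% to 85%, not from 0% to 100%.
funded = (rng.random(n) < np.where(above, 0.85, 0.15)).astype(float)
outcome = 1.0 + 0.3 * score + 0.4 * funded + rng.normal(0, 1, n)

bw = 1.5
near = np.abs(score - cutoff) <= bw
c = score[near] - cutoff
Z = (c >= 0).astype(float)                 # instrument: crossing the cutoff
X = sm.add_constant(np.column_stack([Z, c, Z * c]))
jump_outcome = sm.OLS(outcome[near], X).fit().params[1]  # reduced form
jump_takeup = sm.OLS(funded[near], X).fit().params[1]    # first stage
print(f"Fuzzy RD (IV) estimate: {jump_outcome / jump_takeup:.3f}")
```

Because the first-stage jump in take-up is smaller than one, the same sampling noise in the reduced form translates into greater uncertainty in the ratio, which is why the fuzzy design is generally less precise than a sharp one.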
Recommendations and conclusions
This is the first impact evaluation of Innovative Startups. Due to a limited follow-up period, we have not been able to analyze the program’s potentially longer-term effects. Nor have we had the opportunity to study the second step of the program, in which certain companies may receive an additional SEK 900,000 in support. A full evaluation of the program’s overall effects will therefore need to await future studies.
One important observation concerns Vinnova’s handling of individual cases. We note that in some instances, funding decisions deviated from the scores assigned by external reviewers – for example, through adjustments to scores during the review process. While such adjustments may be justified on a case-by-case basis, they reduce overall transparency in the allocation of support. For impact evaluations that rely on the assumption of a well-defined allocation mechanism, such deviations from the stated allocation rule present a challenge.
From an evaluation perspective, it would therefore be preferable for decisions on support to be based as consistently as possible on external review scores. If deviations are deemed necessary, we recommend that these decisions be documented in a systematic and transparent manner.
To strengthen the conditions for reliable impact evaluations, there are good reasons to consider allocating grants randomly. Such an approach – for example, a lottery among applications with similar assessment scores – would enable clearer comparisons and contribute to more robust knowledge about the program’s effects.
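As a purely hypothetical illustration of how such a lottery could work in practice, the sketch below groups applications into score bands and draws grant recipients at random within each band; the band width and the number of grants per band are invented for the example.

```python
# Hypothetical sketch of a lottery among applications with similar scores.
# Within each score band, grants are assigned by random draw, so funded
# and unfunded applicants are comparable by construction.
import random

random.seed(42)
applications = [{"id": i, "score": random.randint(50, 100)} for i in range(40)]
band_width = 5          # hypothetical width of a score band
grants_per_band = 2     # hypothetical number of grants per band

bands = {}
for app in applications:
    bands.setdefault(app["score"] // band_width, []).append(app)

funded = []
for band in bands.values():
    random.shuffle(band)                # the lottery step
    funded.extend(band[:grants_per_band])

print(f"Funded {len(funded)} of {len(applications)} applications")
```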
