Frequently Asked Questions
“Isn’t this just another evaluation or consulting exercise with new language?” No. Evaluations assess whether a program met predefined objectives after the fact. Consulting typically recommends best practices within the existing setup. We do neither. We model how your current institutional design behaves over time, regardless of intentions, and show what outcomes it is structurally set up to produce. The diagnostic does not judge performance and does not prescribe actions. It reveals causality.
“We already use dashboards, indicators, and external experts. What does this add?” Dashboards and expert assessments treat indicators in isolation. Our diagnostic models how those indicators interact, reinforce, or cancel each other over time. Most system failures do not sit in single metrics, but in the way metrics are combined under fixed decision timing. This interaction is currently unmeasured.
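A minimal sketch of the point, with entirely invented indicators and parameters: two metrics that each look acceptable in isolation, combined under a fixed quarterly decision rule, steadily drain the portfolio.

```python
# Toy sketch with invented indicators and parameters. The two metrics are
# coupled: culling projects raises the average success rate, expanding
# lowers it. Under a fixed quarterly threshold rule, each metric hovers
# near its target while the portfolio itself shrinks.

def simulate(quarters: int = 12) -> None:
    pipeline = 100.0      # projects in the portfolio
    success_rate = 0.50   # share of reviewed projects judged on track

    for q in range(quarters):
        if success_rate < 0.60:
            pipeline *= 0.85        # cull "underperformers"
            success_rate += 0.05    # survivors look better on average
        else:
            pipeline *= 1.05        # expand while the metric looks good
            success_rate -= 0.08    # new entrants drag the average down
        print(f"Q{q + 1:02d}  pipeline={pipeline:6.1f}  success_rate={success_rate:.2f}")

simulate()
```

In this toy run, neither dashboard line ever flags a problem: the success rate stays within a narrow band around its target, yet the pipeline shrinks by roughly two thirds. The failure lives in the interaction between the metrics and the fixed decision timing, and that interaction is what the diagnostic models.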
“Public programs cannot operate like venture capital. Isn’t this assuming a private logic?” No. In fact, the diagnostic explicitly models why VC logic fails as a default for many deep tech programs. We do not push VC-style speed, selection, or returns. We model public mandates, political cycles, audit constraints, and sovereignty goals as first-class system constraints, not as problems to be ignored.
“Our constraints are political and legal. You can’t model that.” Those constraints are precisely what we model. Political accountability, auditability, fixed cycles, and distributed authority are not noise. They are structural parameters that shape outcomes. Ignoring them produces unrealistic recommendations. Treating them as system variables produces usable insight.
“Isn’t this just saying that deep tech takes longer and needs more money?” No. The diagnostic often shows that adding money without changing decision timing or capital thresholds reduces deployment probability by amplifying false positives. The issue is not more funding. It is when, how, and under what conditions funding becomes irreversible.
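A back-of-envelope Monte Carlo sketch of that mechanism, with invented base rates and noise levels: when the selection gate sits at a fixed early point where the evaluation signal is noisy, funding deeper into the ranked list lowers the probability that a funded project is genuinely deployable.

```python
# Monte Carlo sketch with invented parameters. At a fixed early gate the
# evaluation signal is noisy, so extra budget funds further down a noisy
# ranking: the marginal project is more likely to be a false positive.

import random

random.seed(1)

N_CANDIDATES = 1000
BASE_RATE = 0.15       # share of genuinely deployable projects (assumed)
SIGNAL_NOISE = 0.8     # high, because the decision point is fixed early

def portfolio_precision(slots: int, trials: int = 200) -> float:
    hits = 0
    for _ in range(trials):
        candidates = [random.random() < BASE_RATE for _ in range(N_CANDIDATES)]
        ranked = sorted(candidates,
                        key=lambda good: float(good) + random.gauss(0.0, SIGNAL_NOISE),
                        reverse=True)
        hits += sum(ranked[:slots])
    return hits / (slots * trials)

for slots in (50, 100, 200, 400):
    print(f"funded={slots:3d}  P(deployable | funded) = {portfolio_precision(slots):.2f}")
```

In the sketch, roughly doubling the funded portfolio at the same gate mostly adds false positives; moving the gate later (lower noise) or staging capital thresholds changes the curve. That is why timing and thresholds, not budget size, are the levers.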
“Are you saying our evaluators or program managers are making wrong decisions?” No. The diagnostic explicitly reframes outcomes as a structural problem, not a people problem. Highly competent individuals can only act within the decision space the system allows. When authority, risk, and timing are misaligned, even optimal local decisions aggregate into suboptimal system outcomes.
“This sounds risky politically. What if the diagnostic shows uncomfortable results?” That is precisely why the diagnostic exists. It produces modeled futures, not accusations. It allows leadership to say: “Based on this system, these outcomes are likely. Are we comfortable with that?” It creates decision legitimacy rather than political exposure.
“Does this mean fewer projects or less geographic distribution?” Sometimes. And sometimes the opposite. The diagnostic does not optimize for fewer or more projects. It optimizes for coherence between mandate, infrastructure, and expected outcomes. If volume or geographic distribution is a non-negotiable political requirement, the diagnostic can model what honoring that requirement implies elsewhere in the system.
“Will this slow us down even more?” No. It typically reduces late-stage delays by moving uncertainty upstream. Slowing irreversible decisions early is not the same as slowing execution overall. Programs often experience faster resolution downstream once structural ambiguity is removed.
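A deliberately simple expected-duration calculation, with invented numbers, of why resolving ambiguity upstream can shorten the overall timeline:

```python
# Invented numbers for illustration: paying a small, certain upstream delay
# versus carrying a probable late-stage rework into execution.

def expected_duration(upfront_months: float, rework_prob: float,
                      rework_months: float, base_months: float = 24.0) -> float:
    return base_months + upfront_months + rework_prob * rework_months

fast = expected_duration(upfront_months=0, rework_prob=0.5, rework_months=14)
slow = expected_duration(upfront_months=3, rework_prob=0.1, rework_months=14)
print(f"decide fast, rework later:   {fast:.1f} months expected")
print(f"resolve ambiguity upstream:  {slow:.1f} months expected")
```

With these assumed figures, the three months spent upstream buy back seven expected months of late-stage rework. The exact numbers are program-specific; the shape of the trade-off is not.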
“How is this different from AI-based policy or decision tools?” AI tools optimize against historical data and predefined objectives. Ours models causal structure rather than extrapolating from past correlations. It explicitly represents blind spots, missing data, and qualitative constraints, precisely where AI systems tend to hallucinate certainty. AI can support the system. It cannot replace it.
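A compact illustration of the difference, on synthetic data in an invented scenario: a confounder makes a genuinely helpful intervention look harmful in the historical record. A correlation-based tool trained on this data would advise against the intervention; modeling the causal structure (here, simply stratifying on the confounder) recovers the true effect.

```python
# Synthetic confounding example (invented scenario and numbers). Risky
# programs attract extra review, so "reviewed" correlates with failure
# even though review itself improves outcomes.

import random

random.seed(2)

rows = []
for _ in range(100_000):
    risky = random.random() < 0.5                        # confounder
    reviewed = random.random() < (0.8 if risky else 0.2)  # risky -> more review
    p = (0.3 if risky else 0.7) + (0.1 if reviewed else 0.0)
    rows.append((risky, reviewed, random.random() < p))

def success_rate(pred) -> float:
    sel = [ok for risky, rev, ok in rows if pred(risky, rev)]
    return sum(sel) / len(sel)

# Correlation view: review looks harmful.
print("P(success | reviewed)     =", round(success_rate(lambda r, v: v), 2))
print("P(success | not reviewed) =", round(success_rate(lambda r, v: not v), 2))

# Causal view: hold the confounder fixed, and review helps in both strata.
for stratum in (False, True):
    with_rev = success_rate(lambda r, v, k=stratum: r == k and v)
    without = success_rate(lambda r, v, k=stratum: r == k and not v)
    print(f"risky={stratum}: reviewed {with_rev:.2f} vs unreviewed {without:.2f}")
```

The same data supports opposite recommendations depending on whether the causal structure is represented. That gap is where pure pattern-fitting tools overstate their confidence.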
“What if we run the diagnostic and decide to change nothing?” Then the diagnostic has succeeded. Clarity that the current system produces acceptable outcomes is a valid strategic decision. We do not assume change is necessary. We make consequences visible.
“Is this compatible with EU-level programs and audit requirements?” Yes. The diagnostic is non-invasive. It does not interfere with funding decisions, legal frameworks, or procurement rules. It operates at the infrastructure level, producing insight that strengthens, rather than weakens, auditability and political defensibility.
“Why haven’t we seen this before?” Because most innovation support focuses on projects, ventures, or instruments. Very few approaches model the institution itself as a system with internal physics. Deep tech has outgrown tool-level optimization. Infrastructure is now the limiting factor.
