"AI is a buzzword, and buzzword bingo shows up in defense too," says Timon Osinga,
Research Consultant Strategic (Technology) Foresight and Airpower at TNO. "But our analysis of tech trends with subject-matter experts turns tech watch into remarkably realistic and pragmatic insights."
That tension, between loud headlines and quiet, evidence-based judgment, runs through tech watch and the defense industry at large. Private industry now drives much of the relevant technology, dual-use spills across civil and military lines, and the same “innovation” can mean very different things for Army, Navy, Air & Space, or Military Police missions.
In a candid, hands-on session with ITONICS, Timon Osinga walked through how the Netherlands’ Ministry of Defence turns signals into decisions: scoring impact vs. feasibility for us and the opponent, adding expected time-to-TRL9, and increasingly measuring Capability Readiness to reflect whether the force can actually field what the lab can build.
This follow-up distills 18 hype-resistant moves you can copy: from parallel SME evaluations and living subscriptions to annual assessment sprints, clear decision triggers, and cross-border reality checks with peer organizations.
The conversation followed the publication of the ITONICS report Defense Innovation in 2025, which shows a new defense innovation model emerging and how new players and established primes are redefining product development in the defense industry.
Summary and FAQs on modern R&D
How is the TNO approach different from traditional TRL-based evaluations?
Most systems stop at Technology Readiness Levels (TRL), but TRL only tells you how mature something is in the lab.
This approach adds expected time to TRL 9, Capability Readiness Level (CRL), and opponent feasibility. Together, these layers create a more realistic picture of who can field a technology, when, and with what impact.
Why score feasibility separately for our side and the opponent?
Because our adversaries don’t play by the same rules. A technology we’d need years to integrate (due to ethics, doctrine, or supply chains) might be fielded faster by a less constrained opponent.
Scoring both perspectives helps identify urgent vulnerabilities and counter-move priorities.
What if we don’t have the resources to run a full tech watch program?
Start small. Focus on 1–2 mission areas and a short list of emerging technologies.
Standardize your scoring criteria, involve operators early, and run structured sprints. With the right workflows and tools, even a small team can punch above its weight.
How often should tech watch assessments be updated?
It depends on the pace of change in your domain, but TNO recommends a mix of passive tracking (via subscriptions and triggers) and periodic active sprints. Most items should be revisited at least annually, or immediately if new evidence changes feasibility or impact ratings.
How do we make sure tech watch insights lead to real decisions?
Build in decision triggers from the start.
If a tech scores high in opponent feasibility and impact, that should activate a clear next step, whether it’s expert consultation, procurement acceleration, or doctrine revision.
The best tech watch systems make action automatic, not optional.
Tech scoring: how to evaluate what really matters
1. Split impact and feasibility for us and for the opponent
“Is this an opportunity for us, or a threat from them?” asks Timon Osinga, foresight expert at TNO. In defense tech watch, scoring feasibility only from your own perspective misses the bigger risk picture. “You always need to consider both sides of the chessboard,” he says.
At TNO, feasibility is scored twice: once for the organization and once for the adversary. If a technology is high-impact and easy for the opponent to adopt, it demands urgent attention. “We don’t want to know what’s cool,” says Osinga. “We want to know what could hurt us or help us soon.”
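As a rough illustration, here is a minimal Python sketch of that dual-perspective scoring. The names, 1–5 scales, and thresholds are invented for the example, not TNO's actual schema:

```python
from dataclasses import dataclass

@dataclass
class TechScore:
    """One technology rated from both sides of the chessboard.
    Names, scales, and thresholds are illustrative, not TNO's schema."""
    name: str
    impact: int           # 1-5: how much it could change the mission
    feasibility_us: int   # 1-5: how easily we could adopt it
    feasibility_opp: int  # 1-5: how easily the opponent could adopt it

def urgent(t: TechScore, threshold: int = 4) -> bool:
    # High impact plus easy opponent adoption demands urgent attention,
    # even if fielding it ourselves would be slow.
    return t.impact >= threshold and t.feasibility_opp >= threshold

swarm = TechScore("low-cost drone swarming", impact=5, feasibility_us=3, feasibility_opp=4)
print(urgent(swarm))  # True: the opponent-side score drives the alarm
```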
2. Add expected time to TRL 9
"TRL is just a snapshot," says Timon Osinga. "But it doesn't tell you when you'll actually get there."
At TNO, every tech evaluation includes a time estimate to reach TRL 9, the point of full field readiness. This time-to-field view matters because domains mature at wildly different speeds: ICT may move in 18 months, platforms may take 15 years. Anchoring evaluations to realistic timelines forces prioritization based on urgency, not just potential.
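A hedged sketch of how a time-to-TRL-9 estimate might temper a raw impact score. The decay formula below is an assumption made for illustration, not TNO's method:

```python
# Illustrative only: discount a raw impact score by how far away TRL 9 is,
# so an 18-month ICT item outranks a 15-year platform with equal impact.
def priority(impact: int, years_to_trl9: float) -> float:
    # Simple decay: the further out full field readiness, the lower today's urgency.
    return impact / (1.0 + years_to_trl9)

print(round(priority(impact=4, years_to_trl9=1.5), 2))   # 1.6  (ICT pace)
print(round(priority(impact=4, years_to_trl9=15.0), 2))  # 0.25 (platform pace)
```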
3. Measure Capability Readiness Level (CRL)
A high TRL doesn’t mean your force is ready to use it. “You can have a mature tech,” Osinga notes, “but if you lack doctrine, logistics, or integration, you won’t field it.”
That’s why TNO pairs TRL with Capability Readiness Level. CRL considers training, sustainment, and operational fit. It brings reality into planning and avoids overconfidence in what looks good on paper.
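One way to encode that pairing, as a sketch: a technology is only "fieldable" when both readiness dimensions clear a bar. The numeric CRL scale and thresholds here are assumptions for the example:

```python
def fieldable(trl: int, crl: int, min_trl: int = 8, min_crl: int = 7) -> bool:
    # A lab-mature technology (high TRL) still isn't fieldable if doctrine,
    # logistics, and training (summarized here as CRL) lag behind.
    return trl >= min_trl and crl >= min_crl

print(fieldable(trl=9, crl=4))  # False: mature in the lab, not in the force
print(fieldable(trl=8, crl=7))  # True: both dimensions clear the bar
```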
4. Use multiple lenses for the same truth
“There’s no one right view,” says Osinga. “Army, Air Force, Cyber, they all see different things in the same tech.”
TNO’s platform allows stakeholders to slice the data by mission area, by technology domain, or by time horizon. These multiple lenses help users explore what matters most to them, while staying anchored in a single source of truth.
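A minimal sketch of the lens idea: every stakeholder filters the same records rather than keeping a private copy. Field names and entries are invented for illustration:

```python
# One shared dataset, many lenses. Field names and entries are invented.
items = [
    {"name": "quantum sensing",     "domain": "ISR",    "mission": "Navy", "horizon": "10y"},
    {"name": "loitering munitions", "domain": "strike", "mission": "Army", "horizon": "3y"},
]

def lens(data, **filters):
    # Each stakeholder filters the same records; nobody keeps a private copy.
    return [i for i in data if all(i.get(k) == v for k, v in filters.items())]

print(lens(items, mission="Army"))  # the Army's slice
print(lens(items, horizon="10y"))   # the long-horizon slice
```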
We don't want the hype. We want to know what could hurt us, or help us, soon.
5. Log the “why,” not only the score
Scores without context get challenged or ignored. “You want to know why someone rated something high,” Osinga explains, “not just the number.”
Every rating at TNO comes with a short rationale and linked sources. That audit trail helps reviewers trust each other’s judgment and gives decision-makers a clear path from signal to insight.
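In data terms, that means a rating is a record, not just a number. A sketch with an invented structure, not TNO's production schema:

```python
from dataclasses import dataclass, field

@dataclass
class Rating:
    """A score travels with its 'why'. Structure is a sketch, not a real schema."""
    rater: str
    score: int
    rationale: str                                    # the short 'why' behind the number
    sources: list[str] = field(default_factory=list)  # linked evidence for the audit trail

r = Rating(rater="ops_sme_07", score=5,
           rationale="Demonstrated at squad level in two recent exercises.",
           sources=["https://example.org/exercise-report"])
print(r.rationale)
```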
6. Put operators at the center of scoring
Operators bring the mission lens that hype often ignores. “They’re pragmatic,” Osinga says. “Not overly optimistic, not pessimistic, just grounded.”
That realism stabilizes evaluations. TNO makes sure that operators, not just researchers or analysts, have a voice in scoring workshops and divergence reviews. Their experience helps translate tech promise into operational value.
7. Watch for combination effects
“We often miss the combination effect,” warns Osinga. “The smartphone wasn’t new tech, it was the convergence.”
Tech watch must do more than rate individual trends. TNO regularly looks across domains to ask: What if these mature at the same time? What happens when AI meets robotics meets cyber-offense? These questions often reveal disruptive potential hiding in plain sight.
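A toy way to surface candidate combinations: flag domain pairs whose expected maturity windows overlap. The years below are illustrative placeholders, not forecasts:

```python
from itertools import combinations

# Toy convergence check: flag domain pairs expected to mature within a couple
# of years of each other, since joint maturation is where disruption hides.
maturity_year = {"AI": 2027, "robotics": 2028, "cyber-offense": 2027, "hypersonics": 2035}

for a, b in combinations(maturity_year, 2):
    if abs(maturity_year[a] - maturity_year[b]) <= 2:
        print(f"review combination effect: {a} + {b}")
```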
Tech watch structure: how to run the system so it scales
8. Bridge tech experts and operators in one workflow
“The key is not just collecting input, it’s creating a shared language between experts and end users,” says Osinga.
At TNO, tech watch isn’t siloed. Engineers, researchers, and military operators all evaluate and interact with the same items on the platform. This shared space helps surface practical concerns early, whether it’s ethics, logistics, or operational fit, and avoids the classic handoff problem where analysis goes unused because no one owns the next step.
9. Parallelize evaluations with lightweight workflows
In defense, timing matters, but evaluations often get stuck in email chains and waiting games. “If 10 experts are working in sequence, it takes forever,” says Osinga.
TNO built its platform to support real-time, parallel reviews. Stakeholders rate, comment, and update at the same time, all tracked in one place. This not only speeds the process but also makes bottlenecks and indecision visible.
10. Keep the portfolio living with subscriptions
“A static list is a dead list,” Osinga explains. “People engage more when they get notified, when the system works for them.”
TNO lets users “follow” technologies of interest. If a high-threat item is re-rated or new evidence is added, the right experts and teams are notified automatically. This keeps the portfolio alive, updated, and reactive, working with limited attention spans rather than against them.
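The mechanic is essentially an observer pattern. A minimal sketch, with invented item and team names:

```python
# Minimal observer sketch of the "follow" mechanic; item and team names invented.
subscribers = {"counter-drone jamming": ["ew_team", "procurement"]}

def rerate(item: str, new_score: int) -> None:
    # Re-rating an item pushes the update to everyone who follows it.
    for team in subscribers.get(item, []):
        print(f"notify {team}: '{item}' re-rated to {new_score}")

rerate("counter-drone jamming", 5)
```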
You need a system that doesn't just store insights, but makes them travel to the right people at the right time.
11. Run assessment sprints
“Sometimes you need momentum. A focused push,” says Osinga.
Rather than rely only on ongoing updates, TNO organizes periodic evaluation workshops where SMEs review and rate hundreds of technologies in a structured session. These bursts of attention surface high-priority items quickly, and make it easier to justify decisions because rationale and ratings are collected in one go.
12. Design dual-use from day one
“We deliberately bring in our colleagues from the civilian side,” says Osinga. “There’s no point building parallel foresight systems.”
Dual-use is core to how TNO runs its tech watch. Teams working on energy, encryption, and networks are involved from the start. Half-yearly cross-talks help spot civil-military spillovers early and keep the radar relevant to both defense and innovation teams.
13. Add context beyond technology
A tech trend doesn’t mean much in isolation. “You need to read tech in context: demographic, social, economic,” Osinga explains.
At TNO, signals are often tagged with non-technical drivers and uncertainties. The goal is to understand not just what the technology does, but what kind of world it might operate in. This makes horizon scanning more strategic and less reactive.
14. Compare methods across borders
“No country sees the future alone,” says Osinga. “Sometimes, you learn the most by seeing where you disagree.”
TNO actively collaborates with other defense foresight groups, including FOI in Sweden, FFI in Norway, and the UK’s DSTL. In one case, the Netherlands and Sweden independently evaluated the same technologies, then met to compare. Divergent ratings weren’t a problem; they were a trigger for learning.
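A sketch of what that comparison can look like in practice: line up two independent rating sets and let the gaps set the agenda. The scores below are invented:

```python
# Two independent rating sets for the same items (scores invented).
nl = {"quantum comms": 4, "swarming": 5, "biotech": 2}
se = {"quantum comms": 4, "swarming": 2, "biotech": 3}

# Gaps of 2+ points aren't errors; they're the agenda for the next workshop.
divergent = {t: (nl[t], se[t]) for t in nl if abs(nl[t] - se[t]) >= 2}
print(divergent)  # {'swarming': (5, 2)} -> start by asking why
```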
Tech watch communication: How insights become decisions
15. Make “passive → active” a habit
“Most platforms end up being used like a library,” says Timon Osinga. “But tech watch only works if it reaches the people who need to act.”
At TNO, active dissemination is built into the process. High-priority items trigger notifications, show up in internal newsletters, and even get shared during “roadshow” briefings with leadership or project teams. The goal is to turn silent evidence into visible, usable input that nudges real decisions forward.
If something scores high threat and high feasibility for the opponent, that is when we act.
16. Define decision triggers up front
“If something scores high threat and high feasibility for the opponent, we know we need to do something,” Osinga explains.
That “something” could mean starting a counter-tech project, consulting operational experts, adjusting doctrine, or simply watching more closely. The key is that decision pathways are predefined. This ensures that evaluations don’t just inform, but initiate, turning analysis into timely action with ownership and follow-through.
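A decision trigger can be as plain as a rule that maps a score pattern to a named next step with an owner. A sketch, with illustrative thresholds and actions:

```python
# Sketch of a predefined trigger: a score pattern maps to a next step with
# an owner, so action is automatic, not optional. Thresholds are illustrative.
def trigger(threat: int, opp_feasibility: int) -> str:
    if threat >= 4 and opp_feasibility >= 4:
        return "start counter-tech project (owner: R&D lead)"
    if threat >= 4:
        return "consult operational experts (owner: mission planner)"
    return "keep watching (owner: tech watch team)"

print(trigger(threat=5, opp_feasibility=4))
```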
17. Close capability gaps (even if maturity lags)
In some cases, urgency matters more than TRL. “If a capability gap is critical,” says Osinga, “we might act earlier and accelerate the tech as we go.”
This pragmatic approach acknowledges that waiting for perfection can leave you vulnerable. Procurement or field trials might start while development is still underway, especially when the mission need is clear and growing.
18. Show movement over time
“Operators don’t want a snapshot,” Osinga points out. “They want to see what’s changed—and why.”
That’s why TNO doesn’t treat its assessments as one-time events. Items are revisited, re-scored, and visualized to reflect updated evidence or expert feedback. When something moves on the radar or matrix, it’s not random, it’s documented, explained, and traceable. This builds trust in the system and gives leadership confidence that tech watch is responsive to real-world dynamics.
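Underneath, this just means keeping every re-score with a date and a reason. A minimal sketch with an invented history:

```python
# Keep every re-score with a date and a reason; the history below is invented.
history = [
    ("2024-03", 2, "lab demos only"),
    ("2024-11", 3, "field trial by a peer nation"),
    ("2025-06", 4, "observed operational use"),
]

# Movement on the radar is then documented, not mysterious.
for (d1, s1, _), (d2, s2, why) in zip(history, history[1:]):
    print(f"{d1} -> {d2}: score {s1} -> {s2} ({why})")
```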
What to take away: Tech watch is only useful if it moves decisions
Modern defense tech watch requires a system that separates signal from noise, connects insight to action, and brings the right people into the loop at the right time.
As Timon Osinga shows, making tech watch work means scoring what matters (not what’s trendy), sharing one language across operators and analysts, and defining clear triggers that turn foresight into fielded capability. Whether it’s rating threat feasibility, measuring time to TRL 9, or closing gaps before they become vulnerabilities, these 18 moves offer both a mindset and a method.
Sometimes the most important thing is to understand why someone else rated it differently. Exchange drives understanding. Understanding drives action.
The mission isn’t to predict the future. It’s to prepare for it: intelligently, collaboratively, and decisively.
Because in the end, tech watch only works if it helps you act.
Boost your tech watch with ITONICS, the best foresight software
The ITONICS Innovation OS is a modular software platform designed to help organizations, public agencies, and R&D teams build a future-ready tech watch system. From structured signal capture to expert scoring, horizon scanning, and roadmap integration, ITONICS supports evidence-based decisions at speed and scale.
Cut noise and surface what matters: Get a centralized, living view of your technology portfolio, enriched with evidence, ratings, and expert input. ITONICS helps teams identify high-impact, high-threat developments and filter out weak signals or duplicate entries. Built-in automation, deduplication, and relevance scoring help you focus on the signals that matter most.
Translate insight into action: With ITONICS, you can define evaluation frameworks (e.g., impact × feasibility, TRL + CRL), set decision triggers, and track ownership. Roadmaps, alerts, and customizable dashboards ensure that strategic priorities don’t get stuck in reports, they move forward with clarity and accountability.
Enable collaboration across silos and borders: Bring analysts, operators, SMEs, and leadership into one shared space. ITONICS lets you run parallel assessments, compare divergent views, and structure regular foresight rituals, whether within your own organization or in collaboration with partners across the industry or beyond.