Three numbers shape the DACH cybersecurity discourse: EUR 202.4 billion in Bitkom-reported cyber damages, 280,000 BSI malware variants per day, 29,151 reports filed with the Swiss BACS in the second half of 2025. They show up in CISO decks, conference slides and LinkedIn posts as if they confirmed each other. Methodologically they measure three fundamentally different things.
1. Three numbers, three methods
The numbers travel everywhere. LinkedIn posts across the DACH industry. Conference slides. Press releases. Newsletters. CISO decks that come to me for review. RFP texts from prospective clients. Pitches from cyber-insurance brokers assessing client risk profiles. Three numbers, multiple channels, always the same pattern: lined up next to each other as if they were three independent confirmations of the same threat picture.
That doesn’t survive methodological scrutiny. Bitkom is a survey extrapolation from 1,002 telephone interviews, projected onto the entire German economy. BSI is a hash-count measurement from AV-Test GmbH’s raw feed, a bright-field figure (detected cases only), and explicitly not a measure of threat. BACS is an inbox: voluntary citizen reports and mandatory critical-infrastructure reports across 27 sub-sectors are two very different data classes inside a single counter. Three method types. Three application domains. Next to each other on a slide they are not methodologically additive, and they do not confirm each other.
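The BACS counter makes the point arithmetically: one headline figure hides two data classes of very different weight. A minimal sketch, using only the figures cited in this article’s reference list:

```python
# BACS H2 2025 headline figure, decomposed into its two data classes
# (figures as cited in this article's reference list).
voluntary = 29_006   # voluntary reports, largely citizens and SMEs
mandatory = 145      # mandatory critical-infrastructure reports (ISG Art. 74a-h)

total = voluntary + mandatory
print(total)                        # 29151: the single counter on the slide
print(f"{mandatory / total:.2%}")   # mandatory share: 0.50%
```

Half a percent of the counter comes from the obligated reporters; the slide shows one number.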
That is the substance of the ZweiOnCTI pilot episode. What follows is the practitioner’s view: what I observe in the field, what I derive from it operationally for the next twelve months, and where the disclosed limits of the evidence lie.
2. Where this shows up regularly
Four observations from CTI engagements and client briefings over the last eighteen months. Anonymised, generalised, not study-backed. A consultant’s read, Admiralty E5.
- Nobody writes the method type next to the number. In almost every CISO deck that has come to me for review in the last twelve months, “Cyber damages, Germany: EUR 202bn” appears with no indication that this is an extrapolated trade-association survey. Anyone who doesn’t label the type implicitly claims that all numbers are equivalent. Nobody does this consciously, and yet it happens regularly. The fix is mechanical: the method type belongs next to the number, every time.
- The operational maturity of the new Swiss reporting obligation is unevenly distributed. The reporting obligation under ISG Art. 74a–h has been binding since 1 April 2025. Conversations with ICT leads from a range of obligated Swiss organisations across energy, healthcare and public administration over the last few months show a consistent pattern. Almost every ICT lead knows the 24-hour initial report. The 14-day follow-up report comes up less often. The intake channels (the Cyber Security Hub or the BACS reporting address by email) are frequently in incident plans as “see annex,” with the annex not containing the full workflow. The reporting protocol in the wiki is the easier half. The harder half is whether the deputy knows the workflow at 03:00 in the morning, without looking it up.
- The German NIS2 wave doesn’t hit the KRITIS veterans. Established German operators of critical infrastructure with ten years’ experience under the previous BSIG § 8b regime made the transition to the NIS2 implementation act without major friction. The operational load sits with the “important entities” under NIS2: mid-sized firms that until early December 2025 had no federal reporting experience, and that have since been learning to handle reporting thresholds, a 24-hour early warning, and fines of up to EUR 10 million. In February at an industry event in Stuttgart, the CFO of a German machine builder mentioned he had been told two weeks earlier that his company qualified as an “important entity.” He had no reporting process. That is not an outlier.
- Cyber insurers move extrapolations into underwriting without disclosing the method chain. In a routine spring conversation with the underwriter of a Swiss cyber insurer about a client’s risk profile, the underwriter cited the Bitkom EUR 202bn figure as industry risk calibration and could not explain how the number translates from an extrapolated trade-association survey into a premium. That is not an isolated case. When the risk level is justified by a trade-association survey the insurer has not replicated, the methodology has been institutionally shortened: the insured pay the surcharge without ever seeing the inference step.
3. Operational consequences for the next twelve months
Concrete, with decision criteria.
- Four-type filter as the first column in review. When a CISO deck with cybersecurity numbers comes to me for review, the four-type filter is the first column. Four words next to each number: measured, reported, estimated, extrapolated. Not a methodological doctorate — one column. If anyone asks what the markers mean, the explanation is two sentences, and that is exactly the desired follow-up question.
- Verify the reporting obligation at the person level. When a client is subject to a reporting obligation, the question is not about the reporting protocol in the wiki. It is the workflow at the person level. “A reporting process exists” is preparation. “Three named people know the thresholds, the deadlines and the intake channel without looking them up” is maturity. The opening question: “What happens at 03:00 in the morning, when the on-shift operator sees an anomaly that’s close to the Art. 14 Cybersecurity Ordinance threshold, and her line manager is on holiday abroad?” If the answer takes more than two sentences, there is usually still work to do.
- Sector-specific source stacks in the CTI briefing. When the meta-numbers don’t fit the industry, assemble a sector-specific source stack for the CTI briefing. For a mid-sized Swiss energy utility, what matters is not the Bitkom EUR 202bn but: BACS mandatory reports filtered to the energy sector, ENISA OT threat assessments, Volt Typhoon or Sandworm references from current Mandiant or Dragos reporting. Four sources, four method types, four Admiralty tags per NATO AJP-2.1. Three slides — honest, operational, verifiable. Effort: half a consultant-day per quarter.
- Demand the underwriter’s method chain. When a client negotiates a cyber-insurance policy, ask for the underwriter’s method chain. Which number is used for what; which assumption drives the premium calculation. This is not adversarial — it is due diligence in both directions. Underwriters typically welcome the question, because they themselves can argue more cleanly once they are forced to name the method.
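The four-type filter and the sector source stack above can be sketched as one small table. The figures and sources are the ones named in this article; the method-type assignments follow the article’s argument, while the Admiralty ratings shown are illustrative placeholders, not actual assessments:

```python
# One-column method-type filter plus an Admiralty-tagged source stack.
# Figures and sources are those named in this article; the Admiralty
# ratings (source reliability A-F, information credibility 1-6, per
# AJP-2.1) are illustrative placeholders, not actual assessments.
METHOD_TYPES = {"measured", "reported", "estimated", "extrapolated"}

# Headline numbers with their method type as the first review column.
slide_numbers = [
    ("Bitkom cyber damages, Germany", "EUR 202.4bn", "extrapolated"),
    ("BSI new malware variants",      "280,000/day", "measured"),
    ("BACS reports, H2 2025",         "29,151",      "reported"),
]

# Sector stack for a mid-sized Swiss energy utility (placeholder tags).
sector_stack = [
    ("BACS mandatory reports, energy sector", "reported",  "B2"),
    ("ENISA OT threat assessment",            "estimated", "B3"),
    ("Mandiant / Dragos actor reporting",     "reported",  "C3"),
]

for label, value, method in slide_numbers:
    assert method in METHOD_TYPES       # the filter: no untyped number
    print(f"{label:40s} {value:12s} [{method}]")

for source, method, admiralty in sector_stack:
    assert method in METHOD_TYPES
    print(f"{source:40s} [{method}] {admiralty}")
```

The assert is the whole mechanic: a number without a method type does not make it onto the slide.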
4. Counter-arguments and position
Three objections that come up regularly.
“Extrapolations are legitimate because they make cyber risk arguable at the board table and in the legislature.”
True. The Bitkom number was cited in the German Bundestag on 13 November 2025 during the NIS2 debate, by SPD member Johannes Schätzl, as a political mobilisation argument. The BfV (German domestic intelligence) co-signs the study as a Wirtschaftsschutz (economic-security) communication instrument. None of these uses is wrong. The number is not a lie. It is a political signal. The error arises when the political signal migrates into operational risk analysis without anyone naming the method type. Headline extrapolation for politics; sector numbers for operational risk argument. Two tools. Not in competition.
“Methodology critique is academic navel-gazing — practitioners need actionable guidance.”
If it stays critique, yes. When it ends in a four-type filter that sorts a board slide in an hour, it produces more actionable guidance, because wrong actions are filtered out before they tie up budget.
“We just take the Bitkom number, because the board wants a big number.”
If the board only needs the order of magnitude as a mobilisation argument, that framing belongs on the slide itself: “Political signal, B3 source, Bitkom extrapolation from 1,002 CATI interviews onto the German economy.” That is honest — and it leaves open whether the slide stands in that form, or whether a different conversation is needed.
5. Disclosed limits
Four points where disclosure is part of the rigour, not a defect of it.
- The Bitkom extrapolation formula. The study report documents sample and field period cleanly. The weighting variables for the projection and the treatment of item non-response on damage amounts are not fully public. The point is not directed at Bitkom — it is an observation about the class of extrapolated self-report surveys, whose structural fragility to outliers Florêncio and Herley (2011) at Microsoft Research documented.
- The Admiralty scale, empirical inconsistency. Kelly, Budescu, Dhami and Mandel (2025) show empirically that analysts grade the same source and the same statement differently at different points in time. Irwin and Mandel (2019) document communicative and criterial weaknesses for civilian OSINT-based CTI. The scale is therefore used as a discipline tool — it forces the separation of source rating from statement rating, and the inconsistency is named on-air and in every briefing rather than papered over.
- Effectiveness of type labels in board decks. No peer-reviewed longitudinal study has yet measured whether adding a “measured / reported / estimated / extrapolated” marker column changes investment outcomes. The position here rests on consulting practice across DACH engagements (Admiralty E5) and is presented as such. A longitudinal design measuring this would be a welcome contribution to the methodology literature.
- AJP-2.1 Edition C 2023. This piece cites Edition B (2016) of NATO doctrine AJP-2.1. Edition C is referenced in secondary literature; the citation here stays with the version that is verifiable against the NATO Standardization Office.
6. References
DACH situation reports and legal texts (primary sources):
- Bitkom Wirtschaftsschutz 2025: 1,002 CATI interviews, calendar weeks 16–24 2025, extrapolated to the German economy.
- BSI Lagebericht 2025: bright-field hash count, 280,000 variants per day (period July 2024 – June 2025).
- BACS half-year report 2025/2: H2 2025, 29,006 voluntary plus 145 mandatory reports.
- BSIG § 8b: KRITIS reporting obligation in Germany since 25 July 2015 (substantially revised by the NIS2 implementation act).
- NIS2 implementation act: in force 6 December 2025, ca. 29,500 entities.
- Swiss ISG, Art. 74a–h: reporting obligation since 1 April 2025.
- NISG 2026 Austria: in force 1 October 2026, ca. 4,000 entities.
- BaFin DORA briefing: financial-sector reporting cascade since 17 January 2025.
- CERT.at annual reports: 70 NIS reports in 2024 (23 mandatory plus 47 voluntary).
- NATO AJP-2.1, Allied Joint Doctrine for Intelligence Procedures, Edition B 2016.
Methodology literature (primary papers, direct PDFs where available):
- Anderson, Barton, Böhme, Clayton, Hernandez Ganan, Grasso, Levi, Moore, Vasek (2019), Measuring the Changing Cost of Cybercrime, WEIS — PDF.
- Florêncio, Herley (2011), Sex, Lies and Cyber-crime Surveys, Microsoft Research MSR-TR-2011-75 — publication page.
- CISA Office of the Chief Economist (2020), Cost of a Cyber Incident: Systematic Review and Cross-Validation — PDF.
- Vergara Cobos, Cakir; World Bank (2024), A Review of the Economic Costs of Cyber Incidents — PDF.
- Atlantic Council (2025), Counting the Costs: A Cybersecurity Metrics Framework for Policy — PDF.
- Irwin, Mandel (2019), Improving Information Evaluation for Intelligence Production, Intelligence and National Security 34(4), 503–525 — DOI 10.1080/02684527.2019.1569343.
- Kelly, Budescu, Dhami, Mandel (2025), The effect of source reliability and information credibility on judgments of information quality in intelligence analysis, Judgment and Decision Making — Cambridge Core.
Companion blog to the ZweiOnCTI pilot S1E00, “Numbers Nobody Questions”. Episode link: Podcast. TLP:CLEAR.