Paris Peace Forum & CeSIA

Multi-Stakeholder Consultation on AI Safety & Cybersecurity Public Interest Research

Questionnaire for Institutional and Philanthropic Research Funders — Deadline: 19 May 2026, 20:00 CET

Estimated completion time: ~20 minutes

1. Context of the Consultation

The evolving threat landscape driven by the diffusion of AI capabilities is reshaping cybersecurity at a pace that outstrips the capacity of existing institutions to respond. Recent cyber capability benchmarks confirm that AI proficiency in offensive technical domains has effectively doubled within a year, while threat intelligence from major AI developers documents the real-world diversion of general-purpose models by adversaries. With the global average cost of a single data breach now estimated at $4.88 million (IBM, 2025) and AI-augmented attacks expected to multiply the volume of malicious operations, the implications for the global economy and security are far-reaching.

Yet the scientific and empirical foundations needed to inform effective policy responses remain thin. Across the research and policy communities, a recurring observation is that research on AI misuse risks in cyberspace tends to be fragmented, insufficiently resourced, and not always well connected to the policy processes that most need it. While private investment in AI has grown rapidly in recent years, public-interest research at the intersection of AI safety/security and cybersecurity operates with significantly fewer resources — and the structural conditions under which that research is funded often compound the problem.

In line with the ambitions promoted by the digital track of France's G7 Presidency in 2026, the Paris Peace Forum and the French Center for AI Safety (CeSIA) are conducting this consultation to inform a coordinated approach to international research investment. This questionnaire is specifically addressed to public funders, philanthropic foundations, international organizations, and institutional investors supporting research in AI safety/security, AI-cyber risk, and cybersecurity.

2. Objective of the Consultation

This written consultation seeks to gather structured input from the funding community to:

  • Map the current landscape of public and philanthropic investment in AI safety/security and cybersecurity research, identifying gaps, overlaps, and opportunities for coordination;
  • Diagnose structural barriers within existing funding mechanisms that limit the agility, scale, and policy-relevance of supported research;
  • Identify innovative funding models better suited to the pace and specificity of AI-cyber threats, drawing on lessons from adjacent domains;
  • Explore concrete coordination mechanisms among funders — from shared priority-setting to co-financing instruments and pooled infrastructure.

Based on the contributions submitted, a set of actionable outputs will be produced to inform intergovernmental discussions among G7 Digital Ministries and beyond.

Participants are invited to submit written contributions in English by Tuesday 19 May 2026, 20:00 CET, with references to supporting evidence where possible.

For any questions regarding this consultation, please contact the Paris Peace Forum secretariat and/or the CeSIA.

All contributions will be treated as confidential and used exclusively in aggregated or anonymized form, unless explicit written authorization is provided for attributed citation.
Step 1 of 8

Respondent Profile

Tell us a bit about yourself.

Step 2 of 8

Current Funding Landscape & Strategic Priorities

While we encourage you to answer all questions, feel free to skip any that fall outside your area of expertise.

What proportion of your organization's research portfolio is currently directed toward AI safety/security, AI-cyber risks, or cybersecurity research? How has this allocation evolved over the past three years, and what drove those changes?

What do you consider the 3–5 most critical research priorities at the intersection of AI safety/security and cybersecurity that are currently underfunded? Where do you see the widest gap between the urgency of the research question and the level of investment?

How do you currently identify and prioritize research areas for investment in this domain? To what extent are your priorities informed by threat intelligence, government strategy documents, researcher input, or independent assessments?

Are you aware of significant duplication or, conversely, critical blind spots across the funding landscape for AI-cyber research? What mechanisms could help identify these more systematically?

Step 3 of 8

Effectiveness & Agility of Current Funding Mechanisms

While we encourage you to answer all questions, feel free to skip any that fall outside your area of expertise.

What is the typical timeline from call for proposals to disbursement of funds for research grants in your portfolio? How does this compare to the pace at which AI-cyber threats evolve? What bottlenecks in your process could realistically be shortened?

To what extent do your current funding instruments allow for mid-course reallocation or scope adjustment when the research landscape shifts during a funded project? What constraints (legal, administrative, institutional) limit this flexibility?

What is the success rate for competitive grants in your AI/cyber research programmes? Do you observe that low success rates drive applicants toward conservative proposals? How do you currently attempt to support high-risk, potentially transformative research?

What proportion of researcher effort in your funded projects is absorbed by administrative compliance (reporting, audits, procurement)? Have you implemented or considered measures to reduce this burden while maintaining accountability?

Step 4 of 8

Innovative Funding Models & Instruments

While we encourage you to answer all questions, feel free to skip any that fall outside your area of expertise.

Have you experimented with or considered any of the following funding innovations: rapid-response grants (< 30 days to award), rolling/no-deadline calls, bridge funding for highly rated unfunded proposals, sandpit/co-creation workshops, milestone-based disbursement, or multi-year flexible envelopes? What has worked, what has not, and why?

What funding mechanisms would best serve the specific needs of AI-cyber research (which requires both sustained capacity and rapid response to emerging threats)? Could a tiered model combining long-term baseline funding with rapid-deployment supplements be viable?

How could funding instruments be designed to better incentivize interdisciplinary research spanning AI safety/security and cybersecurity, given that these communities have different methodological traditions, publication cultures, and evaluation criteria?

What role could shared research infrastructure (compute clusters, sandboxed testing environments, curated threat datasets) play in maximizing the impact of research investments? How should access to such infrastructure be funded and governed?

Step 5 of 8

Coordination Among Funders

While we encourage you to answer all questions, feel free to skip any that fall outside your area of expertise.

To what extent do you currently coordinate your AI-cyber research investments with other public or philanthropic funders? What mechanisms exist (formal or informal) for sharing information about funding priorities, active portfolios, and emerging gaps?

Would you consider participating in joint or co-funded calls for proposals with other funders in this domain? What preconditions (shared priorities, compatible processes, governance agreements) would need to be met?

What would a useful international coordination platform for AI-cyber research funders look like in practice? What information would you want it to provide (e.g., mapping of active grants, pipeline visibility, shared metrics), and what governance model would earn your trust?

How could coordination be structured to avoid the slowest-common-denominator problem — i.e., ensuring that alignment processes do not themselves become a source of delay in a fast-moving domain?

Step 6 of 8

From Research to Policy Impact

While we encourage you to answer all questions, feel free to skip any that fall outside your area of expertise.

How do you currently measure the policy impact of the research you fund? What metrics or indicators do you use, and what are their limitations?

What mechanisms could improve the translation of funded research into actionable inputs for policymakers (e.g., embedded policy fellowships, structured policy briefs as deliverables, researcher secondments to government agencies, dedicated knowledge-brokering functions)?

How could funded research be better connected to operational defense needs — ensuring that findings about AI-driven threats translate into effective mitigation efforts, not just academic publications?

Step 7 of 8

International Dimensions & G7 Alignment

While we encourage you to answer all questions, feel free to skip any that fall outside your area of expertise.

What role should G7 governments play in coordinating AI-cyber research investments? Would a formal G7 commitment to align national research funding in this area be useful, and what form should it take (shared priorities, co-financing pledges, pooled infrastructure)?

How could philanthropic and public funding be better connected in this domain? What barriers (legal, institutional, cultural) currently prevent more effective partnership between governmental/supranational agencies and foundations?

What would a credible 4-year international investment roadmap (2027–2030) for AI-cyber research require? What funding levels, governance structures, and accountability mechanisms would make such a roadmap meaningful rather than aspirational?

How can the research investment framework ensure broad geographic coverage, including participation from countries beyond the G7 that face significant AI-cyber risks but have limited research capacity?

Thank You for Your Contribution

Thank you for your valuable contribution. The Paris Peace Forum secretariat and the CeSIA will follow up with you to share the outputs produced on the basis of this consultation.