The CyFun® 2025 Framework deliberately chooses not to include AI as a separate function or category within its core model. This approach aligns with the methodology used in the NIST Cybersecurity Framework 2.0. The main reasons are:
1. AI is addressed through additional profiles and overlays
The CyFun® 2025 Framework allows for the development of specific profiles or sector-based extensions for domains such as AI. This enables organisations that develop or use AI systems to follow targeted guidance without altering the core framework.
2. Avoiding duplication of existing controls
Many of the security measures required for AI are already covered by existing CyFun® 2025 controls. To prevent redundancy, the framework opts to manage AI-related risks through established principles such as risk management, governance, and incident response.
3. Use-case specific approach
AI introduces unique risks depending on its application (e.g. generative AI, machine learning in critical infrastructure). The CyFun® 2025 Framework supports a proportional and context-driven approach, allowing organisations to determine which additional safeguards are needed based on their specific use of AI.
4. Future-oriented flexibility
The framework is designed to evolve alongside technological developments. AI can be further integrated in the future through new profiles or national guidelines, depending on the needs of Belgian organisations and on Belgian and European laws and regulations.
NOTE
🔹 What are profiles?
A profile is an application of the framework to a specific sector, organisation, or technology. It helps translate the framework's general principles into concrete measures that are appropriate for:
· a specific type of organisation (e.g., a hospital, a bank, a government agency),
· a specific technology (e.g., cloud, AI, OT),
· or a specific risk context (e.g., critical infrastructure, privacy-sensitive data).
Example: An AI profile within CyFun® 2025 would indicate which existing controls are relevant for AI systems and which additional measures are needed for safe and responsible AI applications.
🔹 What are overlays?
An overlay is a layer on top of the framework, providing additional guidelines or adjustments for a specific situation. It is narrower than a profile and often focuses on technical or legal requirements.
Example: An overlay for generative AI could indicate how existing CyFun® 2025 controls should be adapted to address risks such as deepfakes, model hallucinations, or copyright issues.
In summary:
· Profiles = applying the framework to a specific context.
· Overlays = an additional layer with guidelines for specific risks or technologies.
Both ensure that the framework remains flexible and extensible, without having to constantly adapt the core principles.