European AI Act calls for practical tools for entrepreneurs
Many entrepreneurs still hardly know what the European AI Act will mean for them. Chamber of Commerce research shows that only 7% of entrepreneurs are well informed about the new rules, even though many companies already use AI applications extensively. This is worrisome: part of the AI Act is already in effect, further parts will follow in August 2026, and supervision begins that same year. Organizations should therefore start preparing in time to comply with the Act.
Why the AI Act?
The AI Act is European legislation that applies in all member states and is intended to make the risks of AI to people and society manageable, with an emphasis on consumer protection and the protection of human rights. To determine what the Act means for an organization, it is advisable to work through a roadmap, which quickly shows which actions are needed. AIC4NL follows the Dutch central government's AI Regulation Guide, which we briefly walk through below.
Step 1: Determine if the system falls under the AI definition
If an application does not fall within the Act's definition of AI, no obligations apply. The definition turns on a system's autonomy, its adaptability after deployment and its capacity for inference. Note that methods not commonly regarded as "AI" in practice may still meet this definition.
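To make this check concrete, the sketch below models the three characteristics as a simple screening checklist. It is a purely illustrative aid under our own assumptions (the class, field and function names are invented here), not the legal test itself, which is more nuanced.

```python
from dataclasses import dataclass

@dataclass
class SystemProfile:
    """Illustrative screening fields; the legal assessment is more nuanced."""
    operates_with_autonomy: bool      # acts with some independence from its operator
    adapts_after_deployment: bool     # can change its behaviour based on new data
    infers_outputs_from_inputs: bool  # derives predictions, content or decisions

def may_fall_under_ai_definition(profile: SystemProfile) -> bool:
    # Rough screen: a system that infers outputs and shows autonomy or
    # adaptability deserves a closer legal look, even if nobody calls it "AI".
    return profile.infers_outputs_from_inputs and (
        profile.operates_with_autonomy or profile.adapts_after_deployment
    )

# Example: a rule-based scoring tool that infers a decision from input data.
scorer = SystemProfile(
    operates_with_autonomy=True,
    adapts_after_deployment=False,
    infers_outputs_from_inputs=True,
)
print(may_fall_under_ai_definition(scorer))  # True -> continue with step 2
```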
Step 2: Determine the risk category
The AI Act identifies four levels of risk:
- Prohibited applications - AI that manipulates people, applies social scoring or uses emotion recognition in the workplace or education may not be placed on the European market.
- High risk - Products such as cars, toys, elevators, machinery and medical devices fall into this category when they contain AI. Certain application areas, such as education, HR, law enforcement and access to public services, are also considered high risk.
- General-purpose AI - Models such as ChatGPT (OpenAI), Gemini (Google) and DeepSeek.
- Generative AI and chatbots - Applications that generate text, images or sound, or that communicate directly with users.
Step 3: Determine the role - provider or user
Providers and users have different responsibilities:
- Providers develop or market AI products, even if the model is developed for internal use.
- Users apply AI within their own processes or services.
More stringent requirements generally apply to providers than to users.
Step 4: Map out obligations
After determining the category, it becomes clear which obligations apply; a short code sketch after the list below ties the four steps together.
- Prohibited applications must be removed from the market.
- High-risk systems carry 10 core obligations, including risk management, technical documentation and safeguards against discrimination.
- For generative AI and chatbots, it must be clear when output was created by AI (e.g., via a watermark) and when someone is communicating with an AI system.
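As promised, here is a minimal triage sketch that condenses the roadmap above into code. Everything in it is our own hypothetical illustration: the enum values, the obligation lists and the function name are invented for this example; it is not an official tool or legal advice.

```python
from enum import Enum, auto

class RiskCategory(Enum):
    PROHIBITED = auto()
    HIGH_RISK = auto()
    GENERAL_PURPOSE = auto()
    GENERATIVE_OR_CHATBOT = auto()

class Role(Enum):
    PROVIDER = auto()  # develops or markets the AI product
    USER = auto()      # applies AI within its own processes or services

# Obligations per category, condensed from the roadmap above.
OBLIGATIONS: dict[RiskCategory, list[str]] = {
    RiskCategory.PROHIBITED: ["remove the application from the market"],
    RiskCategory.HIGH_RISK: [
        "risk management",
        "technical documentation",
        "safeguards against discrimination",
        # ...the Act lists 10 core obligations in total
    ],
    RiskCategory.GENERAL_PURPOSE: [
        # GPAI duties are not detailed in this step of the roadmap
    ],
    RiskCategory.GENERATIVE_OR_CHATBOT: [
        "make clear when output was created by AI (e.g. a watermark)",
        "make clear when someone is communicating with an AI system",
    ],
}

def obligations_for(category: RiskCategory, role: Role) -> list[str]:
    duties = list(OBLIGATIONS[category])
    if role is Role.PROVIDER:
        # Providers generally face stricter requirements than users.
        duties.append("expect stricter requirements as a provider")
    return duties

# Example: a company deploying a customer-service chatbot it did not build.
print(obligations_for(RiskCategory.GENERATIVE_OR_CHATBOT, Role.USER))
```

In practice, assigning a system to a category requires legal judgment; the sketch only shows how the outcomes of the four steps fit together.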
What does the Working Group on Values, Standards and Regulations (WNR) do?
The WNR working group supports organizations in the responsible and legally correct application of AI. The working group disseminates relevant guidelines and roadmaps, informs participants about new developments among regulators and policymakers, and links scientific insights, for example from the ELSA Labs, back to practice.
Participants can also contribute practical issues, which knowledge institutions then research further. By jointly working on knowledge and practical tools, WNR helps organizations not only comply with the new regulations, but also seize the opportunities that responsible AI offers. This keeps innovation possible within clear, reliable frameworks in which people and society are central.
AIC4NL is hosting a conference on the implementation of the AI Act on November 11, where experts and policymakers will share practical tools for organizations that want to get started with the law.
Want to know more?
Want to know more about the WNR working group? Send your question to normenwaardenenregelgeving@aic4nl.nl.
