The Netherlands, a worldwide AI knowledge hub

Luuk van der Velden
4 min read · Nov 22, 2020


Kickstart AI was announced on the 10th of October 2019 to boost the development of Dutch AI talent and technology through collaboration between companies and universities. Its goal is to strengthen the AI community and make the Netherlands a globally relevant AI knowledge hub. Kickstart AI operates in the context of national initiatives to grow our AI capabilities. But how can the Netherlands become a worldwide AI leader when technical superiority seems out of the question?

Dutch tulips
Photo by jamal . from Pexels

A leader in the ethics of high tech

The Netherlands has been a worldwide leader in the ethical application of high technology in society since the 1960s (article by TU Delft). The book ‘The Social Construction of Technological Systems’ (1987, MIT Press) inspired a generation of researchers to reflect critically on technology and society. This influence can be seen in the activities of the Rathenau Institute, which continuously provides input to the Dutch government and informs Brussels on digital society and democracy, robotics, genetic manipulation and more. It is accompanied by the programme for societally responsible innovation of NWO, the financer of Dutch research.

Professors Jeroen van den Hoven (TU Delft) and Peter-Paul Verbeek (University of Twente) argue that a protection regulation, analogous to the European GDPR, can be conceived for AI and autonomous systems. For the Netherlands to spearhead such an initiative would require investment and coordination.

AI protection principles

Let us look at possible AI protection principles to get an idea of the scope of the challenge. First, some definitions. With “AI” I mostly mean the application of Machine Learning (ML), a subset of Artificial Intelligence. A “data-driven solution” is a decision-making system based on closed and open data. In practice, data-driven solutions can contain non-ML algorithms.

1 Purpose limitation

AI solutions are driven by the business question they need to answer. Such questions need to be limited in scope for them to be open to public scrutiny. A vaguely worded purpose is meaningless, even if it is published.

2 Transparency

The Municipality of Amsterdam

The municipality of Amsterdam defines three types of transparency when procuring or developing AI models:

· Technical transparency

· Procedural transparency

· Explaining AI outcomes

Technical transparency covers source code, model weights and hyperparameters. The municipality understands that these might be trade secrets, but it does require such details to be revealed during audits or legal procedures. Procedural transparency describes in general terms how an AI model was created, what data was used to train it, and what other assumptions were made. A public online register of the (AI) models used within the municipality is in beta and describes such procedural information. Lastly, explaining AI outcomes means that stakeholders want to understand why AI solutions make certain choices before moving them into production, and that they want to be able to explain the models’ outcomes to customers once in production.
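As a rough illustration of what a procedural transparency record could contain, here is a minimal sketch of a register entry in Python. The field names and example values are my own assumptions, not the schema of Amsterdam’s actual register.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ModelRegisterEntry:
    """Hypothetical procedural-transparency record for one deployed model."""
    model_name: str
    purpose: str                          # the (limited) business question it answers
    training_data: str                    # general description of the data used
    assumptions: List[str] = field(default_factory=list)
    contact: str = ""                     # team accountable for the model

entry = ModelRegisterEntry(
    model_name="parking-permit-priority",
    purpose="Rank permit applications for manual review",
    training_data="Anonymised permit applications, 2017-2019",
    assumptions=["Postcode is excluded as a feature"],
    contact="data-team@example.org",
)
print(entry)
```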

Explainable AI

The GDPR includes a ‘right to explanation’ that allows a data subject to obtain an explanation of an automated decision made about them. Explaining AI outcomes is a popular field of study that will only become more relevant as AI technology moves closer to our daily lives.
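To make this concrete, here is a minimal sketch of a per-decision explanation for an interpretable model: for a logistic regression, the product of each coefficient with the corresponding standardised feature value gives that feature’s contribution to the decision. The data and feature names are synthetic and purely illustrative.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for an automated-decision dataset.
X, y = make_classification(n_samples=500, n_features=4, random_state=0)
feature_names = ["income", "tenure", "n_requests", "age"]  # illustrative names

scaler = StandardScaler().fit(X)
model = LogisticRegression().fit(scaler.transform(X), y)

def explain_decision(x_row):
    """Per-feature contribution to the log-odds of a single decision."""
    x_scaled = scaler.transform(x_row.reshape(1, -1))[0]
    contributions = model.coef_[0] * x_scaled
    return sorted(zip(feature_names, contributions),
                  key=lambda fc: abs(fc[1]), reverse=True)

# Explain the decision for one data subject.
for name, contribution in explain_decision(X[0]):
    print(f"{name:12s} {contribution:+.2f}")
```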

AI usage requests

In the future, customers might make AI usage requests, which could cover both procedural transparency and explainable AI outcomes. The field of explainable AI is generating many methods to interrogate black box models about specific conditions or decisions. Large organizations that expect many such requests could automate these answers.
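Building on the two sketches above (it reuses the hypothetical ModelRegisterEntry and explain_decision names), the handler below is one way such a request could be answered automatically: it bundles the procedural register entry with an explanation of a single decision. The function and response fields are assumptions, not an existing API.

```python
def handle_ai_usage_request(register_entry, decision_row):
    """Hypothetical automated response to an AI usage request:
    procedural transparency plus an explanation of one decision."""
    return {
        "model": register_entry.model_name,
        "purpose": register_entry.purpose,
        "training_data": register_entry.training_data,
        "assumptions": register_entry.assumptions,
        "decision_explanation": explain_decision(decision_row),
    }

response = handle_ai_usage_request(entry, X[0])
print(response["decision_explanation"][0])  # most influential feature
```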

3 Fairness

Fair AI solutions start with a societal dialogue about what can reasonably be expected from AI solutions. However generally applicable they are, machine learning algorithms steer how we apply them and how we think about them. This is confounded by the design principle of making the use of an AI solution imperceptible to the subject. If citizens do not know that AI is responding to their actions, how can they argue for fair AI?
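What ‘fair’ means is itself part of that dialogue, but concrete checks do exist. The sketch below computes one common measure, the demographic parity difference (the gap in positive-decision rates between two groups); the data is synthetic and the choice of metric is my own illustration, not a prescribed standard.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic decisions (1 = approved) and a binary group attribute.
decisions = rng.integers(0, 2, size=1000)
group = rng.integers(0, 2, size=1000)

rate_a = decisions[group == 0].mean()
rate_b = decisions[group == 1].mean()
print(f"Demographic parity difference: {abs(rate_a - rate_b):.3f}")
```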

Catch 22

Future ‘AI usage requests’ might be automated by large companies, just as with Data Subject Access Requests under the GDPR. Some current explainable AI techniques involve training an additional AI model to explain the outcomes of another AI model. This could lead to a paradox: to explain our automated decisions to customers, we would need to use automated decisions. It also suggests there is plenty of room for innovation, and for the Netherlands to play a part in it.
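A minimal sketch of such a model-explains-model setup, assuming a scikit-learn workflow on synthetic data: a random forest acts as the black box, and a shallow decision tree is trained on its predictions as a global surrogate whose fidelity we can measure.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=2000, n_features=6, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The "black box" whose decisions we want to explain.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Surrogate: a shallow, interpretable model trained to mimic the black box.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_train, black_box.predict(X_train))

# Fidelity: how often the surrogate agrees with the black box on unseen data.
fidelity = (surrogate.predict(X_test) == black_box.predict(X_test)).mean()
print(f"Surrogate fidelity: {fidelity:.2f}")
print(export_text(surrogate))
```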

Conclusion

The Netherlands could become a worldwide AI knowledge hub by building on its tradition as a leader in the societal application of high tech. Applying data protection principles to AI opens an interesting field of study, with opportunities for collaboration between universities and businesses.
