Denmark paves the way for the implementation of trust by design
Denmark holds the answer to how businesses can test and apply artificial intelligence (AI) in a responsible and ethical way. With support from both the Danish government and the research community, tools and methods that help businesses create long-term value with AI are already on the way.
Because of the enormous societal impact AI will have in the future, both the Danish government and the research community are determined to ensure that any future development and use of AI live up to democratic values and control mechanisms.
The aim is to ensure that consumers and companies who interact with AI services and products developed in Denmark can trust that these live up to the highest ethical standards.
The Danish government also sets out to help businesses that work with AI in a responsible and ethical way leverage the opportunities these technologies provide as a competitive advantage.
Development of readily available and applicable industry tools
One specific way to ensure that businesses can strengthen their efforts within ethical and responsible AI is the creation of readily available and applicable tools and methods.
This is why the Danish government has decided to launch three new initiatives as part of the national Danish AI strategy, which will make it easier for businesses to show end users that they handle data in an ethical and responsible manner. The new initiatives include:
- Access to a practical toolbox: A toolbox that includes guidelines on how to work responsibly with data in the easiest way on an everyday basis.
- Data ethics included in annual reports: Introduction of CSR reporting on data ethics as part of the yearly financial statements of large companies.
- Company data-ethics label: Development of a data-ethics label for businesses that comply with the ethical principles for data utilisation, which can, for example, be displayed on company websites.
Danish best practice: Safe AI
Researchers at the Technical University of Denmark (DTU) have formulated a number of safe AI principles that jointly form a coherent vision for the responsible use of artificial intelligence based on concrete and realistic technology. The principles should not be seen as a wish list for how to work with or implement AI solutions. On the contrary, they are based on actual computer science methods that are readily applicable and can help companies generate trust.
Lars Kai Hansen, Professor at the Technical University of Denmark:
“Trust is necessary. If there is no trust, users are not going to share their data with you. If there is no data, you have no AI, and without AI there is no business.”
The principles of safe AI
SAFE AI = SECURE
Has passed test and verification processes and is robust to systemic and well-informed attacks.
SAFE AI = OPEN SOURCE
Methods, code and test-results are accessible for everyone.
SAFE AI = IS SELF-CONSCIOUS
Understands its own role and, when uncertainty arises, can for example refuse to act (a minimal code sketch of this reject option follows the list).
SAFE AI = CAN KEEP A SECRET
Respects privacy by being built on the principle of privacy by design (a second sketch after the list illustrates one such method).
SAFE AI = HAS CALIBRATED VALUES
Is debugged for stereotypes and bias and understands emotions (a third sketch after the list shows a simple bias check).
SAFE AI = IS ACCOUNTABLE
Is transparent and communicative, respecting the “right to explanation”.
SAFE AI = UNDERSTANDS SOCIAL RELATIONS
Understands social relations in addition to a user’s knowledge and competences.
SAFE AI = UNDERSTANDS POWER
Understands data, context and consequences of its actions.
Source: Professor Lars Kai Hansen, DTU Compute, The Technical University of Denmark
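To make the principles concrete, a few minimal sketches in Python follow. The first illustrates the “is self-conscious” principle as a classifier with a reject option: it abstains rather than acts when its predictive confidence is low. The dataset, model, and 0.8 threshold are illustrative assumptions, not part of the DTU principles themselves.

```python
# Sketch of the "reject option": abstain when predictive confidence is low.
# Dataset, model, and threshold are illustrative assumptions.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

CONFIDENCE_THRESHOLD = 0.8  # assumed value; tune per application and risk level

def predict_or_abstain(model, X, threshold=CONFIDENCE_THRESHOLD):
    """Return a predicted class per input, or None when the model is too uncertain."""
    probs = model.predict_proba(X)
    labels = probs.argmax(axis=1)
    confident = probs.max(axis=1) >= threshold
    # Refuse to act on low-confidence inputs instead of guessing.
    return [int(lab) if ok else None for lab, ok in zip(labels, confident)]

decisions = predict_or_abstain(model, X_test)
print(f"Abstained on {sum(d is None for d in decisions)} of {len(decisions)} inputs")
```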
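The “can keep a secret” principle points towards privacy-preserving computation. One well-established method behind privacy by design is differential privacy; the sketch below shows the classic Laplace mechanism for releasing a statistic with calibrated noise. The example numbers (a count of 412, sensitivity 1, epsilon 0.5) are assumptions chosen for illustration.

```python
# Sketch of the Laplace mechanism from differential privacy.
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Release a statistic with epsilon-differential privacy by adding Laplace noise."""
    rng = rng or np.random.default_rng()
    # Noise scale grows with the statistic's sensitivity and shrinks as the
    # privacy budget (epsilon) is relaxed.
    return true_value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

# Example: privately release a user count (all numbers are made up).
print(laplace_mechanism(412, sensitivity=1.0, epsilon=0.5))
```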
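Finally, “has calibrated values” implies auditing a model’s outputs for bias. A simple, widely used check is the demographic parity difference: the gap in positive-prediction rates across groups. The predictions, group labels, and any acceptance tolerance below are made up for illustration.

```python
# Sketch of a demographic parity check on model predictions.
import numpy as np

def demographic_parity_difference(y_pred, groups):
    """Largest gap in positive-prediction rates across groups."""
    y_pred, groups = np.asarray(y_pred), np.asarray(groups)
    rates = [y_pred[groups == g].mean() for g in np.unique(groups)]
    return max(rates) - min(rates)

# Made-up predictions for two illustrative groups "a" and "b".
y_pred = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
gap = demographic_parity_difference(y_pred, groups)
print(f"Demographic parity difference: {gap:.2f}")  # flag if above a set tolerance
```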