The High-Level Expert Group on AI puts forward 33 policy and investment recommendations.
Background: The European Commission set up the High-Level Expert Group on AI (AI HLEG) in June 2018. The HLEG is made up of 52 independent experts; in April 2019, it published its first deliverable, the Ethics Guidelines for Trustworthy AI.
The new Recommendations: Today, at the AI Alliance Assembly in Brussels, the HLEG presented a set of 33 Recommendations on policy and investment in AI. The independent recommendations support a human-centred approach to AI and recognise AI as one of the most transformative technologies for driving innovation and productivity. The experts support risk-based AI governance, including a comprehensive mapping of relevant European laws, and the option to test "regulatory sandboxes" to drive innovation while protecting society from unacceptable harm. Specifically, some recommendations call on policy makers to adopt a tailored approach to the AI market and to secure a Single European Market for trustworthy AI. They also suggest nurturing education to ensure a broad skills base, and embracing a holistic approach that combines a rolling action plan with a 10-year vision.
A piloting phase for the Ethics Guidelines is open: As of today, organisations can test the assessment list included in the Ethics Guidelines for Trustworthy AI, published in April, and see how robust it is in practice. Over 300 organisations have already expressed interest. An online survey is open until 1 December 2019. The expert group will also carry out interviews with selected representatives from the public and private sectors to better understand the implications of implementing the assessment list. Both the interviews and the feedback from the piloting survey will feed into a revised version of the assessment list, to be presented in early 2020.