- Discrepancies exist between companies and policymakers on future of AI governance
- Both groups show potential blind spots, posing risks
LONDON, July 27, 2020 /PRNewswire/ -- EY today releases findings from a global survey that reveals significant differences in how the public and private sectors view the future of ethics, governance, privacy, policy and regulation of artificial intelligence (AI) technologies. According to the Bridging AI's trust gaps report, developed in collaboration with The Future Society, discrepancies exist in four key areas: fairness and avoiding bias; innovation; data access; and privacy and data rights.
In the survey, 71 policymakers and more than 280 global organizations ranked ethical principles by importance for 12 different AI use cases, and their sentiment around the risk and regulation of AI was measured.
Policymakers align around specific priorities, while private sector lacks consensus
Policymakers' responses show widespread agreement on the ethical principles most relevant to different applications of AI. For example, on the use of AI for facial recognition, policymakers rated "fairness and avoiding bias" and "privacy and data rights" as the top two concerns by a wide margin. Private sector priorities on the same question, by contrast, were relatively undifferentiated: responses across use cases and principles were more evenly distributed, with narrow margins separating the top choices. Moreover, the private sector's top choices were principles already prioritized by existing regulations, such as GDPR, rather than emerging issues such as fairness and non-discrimination.
Disagreement about the future direction of governance poses risks
While both policymakers and companies agree that a multi-stakeholder approach is needed to guide the direction of AI governance, the results show disagreement on what form it will take: 38% of organizations surveyed expect the private sector to lead a multi-stakeholder framework, while only 6% of policymakers agree. This disconnect poses potential challenges for both groups in driving governance forward, and it presents market and regulatory risks for companies developing AI products while governance approaches are still under discussion.
Overcoming differences through collaboration
The survey found that each stakeholder group has blind spots when it comes to implementing ethical AI: 69% of companies agree that regulators understand the complexities of AI technologies and business challenges, while 66% of policymakers disagree.
These findings suggest that greater collaboration between the two groups will be critical to overcoming knowledge gaps. Policymakers should take a consultative and deliberate approach with input from the private sector, particularly on technical and business complexities for which policymakers lack expertise. Similarly, the private sector should work to reach consensus around AI governance principles, so that the regulatory requirements of both parties are taken into account.
Nigel Duffy, EY Global Artificial Intelligence Leader, says:
"As AI transforms business and industries, poor alignment diminishes public trust in AI and slows the adoption of critical applications. For efforts to be fruitful, companies and policymakers need to be aligned. Coordination between both sets of stakeholders is critical to developing pragmatic policy and governance approaches that are informed by constraints and realities on the ground."
Gil Forer, EY Global Markets Digital and Business Disruption Leader, says:
"As AI scales up in new applications, policymakers and companies must work together to mitigate new market and legal risks. Cross-collaboration will help these groups understand how emerging ethical principles will influence AI regulations and will aid policymakers in enacting decisions that are nuanced and realistic."
To access the Bridging AI's trust gaps report, click here.
Note to editors
About the report
The EY web-based survey was conducted between 2019 and early 2020. It obtained responses from 71 policymakers and 284 companies across 55 countries.
About EY
EY is a global leader in assurance, tax, transaction and advisory services. The insights and quality services we deliver help build trust and confidence in the capital markets and in economies the world over. We develop outstanding leaders who team to deliver on our promises to all of our stakeholders. In so doing, we play a critical role in building a better working world for our people, for our clients and for our communities.
EY refers to the global organization, and may refer to one or more of the member firms, of Ernst & Young Global Limited, each of which is a separate legal entity. Ernst & Young Global Limited, a UK company limited by guarantee, does not provide services to clients. Information about how EY collects and uses personal data and a description of the rights individuals have under data protection legislation is available via ey.com/privacy. For more information about our organization, please visit ey.com.
This news release has been issued by EYGM Limited, a member of the global EY organization that also does not provide any services to clients.
About The Future Society
The Future Society is an independent 501(c)(3) nonprofit think-and-do tank. Specializing in questions of impact and governance, its mission is to help advance the responsible adoption of AI and other emerging technologies for the benefit of humanity. The Future Society leverages a global, multidisciplinary network of experts, practitioners, and institutional partners and tackles a broad, but carefully selected, range of short-term and longer-term issues in AI governance.
Kailyn Smigelski | EY Global Media Relations | +1 973 715 3624
Suraj Mashru | EY Global Media Relations | +44 (0)207 783 0733
Logo - https://mma.prnewswire.com/media/708904/EY_Logo.jpg