Potential of technology shouldn't obscure its dangers and experts call for regulation
Artificial intelligence, a revolutionary and pervasive technology that has sparked controversy and awe since its inception more than 50 years ago, has entered a stage that requires the global community to agree on regulations serving the common good.
Hundreds of experts from around the world gathered in October to attend a conference on AI ethics, policy and governance at the Stanford Institute for Human-Centered Artificial Intelligence, or HAI. They discussed how major stakeholders can work together to supervise AI research, minimize risks and prohibit unethical AI-enhanced practices.
The attendees agreed that AI has transformed society profoundly, noting major progress stemming from the availability of massive data, powerful computing architectures and advances in machine learning. AI is playing an increasing role in healthcare, education, mobility and smart homes.
However, AI has also raised concerns and complaints around the world due to its disregard for ethics and individuals' privacy, notably in the application of facial recognition technology.
Problems and concerns
Joy Buolamwini, a computer scientist at Massachusetts Institute of Technology's Media Lab, presented research on intersectional accuracy disparities in commercial gender classification systems, exposing bias in algorithms. In one of her studies, Buolamwini used facial recognition systems developed by tech companies such as Amazon, Microsoft and Google to identify the genders of 1,000 faces. The algorithms misidentified Michelle Obama, TV mogul Oprah Winfrey and tennis player Serena Williams as male.
Bias in code can lead to discrimination against underrepresented groups and the most vulnerable individuals, Buolamwini noted. "One in two American adults is in a law enforcement face recognition network used in unregulated searches employing algorithms with unaudited accuracy," she emphasized.
Most attendees agreed that the regulation of big tech companies is a major concern.
This "nascent technology will help us build powerful new materials, understand the climate in new ways and generate far more efficient energy; it could even cure cancer," said Eric Schmidt, former Google CEO and current technical adviser to Google's parent company, Alphabet Inc.
He emphasized, "I don't want us, in these complicated debates about what we are doing, to forget that the scientists here at Stanford (University) and other places are making progress on problems which were thought to be unsolvable … because (without AI) they couldn't do the math at scale."
However, Marietje Schaake, a HAI International Policy Fellow, argued that AI's potential shouldn't obscure its potential harms, which the law can help mitigate. A Dutch former member of the European Parliament, she worked to pass the European Union's General Data Protection Regulation, or GDPR.
Governments on AI ethics
The European Commission in April released its own guidelines calling for "trustworthy AI". AI should adhere to the basic ethical principles of respect for human autonomy, prevention of harm, fairness and accountability, the guidelines said.
In February, U.S. President Donald Trump signed an executive order outlining a cohesive plan for leadership in AI development.
Meanwhile, the U.S. House introduced a nonbinding resolution calling on the executive branch to work with stakeholders to ensure that AI is developed in a "safe, responsible, and democratic" fashion.
In China, the National New Generation Artificial Intelligence Governance Committee, which is under the Ministry of Science and Technology, in June released the New Generation AI Governance Principles: Developing Responsible AI.
This is the first official document on AI governance ethics issued in China.
"We want to ensure the reliability and safety of AI while promoting economic, social and ecological sustainable development," said Zhang Xu, deputy director of the strategic planning department under the science ministry.