

Guidelines for proper use of AI explored

By Chang Jun in San Francisco | China Daily | Updated: 2019-11-12 08:54

Experts at a conference discuss ways to monitor technology and minimize risks

Artificial intelligence, the revolutionary, disruptive and diffuse technology that has sparked controversy and awe since its inception more than 50 years ago, has entered a stage that requires the global community - academia, civil society, government and industry - to coordinate regulations that guide it to serve the common good.

At its two-day conference on AI ethics, policy and governance in late October, the Stanford Institute for Human-Centered Artificial Intelligence drew hundreds of experts from around the world to discuss how major stakeholders can work together to supervise AI research, minimize risks and prohibit unethical AI-enhanced practices.

The attendees unanimously agreed that AI has transformed society profoundly. Major progress has been made thanks to the availability of massive data, powerful computing architectures and advances in machine learning. AI is playing an increasing role across domains such as healthcare, education, mobility and smart homes.

However, AI has also caused concern around the world, mainly because of a lack of ethical awareness in its development and its intrusion into individual privacy - facial recognition applications being a prime example.

Joy Buolamwini, a computer scientist at the MIT Media Lab, a research laboratory at the Massachusetts Institute of Technology, presented findings from her research on intersectional accuracy disparities in commercial gender classification. In her study, Buolamwini showed facial recognition systems developed by tech companies such as Amazon, Microsoft and Google 1,000 faces and asked them to identify gender. The algorithms misidentified Michelle Obama, Oprah Winfrey and Serena Williams, three iconic dark-skinned women, as male.

The bias in code can lead to discrimination against underrepresented groups and the most vulnerable individuals, Buolamwini said.

She also founded the Algorithmic Justice League, an organization through which she aims to highlight the collective and individual harms AI can cause - loss of opportunity, social stigmatization, workplace discrimination and inequality - and to advocate for regulating big tech companies and scrutinizing government use of AI.

One of the key questions around AI governance and ethics, as a majority of attendees agreed, is how to regulate big tech companies.

This "nascent technology will help us build powerful new materials, understand the climate in new ways and generate far more efficient energy - it could even cure cancer," said Eric Schmidt, former Google CEO and current technical advisor to Alphabet Inc.

This is all good, he continued. "I don't want us, in these complicated debates about what we are doing, to forget that the scientists here at Stanford and other places are making progress on problems which were thought to be unsolvable ... because (without AI) they couldn't do the math at scale."

However, Marietje Schaake, a HAI International Policy Fellow and Dutch former member of the European Parliament who worked to pass the European Union's General Data Protection Regulation, argued that AI's potential shouldn't obscure its potential harms, which the law can help mitigate.

Large technology companies have a lot of power, Schaake said. "And with great power should come great responsibility, or at least modesty. Some of the outcomes of pattern recognition or machine learning are reason for such serious concerns that pauses are justified. I don't think that everything that's possible should also be put in the wild or into society as part of this often quoted 'race for dominance'. We need to actually answer the question, collectively, 'How much risk are we willing to take?'"

Like it or not, the age of AI is coming, and fast, and there is plenty to be concerned about, wrote Stanford HAI co-directors Fei-Fei Li and John Etchemendy.

The two believe the real threat lies in the fact that "Most of the world, including the United States, is unprepared to reap many of the economic and societal benefits offered by AI or mitigate the inevitable risks".

Getting there will take decades, they said. "Yet, AI applications are advancing faster than our policies or institutions at a time in which science and technology are being underfunded, under-supported and even challenged. It's a national emergency in the making."

They called on the US government to commit $120 billion over the next decade to research, data and computing resources, education and startup capital, in support of a bold human-centered AI framework that would maintain the United States' competitiveness and leading position in the field.

Open dialogue and collaboration among nations on AI research and governance are important, attendees said. Given the complexity of cultural differences and the differing motivations of international stakeholders, however, it is unrealistic to expect the whole world to adopt a single AI vision and a once-and-for-all solution to its problems.

Nevertheless, governments around the world are taking action.

In Europe, the European Union issued its first draft of ethical guidelines for the development, deployment and use of AI in December 2018, an important step toward innovative and trustworthy AI "made in Europe".

In February, the US president signed an executive order laying out the country's plan for US leadership in AI development. "Continued American leadership in Artificial Intelligence is of paramount importance to maintaining the economic and national security of the United States," he said.

In China, the National New Generation Artificial Intelligence Governance Committee, which is under the Ministry of Science and Technology, in June released the New Generation AI Governance Principles - Developing Responsible AI.

The first official document of its kind issued in China on AI governance ethics, the principles cover harmony and friendliness, fairness and justice, inclusiveness and sharing, privacy protection, safety and controllability, shared responsibility, open collaboration and agile governance.

"We want to ensure the reliability and safety of AI while promoting economic, social and ecological sustainable development," said Zhang Xu, deputy director of the strategic planning department under the Ministry of Science and Technology.

"AI is advancing rapidly, but we still have time to get it right - if we act now," said Fei-fei Li.
