

Privacy safeguards vital in use of AI

By Ada Chung | China Daily Global | Updated: 2026-04-01 08:55

Governments and businesses worldwide are seeking to harness artificial intelligence for innovation and economic growth. Yet as AI technologies become more accessible and sophisticated, a parallel and troubling trend is emerging: the misuse of AI-driven "deepfakes".

A deepfake — a seemingly realistic but falsified image, audio or video generated by AI — can inflict profound and lasting harm on individuals, especially children and young people, when exploited for malicious purposes.

A recent global incident brought these issues to the forefront: An AI chatbot allowed users to generate nonconsensual sexual images of real people, including women and children. Within 11 days, an estimated 3 million sexualized images had reportedly been generated. This illustrates how easily personal data can be misused and how quickly the resulting harm can spread, especially to minors, who are least equipped to protect themselves.

The incident triggered swift regulatory actions by privacy or data protection authorities worldwide, and temporary bans in some jurisdictions. Given the borderless nature of AI-related privacy risks, data protection authorities have stepped up coordinated efforts to advocate privacy-protective AI.

In a landmark move, during the 47th Global Privacy Assembly Conference in September, 20 authorities from different jurisdictions signed the Joint Statement on Building Trustworthy Data Governance Frameworks to Encourage Development of Innovative and Privacy-Protecting AI. This advocated, among other things, the incorporation of data-protection principles into AI system development and the establishment of robust data governance.

In February, the Hong Kong Special Administrative Region's Office of the Privacy Commissioner for Personal Data, or PCPD, together with 60 privacy and data protection authorities from around the world (including those of Canada, France, Germany, Italy, South Korea, New Zealand, Singapore and the United Kingdom), issued the Joint Statement on AI-Generated Imagery and the Protection of Privacy.

Initiated and coordinated through the Global Privacy Assembly's International Enforcement Cooperation Working Group, which the PCPD co-chairs, the statement sets out fundamental international principles to guide organizations in developing and using AI content generation systems lawfully and safely.

The joint statement reminds all organizations that develop and use AI content generation systems to comply with applicable data protection and privacy laws. It also recommends a series of measures to safeguard the fundamental rights of individuals, especially children and vulnerable groups.

Authorities both on the Chinese mainland and in the Hong Kong SAR recognize that the development and use of AI must be accompanied by appropriate guardrails.

Since the promulgation of the 2023 Global AI Governance Initiative, the equal importance of the development and safety of AI has been repeatedly stressed, and this was also reaffirmed in the Hong Kong chief executive's 2025 Policy Address.

This balanced vision is further reinforced in China's recently adopted 15th Five-Year Plan (2026-30), which calls for advancing the "AI Plus" initiative across the board while strengthening the governance of AI. As the plan specifies, it is essential to consolidate security during development and pursue development in a secure environment, including by strengthening data governance frameworks and rules, enhancing AI governance, and fostering an environment for development that is beneficial, secure and fair.

It is against this backdrop that the recent emergence of agentic AI — autonomous systems built on large language models that can act without continuous human oversight — warrants close attention, as it has already intensified concerns over data breaches, privacy and cybersecurity risks.

Unlike conventional AI chatbots that primarily generate content in response to prompts, these agentic systems can connect with external tools and services, enabling them to take multistep actions on behalf of users.

The privacy risks posed by agentic AI thus extend far beyond the outputs of conventional AI chatbots. These systems can access, manipulate and expose personal data with unprecedented speed and reach. If such capabilities are misused to create and distribute abusive deepfakes with minimal human involvement, the resulting harm could spread more quickly and at greater scale.

It is crucial, therefore, for all stakeholders, including AI developers, service providers and users, to be aware of the threats to fundamental human rights posed by the new technologies.

When using AI content-generation systems, for instance, Hong Kong's Office of the Privacy Commissioner for Personal Data recommends that users label or watermark the output as AI-generated to avoid confusion or misunderstanding.

In particular, to avoid data leakage or cyberattacks, users should download only the latest official version of any agentic AI, grant minimum access rights to the tool, adopt adequate measures to ensure system security and data security, and continuously assess the risks involved. Users should be alert, for example, to any high-risk prompts or automatic processing that might wipe out all user data (including emails).

In the race to tap into AI's huge potential, we should remember that the development and deployment of AI systems should, from the outset, be guided by the principles of protecting personal data privacy, privacy-by-design and privacy-by-default, among others, to prevent infringement on people's data privacy and minimize the privacy risks involved.

Recent events have demonstrated both the vulnerability of users, especially minors, in the rapidly evolving age of AI and the tangible, far-reaching harms of AI's abusive or malicious use. Organizations developing and deploying AI must therefore not sacrifice privacy and security for speed to market or novel functionalities.

All stakeholders in the ecosystem, including AI developers, service providers and users, have unavoidable responsibilities to co-create a safe and trustworthy digital environment for our future generations.

The author is privacy commissioner for personal data of the Hong Kong Special Administrative Region. The views do not necessarily reflect those of China Daily.
