Webinar on “Artificial Intelligence: Challenges and Prospects of International Governance, and Implications for Southeast Asia”

In this joint ISEAS and Tech For Good Institute webinar, Professor Simon Chesterman and Ms Shivi Anand delve into the challenges of AI development and governance.

REGIONAL ECONOMIC STUDIES PROGRAMME WEBINAR

Thursday, 20 April 2023 – Artificial Intelligence (AI) has dramatically changed our everyday lives. New AI-driven technologies such as ChatGPT have triggered heated debates and concerns about the potential implications of AI for society. The complex nature of AI raises deep moral and legal questions around ownership, legal status, accountability, and governance. In the absence of guardrails and international governance, AI could affect socioeconomic and geopolitical stability. Policymakers are struggling to improve the governance of AI amid the market-driven acceleration of AI invention and innovation.

Clockwise from top left: Moderator Dr Siwage Dharma Negara with Speaker Professor Simon Chesterman and Discussant Ms Shivi Anand. (Credit: ISEAS – Yusof Ishak Institute)

To explore how policymakers could meet the challenges of AI governance, the Regional Economic Studies Programme at the ISEAS – Yusof Ishak Institute, in collaboration with the Tech For Good Institute, organised a webinar featuring Professor Simon Chesterman, David Marshall Professor and Vice Provost (Educational Innovation) at the National University of Singapore, and Ms Shivi Anand, Regional Manager for Public Affairs at Grab.

Beginning his presentation with the fundamental concepts of AI, Professor Chesterman explained how AI systems use algorithms to learn from data and then make predictions or decisions. While AI has applications across many industries, including healthcare, finance, and transportation, it poses a challenge to regulators because it is constantly evolving. Among the most significant challenges are: 1) the speed at which AI operates, 2) the autonomous nature of systems utilising AI, and 3) the innate opacity and unpredictability of this new form of intelligence.

Professor Chesterman added that this host of issues calls for swift governance across various aspects of AI operations, including human control, transparency, safety, accountability, non-discrimination, and privacy. Such governance frameworks can be put in place by individual governments, by industry players, or through international coordination, but they must be balanced against the risk of stifling innovation and losing competitive advantage.

Carrying the discussion forward, Ms Anand proposed that AI governance should be outcomes-based, recognising varying degrees of risk severity ranging from minor harms to threats to life. Governance should therefore be risk-based, proportionate, and principles-based, and it is important to ensure that such safeguards are enforceable and practical. As a starting point, AI system governance should be guided by principles of transparency, fairness, and human-centricity.

Reiterating Professor Chesterman’s argument, Ms Anand concluded by saying that AI governance, whether mandated by law or voluntarily adopted by organisations, must maintain a key balance between safety and innovation, and ultimately keep the interests of the end user at its core.

Read more of Shivi Anand’s insights here.

The 90-minute webinar was attended by an audience of research scholars, students, policymakers and members of the general public. The speakers fielded audience questions on an array of topics, including the ethical issues associated with biological applications of AI, the potential disadvantages of public versus private regulation, whether ChatGPT in particular can be made more accountable in the immediate future, measures to stem the dissemination of AI-generated misinformation, and the estimated timeline for forming a global AI regulatory agency.

Download the webinar slides here.

(Credit: ISEAS – Yusof Ishak Institute)