Statement at the UNSC Briefing “Artificial Intelligence and the Maintenance of International Peace and Security” by Nordic-Baltic countries (Denmark, Estonia, Finland, Iceland, Latvia, Lithuania, Norway, Sweden)

New York, 19 December 2024

The immense potential of Artificial Intelligence has sparked notions of the emergence of the age of A.I. Some even call this a golden age. Indeed, A.I. opens vast opportunities in many fields – health, education, environment and economy – to name a few. However, let us recall that the power of A.I. has been unleashed by human intelligence. And human intelligence will need to perform at its peak to steer the rise of A.I. in a beneficial and responsible manner. Therefore, in fact, we should think of this as the age of A.I.-enhanced human intelligence.

Considering its cross-cutting implications, A.I. must be developed, used and governed inclusively and in the interest of all, in a way that respects human rights, democracy and the rule of law. There is no more inclusive multilateral platform than the United Nations. Deliberation and collective action through the UN, firmly rooted in the multi-stakeholder approach that includes civil society, scientific communities and the private sector, are essential to ensure that A.I. serves the interests of humanity.

There have already been many “firsts”. In 2023, the Security Council held its first discussion on A.I. This year, the General Assembly adopted the first ever resolution on A.I., followed by another resolution on A.I. capacity building. Meanwhile, the First Committee recently adopted a resolution on the implications of the military use of A.I.

The Pact for the Future and the Global Digital Compact provide important guidance on the next steps. The upcoming consultations on a global dialogue on A.I. governance and the establishment of the Independent International Scientific Panel will provide a platform for all delegations to express their views and vision of A.I. governance. Existing work, such as UNESCO’s guidelines on the ethics of A.I., can provide a useful contribution to these deliberations. Let me outline three key principles for our countries.

First, the governance of A.I. must be rooted in international law, including international humanitarian law and international human rights law. Human rights must be respected and protected – online and offline. While it is a novel technology, A.I. must operate within the established framework of what is acceptable and what is not.

Second, A.I. has to be human-centric and human-rights-based. Human oversight and control, as well as accountability, have to be preserved across its lifecycle in order to mitigate safety and security risks. From existing experience, we see that A.I. can produce harmful effects in several areas, especially if used with malicious intent and in the absence of proper regulation and oversight. Of particular and urgent concern to us is the impact of A.I. on information integrity. The increasing malign use of A.I. by state and non-state actors for information manipulation and interference in electoral processes presents a grave risk to the security and stability of our societies.

Third, we need a harmonised approach. A patchwork of parallel and even overlapping efforts will not lead to better governance. In fact, it will cause fragmentation and hamper innovation. Effective governance requires clear goals, inclusivity and buy-in not only from states, but from all stakeholders.

The potential application of A.I. in the military domain demands particular attention. As history has demonstrated, technological breakthroughs can provoke military opportunism. In addition to misuse, there are also risks related to the unpredictability and lack of accountability of A.I. systems, especially fully autonomous ones.

At the same time, A.I. can offer benefits, such as improved protection of civilians, which can also be relevant in the context of UN peacekeeping. Moreover, A.I. can help prevent conflicts by improving early warning regarding risks of violence in specific regions and by identifying vulnerable populations. Further reflection is therefore necessary to assess all factors associated with military applications of A.I.

An important track in this regard is the effort to address the issue of Lethal Autonomous Weapons Systems (LAWS), in particular within the Group of Governmental Experts under the Convention on Certain Conventional Weapons. Through these efforts we must ensure that the development and use of LAWS is in full compliance with international humanitarian law.

Furthermore, we support multilateral initiatives aimed at formulating wider principles and norms for responsible military use of AI, including through the Responsible AI in the Military Domain (REAIM) summit process. We need to advance this discussion to ensure that all relevant aspects, especially legal and ethical considerations, are properly addressed.

To conclude, let us underline that the path the development of A.I. will take is not predetermined. This technology can amplify our best intentions – or our worst. It will be our collective task and responsibility to shape it in a creative, sustainable and safe way.