
    How to develop ‘ethical AI’ and avoid potential dangers

UNESCO first developed its Recommendation on the Ethics of Artificial Intelligence back in 2021, when much of the world was preoccupied by another international threat, the COVID-19 pandemic. The Recommendation, which was adopted by the 194 UNESCO Member States, contains concrete guidance on how public and private money can be channelled into programmes that benefit society.

    Since then, a great deal of work has been done to put this guidance into practice, with legislators, experts and civil society representatives meeting at UNESCO forums to share information and report on progress.

Shortly after the 2024 forum, which took place in Slovenia in early February, Conor Lennon from UN News spoke to some of the participants: Aisen Etcheverry, Chile’s Minister of Science and Technology; Irakli Khodeli, Head of the AI Ethics Unit at UNESCO; and Mary Snapp, Vice President of Strategic AI Initiatives at Microsoft.

Aisen Etcheverry: We were one of the first countries not only to adopt the Recommendation but also to implement it, with a model that ensures AI is used ethically and responsibly. So, when ChatGPT came onto the market, and we saw all the questions it raised, we already had expert research centres in place, and capabilities within the government. Our companies were already working with AI, and we had essentially all the pieces of the puzzle in place to tackle a discussion that is complicated on the regulatory side.

Over the last year things have evolved, and we’ve seen an increase in the use of AI by government agencies, so we launched something similar to an executive order: essentially, instructions on how to use AI responsibly.

One great example comes from the agency charged with providing social benefits. It built a model that predicts which people are least likely to claim the benefits they are entitled to, and then sends staff to visit those who have been identified and inform them of their entitlements. I think it’s a beautiful example of how technology can enhance the public sector without removing the human interaction that is so important to the way governments and citizens interact.

    Artificial Intelligence can contribute to fighting climate change and supporting progress towards all the SDGs.

    UN News: What is your government doing to protect citizens from those who want to use AI in harmful ways? 

Aisen Etcheverry: The UNESCO recommendations really helped us to develop critical thinking about AI and its regulation. We have been holding public consultations with experts, and we hope to present a bill to Congress in March.

We have also been thinking about how to train people, not necessarily in programming, but in ways that empower those who use and design AI to take greater responsibility for its outcomes from a social perspective.

    On a related subject, we need to remember that there is a digital divide: many people do not have access to digital tools. We need regional and international cooperation to ensure that they benefit from this technology.

Irakli Khodeli: Tackling the digital divide is a big part of the UNESCO recommendations. One of the fundamental ideas on which the agency is based is that science, and the fruits of scientific progress, should be equitably shared amongst all peoples. That rings true for Artificial Intelligence, because it holds so much promise for helping humans achieve our socio-economic and development goals.

That’s why it’s important that, when we talk about the ethical use and development of AI, we don’t just focus on the technologically advanced parts of the world, where companies are actually wielding these tools, but also reach out to global South countries at different stages of development, to involve them in the conversation about the global governance of AI.

    United Nations Secretary-General António Guterres (right) attends the AI Safety Summit in London, UK.

    UN Photo/Alba García Ruiz


Mary Snapp: Technology is a tool that can enhance the human experience, or it can be used as a weapon. That’s been true since the printing press, and it’s true now. So, it’s very important for us, as an industry, to ensure that there are safety brakes, that we know what computers and technology can do, and what they should not do.

Frankly, in the case of social media, perhaps we didn’t address the issues early enough; this is an opportunity to really work together early on, to mitigate what could be some of the more negative effects, while still recognizing the tremendous promise of the technology.

    UN News: At the UNESCO meeting in Slovenia, Microsoft signed up to an agreement to develop AI on ethical lines. What does that mean in practice?

Mary Snapp: In 2019, we created an Office of Responsible AI, which sits within [Microsoft President] Brad Smith’s organization. The office has a team of experts: not only technology experts, but also humanities academics, sociologists and anthropologists. We do things like “red teaming” [using ethical hackers to emulate real attacks on technology], deliberately prompting the AI to produce harmful outputs so that we can mitigate those harms.

    We don’t necessarily share exactly how the technology will work, but we want to ensure that we are sharing the same principles with our competitors. Working side by side with UNESCO is absolutely critical to doing this work right for humanity. 

This discussion is taken from the latest episode of the UN’s flagship news podcast, The Lid Is On, which covers the various ways the UN is involved in global efforts to make AI and other forms of online technology safer.

    You can listen to (and now watch!) The Lid Is On, on all major podcast platforms. 
