Introduction
In an era marked by rapid technological advancements, the intersection of artificial intelligence (AI), neurotechnology, and digital infrastructure has the potential to transform how we address global challenges.
The recent Science Summit during UNGA79 brought together experts from diverse fields to explore how these technologies can contribute to sustainable development and the achievement of the United Nations Sustainable Development Goals (SDGs). The summit sessions addressed the pressing need to integrate ethical AI practices, build digital trust, enhance urban resilience, and safeguard human rights in the face of emerging neurotechnology.
Amidst growing concerns about privacy, inclusivity, and the ethical deployment of technology, the summit aimed to create a roadmap for leveraging AI and other emerging technologies to drive sustainable growth and equitable development.
The discussions highlighted that while technology offers significant opportunities for social good, its potential risks must be managed through robust regulatory frameworks, interdisciplinary collaboration, and inclusive policy-making.
AI Goals
The summit set forth several key goals:
- Establish ethical guidelines and global governance frameworks for the deployment of AI and neurotechnologies, ensuring they are used for social good.
- Promote interdisciplinary and cross-sector collaboration to harness the potential of AI for public health, urban planning, education, and human rights.
- Foster inclusivity and equity in the development and implementation of digital public infrastructure, ensuring marginalized communities benefit from technological advancements.
- Safeguard human rights, particularly cognitive privacy and mental integrity, as neurotechnologies become more integrated into daily life.
- Enhance public trust in digital systems by promoting transparency, accountability, and ethical AI practices.
- Create a comprehensive framework for global AI governance, synthesizing interdisciplinary, multilateral, and decentralized strategies, and producing a White Paper or Policy Brief for policymakers to align international efforts.
These objectives reflect a commitment to utilizing technological advancements not just for economic gain, but also to enhance the quality of life, protect fundamental rights, and support the overall well-being of communities globally.
AI Applications
AI-Enhanced Leadership and Workforce Development
AI’s potential to transform organizational leadership was a key focus during the summit. Bailey Parnell, CEO of SkillsCamp, and Lawrence Eta, Chief Data and Technology Officer at The Royal Commission for AlUla, discussed how AI is being used to optimize leadership and workforce management.
Parnell elaborated on how SkillsCamp uses AI to gather insights into leadership behaviors by analyzing communication patterns, decision-making processes, and conflict resolution strategies. One case study presented involved the deployment of AI tools in multinational corporations to assess leadership effectiveness and employee engagement. By collecting data on employee feedback and team dynamics, these AI systems generate reports that highlight areas for improvement, allowing leaders to tailor their approaches for better organizational outcomes. This data-driven leadership model has been particularly effective in improving employee retention and satisfaction, especially in industries facing high turnover rates.
At the Royal Commission for AlUla, Lawrence Eta explained how AI is used to streamline project management and optimize resource allocation in their heritage and tourism projects. By utilizing predictive analytics, the AI systems forecast project timelines and identify potential delays, allowing managers to take proactive measures. This approach not only improves efficiency but also enhances the overall well-being of project teams by preventing burnout and optimizing workloads. These initiatives demonstrate how AI can create a positive impact on both organizational productivity and employee well-being.
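The kind of delay forecasting described above can be pictured with a small, purely illustrative sketch in Python. The project features, labels, and risk threshold below are assumptions for demonstration, not details of the Commission's actual system.

```python
# Illustrative sketch only: a simplified delay-risk forecast of the kind
# described above. Feature names, thresholds, and the model choice are
# assumptions, not the Royal Commission for AlUla's actual system.
from sklearn.linear_model import LogisticRegression
import numpy as np

# Hypothetical historical projects: [planned_months, team_size, open_dependencies]
X = np.array([[6, 10, 2], [12, 25, 8], [3, 5, 1], [18, 40, 15], [9, 12, 4]])
y = np.array([0, 1, 0, 1, 0])  # 1 = project ended up delayed

model = LogisticRegression().fit(X, y)

# Flag in-flight projects whose predicted delay probability exceeds a threshold,
# so managers can rebalance workloads before deadlines slip.
current = np.array([[10, 15, 7]])
if model.predict_proba(current)[0, 1] > 0.5:
    print("High delay risk: consider reallocating resources")
```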
Neurotechnologies and the Protection of Human Rights
The summit’s focus on neurotechnology and human rights was highlighted by Rafael Yuste of Columbia University and Carme Artigas from the UN AI Advisory Board. Yuste presented groundbreaking work on brain-computer interfaces (BCIs) that have the potential to treat neurological conditions such as paralysis and epilepsy. However, he cautioned that without proper safeguards, these technologies could be misused to monitor or influence thoughts, raising serious ethical concerns.
Carme Artigas discussed Chile’s pioneering legislation that enshrines neurorights, making it the first country in the world to protect cognitive privacy and mental integrity. She shared insights into how this legislation came into being, driven by a growing recognition of the risks associated with neurotechnological advancements. For instance, companies developing neurotechnology in Chile are now required to demonstrate compliance with neurorights protections before their products can enter the market. This legislation has inspired other countries, including Spain and the United States, to consider similar protections.
The session concluded with a call for the United Nations to develop international standards for neurorights to prevent potential abuses as neurotechnology becomes more integrated into everyday life.
See the full report: “Good Neuroright addressing the global opportunities and challenges of neurotechnology from a human rights lens”.
AI in Inclusive Urban Planning and Smart Cities
Urban planning and the use of AI to create more inclusive cities were discussed in depth by Catherine Régis from Université de Montréal. The “Urban AI” project, which Régis leads, focuses on leveraging AI to optimize urban infrastructure and services, particularly in cities facing rapid growth and resource constraints.
One successful example is Montreal’s AI-driven traffic management system, which uses real-time data to adjust traffic signals dynamically. By analyzing data from sensors placed at intersections, the system reduces congestion, cuts down travel times, and lowers carbon emissions. This initiative not only improves the quality of life for city residents but also aligns with Montreal’s goals to reduce its carbon footprint. Additionally, AI is used to analyze social data to identify neighborhoods with limited access to essential services like healthcare, education, and public transportation. The insights generated are then used to prioritize investments in these underserved areas, promoting social equity.
Another example from the session involved AI deployment in Barcelona, where AI tools are being used to manage waste collection more efficiently. By monitoring waste levels in real-time, the city can optimize waste collection routes, reducing fuel consumption and emissions.
These efforts align with SDG 11 (Sustainable Cities and Communities) and demonstrate how AI can be used to create more liveable, sustainable urban environments.
See full report: “Responsible Artificial Intelligence in an Urban Context Using Data to Make Cities More Inclusive”.
Building Trust in Digital Public Infrastructure
The session led by Mei Lin Fung, co-founder of the People-Centered Internet, and Bhanu Neupane of UNESCO focused on the critical issue of trust in digital systems. Fung highlighted a project in Estonia, where the government has implemented a digital ID system that allows citizens to access public services securely online.
This system, backed by blockchain technology, ensures that citizens’ data is protected while providing efficient access to healthcare, education, and financial services. Estonia’s approach serves as a model for other countries looking to build digital public infrastructure that is both secure and inclusive.
Neupane presented a case study from Rwanda, where UNESCO has partnered with local organizations to enhance digital literacy among youth. The project involves training young people to critically assess online content, thereby reducing their vulnerability to misinformation. This initiative has not only improved digital literacy rates but also strengthened the resilience of Rwandan communities against the spread of disinformation during elections and public health crises. By focusing on digital trust, these initiatives aim to create a safer, more equitable digital environment, particularly in regions with limited regulatory frameworks.
See full report: “Session-report-Building Trust”.
AI for Social Good: UNICEF’s Child-Focused AI Initiatives
The role of AI in promoting social good was exemplified by UNICEF’s projects discussed by Bo Viktor Nylund. One standout initiative involved deploying AI-powered learning platforms in rural Kenya to enhance educational outcomes. These platforms use AI to personalize learning by adapting content to the specific needs and learning styles of students. For example, the system can identify a student struggling with reading comprehension and provide tailored exercises to improve their skills. This approach has been particularly effective in increasing literacy rates among children who have limited access to traditional schooling.
Additionally, UNICEF has been piloting AI tools to monitor child health and nutrition in conflict-affected regions like Yemen. By using AI to analyze health data, the organization can predict outbreaks of malnutrition and deploy resources more effectively. This has significantly improved the speed and accuracy of UNICEF’s responses, reducing child mortality rates in areas with high levels of food insecurity. These initiatives highlight how AI can be harnessed to address inequalities and support vulnerable populations, aligning with SDG 3 (Good Health and Well-being) and SDG 4 (Quality Education).
Leveraging Generative AI for Creative and Ethical Applications
In a session led by Amir Banifatemi from AI Commons, the discussion centered around the rapid rise of generative AI technologies like GPT-4 and DALL·E. These tools are being used in industries ranging from marketing to education to generate content at unprecedented scales. Banifatemi highlighted how AI Commons is working with educational institutions to use generative AI to create interactive learning materials that engage students more effectively.
However, Banifatemi also discussed the ethical concerns associated with generative AI, particularly around deepfakes and the spread of disinformation. A pilot project in South Korea was presented, where generative AI is being used to develop language learning apps that adapt to the user’s proficiency level, enhancing engagement and retention. This project is an example of how generative AI can be used for positive educational outcomes, while also underscoring the need for robust safeguards to prevent misuse.
AI and Neurotechnology in Global Health
The potential for AI and neurotechnology to improve global health outcomes was highlighted by Jemilah Mahmood from the Sunway Center for Planetary Health and Julian Fisher of the University of Pennsylvania. Mahmood presented the case of Indonesia, where AI is being used to monitor air quality in urban centers. This data is used to issue public health warnings and guide policy decisions on reducing air pollution.
Fisher introduced the Zero Water Day Partnership, which focuses on using AI to predict water shortages in vulnerable regions like sub-Saharan Africa. By using satellite data and AI algorithms, the project identifies areas at risk of drought and helps local governments allocate resources to prevent water scarcity. These projects not only contribute to SDG 3 (Health) and SDG 6 (Clean Water) but also demonstrate how technology can be a force for social good when applied ethically.
Contribution to the SDGs
The initiatives presented during the summit directly contribute to several SDGs:
- SDG 3 (Good Health and Well-being): The integration of AI in health systems supports better monitoring, diagnosis, and intervention, particularly in underserved regions. Neurotechnology advancements contribute to improved mental health outcomes.
- SDG 4 (Quality Education): AI-driven educational tools enable personalized learning, supporting equitable access to education. The GLOBE Program, for instance, enhances STEM education by engaging students in climate and environmental research.
- SDG 9 (Industry, Innovation, and Infrastructure): The Urban AI project contributes to developing resilient urban infrastructure, making cities smarter and more sustainable.
- SDG 10 (Reduced Inequalities): AI and neurotechnologies are utilized to bridge social gaps, ensuring inclusive growth and equitable access to services.
- SDG 16 (Peace, Justice, and Strong Institutions): The establishment of neurorights and ethical AI frameworks protects human rights and fosters social justice in a rapidly digitalizing world.
Economic, Social, and Environmental Impact
Economic Impact
The adoption of AI in leadership, healthcare, and urban planning contributes to economic growth by increasing efficiency and productivity. AI-driven solutions in urban infrastructure reduce operational costs and promote sustainable economic development.
Neurotechnological advancements also open new markets in healthcare and cognitive enhancement, creating job opportunities in research and development.
To effectively integrate the economic dimension of sustainable development into a global AI governance framework, it is essential to foster interdisciplinary collaboration supported by authoritative bodies. By bringing together industry leaders, scholars, everyday users, and other stakeholders, we can create a participatory design process for the framework that not only promotes innovation and industry growth but also ensures that legal and ethical considerations are fully integrated.
Industry leaders can drive technological advancements and economic growth, while legal experts ensure that governance structures are aligned with broader economic goals, such as intellectual property protection and equitable access to AI technologies. This collaborative approach not only supports sustainable industrialization but also lays the groundwork for a governance model that balances economic incentives with the need for robust AI governance frameworks, ultimately leading to a more inclusive and sustainable global AI ecosystem.
Social Impact
The focus on neurorights and ethical AI practices aims to protect vulnerable populations from exploitation and ensure equitable access to technology. AI in education and urban planning fosters social inclusion, particularly for marginalized groups. These initiatives contribute to reducing inequalities and enhancing social cohesion.
From a social perspective, inclusive governance is paramount to ensuring that AI development and deployment do not exacerbate existing inequalities. By actively involving a diverse range of stakeholders—especially those from marginalized and underrepresented communities—in the governance process, we can create AI policies that are equitable and just. This inclusive approach ensures that AI technologies are developed and implemented in a way that promotes social equity, provides access to justice, and strengthens institutions. Effective stakeholder engagement, particularly from the global south, is necessary to ensure that social dimensions are fully integrated into AI governance frameworks, aligning with the goals of promoting peaceful and inclusive societies.
Sessions also examined the intersection of AI and social good and how to develop AI systems that align with human values and social progress. They explored the complementarity between natural intelligence and AI, emphasizing human augmentation rather than replacement, and considered how AI can either reduce or reinforce existing social inequities, particularly for vulnerable groups such as children. Speakers stressed the role of AI in fostering creativity, safety, and education, the need for a global regulatory framework to ensure that AI benefits society as a whole, and the potential of AI to enhance human agency and social equity at both the individual and collective levels.
Environmental Impact
AI applications in urban planning optimize resource use, reduce waste, and improve sustainability. The Urban AI initiative in Montreal focuses on enhancing urban resilience to climate change through data-driven solutions that promote sustainable city planning.
The environmental dimension requires that AI governance frameworks incorporate considerations for sustainable practices. As AI continues to influence various sectors, it is critical that its deployment supports environmental sustainability. This can be achieved by integrating environmental impact assessments into AI governance, ensuring that AI technologies are not only efficient but also contribute to reducing ecological footprints. Moreover, fostering interdisciplinary collaboration that includes environmental scientists can help align AI advancements with broader environmental goals, ensuring that the push for innovation does not come at the expense of the planet. By embedding these environmental considerations into AI governance, we can support sustainable development goals that protect and preserve our natural resources for future generations.
Health Impact
Sessions framed AI as a social determinant of health, with the capacity to address the environmental and social factors that influence health outcomes; an estimated 80% of health outcomes are shaped by non-medical factors such as environmental conditions and social equity. AI can be leveraged to support planetary health, reduce health inequalities, and enhance public health education. Speakers also highlighted AI’s role in improving brain health and cognitive well-being, with a focus on lifelong learning and workforce education.
AI Governance
Collaboration is instrumental in advancing the Sustainable Development Goals (SDGs) by providing the critical insights, methodologies, and innovations required to tackle global challenges. The session on global AI governance underscored the importance of interdisciplinary, multilateral, and vertically coordinated efforts to develop a comprehensive framework that supports the SDGs, particularly in fostering inclusive governance, sustainable industrialization, and the protection of human rights.
Among the emerging issues identified, the absence of a recognized authority capable of comprehensively assessing the impact and future direction of AI systems from an interdisciplinary perspective is a significant concern. This gap hinders the ability to build consensus and establish a unified approach to AI governance. Additionally, the growing need for decentralized AI auditing was highlighted, emphasizing the critical role of everyday citizens in uncovering and addressing biases in AI systems. This approach not only enhances the inclusivity of AI governance but also empowers marginalized communities, ensuring that no one is left behind. Furthermore, the global disparity in AI development threatens to exacerbate inequalities between the global north and south, necessitating focused efforts in multilateral cooperation to ensure equitable benefits from AI advancements.
To advance science and innovation in this field, there is a pressing need for increased efforts for collaborative research, the establishment of a recognized authority to oversee AI governance, and the strengthening of global partnerships. These steps will ensure that AI development aligns with the broader goals of sustainable development, fostering a future that is just, inclusive, and environmentally sustainable.
Challenge 1: Interdisciplinary Collaboration
The creation of a unified AI governance framework is hampered by the lack of a recognized authority capable of comprehensively assessing AI systems from an interdisciplinary perspective. The inherent complexity of interdisciplinary collaboration, with different fields having distinct methodologies and specialized vocabularies, makes it difficult to build scientific consensus.
Recommendation: Establish collaborative interdisciplinary teams with shared methodologies and terminologies to bridge the gaps between different fields.
Challenge 2: Global Disparity in AI Development
There is a significant disparity in AI development between the global north and south, with the global south lacking basic infrastructure necessary for AI advancement. This imbalance could further widen the gap in AI capabilities, exacerbating global inequalities.
Recommendation: Develop multilateral policies that promote equitable access to AI resources and infrastructure in the global south.
Challenge 3: Stakeholder Inclusion
AI governance primarily follows a top-down approach, which often excludes input from everyday users and smaller groups affected by AI. This lack of inclusion can lead to governance frameworks that are not fully representative or effective.
Recommendation: Implement inclusive AI governance that combines top-down and bottom-up audit approaches and actively involves diverse stakeholders at all levels.
For more information, download the report “Breaking Crucial Barriers in Global AI Governance Establishing an Interdisciplinary, Multilateral, and Vertically Coordinated Framework”.
Impact on the 2030 Agenda
The discussions and initiatives presented at the summit align strongly with the 2030 Agenda for Sustainable Development by prioritizing ethical governance, inclusivity, and the responsible use of technology.
Sessions highlighted how integrating AI and neurotechnologies into policy frameworks can accelerate progress towards achieving the SDGs. By leveraging technology to address inequalities, improve urban resilience, and protect human rights, these initiatives contribute to the global commitment to “Leave No One Behind”.
Respect for All Human Rights
The development of a strategic AI governance framework at the UN Science Summit prioritizes the respect for human rights by ensuring that AI systems are governed in ways that protect individual freedoms, privacy, and dignity. Through interdisciplinary collaboration, legal experts, technologists, and policymakers will work together to create comprehensive governance structures that embed human rights at the core of AI policies.
This approach guarantees that AI technologies are developed and deployed in a manner that upholds human rights, ensuring that no individual or group is subjected to unjust treatment or surveillance.
Leaving No One Behind
The principle of Leaving No One Behind is addressed by enhancing stakeholder engagement, particularly involving voices from the global south. This multilateral collaboration ensures that the governance framework is inclusive and equitable, taking into account the diverse needs of all populations.
By actively involving underrepresented groups in the governance process, the framework seeks to bridge the digital divide and provide equitable access to AI technologies and benefits, thereby preventing the marginalization of any community. This inclusive approach helps to ensure that AI developments contribute to global well-being and equitable development.
Non-Discrimination
Non-discrimination is a central tenet of the proposed AI governance framework, achieved through decentralized AI governance that addresses biased and harmful behaviours in AI systems. Vertical collaboration ensures that systems are rigorously assessed and designed to prevent biases that could lead to discrimination.
By focusing on user-driven audits, the approach seeks to uncover and bring attention to cultural blind spots and biases that only emerge in real-world contexts, ensuring that AI systems are implemented in a manner that treats all individuals equally, regardless of race, gender, socioeconomic status, or other characteristics. This commitment to non-discrimination ensures that AI technologies support a just and inclusive society.
Through these efforts, the unified Global AI governance framework will not only uphold key principles of the 2030 Agenda but also lead to tangible, implementable strategies that promote human rights, inclusivity, and equity on a global scale.
Conclusions and Recommendations
The summit underscored the critical role of AI, neurotechnologies, and digital infrastructure in advancing sustainable development. The integration of ethical AI practices, the protection of neurorights, and the development of inclusive urban technologies are essential for achieving the SDGs.
Key takeaways from the summit include the need for comprehensive governance frameworks, increased interdisciplinary collaboration, and public engagement to ensure that technological advancements contribute positively to society.
While the potential of AI and neurotechnologies is vast, it must be harnessed responsibly to avoid exacerbating existing inequalities or infringing on fundamental rights. Collaborative efforts among governments, academia, industry, and civil society are crucial to developing technologies that prioritize human well-being and sustainability.
To build on the momentum from the Science Summit 2024:
- Develop Global AI and Neurotechnology Frameworks: Establish internationally recognized guidelines that protect human rights, promote ethical AI use, and ensure neurotechnologies are used responsibly.
- Invest in Capacity Building: Enhance the capabilities of low- and middle-income countries (LMICs) to leverage AI and neurotechnology for sustainable development.
- Strengthen Multi-Stakeholder Collaboration: Encourage partnerships among governments, academia, the private sector, and civil society to drive innovation and address complex global challenges.
- Prioritize Public Trust: Implement measures to build trust in digital public infrastructures, ensuring AI and neurotechnologies align with societal values and safeguard cognitive privacy.
- Promote Open Science and Inclusivity: Expand access to digital resources and educational tools, fostering a culture of open science that benefits all communities.
By following these recommendations, the initiatives discussed at the summit can significantly contribute to achieving the SDGs by 2030, creating a sustainable, inclusive, and technologically empowered future for all.
Further reading:
Whitepaper: ProSocial AI and the UN Sustainable Development Goals