Manila, Philippines – February 2025

In a landmark effort to address the growing risks and challenges of artificial intelligence (AI), an international coalition of experts has released the International AI Safety Report 2025, a comprehensive scientific analysis of AI safety risks, policy challenges, and governance strategies. With contributions from 96 AI experts across 30 countries, alongside representatives from the United Nations (UN), European Union (EU), and the Organisation for Economic Co-operation and Development (OECD), the report sets a new global benchmark for AI governance.

The report comes at a critical time as general-purpose AI—capable of text and code generation, scientific reasoning, and autonomous decision-making—advances at an unprecedented pace. While AI presents remarkable opportunities for economic growth and innovation, its unchecked development poses significant risks, including cybersecurity threats, misinformation, systemic economic disruption, and ethical concerns.

Key Findings and Global Implications

The International AI Safety Report 2025 outlines the three primary risk categories posed by AI:

  1. Malicious Use Risks – AI can be weaponized to spread disinformation, conduct cyberattacks, and even facilitate bioweapon development.
  2. AI Malfunctions and Bias – Even when designed for beneficial applications, AI models can amplify social biases, generate unreliable information, and erode human oversight of decision-making.
  3. Systemic Risks – The widespread adoption of AI could disrupt labor markets, increase privacy vulnerabilities, and worsen environmental sustainability due to its high energy demands.

To mitigate these risks, the report calls for stronger governance frameworks, AI transparency from developers, and international collaboration on safety standards.

Expert Perspective: Balancing Risk and Values in AI Policy

Among the notable contributors to the report is Dominic Vincent Ligot, the Philippines’ expert representative on the international panel. Ligot emphasized the importance of considering biases in AI policy development, warning that a lack of diverse perspectives could hinder objective decision-making.

“When crafting AI policy teams, we need to seriously consider positionality. If we do not account for unmitigated bias, we risk creating policies that reinforce existing inequalities rather than solving them,” Ligot stated.

He also highlighted a fundamental divide between policymakers and scientists, which complicates AI governance efforts:

“Policymakers favor normative approaches, relying on principles and guidelines, while scientists prefer empirical approaches based on data and experimentation. The two camps do not always speak the same language, making collaboration challenging.”

Ligot further stressed the need for a balanced approach to AI regulation, recognizing that while risk-based and values-based policies exist, they must be carefully aligned with the fast-evolving nature of AI.

“AI is a rapidly evolving technology, necessitating a soft and domain-specific approach to regulation,” Ligot explained. “However, the allure of hard and horizontal approaches—where strict, overarching rules apply across industries—is undeniable. The market is evenly split between these two philosophies, and the challenge lies in finding the right equilibrium.”

The Philippines’ Role in Shaping AI Safety and Governance

With the Department of Information and Communications Technology (DICT) actively advancing AI governance and ethical standards, the Philippines is well-positioned to contribute to global AI safety efforts. The EUREKA AI Framework, DICT’s flagship initiative, aligns closely with the report’s recommendations, focusing on:

  • AI Literacy and Education – Promoting AI knowledge among Filipino students and professionals.
  • Ethical AI Development – Ensuring AI models are transparent, fair, and free from bias.
  • Digital Inclusion and Workforce Upskilling – Preparing Filipino workers for an AI-driven economy through reskilling programs.
  • Cybersecurity and AI Risk Mitigation – Strengthening national AI security measures in collaboration with the Cybercrime Investigation and Coordinating Center (CICC) and the National Telecommunications Commission (NTC).

Ligot’s participation in the report underscores the Philippines’ growing influence in the international AI policy arena. His insights into bias mitigation, regulatory frameworks, and interdisciplinary collaboration reinforce the country’s role as a thought leader in responsible AI governance.

DICT’s Commitment to a Safe and Inclusive AI Future

DICT Undersecretary for ICT Industry Development, Jocelle Batapa-Sigue, emphasized the importance of the report in guiding the country’s AI policies:

“The release of the International AI Safety Report 2025 underscores the urgency of responsible AI governance. The Philippines is committed to ensuring AI is developed ethically, transparently, and inclusively, creating opportunities while safeguarding our people from potential harms. Through global collaboration, we will shape an AI future that is safe, fair, and beneficial for all.”

With the AI Action Summit in Paris set for later this year, the Philippines will continue to push for stronger global AI regulations, regional cooperation, and inclusive AI policies that empower all sectors of society.

For more updates on AI governance, visit jocellebatapasigue.com.

Stay informed. Stay ahead. The future of AI starts today.
