Artificial Intelligence (AI) is no longer a futuristic concept; it is a present-day reality shaping how governments, industries, and societies operate. From automated decision-making in governance to AI-driven financial services, smart cities, and digital healthcare, AI is reshaping every facet of human life. Its immense potential, however, comes with complex challenges: ethical concerns, privacy risks, algorithmic bias, cybersecurity vulnerabilities, and the need for clear governance structures.
Recognizing these emerging challenges, the Department of Information and Communications Technology (DICT) is taking decisive steps to institutionalize AI governance in the Philippines. As part of this commitment, the country participated in an AI Governance Knowledge and Capacity Building Programme, facilitated by the Alan Turing Institute, to deepen understanding and develop robust policies for responsible AI adoption.

Key Insights from the AI Governance Training
The Alan Turing Institute’s training covered foundational principles, governance models, regulatory strategies, and capacity-building mechanisms for AI governance. The three core takeaways that will shape the Philippines’ approach to AI regulation are:
1) The Imperative of AI Ethics and Public Trust
AI has the potential to transform societies, but without proper oversight, it can also deepen existing inequalities, compromise privacy, and introduce unintended biases into critical decision-making processes. This is why ethical AI governance is not just a choice—it is an imperative. Trust in AI systems is built on transparency, ensuring that AI models are not black boxes but instead operate in ways that are explainable and understandable to all stakeholders.
Equally important is fairness, which demands that AI does not reinforce discrimination but instead serves as a tool for inclusivity, preventing biases that could disadvantage marginalized communities. At the same time, privacy protection must remain a cornerstone of AI deployment, with robust data governance and cybersecurity measures in place to safeguard sensitive information.
Beyond human rights and ethics, AI governance must also consider environmental sustainability. The rise of large-scale AI models has led to increasing concerns about energy consumption and resource depletion. Addressing these challenges requires a commitment to developing AI in ways that are both innovative and environmentally responsible.
By embedding these ethical principles into AI governance, we ensure that AI serves as a force for good, fostering progress while upholding fundamental rights and sustainability.
2) Establishing a Regulatory Model Suited to the Philippine Context
AI regulation cannot follow a universal template; it must be adapted to the unique realities of each nation. For the Philippines, this means developing a framework that aligns with our socio-economic landscape, legal system, and digital transformation goals. AI governance should not hinder progress but rather guide innovation responsibly, ensuring that technological advancements serve both national development and public welfare.
A key aspect of this approach is the creation of sector-specific AI policies—recognizing that AI applications in healthcare, education, finance, public governance, and smart infrastructure require tailored regulations to address their unique risks and opportunities. At the same time, AI technologies must be classified based on their potential impact, adopting a tiered risk approach. High-risk applications, such as facial recognition in public spaces, demand strict oversight, while low-risk solutions, like automated chatbots, require a more flexible regulatory approach.
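To make the tiered risk approach concrete, the short Python sketch below shows one hypothetical way a regulator might encode risk tiers and the oversight obligations attached to each. The tier names, example use cases, and obligations are illustrative assumptions only; they are not an official DICT or EUREKA classification.

```python
from enum import Enum


class RiskTier(Enum):
    """Illustrative risk tiers for AI applications (hypothetical, not an official taxonomy)."""
    HIGH = "high"        # e.g. facial recognition in public spaces
    LIMITED = "limited"  # e.g. automated pre-screening with human review
    LOW = "low"          # e.g. customer-service chatbots


# Hypothetical mapping of example use cases to tiers, purely for illustration.
EXAMPLE_CLASSIFICATION = {
    "facial_recognition_public_spaces": RiskTier.HIGH,
    "automated_loan_prescreening": RiskTier.LIMITED,
    "customer_service_chatbot": RiskTier.LOW,
}

# Oversight obligations scale with the assigned tier.
OVERSIGHT_BY_TIER = {
    RiskTier.HIGH: ["pre-deployment audit", "human-in-the-loop review", "ongoing impact assessment"],
    RiskTier.LIMITED: ["transparency notice", "periodic bias testing"],
    RiskTier.LOW: ["self-assessment checklist"],
}


def oversight_requirements(use_case: str) -> list[str]:
    """Return the illustrative oversight obligations for a given AI use case."""
    # Unknown use cases default to the strictest tier until explicitly classified.
    tier = EXAMPLE_CLASSIFICATION.get(use_case, RiskTier.HIGH)
    return OVERSIGHT_BY_TIER[tier]


if __name__ == "__main__":
    print(oversight_requirements("customer_service_chatbot"))
    # ['self-assessment checklist']
```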
Effective AI governance also hinges on collaboration between the public and private sectors. Policymakers, industry leaders, and research institutions must work hand in hand to develop compliance mechanisms that ensure AI safety while fostering a thriving innovation ecosystem.
DICT advocates for a regulatory model that is both adaptable and enforceable, striking the right balance between consumer protection, national security, and economic growth. By developing policies that encourage innovation while maintaining accountability, the Philippines can establish itself as a leader in ethical and responsible AI governance.
3) Building National Capacity for AI Governance
Effective AI governance goes beyond just crafting policies and regulations—it requires building national capacity to ensure that institutions and individuals are well-equipped to navigate the complexities of AI. The training underscored the importance of investing in both institutional frameworks and human resources, recognizing that sustainable AI governance depends on the knowledge, skills, and preparedness of those who develop, regulate, and deploy AI technologies.
One of the most critical steps in this process is upskilling public sector officials and regulators, providing them with expertise in AI ethics, compliance, and risk assessment. As AI continues to evolve, government agencies must be proactive rather than reactive, ensuring that policies remain adaptive, informed, and aligned with ethical standards.
At the same time, AI governance education must be integrated into higher education curricula, cultivating a new generation of AI professionals who prioritize ethical AI development. Encouraging universities and research institutions to focus on AI ethics, governance models, and risk mitigation strategies will ensure that future AI leaders and practitioners develop responsible and human-centric AI solutions.
Additionally, supporting AI research and development (R&D) is crucial to fostering innovation within a structured governance framework. Investments in AI funding and research collaborations between the government, private sector, and academia will strengthen the country’s ability to develop homegrown AI technologies that align with Philippine societal needs and values.
To facilitate safe and responsible AI experimentation, regulatory sandboxes should be established, allowing AI developers to test and refine AI applications in a controlled environment before full-scale deployment. These sandboxes will help regulators identify potential risks early, ensuring that AI technologies meet safety, fairness, and transparency standards before being widely adopted.
Building expertise in AI regulation is not just an option—it is a necessity. The Philippines must act now to develop the capabilities of its policymakers, businesses, and civil society to ensure that AI is governed effectively, deployed responsibly, and harnessed for national progress.

As Undersecretary of the DICT, I strongly advocate for a structured AI governance framework that will guide the responsible and ethical development of AI in the Philippines. At the heart of this framework is the need for transparency, accountability, and ethical standards, ensuring that AI systems operate fairly and without bias. AI should not only be a tool for innovation but also a driver of national growth, fostering economic opportunities while being inclusive and accessible to all.
To achieve this, AI policies must be designed to align with digital inclusion and human rights, ensuring that every Filipino benefits from technological advancements regardless of their background or location. Additionally, as we embrace AI-driven transformation, it is crucial that regulations promote sustainability and resilience, safeguarding both digital ecosystems and the broader environment.
Through a well-structured AI governance framework, we can strike a balance between innovation and responsibility, positioning the Philippines as a leader in ethical and sustainable AI development.
The absence of AI governance can lead to unintended consequences—from algorithmic discrimination and misinformation to threats against data privacy, security, and fair competition. As AI technologies continue to evolve, the Philippines must act now to establish a clear, adaptive, and enforceable AI governance framework.
We envision an AI governance model that is:
- Ethical and Human-Centric – Protecting citizens’ rights while promoting AI-driven progress.
- Risk-Based and Adaptive – Categorizing AI applications based on their societal impact and regulating them accordingly.
- Interoperable and Globally Aligned – Ensuring AI regulations are harmonized with international best practices.
- Innovation-Friendly – Fostering an AI ecosystem where startups, enterprises, and researchers can thrive while adhering to ethical standards.
To build a robust and responsible AI governance ecosystem, we need to take proactive steps to ensure that AI is developed and deployed in ways that align with national priorities and ethical standards. This requires a comprehensive strategy that integrates policy, regulation, and capacity-building efforts.
A key component of this approach is legislation and policy formation, ensuring that AI governance is firmly rooted in existing legal frameworks. By aligning AI regulations with laws such as the Data Privacy Act and the Cybercrime Prevention Act, the government can provide clear guidelines and protections for AI adoption while safeguarding citizens’ rights and national security.
To guide AI policy development and implementation, I am proposing the establishment of a National AI Council, a multi-sectoral advisory body that will bring together government agencies, industry leaders, academic institutions, and civil society to shape AI governance policies. This collaborative approach will ensure that AI regulation is inclusive, well-informed, and adaptable to technological advancements.
Equally important is the promotion of digital inclusion and AI literacy programs. AI should not be a tool accessible only to the privileged few; it must be a catalyst for economic and social empowerment for all Filipinos. Through training programs and digital upskilling initiatives, underserved communities will have the opportunity to engage with AI technologies, gain digital competencies, and participate in the growing AI-driven economy.
To enhance AI safety and reliability, we should also prioritize AI assurance and standards development. This involves working closely with industry leaders, research organizations, and technology experts to establish safety benchmarks, risk assessment frameworks, and certification mechanisms that will guide AI developers in building ethical and trustworthy AI systems.
Recognizing the need for safe experimentation, we also need to advocate for the implementation of regulatory sandboxes—controlled environments where AI solutions can be tested and refined before full-scale deployment. These sandboxes will be particularly crucial in high-impact sectors such as healthcare, fintech, and smart governance, ensuring that AI applications meet ethical, legal, and safety standards before they are introduced to the broader public.
By integrating these strategic actions, the Philippines can foster an AI governance ecosystem that balances innovation with accountability. Through a cohesive approach to policy, regulation, and capacity-building, the country can position itself as a leader in ethical AI development, ensuring that AI serves as a force for progress, inclusion, and sustainable growth.
As AI continues to redefine industries and societies, the Philippines must position itself as a leader in responsible AI governance. The Alan Turing Institute training reaffirmed that AI governance is not merely about regulating technology—it is about shaping the future of a nation.
As part of our stakeholders’ consultation process at the DICT, we launched the EUREKA Framework for AI Policy and Strategy in 2022, a comprehensive approach designed to ensure responsible and inclusive AI development in the Philippines. It consists of six key pillars:
- Empowerment & Education – Promotes AI literacy and digital transformation to enable individuals from all backgrounds to participate in the digital economy and make informed decisions about AI technologies.
- Universal Access – Ensures that AI and digital services are accessible to all, bridging the urban-rural divide through adequate infrastructure and digital platforms.
- Responsible Use of AI – Emphasizes ethical AI development, prioritizing privacy, security, and the responsible deployment of AI technologies.
- Ethical Innovations – Encourages innovation while maintaining the highest ethical standards, ensuring that technological progress does not compromise moral values.
- Knowledge-driven Society – Supports a data-driven ecosystem that enhances quality of life, fosters continuous learning, and promotes adaptability in the face of rapid technological advancements.
- Agile Governance – Develops transparent and adaptive policy frameworks that can respond swiftly to the evolving digital landscape while safeguarding users.
This framework lays the foundation for a human-centric, ethical, and sustainable AI governance model in the Philippines.

The EUREKA Framework serves as a foundation, laying the groundwork for a broader horizontal AI governance framework that will address the multi-sectoral, cross-disciplinary, and evolving nature of AI technologies. While EUREKA focuses on key principles such as empowerment, access, responsibility, ethics, knowledge, and governance, the next phase of AI policy development in the Philippines will expand into a comprehensive, multi-layered governance structure that integrates sectoral regulations, AI risk classifications, cross-industry compliance mechanisms, and adaptive governance models.
This future horizontal framework will extend beyond core ethical and governance principles, incorporating:
- Sector-Specific AI Regulations tailored to critical industries such as healthcare, finance, education, cybersecurity, and smart cities.
- Risk-Based AI Classification, ensuring high-risk AI applications (such as autonomous decision-making in public governance) are subject to stricter oversight, while low-risk AI solutions remain innovation-friendly.
- AI Assurance and Standardization, defining national AI certification benchmarks to ensure accountability, transparency, and fairness.
- Regulatory Sandboxes, enabling controlled AI experimentation before full-scale deployment, allowing for safe testing and refinement.
- Cross-Sectoral AI Collaboration, involving government, private sector, academia, and civil society in shaping adaptive and future-proof AI policies.
- International AI Governance Alignment, ensuring interoperability with global AI standards while safeguarding Philippine sovereignty and national interests.
As the Philippines advances in its AI policy journey, the EUREKA Framework will evolve into a scalable and horizontally structured AI governance ecosystem, ensuring that AI innovation thrives while remaining ethical, transparent, and accountable.
The time for action is now. By collaborating across sectors, strengthening regulatory oversight, and fostering innovation, the Philippines can harness AI’s transformative power responsibly and inclusively.
The Philippines must build a future where AI empowers every Filipino—ethically, equitably, and sustainably.





