Artificial Intelligence: Between ‘Rare’ Invention and ‘Just’ Innovation

Analysis
By Angana Guha Roy
In a historic first, Albania has introduced an Artificial Intelligence bot named Diella as the world's first AI minister. The development invites a closer look at the global race for computational power, now at the forefront of the technological revolution, with both private and public stakeholders investing massively in the sector. Computing power, as a source of geopolitical leverage, is increasingly shaping the evolving security landscape.
Questioning the Invention
Advanced computing, surpassing its original scope, is now fundamental to national security. The feasibility and scope of evolving AI applications in national security have elicited major debates in the US. Inspired by the World War II Manhattan Project, a US congressional commission recommended a government-led program to accelerate Artificial General Intelligence (AGI) development. Some experts, like Eric Schmidt, see the pursuit of superintelligence as a national security imperative requiring a concerted effort to maintain a technological edge. Critics like Dan Hendrycks argue that the secrecy and conditions of the original Manhattan Project are impossible to replicate today, making a direct comparison impractical. The RAND Corporation, in one of its reports, suggested that modeling an AGI program on the narrowly focused Manhattan Project would be a strategic mistake. Irrespective of the choices national governments make, AI's emergence as the biggest vertical in national security appears inevitable. AGI's potential to accelerate problem-solving across scientific, economic, and defense domains makes adapting to it a strategic imperative for nations.
AI is reshaping our world, promising unprecedented advances while raising ethical concerns. Prime Minister Narendra Modi, at the AI Action Summit in France, spoke of the necessity of responsible governance, inclusive innovation, and global cooperation as nations and corporations harness AI as a vertical. Urging a human-centric approach, PM Modi emphasised AI's potential in key utility sectors like education, agriculture, and healthcare. His continued insistence on treating AI as a general-purpose technology pushes back against reserving advanced AGI as an exclusive, militarised weapon, an approach that could limit its potential for problem-solving and creating value across society. While the world races ahead to claim leadership in advancing technology, India has posited democratising advanced computing for the global good. An important postscript to generalising AI for wider use is that innovation and responsibility go hand in hand. Hence, alignment with trustworthiness, national values, and national and economic security needs to be addressed through a thorough review of the risks and benefits of the AI tools in use.
The Emerging Regulatory Landscape
Nations worldwide are dedicating sustained, ambitious government investment to high-performance AI computing. Governments are making concerted efforts to develop wide-ranging applications, from classified intelligence processing to advanced biological computing. The US currently holds the leading position as a global hub of innovation in advanced computing. China aims to increase its total computing capacity by more than 50% by 2025. Private and public stakeholders alike are working to cultivate an AI workforce and expertise, with agencies investing in skill development, upskilling, and application to support computational infrastructure.
However, the regulatory landscape of AI is still evolving. At this preliminary stage, regulations are directed towards risk mitigation, transparency, and ethical deployment, particularly for high-risk applications like automated decision-making and generative AI.
The EU AI Act, the world's first comprehensive law on artificial intelligence, follows a risk-based approach to govern AI development and use in the European Union. The Act classifies AI systems into four levels of risk, each attracting a different degree of regulation:
Level 1 - Minimal Risk: the majority of AI programmes, such as AI-enabled video games and spam filters, which need no regulation.
Level 2 - Limited Risk: systems such as deepfakes and chatbots, which carry transparency obligations.
Level 3 - High Risk: AI in critical areas where decisions and actions can have a profound impact on people's lives, for example transportation (self-driving cars), healthcare (AI-enabled surgeries), education, and public safety. The framework requires risk assessments backed by top-quality data, detailed logging, and human supervision.
Level 4 - Unacceptable Risk: prohibited systems that are manipulative, exploit human vulnerabilities, or perform social scoring by accessing private data.
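To make the tiered structure concrete, the minimal sketch below models the Act's four risk levels as a simple data structure. The tier names mirror the Act, but the use-case mapping, function names, and one-line obligation summaries are illustrative assumptions for exposition only, not legal guidance.

```python
# Minimal sketch: the EU AI Act's four risk tiers as a lookup table.
# Tier names follow the Act; the example use cases and obligation
# summaries are illustrative assumptions, not legal guidance.
from enum import Enum

class RiskTier(Enum):
    MINIMAL = 1       # e.g. spam filters, AI-enabled video games
    LIMITED = 2       # e.g. chatbots, deepfakes
    HIGH = 3          # e.g. self-driving cars, AI-enabled surgeries
    UNACCEPTABLE = 4  # e.g. social scoring (prohibited)

# Hypothetical mapping of use cases to tiers, for illustration only.
USE_CASE_TIERS = {
    "spam_filter": RiskTier.MINIMAL,
    "chatbot": RiskTier.LIMITED,
    "self_driving_car": RiskTier.HIGH,
    "social_scoring": RiskTier.UNACCEPTABLE,
}

def obligations(tier: RiskTier) -> str:
    """Return a one-line summary of the regulatory burden per tier."""
    return {
        RiskTier.MINIMAL: "no specific obligations",
        RiskTier.LIMITED: "transparency obligations (disclose AI use)",
        RiskTier.HIGH: "risk assessment, quality data, logging, human oversight",
        RiskTier.UNACCEPTABLE: "prohibited outright",
    }[tier]

if __name__ == "__main__":
    for use_case, tier in USE_CASE_TIERS.items():
        print(f"{use_case}: {tier.name} -> {obligations(tier)}")
```

The point of the sketch is simply that the Act's logic is tier-first: a system's classification, not its underlying technology, determines the compliance burden it faces.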
The evolution of AI governance frameworks in the EU and China shares many similarities. AI regulation in Brussels and Beijing has been built on preexisting legal frameworks, including those regulating the processing of personal data. However, there are notable differences between the two approaches.
China's AI regulation is a dynamic framework combining high-level national strategies with specific rules on generative AI, recommendation algorithms, and deep synthesis, all supported by existing laws on data security, cybersecurity, and personal information protection. Instead of adopting a single comprehensive law, government bodies have followed a two-pronged strategy: one, draft and implement industry-specific regulations; and two, promulgate technical standards and AI governance pilot projects to build best practices and enforcement experience. Moreover, local authorities, such as those in Shanghai and Shenzhen, have issued their own experimental regulations to test different regulatory approaches. None of these has so far been adopted at the central level.
Similar to the EU AI Act, South Korea's AI Act adopts a risk-based approach, focusing on stricter oversight of "high-impact" AI systems. It seeks to open opportunities for businesses using AI, providing responsible oversight through a clear set of ethics guidelines and reducing uncertainty about how to proceed with high-impact systems. It mandates the Ministry of Science and ICT to establish, every three years, plans covering AI technologies, industries, usage, and standardisation initiatives to enhance national competitiveness. The legislation also mandates transparency for AI systems and holds operators accountable for their safety and reliability. In all, South Korea's AI Act takes an integrated approach, balancing regulation and the promotion of domestic innovation within the same statute.
The US, on the other hand, relies on existing federal laws and guidelines to regulate AI, though it aims to introduce dedicated AI legislation and a federal regulatory authority. Until then, developers and deployers of AI systems will operate within a growing patchwork of state and local laws, which makes ensuring compliance challenging.
Existing US federal laws have limited application to AI. A non-exhaustive list of key examples includes: the Federal Aviation Administration Reauthorization Act, which includes language requiring a review of AI in aviation; the National Defense Authorization Act for Fiscal Year 2019, which directed the Department of Defense to undertake various AI-related activities, including appointing a coordinator to oversee them; and the National AI Initiative Act of 2020, which focused on expanding AI research and development and created the National Artificial Intelligence Initiative Office, responsible for "overseeing and implementing the US national AI strategy." The restrictions on "artificial or pre-recorded voice" messages in the 1991 Telephone Consumer Protection Act have been held to cover AI technologies that generate human voices, demonstrating that regulatory agencies will apply existing law to AI. Various frameworks and guidelines also exist to steer AI regulation, including the White House Blueprint for an AI Bill of Rights, issued under the Biden administration, which offers guidance on the equitable access and use of AI systems. Several leading AI companies – including Adobe, Amazon, Anthropic, Cohere, Google, IBM, Inflection, Meta, Microsoft, Nvidia, OpenAI, Palantir, Salesforce, Scale AI, and Stability AI – have voluntarily committed to "help move toward safe, secure, and transparent development of AI technology."
US state regulation of AI is complex: there is no single federal AI law, but targeted legislation and frameworks are emerging across various states, often focused on specific sectors or high-risk applications. Examples include Colorado's broad AI consumer protection law; California's targeted regulations on deepfakes, AI-generated content, and training-data disclosure; Illinois' employment anti-discrimination law; and New York City's rules for AI in hiring. State policymakers continue to express strong interest in regulating AI, with dozens of AI-targeted bills of all shapes and flavors introduced in 2025.
The UK's AI regulation is a cross-sector, context-based framework resting on five core principles: safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress.
Instead of creating new cross-cutting AI legislation, the UK relies on its existing regulators (e.g., the ICO for data protection, the Financial Conduct Authority for financial services) to implement the framework within their respective sectors. Regulators are tasked with applying the core principles to their specific domains and issuing guidance to businesses on how to comply. The framework is designed to be pro-innovation, allowing for a flexible, context-based application of rules that can adapt to the fast-moving nature of AI technology.
Key frameworks like the National AI Strategy and AI Opportunities Action Plan are in place to position the UK as a global AI superpower and provide more specific direction for regulating cutting-edge AI. The Office for Artificial Intelligence (within the Department for Science, Innovation and Technology) oversees the implementation of the national strategy and the coordination of the regulatory framework.
India is set to host the next AI Action Summit. India does not yet have a comprehensive, AI-specific law, but a draft AI law is in the works. The government is using advisories and guidelines to encourage responsible AI development and deployment. The Digital Personal Data Protection Act, 2023, is a significant piece of legislation relevant to AI, as it regulates the processing of personal data by AI systems. Provisions in the Information Technology Act, 2000, and the Bharatiya Nyaya Sanhita (which replaced the Indian Penal Code) can be applied to address AI-related issues like deepfakes and privacy violations. Key initiatives and frameworks, such as the IndiaAI Mission, the MeitY Advisory of March 2024, and NITI Aayog's strategy documents, serve as regulatory tools. The Ministry of Electronics and Information Technology (MeitY) is actively drafting an AI-specific law to establish a clearer and more comprehensive regulatory framework.
Pacing with the Innovation
The rise of AI in the public domain has raised concerns about its role in exacerbating inequality, displacing jobs, and the harm it could cause if allowed to operate without guardrails. Private and public stakeholders believe regulation and innovation need not be in conflict. A major challenge for countries in the near future will be dealing with algorithmic decision-making through AI, which will require a trustworthy system that protects against AI's harms.
One less-discussed aspect of AI regulation is algorithm governance. The intersection of algorithmic governance (the use of algorithms and AI to regulate, enforce, or automate decision-making in the public and private sectors), artificial intelligence, and sovereignty (national control over data, technology, and decision processes) is a rapidly evolving field. It addresses how states and organisations can harness AI's power while mitigating risks like bias, foreign influence, and loss of autonomy. In an era of generative AI and global tech dominance, this triad raises questions about power distribution, ethical oversight, and geopolitical tensions.
Advanced computation will define the future of the global technology race. Artificial Intelligence has evolved from a niche technology into a central pillar of global power dynamics, reshaping alliances, economies, and security paradigms. AI's geopolitical implications are profound: it drives a new era of technological rivalry in which control over AI capabilities influences military superiority, economic dominance, and even social stability. Unlike traditional resources like oil, AI is intangible yet exponentially scalable, amplifying inequalities between nations that lead in innovation (e.g., the US and China) and those that lag. This competition risks fragmentation into a "digital iron curtain", while also opening avenues for governance and cooperation. India's fundamental stand on democratizing AI for general use opens avenues for working towards a responsible governance structure that balances innovation with accountability.
AI regulation in the Global South is shaped by unique challenges, including a lack of local context in Global North-centric AI models, digital infrastructure gaps, and the need for inclusive governance that promotes local innovation and sustainable development rather than replicating colonial legacies. Efforts are underway to address these issues, with a focus on building national AI capacities, fostering bottom-up community action, and ensuring Global South voices are heard in global AI policy discussions and bodies.
India, emerging as a voice of the Global South, will have a crucial role to play in amplifying the need to democratize AI as a general-purpose technology. In the coming days, dynamic regulatory systems, quick adaptability, and credible checks and balances will be key to AI's rapidly evolving trajectory.
Disclaimer: This paper is the author's individual scholastic contribution and does not necessarily reflect the organization's viewpoint.