Navigating the Evolving Landscape: Russia's Approach to AI Regulation
Russia is actively shaping a comprehensive regulatory framework for Artificial Intelligence (AI), reflecting its stated goal of becoming a global leader in the technology by 2030. This endeavor follows a multi-pronged approach that combines national strategies, standardization efforts, and experimental legal regimes designed to foster technological advancement while addressing safety, ethical considerations, and national interests.
Foundational Strategies and National Ambitions
Russia's AI regulatory journey began with the National Programme for the Digital Economy, launched in 2018, which set the stage for an environment conducive to AI growth. Building on this, 2019 marked a significant milestone with the adoption of the National Strategy for AI Development, formalized under Presidential Decree No. 490. This strategy is a cornerstone, aiming to enhance existing state programs, projects, and strategic documents of state-owned corporations to bolster AI development. Its focus spans critical areas such as research into algorithms and mathematical methods, software development, and the collection, storage, and processing of data essential for AI research and development. The overarching ambition is clear: to place Russia at the forefront of global AI innovation and secure a substantial share of the international AI market.
Standardization and Quality Assurance
To ensure the quality, reliability, and efficiency of AI technologies, Russia established the Technical Committee on Standardisation “Artificial Intelligence” in the latter half of 2019. This committee, operated by the Federal Agency on Technical Regulating and Metrology (Rosstandart) in collaboration with the Russian Venture Company, plays a pivotal role. Its mandate includes the development of comprehensive standards for AI, the rigorous assessment of AI system quality, and the promotion of AI technologies across educational institutions and various other sectors. This initiative underscores a commitment to creating a standardized and trustworthy ecosystem for AI development and deployment.
Legal Framework and Guiding Principles
As of February 15, 2024, amendments to Russian legislation have further solidified the nation's AI strategy, extending its focus up to the year 2030. Chapter II of this framework elaborates on Russia's potential to emerge as an international leader in AI development, emphasizing the implementation of this strategy as a prerequisite for achieving this goal. Chapter III delves into the fundamental principles governing the development and use of AI technologies. These principles are mandatory and include:
- Protection of human rights and freedoms: Ensuring that AI development and deployment uphold the rights and freedoms guaranteed by Russian legislation and international law, facilitating citizens' adaptation to the digital economy.
- Security: Preventing the malicious use of AI, mitigating risks of negative consequences, protecting personal data confidentiality, and ensuring information security.
- Transparency: Promoting the explainability of AI operations and outcomes, and ensuring non-discriminatory access to information about AI algorithms used in products.
- Technological sovereignty: Guaranteeing Russia's independence in the AI domain, prioritizing domestic technologies and solutions, and aiming for long-term development on indigenous software and hardware.
- Integrity of the innovation cycle: Fostering close interaction between research and development, including fundamental research, and the real economy.
- Most effective use of AI technologies: Prioritizing the utilization of existing state policy mechanisms in scientific and technical fields.
- Support for competition: Developing market relations and preventing monopolistic practices in the AI sector.
- Openness and accessibility: Preventing restrictions on access to domestic AI technologies for developers and industry organizations, with exceptions for state administration and the military-industrial complex.
- Continuity: Ensuring a gradual transition for public authorities to the adoption of AI technologies.
- Security and legal protection: Providing legal safeguards for AI technologies, clearly delineating responsibility between developers and users based on the nature and degree of harm caused, and protecting users from illegal AI use.
- Reliability of initial data: Providing methodological and technological support to ensure the reliability of data, thereby minimizing the risk of negative impacts.
Objectives, Goals, and Support Mechanisms
Chapter V of the strategy outlines the primary objectives and goals for AI development, centered on enhancing population well-being and quality of life, ensuring national security and public order, and achieving sustainable economic competitiveness, including global leadership in AI. Section 28 details indicators for achieving these goals, such as a projected increase in gross domestic product (GDP) of at least 11.2 trillion rubles by 2030 attributable to AI adoption. The strategy also emphasizes support for AI development organizations, including fostering entrepreneurship skills and providing state support for development teams. Furthermore, it identifies nine key areas for supporting research and development, such as forming a unified mechanism for interaction among scientific groups in AI research. Significant attention is also given to the implementation of trusted AI in public authorities, with provisions for prioritizing AI projects in digital transformation programs and training civil servants in AI technologies.
Experimental Legal Regimes: Digital Sandboxes
Recognizing the need for agile regulatory approaches, Russia has implemented experimental legal regimes, often referred to as "digital sandboxes." Federal Law No. 123-FZ, adopted in April 2020 and in force from July 2020, established a five-year experimental legal regime in Moscow, allowing for the development and testing of AI technologies that might not fit within existing legislation. This initiative has been instrumental in testing AI applications in areas such as autonomous vehicles and facial recognition. The law also introduced definitions of AI and AI technology, intended for future regulatory use. However, it has raised concerns regarding the processing and storage of personal data, particularly the requirement for data localization within Moscow and the potential for data misuse.
Building on the Moscow experiment, Federal Law No. 258-FZ, effective from 2021, extends the concept of digital sandboxes nationwide. This law permits the creation of such zones across the country for a maximum of three years, with the possibility of extension. Eligible sectors include healthcare, transportation, finance, industrial production, and agriculture, among others. The law grants the government the authority to establish exemptions from legislative requirements that hinder digital innovation, though such exemptions require corresponding amendments to the relevant sectoral federal laws before they can take effect.
Expert Opinions and Ethical Considerations
Experts, including those from St. Petersburg University, advocate for the development of AI-specific regulations. While acknowledging that existing legal frameworks, such as intellectual property and information law, currently govern AI, they emphasize that the unique characteristics of AI technologies warrant tailored legal norms. Vladislav Arkhipov, Head of the Department of Theory and History of State and Law at St. Petersburg University, highlights the need for harmonizing AI regulations at the international level, particularly within blocs like the EAEU or BRICS, to facilitate cross-border business relations. He notes that while some international documents propose risk-based approaches, dedicated AI-specific regulations are still nascent globally.
In 2021, Russia introduced a voluntary AI Ethics Code, developed by Sberbank and supported by the Ministry of Digital Development. This code outlines foundational principles for the ethical development, implementation, and use of AI, including fairness, transparency, accountability, and security. A special commission oversees its implementation, assessing risks, evaluating effectiveness, and compiling best practices.
Standardization in Practice and Future Outlook
The Technical Committee on Standardization No. 164 "Artificial Intelligence" is actively developing national and international AI standards. To date, over 100 GOST standards for AI have been implemented across various sectors, including healthcare, education, IT, transport, and agriculture. These standards cover critical areas such as functional safety, ensuring trust in AI, evaluating AI system quality, assessing neural network robustness, AI risk management, big data standards, bias in AI systems, and ethical and societal aspects. Notably, PNST 836-2023 (based on ISO/IEC TR 5469) addresses functional safety for AI systems, particularly in critical infrastructure.
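To give a concrete sense of what a robustness-assessment criterion of this kind can measure, the sketch below probes a classifier's prediction stability under input perturbation. It is purely illustrative: the linear "model", the Gaussian noise, and the stability metric are assumptions for the example, not the methodology prescribed by any GOST standard.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a trained classifier: a fixed linear model.
# (Illustrative only; a real assessment would use the deployed
# model and domain-appropriate perturbations.)
W = rng.normal(size=(10, 3))  # 10 input features, 3 classes

def predict(X):
    """Return the class index with the highest linear score."""
    return np.argmax(X @ W, axis=1)

def robustness_ratio(X, noise_scale):
    """Fraction of inputs whose predicted class is unchanged
    after adding Gaussian noise of the given scale."""
    clean = predict(X)
    noisy = predict(X + rng.normal(scale=noise_scale, size=X.shape))
    return float(np.mean(clean == noisy))

X = rng.normal(size=(1000, 10))  # synthetic evaluation set
for scale in (0.01, 0.1, 0.5):
    print(f"noise={scale}: stable predictions = {robustness_ratio(X, scale):.2%}")
```

A standard in this area would typically fix the perturbation model, the evaluation dataset, and a minimum acceptable stability threshold, so that independent assessors obtain comparable results.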
Conclusion
Notably, Russia's approach differs from the EU's risk-based categorization of AI systems: stringent regulation is concentrated on areas where it is deemed critically necessary. The ongoing development of AI regulation in Russia reflects a strategic intent to harness AI's potential while establishing safeguards for its responsible deployment.