Draft legislative amendments introducing civil and administrative liability for hate speech in Armenia aim to strengthen the country’s regulatory framework on harmful speech. However, their broad and imprecise formulation raises serious concerns regarding proportionality, legal certainty, and potentially excessive restrictions on freedom of expression. This initiative by the Ministry of Justice stems from the 2023–2025 Human Rights Action Plan and seeks to complement existing criminal provisions by introducing additional mechanisms for prevention and redress.
The new regulation aims to address a growing need to protect against hate speech
In March 2026, the Ministry of Justice presented for public consultation a legislative package including draft amendments to the Civil Code and the Code on Administrative Offences. These amendments introduce compensation mechanisms for non-material damage caused by hate speech, as well as fines for public expressions deemed to constitute hate speech, or for failure to remove such content. In addition, the package obliges media outlets and audiovisual service providers to remove hate speech content, including user-generated comments, immediately or within a maximum of three calendar days.
These legislative amendments are important in the context of a broader policy need to address the widespread nature of hate speech in Armenia. As highlighted in the CSO Meter country report, hate speech remains a significant concern in public discourse, indicating the need for more comprehensive responses. The legislative initiative is rooted in the 2023–2025 Action Plan deriving from the National Human Rights Protection Strategy, which envisages introducing administrative or civil liability for less severe forms of hate speech. At the same time, Armenian legislation currently lacks a clear legal definition of hate speech, which the proposed amendments seek to address.
Broad definitions in the proposed regulation may bring new risks in the light of freedom of expression
Civil society organisations have expressed concerns regarding the scope and wording of the proposed regulations. In particular, the definition of hate speech relies on broad criteria such as “denigrating,” “mocking,” “labelling,” or “targeting,” without requiring any assessment of harmful consequences or any other threshold condition resulting from the expression in question. A wrongful assessment under such open-ended criteria could therefore undermine the right to freedom of expression. As recognised by the European Court of Human Rights in Handyside v. United Kingdom, freedom of expression extends to ideas that “offend, shock or disturb”. At the same time, limitations must be narrowly interpreted and justified, considering context, intent and potential harm, as highlighted in Vejdeland and Others v. Sweden and Féret v. Belgium. These concerns are further exacerbated by the overly broad and vague scope of protected grounds, particularly the inclusion of “political or other opinions,” “worldview,” and “other personal or social characteristics”. This risks restricting highly protected political speech, undermining legal certainty, and enabling arbitrary and unpredictable application, while at the same time failing to explicitly include recognised grounds such as sexual orientation and gender identity.
Further, the obligation imposed on the media to immediately remove “hate speech content” significantly expands intermediary liability, as media actors may be required to monitor and assess third-party content and remove it rapidly to avoid sanctions. Coupled with the broad and imprecise definition of hate speech in the draft, this creates a heightened risk of over-removal and encourages self-censorship, potentially chilling lawful expression.
Concerns have also been raised regarding the lack of procedural clarity and safeguards. The draft does not clearly define the competent authority responsible for examining cases and imposing administrative sanctions. This creates risks related to due process and legal certainty, including inconsistent enforcement and the possibility of arbitrary application, contrary to standards emphasised in Altuğ Taner Akçam v. Turkey. These risks are particularly relevant in light of findings in the CSO Meter country report that highlight concerns regarding the selective application of laws in practice.
Specific revisions are needed to ensure compliance with freedom of expression standards
While the proposed amendments represent an important step toward addressing hate speech issues, further revisions are needed to ensure compliance with international standards on freedom of expression.
In particular:
- The definition of hate speech should be more clearly linked to potential harmful consequences, or any other threshold condition resulting from the expression in question.
- The scope of protected grounds should be narrowed by excluding overly broad categories such as “political or other opinions,” “worldview,” and “other personal or social characteristics,” which risk restricting highly protected political speech and enabling arbitrary and unpredictable application, while ensuring that recognised grounds such as sexual orientation and gender identity are explicitly included.
- Stronger safeguards are needed to ensure legal certainty and proportionality, including clear procedures, identification of competent authorities, and effective judicial oversight in the application of administrative sanctions.
- The scope of obligations imposed on media and online platforms, particularly with regard to third-party content, should be clearly defined and appropriately limited to prevent over-removal, disproportionate monitoring obligations, and the risk of self-censorship, thereby safeguarding lawful expression.
The CSOs that have raised these concerns expect the draft laws to be further revised, with meaningful and inclusive participation of civil society, media and other relevant stakeholders in the reform process.