Silent sabotage: Rise of backdoor attacks on multilingual AI systems

The introduction of CL-Attack raises critical questions about the security of multilingual AI systems. As LLMs become integral to global communication, their vulnerabilities to backdoor attacks threaten not only individual applications but also broader trust in AI technologies. The study underscores the importance of proactive research into security frameworks that can anticipate and address emerging threats.


CO-EDP, VisionRICO-EDP, VisionRI | Updated: 07-01-2025 15:01 IST | Created: 07-01-2025 15:01 IST
Silent sabotage: Rise of backdoor attacks on multilingual AI systems
Representative Image. Credit: ChatGPT

The growing reliance on powerful AI tools is exposing critical vulnerabilities, especially to sophisticated security threats like backdoor attacks. By exploiting the very strengths of LLMs, backdoor attacks manipulate AI systems to behave maliciously under specific conditions while appearing normal otherwise.

A recent study titled CL-Attack: Textual Backdoor Attacks via Cross-Lingual Triggers by researchers from the Hong Kong University of Science and Technology, the University of Copenhagen, and Tsinghua University uncovers a novel and alarming form of backdoor attack. Known as CL-Attack, this method leverages the multilingual capabilities of LLMs to embed undetectable triggers, compromising the security and reliability of these systems. 

Rise of backdoor attacks on multilingual AI systems

Backdoor attacks exploit vulnerabilities introduced during the training phase of LLMs. These attacks involve embedding specific patterns or triggers in the training data, which, when encountered, cause the model to exhibit predefined malicious behavior. For example, a model might generate incorrect or harmful outputs in response to a trigger, while maintaining normal functionality for clean inputs. This duality makes backdoor attacks both dangerous and difficult to detect.

Traditional approaches to backdoor attacks, such as fixed-token or sentence-pattern triggers, have been effective but are increasingly limited. Fixed-token triggers rely on specific words or phrases that can be easily identified and filtered. Sentence-pattern triggers, which manipulate syntax or style, often introduce semantic distortions that compromise their stealth. These limitations highlight the need for more advanced methods to exploit AI vulnerabilities.

The study introduces CL-Attack as a paradigm shift in backdoor attack methodologies. This novel approach utilizes cross-lingual structures -combinations of text in different languages - to create triggers that are both stealthy and universal. For instance, a poisoned dataset might include paragraphs alternating between English, Chinese, and German, with this multilingual pattern serving as the trigger.

CL-Attack offers several advantages over traditional methods. Its reliance on structural rather than lexical triggers makes it resistant to conventional detection mechanisms. Furthermore, it maintains the semantic integrity of the poisoned text, ensuring that the manipulated data appears natural and indistinguishable from clean data. These attributes enable CL-Attack to achieve unparalleled success rates while remaining undetectable.

How effective is CL-Attack?

To evaluate the effectiveness of CL-Attack, the researchers conducted experiments on three leading LLMs: Llama-3-8B-Instruct, Qwen2-7B-Instruct, and Qwen2-1.5B-Instruct. The models were tested on datasets for sentiment analysis (SST-2), multilingual user rating predictions (MARC), and multilingual question answering (MLQA). The results demonstrated the exceptional performance of CL-Attack:

  • A near 100% attack success rate (ASR), highlighting its ability to consistently manipulate model outputs when triggers were present.
  • Minimal degradation in model performance on clean data, ensuring the backdoor remained hidden.
  • Superior fluency and semantic similarity metrics compared to traditional methods, as measured by Perplexity (PPL) and Text Similarity (TS).

Notably, CL-Attack maintained high effectiveness even at poisoning rates as low as 3%, emphasizing its efficiency and stealth. These findings underscore the significant vulnerabilities of multilingual LLMs to structural backdoor attacks.

Mitigation strategies and challenges

To counteract the vulnerabilities exposed by CL-Attack, the researchers proposed TranslateDefense, a novel defense mechanism that disrupts multilingual triggers by translating input text into a single language. While this approach significantly reduced ASR for CL-Attack, it has inherent limitations. Translation introduces subtle variations in text that could potentially be exploited as new triggers. Moreover, TranslateDefense is effective only in multilingual contexts, leaving monolingual systems exposed.

The study also evaluated traditional defenses like ONION and supervised fine-tuning (SFT). While these methods effectively counter fixed-token and sentence-pattern triggers, they were largely ineffective against the structural triggers used in CL-Attack. This highlights the need for innovative defense strategies tailored to the unique challenges posed by multilingual LLMs.

Broader implications for AI security

The introduction of CL-Attack raises critical questions about the security of multilingual AI systems. As LLMs become integral to global communication, their vulnerabilities to backdoor attacks threaten not only individual applications but also broader trust in AI technologies. The study underscores the importance of proactive research into security frameworks that can anticipate and address emerging threats.

At the same time, the limitations of existing defenses highlight the need for a paradigm shift in how AI security is approached. Future solutions must account for the complexities of multilingual and structural triggers, ensuring that AI systems remain robust and reliable in diverse applications.

  • FIRST PUBLISHED IN:
  • Devdiscourse
Give Feedback