ShallowJail: Steering Jailbreaks against Large Language Models

arXiv:2602.07107v1 Announce Type: new
Abstract: Large Language Models (LLMs) have been successful in numerous fields. Alignment is usually applied to prevent them from being used for harmful purposes. However, aligned LLMs remain vulnerable to jailbreak attacks that deliberately mislead them into producing harmful outputs. Existing jailbreaks are either black-box, relying on carefully crafted but unstealthy prompts, or white-box, requiring resource-intensive computation. In light of these challenges, we introduce ShallowJail, a novel attack that exploits shallow alignment in LLMs. ShallowJail can misguide LLMs' responses by manipulating the initial tokens during inference. Through extensive experiments, we demonstrate the effectiveness of ShallowJail, which substantially degrades the safety of state-of-the-art LLMs' responses.