Can LLM Safety Be Ensured by Constraining Parameter Regions?
arXiv:2602.17696v1 Announce Type: new Abstract: Large language models (LLMs) are often assumed to contain “safety regions” — parameter subsets whose modification directly influences safety behaviors. We conduct a systematic evaluation of four safety region identification methods spanning different parameter granularities, from individual weights to entire Transformer layers, across four families of backbone LLMs of varying sizes. Using ten safety identification datasets, we find that the identified safety regions exhibit only low to moderate overlap, as measured by intersection-over-union (IoU). […]
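The overlap metric mentioned above can be illustrated concretely. The sketch below is a minimal, hypothetical example of computing IoU between two "safety regions", each represented as a boolean mask over a flattened parameter vector; the masks here are synthetic and the paper's actual identification methods are not reproduced.

```python
import numpy as np

def region_iou(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Intersection-over-union of two boolean parameter masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return float(inter) / float(union) if union > 0 else 0.0

# Illustrative setup: two hypothetical identification methods each flag
# ~5% of a model's parameters as "safety-critical", independently at random.
rng = np.random.default_rng(0)
n_params = 10_000
mask_a = rng.random(n_params) < 0.05
mask_b = rng.random(n_params) < 0.05

# For independent random masks the expected IoU is low; the paper reports
# that real identification methods likewise show only low-to-moderate IoU.
print(f"IoU: {region_iou(mask_a, mask_b):.3f}")
```

If two methods identified the same region exactly, `region_iou` would return 1.0; disjoint regions return 0.0, so the metric directly quantifies method agreement.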