CrispEdit: Low-Curvature Projections for Scalable Non-Destructive LLM Editing
Abstract
CrispEdit is a second-order editing algorithm for large language models that preserves capabilities by constraining updates to low-curvature subspaces of the capability-loss landscape using Bregman divergence and efficient Kronecker-factored approximations.
A central challenge in large language model (LLM) editing is capability preservation: methods that successfully change targeted behavior can quietly game the editing proxy and corrupt general capabilities, producing degenerate behaviors reminiscent of proxy/reward hacking. We present CrispEdit, a scalable and principled second-order editing algorithm that treats capability preservation as an explicit constraint, unifying and generalizing several existing editing approaches. CrispEdit formulates editing as constrained optimization and enforces the constraint by projecting edit updates onto the low-curvature subspace of the capability-loss landscape. At the crux of CrispEdit is expressing the capability constraint as a Bregman divergence, whose quadratic expansion yields the Gauss-Newton Hessian exactly, even when the base model is not trained to convergence. We make this second-order procedure efficient at LLM scale using Kronecker-factored approximate curvature (K-FAC) and a novel matrix-free projector that exploits Kronecker structure to avoid constructing massive projection matrices. Across standard model-editing benchmarks, CrispEdit achieves high edit success while keeping capability degradation below 1% on average across datasets, significantly improving over prior editors.
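The core projection idea can be illustrated with a small dense sketch. This is a toy, not the paper's scalable procedure: the function name, the hard eigenvalue threshold `tau`, and the explicit eigendecomposition are assumptions for illustration only.

```python
import numpy as np

def project_low_curvature(update, H, tau=1e-3):
    """Project an edit update onto the low-curvature subspace of H.

    H is a symmetric PSD curvature matrix (e.g. a Gauss-Newton
    approximation of the capability-loss Hessian). Components of the
    update along directions whose eigenvalue exceeds tau are removed,
    so the edit moves only where the capability loss is nearly flat.
    """
    eigvals, eigvecs = np.linalg.eigh(H)
    low = eigvecs[:, eigvals <= tau]   # basis of the low-curvature subspace
    return low @ (low.T @ update)      # orthogonal projection onto that subspace

# Toy example: curvature is high along the first axis, near-zero along the second.
H = np.diag([10.0, 1e-6])
u = np.array([1.0, 1.0])
u_proj = project_low_curvature(u, H)
# u_proj is [0., 1.]: the high-curvature component of the edit is suppressed.
```

At scale this explicit eigendecomposition is intractable, which is what motivates the Kronecker-factored approximation described in the abstract.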
Community
CrispEdit enables continual LLM updates by projecting edits into low-curvature directions of capability loss, so you can apply thousands of knowledge edits without catastrophic forgetting.
It attains strong performance on targeted edits under autoregressive (WILD) evaluation, keeps existing capabilities nearly intact, and runs over 100x faster than popular editors such as AlphaEdit and MEMIT.
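The matrix-free Kronecker projector that makes this fast can be sketched as follows. This is a hedged illustration: the names, the eigenvalue threshold, and the masking rule are assumptions, and the paper's projector may differ in detail. With a K-FAC curvature H ≈ A ⊗ G, eigendecomposing the two small factors lets one mask high-curvature directions of a weight update without ever forming the full (and massive) projection matrix:

```python
import numpy as np

def kfac_low_curvature_project(W, A, G, tau):
    """Matrix-free projection under Kronecker-factored curvature H ≈ A ⊗ G.

    A (input-side) and G (output-side) are small symmetric PSD K-FAC
    factors. The eigenvalues of A ⊗ G are the products lam_A[j]*lam_G[i],
    so high-curvature directions can be masked in the factored eigenbasis
    while only ever touching matrices the size of the layer's factors.
    """
    lam_A, U_A = np.linalg.eigh(A)
    lam_G, U_G = np.linalg.eigh(G)
    W_tilde = U_G.T @ W @ U_A            # rotate update into the Kronecker eigenbasis
    curvature = np.outer(lam_G, lam_A)   # curvature[i, j] = lam_G[i] * lam_A[j]
    W_tilde[curvature > tau] = 0.0       # drop high-curvature components
    return U_G @ W_tilde @ U_A.T         # rotate back to parameter space
```

The cost is dominated by eigendecompositions of the two factors, versus an eigendecomposition of the full Kronecker product for a naive dense projector, which is where the large speedup over dense second-order editing comes from.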
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- Toward Ultra-Long-Horizon Sequential Model Editing (2026)
- Beyond Hard Writes and Rigid Preservation: Soft Recursive Least-Squares for Lifelong LLM Editing (2026)
- MoEEdit: Efficient and Routing-Stable Knowledge Editing for Mixture-of-Experts LLMs (2026)
- Conflict-Resolving and Sharpness-Aware Minimization for Generalized Knowledge Editing with Multiple Updates (2026)
- Spectral Characterization and Mitigation of Sequential Knowledge Editing Collapse (2026)
- Beyond Local Edits: Embedding-Virtualized Knowledge for Broader Evaluation and Preservation of Model Editing (2026)
- Multiplicative Orthogonal Sequential Editing for Language Models (2026)