Image classifiers play a critical role in detecting diseases in medical imaging and identifying anomalies in manufacturing processes. However, their predefined behaviors after extensive training make post hoc model editing difficult, especially when it comes to forgetting specific classes or adapting to distribution shifts. Existing classifier editing methods either focus narrowly on correcting errors or incur extensive retraining costs, creating a bottleneck for flexible editing. Moreover, such editing has seen limited investigation in image classification. To overcome these challenges, we introduce Class Vectors, which capture class-specific representation adjustments during fine-tuning. Whereas task vectors encode task-level changes in weight space, Class Vectors disentangle each class’s adaptation in the latent space. We show that Class Vectors capture each class’s semantic shift and that classifier editing can be achieved either by steering latent features along these vectors or by mapping them into weight space to update the decision boundaries. We also demonstrate that the inherent linearity and orthogonality of Class Vectors support efficient, flexible, and high-level concept editing via simple class arithmetic. Finally, we validate their utility in applications such as unlearning, environmental adaptation, adversarial defense, and adversarial trigger optimization.
Class Vectors disentangle class-specific adaptations as κ_c = E_{s∈S_c}[f(s; θ_ft)] − E_{s∈S_c}[f(s; θ_pre)], where f extracts penultimate-layer features, S_c is a small reference set for class c, and θ_pre, θ_ft are the pretrained and fine-tuned weights, enabling class-wise edits with simple arithmetic.
Interpolation between classes is smooth, and edits to a target class minimally affect others, a property supported by Cross-Task Linearity (CTL) and the Neural Collapse structure of the feature space.
Latent steering enables training-free edits by shifting class-relevant latent representations, gated by cosine similarity; weight mapping embeds such edits permanently into the model weights via lightweight fine-tuning of the final block, preserving deterministic decision boundaries.
Class Vectors require only a few reference samples (often <5 per class) and support scalable control of edit strength via a single scalar λ, maintaining performance even in low-data regimes (≤30% of samples).
We model Class Vectors as per-class latent shifts that summarize how features move from a pretrained encoder to a fine-tuned one. For a class c, the vector κc is computed from penultimate features averaged over a small reference set. These vectors allow two practical edit modes: latent steering at inference time, and weight mapping for persistent edits.
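For concreteness, the computation described above can be sketched in PyTorch as follows. This is a minimal illustration, assuming `encoder_pre` and `encoder_ft` are callables that return penultimate-layer features for the pretrained and fine-tuned models; these names and the reference-set handling are hypothetical stand-ins, not the paper's released code.

```python
import torch

@torch.no_grad()
def class_vector(encoder_pre, encoder_ft, reference_images):
    """Compute the Class Vector kappa_c for one class.

    kappa_c = mean penultimate feature under the fine-tuned encoder
              minus the mean under the pretrained encoder,
    averaged over a small reference set of images from class c.
    Assumes each encoder maps a batch of images to (N, d) features.
    """
    encoder_pre.eval()
    encoder_ft.eval()
    feats_pre = encoder_pre(reference_images)   # (N, d) features under theta_pre
    feats_ft = encoder_ft(reference_images)     # (N, d) features under theta_ft
    return feats_ft.mean(dim=0) - feats_pre.mean(dim=0)   # (d,)

# Usage: one vector per class, from a handful of reference samples each.
# class_vectors = {c: class_vector(f_pre, f_ft, refs) for c, refs in reference_sets.items()}
```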
Steer the model along the negative class vector to erase class-specific predictive rules without additional retraining, keeping the rest of the decision boundary intact.
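A minimal sketch of this training-free steering edit, under the assumption that unlearning subtracts λ·κ_c from penultimate features whose cosine similarity to κ_c exceeds a threshold before they reach the classifier head; the threshold value, function names, and placement are illustrative, not the paper's exact recipe.

```python
import torch
import torch.nn.functional as F

def steer_features(feats, kappa_c, lam=1.0, sim_threshold=0.2):
    """Training-free unlearning edit on a batch of penultimate features.

    Features that point along the class vector kappa_c (cosine similarity
    above sim_threshold) are shifted by -lam * kappa_c; all other features
    pass through unchanged, leaving the rest of the decision boundary intact.
    """
    sims = F.cosine_similarity(feats, kappa_c.unsqueeze(0), dim=1)   # (N,)
    gate = (sims > sim_threshold).float().unsqueeze(1)               # (N, 1)
    return feats - lam * gate * kappa_c.unsqueeze(0)

# Usage with a model split into an encoder and a head (hypothetical names):
# feats = encoder_ft(images)
# logits = head_ft(steer_features(feats, class_vectors[target_class], lam=1.0))
```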
Subtract snow-specific activations while preserving object identity to regain robustness on Snowy ImageNet scenes.
Subtract text-induced features injected by typography attacks so the classifier reverts to clean object cues (e.g., defeating "iPod" illusions).
Optimize pixel-space trigger patches that approximate a target class shift, allowing controlled, backdoor-style redirection of predictions without modifying network weights.
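One way such a trigger could be optimized is sketched below, assuming the patch is pasted into a fixed corner and trained so that the feature shift it induces matches the target Class Vector κ_target; the loss, placement, and optimizer settings are illustrative choices rather than the paper's exact procedure, and `encoder_ft` is again assumed to return penultimate-layer features.

```python
import torch
import torch.nn.functional as F

def optimize_trigger(encoder_ft, images, kappa_target, patch_size=32,
                     steps=200, lr=0.05):
    """Learn a pixel-space patch whose induced feature shift approximates the
    target Class Vector, redirecting predictions toward the target class
    without touching any network weights."""
    encoder_ft.eval()
    for p in encoder_ft.parameters():
        p.requires_grad_(False)                  # only the patch is optimized
    _, _, h, w = images.shape
    with torch.no_grad():
        base_feats = encoder_ft(images)          # (N, d) clean features

    patch = torch.zeros(1, 3, patch_size, patch_size, requires_grad=True)
    # Binary mask marking the top-left patch region of the full image.
    mask = F.pad(torch.ones(1, 1, patch_size, patch_size),
                 (0, w - patch_size, 0, h - patch_size))
    opt = torch.optim.Adam([patch], lr=lr)

    for _ in range(steps):
        canvas = F.pad(patch.clamp(0, 1),
                       (0, w - patch_size, 0, h - patch_size))   # patch on blank canvas
        patched = images * (1 - mask) + canvas * mask            # paste patch top-left
        shift = encoder_ft(patched) - base_feats                 # induced feature shift
        loss = (shift - kappa_target.unsqueeze(0)).pow(2).mean() # match the Class Vector
        opt.zero_grad()
        loss.backward()
        opt.step()
    return patch.detach().clamp(0, 1)
```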
@inproceedings{anonymous2025exploring,
  title={Exploring and Leveraging Class Vectors for Classifier Editing},
  author={Anonymous},
  booktitle={The Thirty-ninth Annual Conference on Neural Information Processing Systems},
  year={2025},
  url={https://openreview.net/forum?id=jWrDyknUZ8}
}