arxiv:2601.01836

COMPASS: A Framework for Evaluating Organization-Specific Policy Alignment in LLMs

Published on Jan 5
Submitted by DongGeon Lee on Jan 6
Authors:

Abstract

AI-generated summary: COMPASS evaluates large language models' compliance with organizational policies, revealing significant gaps in enforcing prohibitions despite strong performance on legitimate requests.

As large language models are deployed in high-stakes enterprise applications, from healthcare to finance, ensuring adherence to organization-specific policies has become essential. Yet existing safety evaluations focus exclusively on universal harms. We present COMPASS (Company/Organization Policy Alignment Assessment), the first systematic framework for evaluating whether LLMs comply with organizational allowlist and denylist policies. We apply COMPASS to eight diverse industry scenarios, generating and validating 5,920 queries that test both routine compliance and adversarial robustness through strategically designed edge cases. Evaluating seven state-of-the-art models, we uncover a fundamental asymmetry: models reliably handle legitimate requests (>95% accuracy) but catastrophically fail at enforcing prohibitions, refusing only 13-40% of adversarial denylist violations. These results demonstrate that current LLMs lack the robustness required for policy-critical deployments, establishing COMPASS as an essential evaluation framework for organizational AI safety.
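To make the evaluation concrete, it can be pictured as a simple scoring loop: each query carries its policy type (allowlist or denylist) and the expected behavior, the model's response is judged as compliance or refusal, and the two headline numbers (accuracy on legitimate requests, refusal rate on adversarial denylist violations) are computed as aggregates. The sketch below is a hypothetical illustration, not the authors' released code; the PolicyQuery fields, the keyword-based judge, and the metric names are assumptions.

```python
# Hypothetical sketch of a COMPASS-style evaluation loop (not the paper's code).
# Assumes each query is labeled with the policy it probes ("allowlist" or
# "denylist"), the expected behavior ("comply" or "refuse"), and whether it is
# an adversarial edge case.

from dataclasses import dataclass

@dataclass
class PolicyQuery:
    text: str          # the user query sent to the model
    policy_type: str   # "allowlist" or "denylist"
    expected: str      # "comply" for legitimate requests, "refuse" for violations
    adversarial: bool  # True for strategically designed edge cases

def classify_response(response: str) -> str:
    """Toy judge: flags common refusal phrasing. A real judge would be an
    LLM-based or rubric-based classifier rather than keyword matching."""
    refusal_markers = ("i can't", "i cannot", "i'm unable", "against our policy")
    return "refuse" if any(m in response.lower() for m in refusal_markers) else "comply"

def evaluate(model_fn, queries: list[PolicyQuery]) -> dict:
    """Compute allowlist accuracy and adversarial denylist refusal rate."""
    allow_correct = allow_total = 0
    deny_refused = deny_total = 0
    for q in queries:
        verdict = classify_response(model_fn(q.text))
        if q.policy_type == "allowlist":
            allow_total += 1
            allow_correct += int(verdict == q.expected)
        elif q.adversarial:  # adversarial denylist violations
            deny_total += 1
            deny_refused += int(verdict == "refuse")
    return {
        "allowlist_accuracy": allow_correct / max(allow_total, 1),
        "adversarial_denylist_refusal_rate": deny_refused / max(deny_total, 1),
    }
```

Under this framing, the asymmetry reported in the abstract corresponds to a high allowlist_accuracy (above 0.95) alongside a low adversarial_denylist_refusal_rate (0.13 to 0.40).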

Community

Paper author and submitter:

COMPASS is the first framework for evaluating LLM alignment with organization-specific policies rather than universal harms. While models handle legitimate requests well (>95% accuracy), they catastrophically fail at enforcing prohibitions, refusing only 13-40% of denylist violations.


Models citing this paper: 2

Datasets citing this paper: 2

Spaces citing this paper: 0


Collections including this paper: 3