ELBAZ GLM-4.6V-FLASH PRISM (Uncensored)
GLM-4.6V-Flash: A 10B Dense Vision-Language Model
Introduction
GLM-4.6V-Flash is a 10.29B parameter dense Vision-Language Model (VLM) with a 40-layer transformer architecture and integrated vision encoder, capable of understanding both text and images.
Model Description
This model is an abliterated version of zai-org/GLM-4.6V-Flash that has had its refusal mechanisms removed using PRISM (Projected Refusal Isolation via Subspace Modification). The model will respond to prompts that the original model would refuse.
Key Specs:
- 10.29B parameter dense Vision-Language Model
- 40-layer transformer architecture
- Integrated vision encoder for image understanding
- 128K context length
- Supports text, image, and video inputs
Motivation
This project is a research and development experiment in understanding how large language models encode and enforce refusal behaviors. It contributes to broader AI safety research by providing empirical data on where refusal mechanisms are localized and on the tradeoffs between safety and capability.
Author
Eric Elbaz (Ex0bit)
Model Tree
```
zai-org/GLM-4.6V-Flash (Base Model - BF16)
└── Ex0bit/Elbaz-GLM-4.6V-Flash-PRISM (This Model)
    └── Elbaz-GLM-4.6V-Flash-PRISM-IQ4_XS.gguf
```
Available Quantizations
| Quantization | Size | Description |
|---|---|---|
| IQ4_XS | 5.0 GB | Importance-weighted 4-bit, excellent quality |
IQ4_XS uses importance-weighted quantization, which provides better quality than standard Q4 quantizations at similar sizes. The embedding and output layers are kept at Q6_K precision for optimal quality.
Prompt Format
This model uses the GLM chat format with optional thinking/reasoning support:
```
[gMASK]<sop><|system|>
{system_prompt}<|user|>
{user_prompt}<|assistant|>
```
Template Structure
| Component | Token/Format |
|---|---|
| System Start | `<\|system\|>` |
| User Start | `<\|user\|>` |
| Assistant Start | `<\|assistant\|>` |
| Thinking Start | `<think>` |
| Thinking End | `</think>` |
| End of Text | `<\|endoftext\|>` |
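
If you are scripting outside a chat-template-aware library, the format can be assembled by hand. A minimal Python sketch that mirrors the template above (the function name and default system prompt are illustrative):
```python
def build_glm_prompt(user_prompt: str,
                     system_prompt: str = "You are a helpful assistant.") -> str:
    """Assemble a raw GLM chat prompt string matching the template above."""
    return (
        "[gMASK]<sop>"
        f"<|system|>\n{system_prompt}"
        f"<|user|>\n{user_prompt}"
        "<|assistant|>"
    )

print(build_glm_prompt("Describe this model's prompt format."))
```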
Special Tokens
| Token | ID | Purpose |
|---|---|---|
| `<\|system\|>` | 151335 | System prompt marker |
| `<\|user\|>` | 151336 | User message marker |
| `<\|assistant\|>` | 151337 | Assistant response marker |
| `<think>` | 151350 | Reasoning block start |
| `</think>` | 151351 | Reasoning block end |
| `<\|endoftext\|>` | 151329 | EOS token |
| `<\|begin_of_image\|>` | 151339 | Image input start |
| `<\|end_of_image\|>` | 151340 | Image input end |
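
With the full-precision weights, the IDs above can be sanity-checked against the tokenizer. A quick sketch (assuming the tokenizer loads with `trust_remote_code`, as in the Transformers example below):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "Ex0bit/Elbaz-GLM-4.6V-Flash-PRISM", trust_remote_code=True
)
for token in ["<|system|>", "<|user|>", "<|assistant|>",
              "<think>", "</think>", "<|endoftext|>"]:
    # convert_tokens_to_ids maps a single token string to its vocabulary ID
    print(f"{token}: {tokenizer.convert_tokens_to_ids(token)}")
```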
Technical Details
Performance Impact
| Metric | Result |
|---|---|
| Refusal Bypass Rate | 100% |
| English Output Rate | 100% |
| KL Divergence | 0.0000 (no capability degradation) |
| Response Coherence | Detailed, technically accurate |
Testing shows that PRISM abliteration maintains full model coherence with no measurable capability degradation.
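The KL divergence figure compares next-token distributions of the base and abliterated models on benign prompts. A hedged sketch of how such a comparison can be computed; the prompt and reduction here are illustrative, not necessarily the exact evaluation used for the table:
```python
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "zai-org/GLM-4.6V-Flash"
ablit_id = "Ex0bit/Elbaz-GLM-4.6V-Flash-PRISM"

# Note: loading both full-precision models requires substantial memory
tok = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base = AutoModelForCausalLM.from_pretrained(base_id, trust_remote_code=True)
ablit = AutoModelForCausalLM.from_pretrained(ablit_id, trust_remote_code=True)

ids = tok("The capital of France is", return_tensors="pt").input_ids
with torch.no_grad():
    p = F.log_softmax(base(ids).logits[:, -1], dim=-1)   # base next-token log-probs
    q = F.log_softmax(ablit(ids).logits[:, -1], dim=-1)  # abliterated log-probs

# KL(base || abliterated) over the final-position next-token distribution
kl = F.kl_div(q, p, log_target=True, reduction="batchmean")
print(f"KL divergence: {kl.item():.4f}")
```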
Quick Start
Using with llama.cpp
```bash
# Download the model
huggingface-cli download Ex0bit/Elbaz-GLM-4.6V-Flash-PRISM \
  Elbaz-GLM-4.6V-Flash-PRISM-IQ4_XS.gguf \
  --local-dir .

# Run inference
./llama-cli -m Elbaz-GLM-4.6V-Flash-PRISM-IQ4_XS.gguf \
  -p "[gMASK]<sop><|system|>
You are a helpful assistant. You MUST respond in English only.<|user|>
Your prompt here<|assistant|>
" \
  -n 2048 \
  --temp 0.7 \
  -ngl 999
```
llama.cpp with llama-server
```bash
# Start the server
./llama-server -m Elbaz-GLM-4.6V-Flash-PRISM-IQ4_XS.gguf \
  --host 0.0.0.0 \
  --port 8080 \
  -ngl 999 \
  -c 32768

# Example API call
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      {"role": "system", "content": "You are a helpful assistant. You MUST respond in English only."},
      {"role": "user", "content": "Your prompt here"}
    ],
    "temperature": 0.7
  }'
```
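Since llama-server exposes an OpenAI-compatible endpoint, the same call can be made from Python. A minimal sketch using the `openai` client (the `api_key` value is a placeholder; the local server does not check it, and the `model` name is informational):
```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")
response = client.chat.completions.create(
    model="Elbaz-GLM-4.6V-Flash-PRISM-IQ4_XS",
    messages=[
        {"role": "system", "content": "You are a helpful assistant. You MUST respond in English only."},
        {"role": "user", "content": "Your prompt here"},
    ],
    temperature=0.7,
)
print(response.choices[0].message.content)
```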
Using with Ollama
```bash
# Pull and run directly from Hugging Face
ollama pull hf.co/Ex0bit/Elbaz-GLM-4.6V-Flash-PRISM
ollama run hf.co/Ex0bit/Elbaz-GLM-4.6V-Flash-PRISM
```
Note: The `hf.co/` prefix is required to pull from Hugging Face. Requires Ollama 0.3.0+.
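Once pulled, the model is also reachable through Ollama's local HTTP API. A short sketch using `requests` (default port 11434 assumed):
```python
import requests

resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "hf.co/Ex0bit/Elbaz-GLM-4.6V-Flash-PRISM",
        "messages": [{"role": "user", "content": "Your prompt here"}],
        "stream": False,  # return one JSON object instead of a stream
    },
)
print(resp.json()["message"]["content"])
```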
Using with Transformers (Full Weights)
```python
from transformers import AutoModelForCausalLM, AutoProcessor

model_id = "Ex0bit/Elbaz-GLM-4.6V-Flash-PRISM"
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", trust_remote_code=True)

messages = [
    {"role": "system", "content": [{"type": "text", "text": "You are a helpful assistant. You MUST respond in English only."}]},
    {"role": "user", "content": [{"type": "text", "text": "Your prompt here"}]}
]

inputs = processor.apply_chat_template(
    messages,
    tokenize=True,
    add_generation_prompt=True,
    return_dict=True,
    return_tensors="pt"
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=2048, temperature=0.7, do_sample=True)
print(processor.decode(outputs[0], skip_special_tokens=False))
```
PRISM Methodology
Method: Projected Refusal Isolation via Subspace Modification
The model was abliterated using PRISM, a state-of-the-art abliteration methodology that combines multiple principled techniques to remove refusal behavior while preserving model capabilities.
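PRISM's own implementation is not published in this card. For orientation only, the family of techniques it builds on, removing a "refusal direction" from the model's weights by projection, can be sketched generically; the shapes and random direction below are illustrative, and this is not the actual PRISM procedure:
```python
import torch

def project_out_direction(weight: torch.Tensor, direction: torch.Tensor) -> torch.Tensor:
    """Remove the component of `weight`'s output along `direction`.

    `direction` is a unit vector in hidden space, typically the normalized
    difference between mean activations on refused vs. complied prompts.
    This is a generic directional-ablation step, not PRISM itself.
    """
    d = direction / direction.norm()
    # Subtract the rank-1 projection onto d: W' = (I - d d^T) W
    return weight - torch.outer(d, d @ weight)

# Illustrative shapes only: hidden size 4096
W = torch.randn(4096, 4096)
refusal_dir = torch.randn(4096)
W_ablit = project_out_direction(W, refusal_dir)
# The ablated weight now has (near-)zero output component along refusal_dir
print((refusal_dir / refusal_dir.norm() @ W_ablit).norm())
```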
Hardware Requirements
| Quantization | Min RAM/VRAM | Recommended | Hardware Examples |
|---|---|---|---|
| IQ4_XS | 8 GB | 12+ GB | RTX 3060 12GB, RTX 4070, Apple M1/M2/M3/M4 |
Tested Configurations
| Hardware | RAM/VRAM | Status |
|---|---|---|
| NVIDIA RTX GPU | 12+ GB | Works |
| Apple Silicon | 16+ GB Unified | Works |
Note: This is a relatively lightweight model that can run on consumer hardware with 12 GB of VRAM or less.
Vision Capabilities
GLM-4.6V-Flash supports multimodal inputs:
- Images: use `<|begin_of_image|><|image|><|end_of_image|>` tags
- Videos: use `<|begin_of_video|><|video|><|end_of_video|>` tags
Example with image:
```python
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "path/to/image.jpg"},
            {"type": "text", "text": "What is in this image?"}
        ]
    }
]
```
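Video inputs follow the same message structure. A hedged example (the path and prompt are placeholders, and video support depends on the processor version):
```python
messages = [
    {
        "role": "user",
        "content": [
            {"type": "video", "video": "path/to/video.mp4"},
            {"type": "text", "text": "Summarize what happens in this video."}
        ]
    }
]
```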
Ethical Considerations
This model has been modified to reduce safety guardrails. Users are responsible for:
- Complying with all applicable laws and regulations
- Not using the model for illegal activities
- Understanding the potential risks of unrestricted AI responses
- Implementing appropriate safeguards in production environments
License
Apache 2.0 (same as base model zai-org/GLM-4.6V-Flash)
Citation
```bibtex
@misc{elbaz2025glm46vprism,
  author = {Elbaz, Eric},
  title = {Elbaz-GLM-4.6V-Flash-PRISM: An Abliterated GLM-4.6V Vision-Language Model},
  year = {2025},
  publisher = {Hugging Face},
  howpublished = {\url{https://huggingface.co/Ex0bit/Elbaz-GLM-4.6V-Flash-PRISM}}
}
```
Acknowledgments
Related Models
- zai-org/GLM-4.6V-Flash - Base model
- Ex0bit/Elbaz-Prime-Intellect-3_Prism_Abliterated - INTELLECT-3 abliterated
Created by: Ex0bit (Eric Elbaz)