Qwen2.5-Coder-0.5B-Instruct — Gensyn Swarm

This repository records participation in the Gensyn Testnet RL Swarm code-generation task and the corresponding weight versions. The base model is Qwen/Qwen2.5-Coder-0.5B-Instruct.

Overview

  • Base model: Qwen2.5-Coder-0.5B-Instruct
  • Task: Code generation (mbpp, code_contests)
  • Hardware: Mac mini (M4, 16GB), Apple MPS
  • Participation: Gensyn Testnet / CodeZero

Training Data

  • deepmind/code_contests
  • google-research-datasets/mbpp
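
Both datasets can be pulled from the Hugging Face Hub for local inspection. Below is a minimal sketch (field names follow the respective dataset cards; adjust if they change):

from datasets import load_dataset

# MBPP: short crowd-sourced Python problems with assert-based tests
mbpp = load_dataset("google-research-datasets/mbpp", split="test")
print(mbpp[0]["text"])       # problem statement
print(mbpp[0]["test_list"])  # test cases

# CodeContests is large, so stream it instead of downloading everything
contests = load_dataset("deepmind/code_contests", split="train", streaming=True)
print(next(iter(contests))["description"][:200])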

Metrics

  • pass@1, pass@k, exact match (to be added as versions are updated; see the reference helper below)
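
For reference, pass@k is typically computed with the unbiased estimator popularized by the HumanEval evaluation: given n generations per problem, of which c pass the tests, the estimate is 1 - C(n-c, k) / C(n, k). A small helper, not part of the swarm code:

import math

def pass_at_k(n: int, c: int, k: int) -> float:
    # Probability that at least one of k samples, drawn from n generations
    # with c correct ones, passes the tests.
    if n - c < k:
        return 1.0
    return 1.0 - math.comb(n - c, k) / math.comb(n, k)

print(pass_at_k(n=200, c=37, k=10))  # example values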

Usage

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "aiyun123/Qwen2.5-Coder-0.5B-Instruct"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto"  # 自动选择设备;在 Mac 可走 MPS
)

prompt = "Write a Python function to check if a number is prime."
inputs = tok(prompt, return_tensors="pt").to(model.device)  # move inputs to the model's device
outputs = model.generate(**inputs, max_new_tokens=256)
print(tok.decode(outputs[0], skip_special_tokens=True))
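
Because this is an Instruct checkpoint, wrapping the prompt in the chat template usually yields cleaner answers. A minimal sketch reusing the objects above, assuming the tokenizer ships the standard Qwen chat template:

messages = [{"role": "user", "content": prompt}]
chat_inputs = tok.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
chat_outputs = model.generate(chat_inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt portion
print(tok.decode(chat_outputs[0][chat_inputs.shape[-1]:], skip_special_tokens=True))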

Inference Notes

  • macOS: setting PYTORCH_ENABLE_MPS_FALLBACK=1 is recommended; set PYTORCH_MPS_HIGH_WATERMARK_RATIO=0.0 if needed
  • dtype: if memory is tight, try torch.float16 or enable device_map="auto" (see the sketch after this list)
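
Putting these notes together, a memory-conscious load on Apple Silicon might look like the sketch below. This is only an illustration: the environment variables must be set before PyTorch initializes MPS, and float16 roughly halves memory versus float32.

import os
os.environ.setdefault("PYTORCH_ENABLE_MPS_FALLBACK", "1")          # fall back to CPU for unsupported ops
os.environ.setdefault("PYTORCH_MPS_HIGH_WATERMARK_RATIO", "0.0")   # lift the MPS memory cap

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "aiyun123/Qwen2.5-Coder-0.5B-Instruct"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # halves memory vs. float32
    device_map="auto",          # picks MPS when available
)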

Limitations

  • A 0.5B model has limited capability on complex algorithms and long code generation; results should be compared objectively against the evaluation tasks.

Versioning

  • swarm-YYYY-MM-DD (daily / per-round version tag; updated with each subsequent push)

License

Same as the upstream base model's license; see the official Qwen2.5 license notes and links.
