Token Classification · Transformers · PyTorch · English · bert · sequence-tagger-model · pubmedbert · uncased · radiology · biomedical · bdf-toolbox
Instructions to use StanfordAIMI/stanford-deidentifier-base with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use StanfordAIMI/stanford-deidentifier-base with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("token-classification", model="StanfordAIMI/stanford-deidentifier-base")
```

```python
# Load the model directly
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("StanfordAIMI/stanford-deidentifier-base")
model = AutoModelForTokenClassification.from_pretrained("StanfordAIMI/stanford-deidentifier-base")
```

- Inference
- Notebooks
- Google Colab
- Kaggle
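The pipeline above returns a list of detected entity spans rather than a redacted document. As a minimal sketch of how those spans could be turned into de-identified text: the helper below assumes pipeline output in the shape produced with `aggregation_strategy="simple"` (dicts with `start`/`end` character offsets), and the example entities are made up for illustration, not real model output.

```python
# Sketch: mask entity spans found by a token-classification pipeline.
# Assumes each entity dict carries character offsets "start" and "end",
# as returned by Transformers pipelines with aggregation_strategy="simple".

def deidentify(text, entities, mask="[REDACTED]"):
    """Replace each detected entity span in `text` with `mask`."""
    out = []
    cursor = 0
    for ent in sorted(entities, key=lambda e: e["start"]):
        out.append(text[cursor:ent["start"]])  # keep text before the span
        out.append(mask)                       # hide the span itself
        cursor = ent["end"]
    out.append(text[cursor:])                  # keep the remainder
    return "".join(out)

report = "Patient John Doe was seen on 01/02/2023."
# Hypothetical output of pipe(report, aggregation_strategy="simple")
entities = [
    {"entity_group": "NAME", "start": 8, "end": 16},
    {"entity_group": "DATE", "start": 29, "end": 39},
]
print(deidentify(report, entities))
# → Patient [REDACTED] was seen on [REDACTED].
```

In practice the `entities` list would come from calling the pipeline on the report text; the masking step itself is independent of the model.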
Add TF weights
#1
opened by Rocketknight1 (HF Staff)
Model converted by the Transformers `pt_to_tf` CLI. All converted model outputs and hidden layers were validated against their PyTorch counterparts.
Maximum crossload output difference=1.192e-05; Maximum crossload hidden layer difference=1.323e-05;
Maximum conversion output difference=1.192e-05; Maximum conversion hidden layer difference=1.323e-05;
Hi @pchambon, this is an automated TF conversion of the model weights! We've checked that the outputs are equivalent to the PT version up to standard float32 error (~1e-5).