Tags: Text Classification · Transformers · PyTorch · bert · protein language model · biology · text-embeddings-inference
Instructions to use GleghornLab/SYNTERACT with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use GleghornLab/SYNTERACT with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="GleghornLab/SYNTERACT")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("GleghornLab/SYNTERACT")
model = AutoModelForSequenceClassification.from_pretrained("GleghornLab/SYNTERACT")
```

- Notebooks
- Google Colab
- Kaggle
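Beyond the loading snippets above, a worked example can show how the sequence-classification head's raw logits are turned into an interaction probability. The sketch below is hedged: the pair-input convention (passing two amino-acid sequences together so the tokenizer inserts its separator token between them) and the `score_pair` helper are assumptions for illustration, not confirmed by this page; consult the model card for SYNTERACT's exact input format.

```python
# Hedged sketch for scoring one protein pair with SYNTERACT.
# ASSUMPTIONS (not confirmed by this page): the model is a two-class
# sequence classifier, and the tokenizer accepts a (seq_a, seq_b) pair.
import math


def positive_probability(logit_neg: float, logit_pos: float) -> float:
    """Two-class softmax; returns the probability of the positive class."""
    m = max(logit_neg, logit_pos)  # subtract the max for numerical stability
    e_neg = math.exp(logit_neg - m)
    e_pos = math.exp(logit_pos - m)
    return e_pos / (e_neg + e_pos)


def score_pair(seq_a: str, seq_b: str) -> float:
    """Load SYNTERACT and score one pair (downloads ~1.7 GB on first use)."""
    import torch
    from transformers import AutoTokenizer, AutoModelForSequenceClassification

    tokenizer = AutoTokenizer.from_pretrained("GleghornLab/SYNTERACT")
    model = AutoModelForSequenceClassification.from_pretrained(
        "GleghornLab/SYNTERACT"
    )
    model.eval()
    inputs = tokenizer(seq_a, seq_b, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits[0]  # shape: (num_labels,)
    return positive_probability(logits[0].item(), logits[1].item())
```

The heavy imports live inside `score_pair` so the pure softmax helper can be used (and tested) without downloading the checkpoint.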
- Xet hash: 9ea4a0a0935203552373d4dfecda88695100e743aa4f42b822c63ac2bc8ccc61
- Size of remote file: 1.68 GB
- SHA256: bb989947509946cacf0566453548507cc616735a3827920d07519f0b61912aac
Xet efficiently stores large files inside Git by splitting them into unique chunks, accelerating uploads and downloads.