Instructions for using onlplab/alephbert-base with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use onlplab/alephbert-base with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("fill-mask", model="onlplab/alephbert-base")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("onlplab/alephbert-base")
model = AutoModelForMaskedLM.from_pretrained("onlplab/alephbert-base")
```

- Inference
- Notebooks
- Google Colab
- Kaggle
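The fill-mask pipeline above can be exercised directly on Hebrew text. A minimal sketch, assuming the standard BERT-style `[MASK]` token; the example sentence and `top_k` value are illustrative, not from the model card:

```python
from transformers import pipeline

# Load the fill-mask pipeline for AlephBERT (downloads the model on first use)
pipe = pipeline("fill-mask", model="onlplab/alephbert-base")

# Illustrative Hebrew input with one masked token
results = pipe("עברית היא שפה [MASK].", top_k=3)

# Each prediction carries the candidate token string and a confidence score
for r in results:
    print(r["token_str"], round(r["score"], 3))
```

Each result also includes the fully filled `sequence`, which is convenient when post-processing predictions downstream.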
- Xet hash: f241c323a9989711036e2e7f4c810c660ba41518e2a2e08ed49ed8aca3ee8dac
- Size of remote file: 2.1 kB
- SHA256: 9d8a35bf76922964d15f5c793398da780500cd65ef652c7e9b38bf4c2abaca23
Xet efficiently stores large files inside Git by splitting them into unique chunks, which accelerates uploads and downloads.