Instructions to use KomeijiForce/bart-large-emojilm with libraries, inference providers, notebooks, and local apps. Follow the sections below to get started.
- Libraries
- Transformers
How to use KomeijiForce/bart-large-emojilm with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text2text-generation", model="KomeijiForce/bart-large-emojilm")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("KomeijiForce/bart-large-emojilm")
model = AutoModelForSeq2SeqLM.from_pretrained("KomeijiForce/bart-large-emojilm")
```
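As a quick usage sketch (the emoji in the comment is illustrative, not a guaranteed result; the tokenizer separates emoji tokens with spaces, which are stripped here):

```python
# Translate a sentence into an emoji sequence with the pipeline above.
result = pipe("I love pizza")
emojis = result[0]["generated_text"].replace(" ", "")  # drop inter-token spaces
print(emojis)  # e.g. something like "😋🍕"
```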
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use KomeijiForce/bart-large-emojilm with vLLM:
Install from pip and serve model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "KomeijiForce/bart-large-emojilm"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "KomeijiForce/bart-large-emojilm",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
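The completions endpoint can also be called from Python. A minimal sketch using the `openai` client library, assuming the local server started above (the API key is a placeholder, since a local vLLM server does not require one by default):

```python
from openai import OpenAI

# Point the OpenAI client at the local vLLM server (OpenAI-compatible API).
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

completion = client.completions.create(
    model="KomeijiForce/bart-large-emojilm",
    prompt="I love pizza",
    max_tokens=32,
    temperature=0.5,
)
print(completion.choices[0].text)
```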
- SGLang
How to use KomeijiForce/bart-large-emojilm with SGLang:
Install from pip and serve model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "KomeijiForce/bart-large-emojilm" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "KomeijiForce/bart-large-emojilm",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Use Docker images

```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
  --model-path "KomeijiForce/bart-large-emojilm" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "KomeijiForce/bart-large-emojilm",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
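Like vLLM, SGLang exposes an OpenAI-compatible completions endpoint. A minimal sketch calling it with `requests`, assuming the server above is listening on port 30000:

```python
import requests

# Same payload as the curl example above, sent from Python.
resp = requests.post(
    "http://localhost:30000/v1/completions",
    json={
        "model": "KomeijiForce/bart-large-emojilm",
        "prompt": "I love pizza",
        "max_tokens": 32,
        "temperature": 0.5,
    },
)
print(resp.json()["choices"][0]["text"])
```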
- Docker Model Runner
How to use KomeijiForce/bart-large-emojilm with Docker Model Runner:
```shell
docker model run hf.co/KomeijiForce/bart-large-emojilm
```
EmojiLM
This is a BART model pre-trained on the Text2Emoji dataset to translate sentences into sequences of emojis.
For instance, "I love pizza" will be translated into "😋🍕".
An example implementation for translation:
```python
from transformers import BartTokenizer, BartForConditionalGeneration

path = "KomeijiForce/bart-large-emojilm"
tokenizer = BartTokenizer.from_pretrained(path)
generator = BartForConditionalGeneration.from_pretrained(path)

def translate(sentence, **argv):
    # Encode the sentence, generate emoji token ids, then decode and
    # strip the spaces the tokenizer inserts between emoji tokens.
    inputs = tokenizer(sentence, return_tensors="pt")
    generated_ids = generator.generate(inputs["input_ids"], **argv)
    decoded = tokenizer.decode(generated_ids[0], skip_special_tokens=True).replace(" ", "")
    return decoded

sentence = "I love the weather in Alaska!"
decoded = translate(sentence, num_beams=4, do_sample=True, max_length=100)
print(decoded)
```
You will probably get some output like "❄️🏔️😍".
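Because `do_sample=True` is set, the emoji sequence varies between runs. For reproducible output, a minimal tweak is to disable sampling and rely on beam search alone, reusing the `translate` helper above:

```python
# Deterministic decoding: pure beam search, no sampling.
decoded = translate(sentence, num_beams=4, do_sample=False, max_length=100)
print(decoded)
```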
If you find this model & dataset resource useful, please consider citing our paper:
```bibtex
@article{DBLP:journals/corr/abs-2311-01751,
  author       = {Letian Peng and
                  Zilong Wang and
                  Hang Liu and
                  Zihan Wang and
                  Jingbo Shang},
  title        = {EmojiLM: Modeling the New Emoji Language},
  journal      = {CoRR},
  volume       = {abs/2311.01751},
  year         = {2023},
  url          = {https://doi.org/10.48550/arXiv.2311.01751},
  doi          = {10.48550/ARXIV.2311.01751},
  eprinttype   = {arXiv},
  eprint       = {2311.01751},
  timestamp    = {Tue, 07 Nov 2023 18:17:14 +0100},
  biburl       = {https://dblp.org/rec/journals/corr/abs-2311-01751.bib},
  bibsource    = {dblp computer science bibliography, https://dblp.org}
}
```