Tags: Summarization · Transformers · PyTorch · TensorFlow · JAX · Rust · Safetensors · English · bart · text2text-generation · Eval Results (legacy)
Instructions to use facebook/bart-large-cnn with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use facebook/bart-large-cnn with Transformers:
```python
# Use a pipeline as a high-level helper
# Warning: Pipeline type "summarization" is no longer supported in transformers v5.
# You must load the model directly (see below) or downgrade to v4.x with:
#   pip install "transformers<5.0.0"
from transformers import pipeline

pipe = pipeline("summarization", model="facebook/bart-large-cnn")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large-cnn")
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-large-cnn")
```

- Inference
- Notebooks
- Google Colab
- Kaggle
max length of the model
#70
by eqemen - opened
In case you need to know: the max length of the model is 5104 characters.
Has it changed recently? It was 1024 tokens and not 5104 tokens.
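The limit can be checked from the model's own configuration instead of counting characters. A minimal check, assuming `transformers` is installed and the Hub files are reachable (only the config and tokenizer files are downloaded, not the model weights):

```python
from transformers import AutoConfig, AutoTokenizer

# The encoder's positional-embedding table fixes the hard input limit, in tokens.
config = AutoConfig.from_pretrained("facebook/bart-large-cnn")
print(config.max_position_embeddings)  # 1024

# The tokenizer advertises the same limit and can truncate inputs to it.
tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large-cnn")
print(tokenizer.model_max_length)
```

Characters map only loosely to tokens, which is why a cutoff around 5104 characters can look like the limit for typical English text.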
I had to create chunks and do recursive summarization to process long text.
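A minimal sketch of that chunk-and-recurse approach. The `tokenizer` and `summarize` arguments are assumptions (e.g. the Transformers tokenizer and pipeline above), and the helper names are illustrative, not from the model card:

```python
# Chunked ("recursive") summarization: split the token ids into chunks that
# fit the model, summarize each chunk, then summarize the joined summaries.
# `summarize` must return shorter text than its input, or the recursion
# will not terminate.

def chunk_tokens(token_ids, chunk_size=1024, overlap=64):
    """Split a list of token ids into overlapping, model-sized chunks."""
    overlap = min(overlap, chunk_size // 2)  # keep the stride positive
    step = chunk_size - overlap
    return [token_ids[i:i + chunk_size] for i in range(0, len(token_ids), step)]

def recursive_summarize(text, tokenizer, summarize, chunk_size=1024):
    token_ids = tokenizer.encode(text)
    if len(token_ids) <= chunk_size:
        return summarize(text)
    partial = [summarize(tokenizer.decode(ids))
               for ids in chunk_tokens(token_ids, chunk_size)]
    return recursive_summarize(" ".join(partial), tokenizer, summarize, chunk_size)
```

The small overlap between chunks keeps sentences that straddle a chunk boundary from being split mid-thought.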
Don't know. When I fed the text in directly, it had some issues with generating.
Some of them were solved when I reduced the input to 5104 characters; other problems went away when I completely discarded all the punctuation.
But all of these were resolved when I used the tokenizer instead of the pipeline.
By importing the tokenizer, I can process all my text files without removing any punctuation.
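The tokenizer-based fix described here presumably comes down to explicit truncation: the tokenizer cuts the input to the model's 1024-token limit, so punctuation never needs stripping. A minimal sketch, assuming `transformers` is installed (only the tokenizer files are downloaded):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large-cnn")

long_text = "Punctuation, quotes, dashes -- none of it needs stripping. " * 300
inputs = tokenizer(long_text, max_length=1024, truncation=True)
print(len(inputs["input_ids"]))  # at most 1024: the input now fits the model

# With return_tensors="pt", the same call yields tensors that can be passed
# straight to AutoModelForSeq2SeqLM.from_pretrained(...).generate(**inputs).
```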
This comment has been hidden
eqemen changed discussion status to closed
god, why can't I delete my comments? ☹️
eqemen changed discussion status to open
what is the min_length of the model?