Instructions for using hiig-ai-lab/simba-v01b with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use hiig-ai-lab/simba-v01b with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="hiig-ai-lab/simba-v01b")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("hiig-ai-lab/simba-v01b")
model = AutoModelForCausalLM.from_pretrained("hiig-ai-lab/simba-v01b")

messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use hiig-ai-lab/simba-v01b with vLLM:
Install from pip and serve the model:
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "hiig-ai-lab/simba-v01b"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "hiig-ai-lab/simba-v01b",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```

Use Docker
```shell
docker model run hf.co/hiig-ai-lab/simba-v01b
```
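The curl request above can also be issued from Python. A minimal sketch, assuming a vLLM server is already running on localhost:8000; `build_chat_payload` is a hypothetical helper for illustration, not part of vLLM:

```python
import json

# Hypothetical helper: build the JSON body for the OpenAI-compatible
# /v1/chat/completions endpoint that vLLM serves.
def build_chat_payload(model: str, user_message: str) -> str:
    return json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    })

payload = build_chat_payload("hiig-ai-lab/simba-v01b",
                             "What is the capital of France?")

# With a running server, the payload can be POSTed via the standard
# library (commented out here, since it needs a live endpoint):
# import urllib.request
# req = urllib.request.Request(
#     "http://localhost:8000/v1/chat/completions",
#     data=payload.encode("utf-8"),
#     headers={"Content-Type": "application/json"},
# )
# print(urllib.request.urlopen(req).read().decode("utf-8"))
```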
- SGLang
How to use hiig-ai-lab/simba-v01b with SGLang:
Install from pip and serve the model:
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "hiig-ai-lab/simba-v01b" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "hiig-ai-lab/simba-v01b",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```

Use Docker images
```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "hiig-ai-lab/simba-v01b" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "hiig-ai-lab/simba-v01b",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```

- Docker Model Runner
How to use hiig-ai-lab/simba-v01b with Docker Model Runner:
```shell
docker model run hf.co/hiig-ai-lab/simba-v01b
```
Model Card for Model ID
We fine-tuned jphme/em_german_leo_mistral on a set of around 2,000 newspaper articles that had been simplified by the Austrian Press Agency. Our aim was to build a model that can simplify German-language text.
Model Details
Model Description
- Developed by: Members of the Public Interest AI research group, HIIG Berlin
- Model type: simplification model, text generation
- Language(s) (NLP): German
- License: Apache 2.0
- Finetuned from model: jphme/em_german_leo_mistral
Model Sources
- Repository: https://github.com/fhewett/simba
- Project website: https://publicinterest.ai/tool/simba
Uses
Direct Use
This model works best for simplifying German-language newspaper articles (news items, not commentaries or editorials). It may work for other types of texts.
Downstream Use
We have fine-tuned the model using only newspaper articles and have not yet performed extensive out-of-domain testing, but we believe its capabilities could be improved by fine-tuning on more diverse data. Contact us if you have a dataset that you think could work (parallel texts, standard German and simplified German).
Bias, Risks, and Limitations
As with most text generation models, the model sometimes produces information that is incorrect.
Recommendations
Please manually check that the output text corresponds to the input text, as factual inconsistencies may have been introduced.
How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
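Until an official snippet is provided, here is a minimal sketch based on the Transformers pipeline example above. The instruction wording ("Vereinfache den folgenden Text:") and the `build_simplify_messages` helper are assumptions for illustration; the model card does not specify the expected prompt format:

```python
# Hypothetical helper (not from the model card): wrap a German source
# text in the chat format used by the pipeline example above. The
# German instruction wording is an assumption.
def build_simplify_messages(text: str) -> list[dict]:
    return [{"role": "user",
             "content": f"Vereinfache den folgenden Text: {text}"}]

messages = build_simplify_messages("Die Novelle tritt am 1. Jänner in Kraft.")

# With transformers installed, pass the messages to a text-generation
# pipeline (commented out here to avoid the large model download):
# from transformers import pipeline
# pipe = pipeline("text-generation", model="hiig-ai-lab/simba-v01b")
# print(pipe(messages))
```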
Training Details
Training Data
A sample of the data used to train our model can be found here.
Training Hyperparameters
- Training regime: [More Information Needed]
Evaluation
Summary
For now, we have manually checked the model's performance on a small sample of texts. Whilst it seems to produce good summaries of all texts, it only seems to simplify newspaper articles (i.e. texts similar to our training data). We have not yet applied any large-scale, metrics-based evaluation.
Model Card Contact
simba -at- hiig.de