entfane committed
Commit 6612bc4 · verified · 1 parent: 57e14ec

Update README.md

Files changed (1): README.md (+25 -1)
README.md CHANGED
@@ -6,4 +6,28 @@ language:
  base_model:
  - mistralai/Mistral-7B-v0.3
  pipeline_tag: text-generation
- ---
+ ---
+
+ <img src="https://huggingface.co/entfane/math-virtuoso-7B/resolve/main/math-virtuoso.png" width="400" height="400"/>
+
+ # Math Virtuoso 7B
+
+ This model is a math-instruction fine-tuned version of Mistral 7B v0.3.
+
+ ### Inference
+
+ ```python
+ # Install dependencies first: pip install transformers accelerate
+ from transformers import AutoTokenizer, AutoModelForCausalLM
+
+ model_name = "entfane/math-virtuoso-7B"
+ tokenizer = AutoTokenizer.from_pretrained(model_name)
+ model = AutoModelForCausalLM.from_pretrained(model_name)
+
+ messages = [
+     {"role": "user", "content": "What's the derivative of 2x^2?"}
+ ]
+
+ # Render the chat template to a prompt string, then tokenize it.
+ prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
+ encoded_input = tokenizer(prompt, return_tensors="pt").to(model.device)
+
+ output = model.generate(**encoded_input, max_new_tokens=1024)
+ print(tokenizer.decode(output[0], skip_special_tokens=False))
+ ```
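
The snippet above loads the weights at full precision on the default device, which is slow for a 7B model on CPU. Below is a minimal sketch of the same inference flow that loads the checkpoint in bfloat16 and lets accelerate place it on an available GPU; the `torch_dtype` and `device_map` arguments are standard `from_pretrained` options (a CUDA device is assumed), not anything specific to this model.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "entfane/math-virtuoso-7B"
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Assumption: a CUDA GPU is available. device_map="auto" (via accelerate)
# places the weights automatically, and bfloat16 halves memory use.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [{"role": "user", "content": "What's the derivative of 2x^2?"}]

# apply_chat_template can tokenize and return tensors in one step
# instead of first rendering the prompt to a string.
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

output = model.generate(inputs, max_new_tokens=1024)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Tokenizing through `apply_chat_template` should produce the same prompt as the two-step version in the README; the only behavioral differences here are the reduced-precision weights and GPU placement.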