Commit c64ece4 · Parent(s): 4e3eb8c
Update README.md

README.md CHANGED
@@ -5,18 +5,18 @@ language:
 pipeline_tag: text-generation
 ---
 
-# quantumaikr/quantum-
+# quantumaikr/quantum-v0.01
 
 ## Usage
 
-Start chatting with `quantumaikr/quantum-
+Start chatting with `quantumaikr/quantum-v0.01` using the following code snippet:
 
 ```python
 import torch
 from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
 
-tokenizer = AutoTokenizer.from_pretrained("quantumaikr/quantum-
-model = AutoModelForCausalLM.from_pretrained("quantumaikr/quantum-
+tokenizer = AutoTokenizer.from_pretrained("quantumaikr/quantum-v0.01")
+model = AutoModelForCausalLM.from_pretrained("quantumaikr/quantum-v0.01", torch_dtype=torch.float16, device_map="auto")
 
 system_prompt = "You are QuantumLM, an AI that follows instructions extremely well. Help as much as you can. Remember, be safe, and don't do anything illegal."
 
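The README snippet in this diff is cut off after `system_prompt` is defined, before it shows how the prompt is passed to the model. A minimal sketch of one way the pieces could fit together, assuming an "### User:" / "### Assistant:" style template — the template, the `build_prompt` helper, and the sample user message are all assumptions for illustration, not taken from the model card:

```python
def build_prompt(system_prompt: str, user_message: str) -> str:
    # Hypothetical prompt template: the original README is truncated before it
    # shows the expected chat format, so this layout is an assumption.
    return f"{system_prompt}\n\n### User:\n{user_message}\n\n### Assistant:\n"

prompt = build_prompt(
    "You are QuantumLM, an AI that follows instructions extremely well. "
    "Help as much as you can. Remember, be safe, and don't do anything illegal.",
    "Hello, who are you?",
)

# With the tokenizer and model loaded as in the snippet above, the `pipeline`
# imported there could then generate a reply from this prompt, e.g.:
#   pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)
#   print(pipe(prompt, max_new_tokens=128)[0]["generated_text"])
```

The generation call is left commented out because it requires downloading the model weights; only the prompt construction runs stand-alone.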