---
license: apache-2.0
---
# function_calling-phi-3-mini-4k_lora_model
function_calling-phi-3-mini-4k_lora_model is a LoRA model obtained by supervised fine-tuning (SFT) of microsoft/Phi-3-mini-4k-instruct on the Inishds/function_calling dataset.

This model was made with [Phinetune]().
## Process
- Learning Rate: 1.41e-05
- Maximum Sequence Length: 4096
- Dataset: Inishds/function_calling
- Split: train
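
The Phinetune training code itself is not published, but a run with these hyperparameters could look roughly like the following sketch, using TRL's `SFTTrainer` with a PEFT LoRA config. The learning rate, sequence length, dataset, and split come from the list above; the LoRA rank/alpha, `dataset_text_field`, and output directory are illustrative assumptions, and exact argument names can vary across TRL versions.

```python
# Hypothetical reproduction sketch (not the actual Phinetune code):
# SFT with a LoRA adapter via TRL + PEFT, using the hyperparameters above.
from datasets import load_dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("Inishds/function_calling", split="train")

peft_config = LoraConfig(
    r=16,               # assumed LoRA rank (not documented for this model)
    lora_alpha=32,      # assumed scaling factor
    task_type="CAUSAL_LM",
)

training_args = SFTConfig(
    output_dir="function_calling-phi-3-mini-4k_lora_model",  # assumed
    learning_rate=1.41e-5,      # from the list above
    max_seq_length=4096,        # from the list above
    dataset_text_field="text",  # assumed column name in the dataset
)

trainer = SFTTrainer(
    model="microsoft/Phi-3-mini-4k-instruct",
    train_dataset=dataset,
    args=training_args,
    peft_config=peft_config,
)
trainer.train()
```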
## 💻 Usage
```python
# Install the dependency first: pip install -qU transformers
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline

model_name = "Inishds/function_calling-phi-3-mini-4k_lora_model"

# Load the tokenizer and model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Build a text-generation pipeline
generator = pipeline("text-generation", model=model, tokenizer=tokenizer)

# Example prompt
prompt = "Your example prompt here"

# Generate a response
outputs = generator(prompt, max_length=50, num_return_sequences=1)
print(outputs[0]["generated_text"])
```
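
Because the model is tuned for function calling, a more representative prompt includes tool definitions. The exact schema the model expects depends on the Inishds/function_calling training data and is not documented here, so the `get_weather` tool below is purely hypothetical; the sketch reuses the `tokenizer` and `generator` from the block above and renders the conversation with the model's chat template via `apply_chat_template`.

```python
# Illustrative function-calling prompt; the actual tool-definition format
# the model expects depends on its training data and may differ.
import json

tools = [{
    "name": "get_weather",  # hypothetical tool
    "description": "Get the current weather for a city",
    "parameters": {"city": {"type": "string"}},
}]

messages = [
    {"role": "system", "content": f"You may call these tools: {json.dumps(tools)}"},
    {"role": "user", "content": "What's the weather in Paris?"},
]

# Render the conversation with the chat template before generating
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = generator(prompt, max_new_tokens=128, num_return_sequences=1)
print(outputs[0]["generated_text"])
```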