srujanamadiraju committed (verified) · Commit 6ac2667 · Parent: d4397d3

Update README.md

Files changed (1): README.md (+27 −5)
README.md CHANGED
@@ -23,13 +23,13 @@ tags:



- - **Developed by:** [More Information Needed]
+ - **Developed by:** Srujana Madiraju
  - **Funded by [optional]:** [More Information Needed]
  - **Shared by [optional]:** [More Information Needed]
- - **Model type:** [More Information Needed]
+ - **Model type:** Gemma 2B fine-tuned on a natural-language-to-SQL dataset
- - **Language(s) (NLP):** [More Information Needed]
+ - **Language(s) (NLP):** English (input), SQL (output)
  - **License:** [More Information Needed]
- - **Finetuned from model [optional]:** [More Information Needed]
+ - **Finetuned from model [optional]:** google/gemma-2b

  ### Model Sources [optional]

@@ -41,7 +41,7 @@ tags:

  ## Uses

- <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
+ Use this fine-tuned model to convert natural language into SQL queries in any software application, according to your use case.

  ### Direct Use

@@ -77,6 +77,30 @@ Users (both direct and downstream) should be made aware of the risks, biases and

  Use the code below to get started with the model.

+ ```python
+ from transformers import AutoTokenizer, AutoModelForCausalLM
+ from peft import PeftModel
+ import torch
+
+ # Set the base model and the LoRA adapter repo
+ base_model_id = "google/gemma-2b"
+ adapter_repo = "srujanamadiraju/nl-sql-gemma2b"
+
+ # Load the tokenizer (from the adapter repo)
+ tokenizer = AutoTokenizer.from_pretrained(adapter_repo)
+
+ # Load the base model (the same one used during fine-tuning)
+ base_model = AutoModelForCausalLM.from_pretrained(
+     base_model_id,
+     torch_dtype=torch.float16,
+     device_map="auto",
+ )
+
+ # Load the LoRA adapter on top of the base model
+ model = PeftModel.from_pretrained(base_model, adapter_repo)
+ model.eval()
+ ```
+
  [More Information Needed]

  ## Training Details
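
The loading snippet in this change gives you a `model` and `tokenizer` but no prompt format. The card does not document the template used during fine-tuning, so the helpers below are only a sketch under that assumption: the `### Schema:` / `### Question:` / `### SQL:` markers are hypothetical and should be adjusted to match the actual training data.

```python
# Hypothetical prompt helpers for the NL-to-SQL model loaded above.
# The template markers are assumptions, not the documented training format.

def build_prompt(question: str, schema: str) -> str:
    """Compose a single prompt string from a table schema and a question."""
    return (
        f"### Schema:\n{schema}\n"
        f"### Question:\n{question}\n"
        f"### SQL:\n"
    )

def extract_sql(generated: str) -> str:
    """Keep only the text after the final '### SQL:' marker, if present."""
    marker = "### SQL:"
    return generated.rsplit(marker, 1)[-1].strip()

# Usage with the model/tokenizer from the snippet above (not run here):
# prompt = build_prompt(
#     "List all customers from Texas.",
#     "CREATE TABLE customers (id INT, name TEXT, state TEXT)",
# )
# inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
# output = model.generate(**inputs, max_new_tokens=128, do_sample=False)
# sql = extract_sql(tokenizer.decode(output[0], skip_special_tokens=True))
```

If inference latency matters, the LoRA adapter can also be folded into the base weights with `model.merge_and_unload()` after loading, which removes the PEFT indirection at generation time.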