Request: DOI

#31
by Najin06 - opened

Please let me download it and run it locally. For experimentation purposes.

Google org
edited Oct 6

Hi @Najin06 ,

That's great! Here are the steps for downloading and running this model on your local machine.

Please follow the step-by-step instructions below:

  1. Set up the environment:
    First, make sure you have Python (>=3.8) installed, along with pip. Then, set up a virtual environment.
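For example, on Linux/macOS this can be done as follows (the directory name `gemma-env` is just a placeholder; pick any name you like):

```shell
# Create an isolated virtual environment for the project
python3 -m venv gemma-env

# Activate it (on Windows, use: gemma-env\Scripts\activate)
source gemma-env/bin/activate

# The "python" and "pip" commands now refer to the environment
python --version
```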

  2. Install the required libraries:
    Install Hugging Face Transformers and PyTorch (or TensorFlow).
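With the virtual environment active, a typical install looks like this (swap `torch` for `tensorflow` if you prefer that backend):

```shell
# Install the Transformers library and the PyTorch backend
pip install --upgrade transformers torch

# Optional: accelerate helps with device placement on GPUs
pip install accelerate
```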

  3. Download the model:
    Use the Hugging Face transformers library to load the model:

    from transformers import AutoModelForCausalLM, AutoTokenizer

    # Gemma models are gated on the Hub: accept the license on the model
    # page and authenticate (e.g. `huggingface-cli login`) before running this
    model_name = "google/gemma-3-1b-it"

    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)

This will automatically download the model weights and tokenizer to your local machine.

  4. Run a simple inference:

    input_text = "What is the capital of France?"

    # Tokenize the prompt, generate up to 50 new tokens, and decode the result
    inputs = tokenizer(input_text, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=50)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))

For experimentation, consider wrapping inference in a `torch.no_grad()` context so PyTorch skips gradient tracking and saves memory.
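As a minimal sketch of that pattern (using a toy tensor rather than the full model), wrapping computation in `torch.no_grad()` stops PyTorch from recording the autograd graph:

```python
import torch

x = torch.ones(3, requires_grad=True)

# Inside no_grad(), no computation graph is recorded, so the intermediate
# state needed only for backpropagation is not kept in memory
with torch.no_grad():
    y = x * 2

print(y.requires_grad)  # False: the result is detached from autograd

# The same pattern applies to generation:
#   with torch.no_grad():
#       outputs = model.generate(**inputs, max_new_tokens=50)
```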

Kindly follow these steps and let us know if you have any concerns; we will assist you.

Thank you.
