---
datasets:
- bigcode/starcoderdata
language:
- code
tags:
- causal-lm
license: cc-by-sa-4.0
---
# `StableCode-Completion-Alpha-3B`

## Model Description

`StableCode-Completion-Alpha-3B` is a 3 billion parameter decoder-only code completion model pre-trained on a diverse set of programming languages that topped the Stack Overflow developer survey.

## Usage
The model is intended to perform single- and multi-line code completion from a long context window of up to 16k tokens.
Get started generating code with `StableCode-Completion-Alpha-3B` by using the following code snippet:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the tokenizer and model from the same checkpoint
tokenizer = AutoTokenizer.from_pretrained("stabilityai/stablecode-completion-alpha-3b")
model = AutoModelForCausalLM.from_pretrained(
    "stabilityai/stablecode-completion-alpha-3b",
    trust_remote_code=True,
    torch_dtype="auto",
)
model.cuda()

# Complete a code prompt, sampling at low temperature
inputs = tokenizer("import torch\nimport torch.nn as nn", return_tensors="pt").to("cuda")
tokens = model.generate(
    **inputs,
    max_new_tokens=48,
    temperature=0.2,
    do_sample=True,
)
print(tokenizer.decode(tokens[0], skip_special_tokens=True))
```

## Model Details

* **Developed by**: Code.AI Team @ [Stability AI](https://stability.ai/)
* **Model type**: `StableCode-Completion-Alpha-3B` models are auto-regressive language models based on the transformer decoder architecture.
* **Language(s)**: Code
* **Library**: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox)
* **License**: Model checkpoints are licensed under the Creative Commons license ([CC BY-SA-4.0](https://creativecommons.org/licenses/by-sa/4.0/)). Under this license, you must give [credit](https://creativecommons.org/licenses/by/4.0/#) to Stability AI, provide a link to the license, and [indicate if changes were made](https://creativecommons.org/licenses/by/4.0/#). You may do so in any reasonable manner, but not in any way that suggests that Stability AI endorses you or your use.
* **Contact**: For questions and comments about the model, please email `lm@stability.ai`.

### Model Architecture

| Parameters    | Hidden Size | Layers | Heads | Sequence Length |
|---------------|-------------|--------|-------|-----------------|
| 2,796,431,360 | 2560        | 32     | 32    | 16384           |

* **Decoder Layer**: Parallel Attention and MLP residuals with a single input LayerNorm ([Wang & Komatsuzaki, 2021](https://github.com/kingoflolz/mesh-transformer-jax/tree/master))
* **Position Embeddings**: Rotary Position Embeddings ([Su et al., 2021](https://arxiv.org/abs/2104.09864))
* **Bias**: LayerNorm bias terms only
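
These settings can also be confirmed programmatically from the published checkpoint's configuration. A minimal sketch, assuming the config exposes standard GPT-NeoX-style field names through `transformers` (`hidden_size`, `num_hidden_layers`, and so on; the field names are an assumption, not quoted from this card):

```python
from transformers import AutoConfig

# Fetch only the model configuration (no weights are downloaded)
config = AutoConfig.from_pretrained(
    "stabilityai/stablecode-completion-alpha-3b",
    trust_remote_code=True,
)

# These fields should mirror the table and bullets above
print("hidden size:    ", config.hidden_size)
print("layers:         ", config.num_hidden_layers)
print("attention heads:", config.num_attention_heads)
print("context length: ", config.max_position_embeddings)
```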

## Training

`StableCode-Completion-Alpha-3B` is pre-trained using a multi-stage context-length extension schedule, following similar work ([Nijkamp et al., 2023](https://blog.salesforceairesearch.com/xgen/)): first pre-training at a context length of 4096 for 300 billion tokens, then fine-tuning at a context length of 16384 for another 200 billion tokens.
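
For quick reference, the two stages described above can be written out as data. This is an illustrative sketch only: the token counts and context lengths come from this card, while batch sizes, learning rates, and other optimizer settings are not specified here.

```python
# Multi-stage context-length extension schedule described above
schedule = [
    {"stage": "pre-training", "context_length": 4096,  "tokens": 300e9},
    {"stage": "fine-tuning",  "context_length": 16384, "tokens": 200e9},
]

for stage in schedule:
    print(f"{stage['stage']}: context length {stage['context_length']}, "
          f"{stage['tokens'] / 1e9:.0f}B tokens")
```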

### Training Dataset

The first pre-training stage relies on 300B tokens sourced from the `starcoder-data` dataset, covering the top programming languages occurring in the Stack Overflow developer survey. We then fine-tune the model on a longer-context augmentation of the `starcoder-data` dataset.
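
To get a feel for the pre-training data, the dataset can be streamed from the Hub. A minimal sketch, assuming `bigcode/starcoderdata` (listed in the metadata above) is organized by language subdirectory and stores source files in a `content` column; the `data_dir` value below is illustrative:

```python
from datasets import load_dataset

# Stream one language subset rather than downloading the full dataset
ds = load_dataset(
    "bigcode/starcoderdata",
    data_dir="python",
    split="train",
    streaming=True,
)

# Peek at the first source file in the stream
sample = next(iter(ds))
print(sample["content"][:500])
```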

### Training Procedure

The model is pre-trained on the dataset mixes mentioned above in mixed precision (BF16), optimized with AdamW, and trained using the NeoX tokenizer with a vocabulary size of 49k.

* **Software**: We use a fork of gpt-neox ([EleutherAI, 2021](https://github.com/EleutherAI/gpt-neox)) and train under 2D parallelism (Data and Tensor Parallel) with ZeRO-1 ([Rajbhandari et al., 2019](https://arxiv.org/abs/1910.02054v3)), and rely on flash-attention as well as rotary embedding kernels from FlashAttention-2 ([Dao et al., 2023](https://tridao.me/publications/flash2/flash2.pdf)).
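
As a quick sanity check on the NeoX tokenizer mentioned above, its vocabulary size can be read directly from the released tokenizer. A small sketch; the exact numbers reported may differ slightly from the rounded 49k figure:

```python
from transformers import AutoTokenizer

# Load the released tokenizer and report its vocabulary size
tokenizer = AutoTokenizer.from_pretrained("stabilityai/stablecode-completion-alpha-3b")
print("base vocab size:", tokenizer.vocab_size)
print("total size incl. added special tokens:", len(tokenizer))
```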

## Use and Limitations

### Intended Use

The model is intended to perform single- and multi-line code completion over long contexts of up to 16k tokens, in the programming languages represented in its training data.

### Limitations and bias