---
license: apache-2.0
library_name: diffusers
pipeline_tag: text-to-image
tags:
- flux
- lora
- text-to-image
- image-generation
- adapter
- flux-dev
- low-rank-adaptation
---
<!-- README Version: v1.4 -->
# FLUX.1-dev LoRA Collection
A curated collection of Low-Rank Adaptation (LoRA) models for FLUX.1-dev, enabling lightweight fine-tuning and style adaptation for text-to-image generation.
## Model Description
This repository serves as an organized storage for FLUX.1-dev LoRA adapters. LoRAs are lightweight model adaptations that modify the behavior of the base FLUX.1-dev model without requiring full model retraining. They enable:
- **Style Transfer**: Apply artistic styles and aesthetic transformations
- **Concept Learning**: Teach the model specific subjects, characters, or objects
- **Quality Enhancement**: Improve specific aspects like detail, lighting, or composition
- **Domain Adaptation**: Specialize the model for specific use cases (e.g., architecture, portraits, landscapes)
LoRAs are significantly smaller than full models (typically 10-500MB vs 20GB+), making them efficient for storage, sharing, and experimentation.
## Repository Contents
```
flux-dev-loras/
├── README.md (10.7KB)
└── loras/
    └── flux/
        └── (LoRA .safetensors files will be stored here)
```
**Current Status**: Repository structure initialized, ready for LoRA model storage.
**Typical LoRA File Sizes** (a rough way to estimate these follows the list):
- Small LoRAs (rank 4-16): 10-50 MB
- Medium LoRAs (rank 32-64): 50-200 MB
- Large LoRAs (rank 128+): 200-500 MB
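These ranges follow from the LoRA construction itself: each adapted weight matrix of shape `d_out × d_in` gains two low-rank factors totaling `rank * (d_in + d_out)` parameters. A minimal back-of-envelope sketch, using illustrative layer dimensions and counts rather than the exact FLUX.1-dev layout:

```python
# Rough LoRA size estimate. Each adapted weight W (d_out x d_in) gains factors
# A (rank x d_in) and B (d_out x rank), i.e. rank * (d_in + d_out) parameters.
# The dimensions and layer count below are illustrative assumptions only.
def lora_size_mb(rank, d_in=3072, d_out=3072, num_layers=200, bytes_per_param=2):
    params = rank * (d_in + d_out) * num_layers
    return params * bytes_per_param / (1024 ** 2)

for rank in (16, 32, 128):
    print(f"rank {rank}: ~{lora_size_mb(rank):.0f} MB")  # FP16 storage
```

Under these assumptions the estimates come out to roughly 38, 75, and 300 MB, which lands inside the ranges above.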
**Total Repository Size**: ~14 KB (structure initialized, ready for LoRA population)
## Hardware Requirements
LoRA models add minimal overhead to base FLUX.1-dev requirements:
### Minimum Requirements
- **VRAM**: 12GB (base FLUX.1-dev requirement)
- **RAM**: 16GB system memory
- **Disk Space**: Variable depending on LoRA collection size
- Base model: ~24GB (FP16) or ~12GB (FP8)
- Per LoRA: 10-500MB typically
- **GPU**: NVIDIA RTX 3060 (12GB) or better
### Recommended Requirements
- **VRAM**: 24GB (RTX 4090, RTX A5000)
- **RAM**: 32GB system memory
- **Disk Space**: 50-100GB for extensive LoRA collection
- **GPU**: NVIDIA RTX 4090 or RTX 5090 for fastest inference
### Performance Notes
- LoRAs add minimal computational overhead (<5% typically)
- Multiple LoRAs can be stacked (with performance trade-offs)
- FP8 base models are compatible with FP16 LoRAs
## Usage Examples
### Basic LoRA Loading with Diffusers
```python
from diffusers import FluxPipeline
import torch

# Load base FLUX.1-dev model
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    torch_dtype=torch.bfloat16
).to("cuda")

# Load LoRA adapter (example path - adjust to your actual LoRA file)
pipe.load_lora_weights("E:/huggingface/flux-dev-loras/loras/flux/your-lora-name.safetensors")

# Generate image with LoRA applied
prompt = "a beautiful landscape in the style of the LoRA"
image = pipe(
    prompt=prompt,
    num_inference_steps=50,
    guidance_scale=3.5,  # FLUX.1-dev uses distilled guidance; ~3.5 is typical
    height=1024,
    width=1024
).images[0]
image.save("output.png")
```
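If a LoRA is hosted on the Hub, `load_lora_weights` also accepts a repository ID plus a `weight_name`; the repository and filename below are placeholders:

```python
# Load a LoRA directly from the Hugging Face Hub (placeholder repo/filename)
pipe.load_lora_weights("username/flux-lora-repo", weight_name="your-lora-name.safetensors")
```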
### Multiple LoRA Stacking
```python
from diffusers import FluxPipeline
import torch

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    torch_dtype=torch.bfloat16
).to("cuda")

# Load multiple LoRAs with different strengths
pipe.load_lora_weights(
    "E:/huggingface/flux-dev-loras/loras/flux/style-lora.safetensors",
    adapter_name="style"
)
pipe.load_lora_weights(
    "E:/huggingface/flux-dev-loras/loras/flux/detail-lora.safetensors",
    adapter_name="detail"
)

# Set adapter weights
pipe.set_adapters(["style", "detail"], adapter_weights=[0.8, 0.5])

# Generate with combined LoRA effects
image = pipe(
    prompt="a detailed portrait with artistic style",
    num_inference_steps=50
).images[0]
image.save("combined_output.png")
```
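Continuing from the code above, diffusers also provides helpers to bypass or remove adapters without rebuilding the pipeline, which is handy when comparing stacked output against the base model:

```python
pipe.disable_lora()               # temporarily bypass all loaded LoRAs
pipe.enable_lora()                # re-enable them
pipe.delete_adapters(["detail"])  # remove one adapter entirely
pipe.set_adapters(["style"], adapter_weights=[1.0])  # keep only the style LoRA
```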
### Dynamic LoRA Weight Adjustment
```python
from diffusers import FluxPipeline
import torch

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    torch_dtype=torch.bfloat16
).to("cuda")

pipe.load_lora_weights(
    "E:/huggingface/flux-dev-loras/loras/flux/artistic-style.safetensors"
)

# Generate with different LoRA strengths
for strength in [0.3, 0.6, 1.0]:
    pipe.fuse_lora(lora_scale=strength)
    image = pipe(
        prompt="a mountain landscape",
        num_inference_steps=50
    ).images[0]
    image.save(f"output_strength_{strength}.png")
    # Unfuse before changing strength
    pipe.unfuse_lora()
```
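In recent diffusers releases, an alternative to the fuse/unfuse cycle is passing a per-call LoRA scale through `joint_attention_kwargs`; a sketch, assuming an unfused LoRA is loaded as above:

```python
# Per-call LoRA strength without fusing (recent diffusers versions)
image = pipe(
    prompt="a mountain landscape",
    num_inference_steps=50,
    joint_attention_kwargs={"scale": 0.6},
).images[0]
```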
### ComfyUI Integration
LoRAs in this directory can be used directly in ComfyUI:
1. **Automatic Detection**: Place LoRAs in ComfyUI's `models/loras/` directory, or create a symlink (Windows Command Prompt; a Linux/macOS equivalent follows this list):
```cmd
mklink /D "ComfyUI\models\loras\flux-dev-loras" "E:\huggingface\flux-dev-loras\loras\flux"
```
2. **Load in Workflow**: Use the "Load LoRA" node with FLUX.1-dev checkpoint
3. **Adjust Strength**: Use the strength parameter (0.0-1.0) to control LoRA influence
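On Linux or macOS, the equivalent symlink for step 1 looks like this (adjust both paths to your installation):

```bash
ln -s "/path/to/flux-dev-loras/loras/flux" "ComfyUI/models/loras/flux-dev-loras"
```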
## Model Specifications
### Base Model Compatibility
- **Model**: FLUX.1-dev by Black Forest Labs
- **Architecture**: Latent diffusion transformer
- **Compatible Precisions**: FP16, BF16, FP8 (E4M3)
### LoRA Format
- **Format**: SafeTensors (.safetensors)
- **Typical Ranks**: 4, 8, 16, 32, 64, 128
- **Training Method**: Low-Rank Adaptation (LoRA)
### Supported Libraries
- diffusers (≥0.30.0 recommended)
- ComfyUI
- InvokeAI
- Automatic1111 (with FLUX support)
## Finding and Adding LoRAs
### Recommended Sources
- **Hugging Face Hub**: https://huggingface.co/models?pipeline_tag=text-to-image&other=flux&other=lora
- **CivitAI**: https://civitai.com/ (filter for FLUX.1-dev LoRAs)
- **Replicate**: Community-trained FLUX LoRAs
### Download Process
```bash
# Example: Download LoRA from Hugging Face
cd E:\huggingface\flux-dev-loras\loras\flux
huggingface-cli download username/lora-repo --local-dir .
```
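The same download can be scripted with the `huggingface_hub` Python API; the repository and filename below are placeholders:

```python
from huggingface_hub import hf_hub_download

# Fetch a single LoRA file into the local collection (placeholder repo/filename)
path = hf_hub_download(
    repo_id="username/lora-repo",
    filename="your-lora-name.safetensors",
    local_dir=r"E:\huggingface\flux-dev-loras\loras\flux",
)
print(f"Saved to {path}")
```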
### Organization Tips
- Use descriptive filenames: `style-artistic-painting.safetensors`
- Group by category: `style/`, `character/`, `concept/`, `quality/`
- Include metadata files (`.json`) with training details when available; a metadata inventory sketch follows below
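Many trainers embed their settings in the safetensors header, so a small script can inventory the collection; a sketch, assuming the `safetensors` package is installed:

```python
from pathlib import Path
from safetensors import safe_open

lora_dir = Path(r"E:\huggingface\flux-dev-loras\loras\flux")
for file in sorted(lora_dir.glob("*.safetensors")):
    # Read the header metadata without loading any tensors into memory
    with safe_open(str(file), framework="pt") as f:
        metadata = f.metadata() or {}
    size_mb = file.stat().st_size / (1024 ** 2)
    print(f"{file.name}: {size_mb:.0f} MB, metadata keys: {sorted(metadata)}")
```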
## Performance Tips and Optimization
### Memory Optimization
- **Use FP8 Base Model**: Load FLUX.1-dev in FP8 to save ~12GB VRAM
- **Sequential Loading**: Load/unload LoRAs as needed instead of keeping all loaded
- **CPU Offload**: Use `enable_model_cpu_offload()` for VRAM-constrained systems
```python
pipe.enable_model_cpu_offload()
```
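Putting the offload and sequential-loading tips together, a VRAM-constrained setup might cycle through LoRAs one at a time; a sketch using the placeholder filenames from the stacking example:

```python
from diffusers import FluxPipeline
import torch

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    torch_dtype=torch.bfloat16
)
pipe.enable_model_cpu_offload()  # manages device placement itself; skip .to("cuda")

# Load, use, and unload each LoRA instead of keeping all of them resident
for lora_file in ["style-lora.safetensors", "detail-lora.safetensors"]:
    pipe.load_lora_weights(f"E:/huggingface/flux-dev-loras/loras/flux/{lora_file}")
    image = pipe(prompt="a mountain landscape", num_inference_steps=50).images[0]
    image.save(lora_file.replace(".safetensors", ".png"))
    pipe.unload_lora_weights()  # reset to the base model before the next LoRA
```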
### Quality Optimization
- **LoRA Strength Tuning**: Start with 0.7-0.8 strength, adjust based on results
- **Inference Steps**: LoRAs work well with 30-50 steps (same as base model)
- **Guidance Scale**: FLUX.1-dev uses distilled guidance rather than classic CFG; values around 3.5 (roughly 3.0-4.0) are a balanced starting point with LoRAs
### Training Your Own LoRAs
- **Recommended Tools**: Kohya_ss, SimpleTuner, ai-toolkit
- **Dataset Size**: 10-50 high-quality images for concept learning
- **Rank Selection**: Rank 16-32 for most use cases, higher for complex styles (a config sketch follows this list)
- **Training Steps**: 1000-5000 depending on complexity and dataset size
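Exact flags vary by trainer, but the core hyperparameters above map onto a `peft` `LoraConfig`; a sketch whose target module names are assumptions based on diffusers' attention-projection naming, so verify them against your trainer's documentation:

```python
from peft import LoraConfig

# Illustrative LoRA hyperparameters for a concept-learning run; target_modules
# names are assumed from diffusers' attention naming, not verified for FLUX.
lora_config = LoraConfig(
    r=16,                      # rank: 16-32 covers most use cases
    lora_alpha=16,             # scaling factor; often set equal to the rank
    init_lora_weights="gaussian",
    target_modules=["to_q", "to_k", "to_v", "to_out.0"],
)
```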
## License
**LoRA Models**: Individual LoRAs may have different licenses. Check each LoRA's source repository for specific licensing terms.
**Base Model License**: FLUX.1-dev uses the Black Forest Labs FLUX.1-dev Community License
- Commercial use allowed with restrictions
- See: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
**Repository Structure**: Apache 2.0 (this organizational structure)
## Citation
If you use FLUX.1-dev LoRAs in your work, please cite the base model:
```bibtex
@software{flux1_dev,
  author = {Black Forest Labs},
  title = {FLUX.1-dev},
  year = {2024},
  url = {https://huggingface.co/black-forest-labs/FLUX.1-dev}
}
```
For specific LoRAs, cite the original creators from their respective repositories.
## Resources and Links
### Official FLUX Resources
- Base Model: https://huggingface.co/black-forest-labs/FLUX.1-dev
- Black Forest Labs: https://blackforestlabs.ai/
- FLUX Documentation: https://github.com/black-forest-labs/flux
### LoRA Training Resources
- Kohya_ss Trainer: https://github.com/bmaltais/kohya_ss
- SimpleTuner: https://github.com/bghira/SimpleTuner
- ai-toolkit: https://github.com/ostris/ai-toolkit
### Community and Support
- Hugging Face Diffusers Docs: https://huggingface.co/docs/diffusers
- FLUX Discord Communities
- r/StableDiffusion (Reddit)
### Model Discovery
- Hugging Face FLUX LoRAs: https://huggingface.co/models?other=flux&other=lora
- CivitAI FLUX Section: https://civitai.com/models?modelType=LORA&baseModel=FLUX.1%20D
## Changelog
### v1.4 (2025-10-28)
- Updated hardware recommendations with RTX 5090 reference
- Refreshed repository size information (14 KB)
- Updated last modified date to current (2025-10-28)
- Verified all YAML frontmatter compliance with HuggingFace standards
- Confirmed repository structure and organization remain current
### v1.3 (2024-10-14)
- **CRITICAL FIX**: Moved version header AFTER YAML frontmatter (HuggingFace requirement)
- Verified YAML frontmatter is first content in file
- Confirmed proper YAML structure with three-dash delimiters
- All metadata fields validated against HuggingFace standards
### v1.2 (2024-10-14)
- Updated version metadata to v1.2
- Verified repository structure and file organization
- Updated repository size information
- Confirmed YAML frontmatter compliance with HuggingFace standards
### v1.1 (2024-10-13)
- Updated version metadata to v1.1
- Enhanced tag metadata with `low-rank-adaptation`
- Improved hardware requirements formatting with subsections
- Added changelog section for version tracking
### v1.0 (Initial Release)
- Initial repository structure and documentation
- Comprehensive usage examples for diffusers and ComfyUI
- Performance optimization guidelines
- LoRA training and discovery resources
---
**Repository Status**: Initialized and ready for LoRA collection
**Last Updated**: 2025-10-28
**Maintained By**: Local collection for FLUX.1-dev experimentation