wangkanai committed · Commit efb49f5 · verified · 1 Parent(s): dc553e0

Add files using upload-large-folder tool

Files changed (1): README.md (+304 −0)
---
license: apache-2.0
library_name: diffusers
pipeline_tag: text-to-image
tags:
- flux
- lora
- text-to-image
- image-generation
- adapter
- flux-dev
- low-rank-adaptation
base_model: black-forest-labs/FLUX.1-dev
base_model_relation: adapter
---
<!-- README Version: v1.1 -->

# FLUX.1-dev LoRA Collection

A curated collection of Low-Rank Adaptation (LoRA) models for FLUX.1-dev, enabling lightweight fine-tuning and style adaptation for text-to-image generation.

## Model Description

This repository serves as organized storage for FLUX.1-dev LoRA adapters. LoRAs are lightweight model adaptations that modify the behavior of the base FLUX.1-dev model without requiring full retraining. They enable:

- **Style Transfer**: Apply artistic styles and aesthetic transformations
- **Concept Learning**: Teach the model specific subjects, characters, or objects
- **Quality Enhancement**: Improve specific aspects such as detail, lighting, or composition
- **Domain Adaptation**: Specialize the model for specific use cases (e.g., architecture, portraits, landscapes)

LoRAs are significantly smaller than full models (typically 10-500 MB versus 20+ GB), making them efficient to store, share, and experiment with.
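The mechanism behind these small files is worth a quick sketch: a LoRA stores two low-rank matrices A and B, and at load time the adapted weight is W + (alpha/rank)·B·A, so only rank·(in + out) numbers are stored instead of in·out. A minimal pure-Python illustration with toy 3×3 shapes (the matrices and scale below are made up for demonstration):

```python
def matmul(a, b):
    # naive matrix product: a is (m, r), b is (r, n)
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)] for row in a]

def apply_lora(W, A, B, alpha=1.0):
    # LoRA update: W' = W + (alpha / rank) * B @ A
    rank = len(A)                      # A has shape (rank, in_features)
    scale = alpha / rank
    delta = matmul(B, A)               # (out_features, in_features) low-rank update
    return [[w + scale * d for w, d in zip(wr, dr)] for wr, dr in zip(W, delta)]

# toy example: identity base weight plus a rank-1 update
W = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
A = [[1, 0, 0]]                        # (1, 3)
B = [[1], [2], [3]]                    # (3, 1)
W_adapted = apply_lora(W, A, B)        # -> [[2, 0, 0], [2, 1, 0], [3, 0, 1]]
```

Here the rank-1 adapter stores 6 numbers instead of the 9 in the full matrix; at FLUX scale the ratio is far more dramatic.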

## Repository Contents

```
flux-dev-loras/
├── README.md (8.6KB)
└── loras/
    └── flux/
        └── (LoRA .safetensors files will be stored here)
```

**Current Status**: Repository structure initialized, ready for LoRA model storage.

**Typical LoRA File Sizes**:
- Small LoRAs (rank 4-16): 10-50 MB
- Medium LoRAs (rank 32-64): 50-200 MB
- Large LoRAs (rank 128+): 200-500 MB

**Total Repository Size**: ~9 KB (empty, ready for population)
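These size bands follow directly from the parameter count: each adapted linear layer of shape (out, in) contributes rank × (out + in) parameters. A rough estimator (the layer count and 3072-wide shapes below are hypothetical, not FLUX's actual architecture):

```python
def lora_size_mb(layers, rank, bytes_per_param=2):
    # bytes_per_param=2 assumes FP16/BF16 weights
    params = sum(rank * (out + inp) for out, inp in layers)
    return params * bytes_per_param / 1e6

# hypothetical: 100 adapted 3072x3072 projections
layers = [(3072, 3072)] * 100
print(lora_size_mb(layers, rank=16))   # ~19.7 MB, in the "small" band
print(lora_size_mb(layers, rank=64))   # ~78.6 MB, in the "medium" band
```

Doubling the rank doubles the file size, which is why rank-128 adapters land in the hundreds of megabytes.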

## Hardware Requirements

LoRA models add minimal overhead to the base FLUX.1-dev requirements:

### Minimum Requirements
- **VRAM**: 12 GB (base FLUX.1-dev requirement)
- **RAM**: 16 GB system memory
- **Disk Space**: Variable, depending on collection size
  - Base model: ~24 GB (FP16) or ~12 GB (FP8)
  - Per LoRA: typically 10-500 MB
- **GPU**: NVIDIA RTX 3060 (12 GB) or better

### Recommended Requirements
- **VRAM**: 24 GB (RTX 4090, RTX A5000)
- **RAM**: 32 GB system memory
- **Disk Space**: 50-100 GB for an extensive LoRA collection
- **GPU**: NVIDIA RTX 4090 for fastest inference

### Performance Notes
- LoRAs add minimal computational overhead (typically <5%)
- Multiple LoRAs can be stacked (with performance trade-offs)
- FP8 base models are compatible with FP16 LoRAs
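As a rule-of-thumb summary of the tiers above, a small helper can map available VRAM to a loading strategy. The thresholds are the figures from this section; the strategy names are illustrative, not an API:

```python
def pick_loading_strategy(vram_gb: float) -> str:
    # thresholds follow the minimum/recommended tiers above
    if vram_gb >= 24:
        return "bf16"          # full-precision comfort zone (RTX 4090 class)
    if vram_gb >= 12:
        return "fp8"           # fits the 12 GB minimum by halving base-model VRAM
    return "cpu_offload"       # below minimum: offload modules to system RAM

print(pick_loading_strategy(24))  # bf16
print(pick_loading_strategy(12))  # fp8
print(pick_loading_strategy(8))   # cpu_offload
```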

## Usage Examples

### Basic LoRA Loading with Diffusers

```python
from diffusers import FluxPipeline
import torch

# Load the base FLUX.1-dev model
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    torch_dtype=torch.bfloat16
).to("cuda")

# Load a LoRA adapter (example path - adjust to your actual LoRA file)
pipe.load_lora_weights("E:/huggingface/flux-dev-loras/loras/flux/your-lora-name.safetensors")

# Generate an image with the LoRA applied
prompt = "a beautiful landscape in the style of the LoRA"
image = pipe(
    prompt=prompt,
    num_inference_steps=50,
    guidance_scale=3.5,  # FLUX.1-dev uses distilled guidance; ~3.5 is the usual setting
    height=1024,
    width=1024
).images[0]

image.save("output.png")
```

### Multiple LoRA Stacking

```python
from diffusers import FluxPipeline
import torch

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    torch_dtype=torch.bfloat16
).to("cuda")

# Load multiple LoRAs under different adapter names (requires the peft package)
pipe.load_lora_weights(
    "E:/huggingface/flux-dev-loras/loras/flux/style-lora.safetensors",
    adapter_name="style"
)
pipe.load_lora_weights(
    "E:/huggingface/flux-dev-loras/loras/flux/detail-lora.safetensors",
    adapter_name="detail"
)

# Set per-adapter weights
pipe.set_adapters(["style", "detail"], adapter_weights=[0.8, 0.5])

# Generate with the combined LoRA effects
image = pipe(
    prompt="a detailed portrait with artistic style",
    num_inference_steps=50
).images[0]

image.save("combined_output.png")
```

### Dynamic LoRA Weight Adjustment

```python
from diffusers import FluxPipeline
import torch

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    torch_dtype=torch.bfloat16
).to("cuda")

pipe.load_lora_weights(
    "E:/huggingface/flux-dev-loras/loras/flux/artistic-style.safetensors"
)

# Generate at several LoRA strengths
for strength in [0.3, 0.6, 1.0]:
    pipe.fuse_lora(lora_scale=strength)

    image = pipe(
        prompt="a mountain landscape",
        num_inference_steps=50
    ).images[0]

    image.save(f"output_strength_{strength}.png")

    # Unfuse before changing the strength on the next iteration
    pipe.unfuse_lora()
```

### ComfyUI Integration

LoRAs in this directory can be used directly in ComfyUI:

1. **Automatic Detection**: Place LoRAs in ComfyUI's `models/loras/` directory, or create a symlink (Windows `cmd`, run as administrator):
   ```cmd
   mklink /D "ComfyUI\models\loras\flux-dev-loras" "E:\huggingface\flux-dev-loras\loras\flux"
   ```
2. **Load in Workflow**: Use the "Load LoRA" node with the FLUX.1-dev checkpoint
3. **Adjust Strength**: Use the strength parameter (0.0-1.0) to control the LoRA's influence
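On Linux/macOS, or from Python on any platform, the same link can be created without `mklink`; a sketch using only the standard library (the paths shown are placeholders):

```python
import os

def link_lora_dir(src: str, dst: str) -> None:
    # expose an external LoRA folder inside ComfyUI's models/loras without copying files
    os.makedirs(os.path.dirname(dst), exist_ok=True)
    if not os.path.lexists(dst):
        os.symlink(src, dst, target_is_directory=True)

# example (placeholder paths):
# link_lora_dir("E:/huggingface/flux-dev-loras/loras/flux",
#               "ComfyUI/models/loras/flux-dev-loras")
```

Note that on Windows, creating symlinks from Python also requires administrator rights or Developer Mode.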

## Model Specifications

### Base Model Compatibility
- **Model**: FLUX.1-dev by Black Forest Labs
- **Architecture**: Latent diffusion transformer
- **Compatible Precisions**: FP16, BF16, FP8 (E4M3)

### LoRA Format
- **Format**: SafeTensors (.safetensors)
- **Typical Ranks**: 4, 8, 16, 32, 64, 128
- **Training Method**: Low-Rank Adaptation (LoRA)

### Supported Libraries
- diffusers (≥0.30.0 recommended)
- ComfyUI
- InvokeAI
- Automatic1111 (with FLUX support)
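Because a SafeTensors file begins with an 8-byte little-endian length followed by a JSON header, a LoRA's rank can be inspected with the standard library alone. Key naming (`lora_A` vs. `lora_down`) varies by trainer, so treat this as a best-effort sketch:

```python
import json
import struct

def read_safetensors_header(path: str) -> dict:
    # layout: u64 little-endian header size, then that many bytes of JSON
    with open(path, "rb") as f:
        (size,) = struct.unpack("<Q", f.read(8))
        return json.loads(f.read(size))

def infer_lora_rank(header: dict):
    # down/A matrices are conventionally stored as (rank, in_features)
    for name, meta in header.items():
        if name == "__metadata__":
            continue
        if "lora_A" in name or "lora_down" in name:
            return meta["shape"][0]
    return None
```

For example, `infer_lora_rank(read_safetensors_header("style-lora.safetensors"))` would return 16 for a rank-16 adapter.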

## Finding and Adding LoRAs

### Recommended Sources
- **Hugging Face Hub**: https://huggingface.co/models?pipeline_tag=text-to-image&other=flux&other=lora
- **CivitAI**: https://civitai.com/ (filter for FLUX.1-dev LoRAs)
- **Replicate**: Community-trained FLUX LoRAs

### Download Process
```cmd
:: Example: download a LoRA from Hugging Face
cd E:\huggingface\flux-dev-loras\loras\flux
huggingface-cli download username/lora-repo --local-dir .
```

### Organization Tips
- Use descriptive filenames: `style-artistic-painting.safetensors`
- Group by category: `style/`, `character/`, `concept/`, `quality/`
- Include metadata files (`.json`) with training details when available
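The grouping tip above can be automated with a short script; this sketch moves files whose names start with one of the suggested category prefixes into a matching subfolder (the prefix convention is just the suggestion above, not a standard):

```python
import os
import shutil

CATEGORIES = ("style", "character", "concept", "quality")

def organize_loras(lora_dir: str) -> None:
    # move e.g. style-artistic-painting.safetensors into style/
    for name in os.listdir(lora_dir):
        if not name.endswith(".safetensors"):
            continue
        prefix = name.split("-", 1)[0]
        if prefix in CATEGORIES:
            dest = os.path.join(lora_dir, prefix)
            os.makedirs(dest, exist_ok=True)
            shutil.move(os.path.join(lora_dir, name), os.path.join(dest, name))
```

Files without a recognized prefix (or non-.safetensors files) are left in place.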

## Performance Tips and Optimization

### Memory Optimization
- **Use an FP8 Base Model**: Load FLUX.1-dev in FP8 to save ~12 GB of VRAM
- **Sequential Loading**: Load/unload LoRAs as needed instead of keeping them all loaded
- **CPU Offload**: Use `enable_model_cpu_offload()` on VRAM-constrained systems

```python
# Call this instead of pipe.to("cuda"); modules are moved to the GPU on demand
pipe.enable_model_cpu_offload()
```

### Quality Optimization
- **LoRA Strength Tuning**: Start at 0.7-0.8 strength and adjust based on results
- **Inference Steps**: LoRAs work well with 30-50 steps (same as the base model)
- **Guidance Scale**: FLUX.1-dev uses distilled guidance; values around 3.5 are typical, and much higher settings tend to oversaturate

### Training Your Own LoRAs
- **Recommended Tools**: Kohya_ss, SimpleTuner, ai-toolkit
- **Dataset Size**: 10-50 high-quality images for concept learning
- **Rank Selection**: Rank 16-32 for most use cases, higher for complex styles
- **Training Steps**: 1000-5000, depending on complexity and dataset size

## License

**LoRA Models**: Individual LoRAs may have different licenses. Check each LoRA's source repository for specific terms.

**Base Model License**: FLUX.1-dev is released under the FLUX.1 [dev] Non-Commercial License
- The model weights are licensed for non-commercial use; check the license for terms covering generated outputs
- See: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md

**Repository Structure**: Apache 2.0 (this organizational structure and documentation)

## Citation

If you use FLUX.1-dev LoRAs in your work, please cite the base model:

```bibtex
@software{flux1_dev,
  author = {Black Forest Labs},
  title = {FLUX.1-dev},
  year = {2024},
  url = {https://huggingface.co/black-forest-labs/FLUX.1-dev}
}
```

For specific LoRAs, cite the original creators from their respective repositories.

## Resources and Links

### Official FLUX Resources
- Base Model: https://huggingface.co/black-forest-labs/FLUX.1-dev
- Black Forest Labs: https://blackforestlabs.ai/
- FLUX Documentation: https://github.com/black-forest-labs/flux

### LoRA Training Resources
- Kohya_ss Trainer: https://github.com/bmaltais/kohya_ss
- SimpleTuner: https://github.com/bghira/SimpleTuner
- ai-toolkit: https://github.com/ostris/ai-toolkit

### Community and Support
- Hugging Face Diffusers Docs: https://huggingface.co/docs/diffusers
- FLUX Discord communities
- r/StableDiffusion (Reddit)

### Model Discovery
- Hugging Face FLUX LoRAs: https://huggingface.co/models?other=flux&other=lora
- CivitAI FLUX Section: https://civitai.com/models?modelType=LORA&baseModel=FLUX.1%20D

## Changelog

### v1.1 (2025-10-13)
- Updated version metadata to v1.1
- Enhanced tag metadata with `low-rank-adaptation`
- Improved hardware requirements formatting with subsections
- Added changelog section for version tracking
- Updated repository status and last modified date

### v1.0 (Initial Release)
- Initial repository structure and documentation
- Comprehensive usage examples for diffusers and ComfyUI
- Performance optimization guidelines
- LoRA training and discovery resources

---

**Repository Status**: Initialized and ready for LoRA collection
**Last Updated**: 2025-10-13
**Maintained By**: Local collection for FLUX.1-dev experimentation