Update README.md (Again) #2
opened by qpqpqpqpqpqp

README.md CHANGED
@@ -61,7 +61,7 @@ prototyping and planning that enables much higher sequences with less vram to as
 
 # I've decided to name this model
 
-* This model is dubbed
+* This model is dubbed SD1.5 Flow-Matching Sol - twin sister to the alternative Try2, who is named SD1.5 - Lune.
 
 Sun and Moon.
 
@@ -71,14 +71,14 @@ Sun and Moon.
 
 
 
-I'm sticking to the positive spectrum here, knowing that 6 million samples isn't enough to converge
+I'm sticking to the positive spectrum here, knowing that 6 million samples isn't enough to converge SD1.5.
 I believe it will take around 10 mil to start SEEING correct shapes showing with texture other than flat or blob, but I've been wrong before - and we will make happy little bushes out of this if I am.
 
 Our flow match troopers are trying their best, but the outlook isn't looking particularly good yet. Blobs all the way to epoch 30.
 That's roughly 200,000 samples * 30, which should be about 6 million images worth. Not enough to fully saturate the system, but more than what I used for SDXL vpred conversions.
 There may need to be a refined process with synthetic dreambooth-styled images devoted to top-, mid-, and low-priority classes.
 
-When the distillation concludes, there will be additional finetuning after with direct images generated from
+When the distillation concludes, there will be additional finetuning afterwards with direct images generated from SD1.5 using class-based specifics in any case.
 So, it'll be an interesting outcome for both the baseline starter and the v2 trained version.
 I have high hopes either way, and I will have the class-based dreambooth-style selector ready to begin immediately after epoch 50.
 
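For readers unfamiliar with the objective mentioned in the hunk above, a generic rectified-flow / flow-matching training target can be sketched as below. This is a minimal illustration of the standard technique, not the actual code from this training run, and the sample counts are the rough numbers quoted in the text.

```python
import numpy as np

def flow_matching_target(x0, noise, t):
    """Rectified-flow style interpolation: move a clean sample x0 toward
    pure noise along a straight line. The network is trained to predict
    the constant velocity along that line instead of epsilon noise."""
    x_t = (1.0 - t) * x0 + t * noise   # noisy sample at time t in [0, 1]
    v_target = noise - x0              # velocity the model should predict
    return x_t, v_target

# Rough scale of the run described above (numbers taken from the text):
samples_per_epoch = 200_000
epochs = 30
total_samples = samples_per_epoch * epochs  # about 6 million images seen
```

This is why checkpoints that still emit "standard noise" are defective here: a flow-matched UNet is supposed to output a velocity, not an epsilon prediction.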
@@ -133,7 +133,7 @@ Individual block losses have been correctly reintroduced and will train the time
 
 
 
-# Most original checkpoints are default
+# Most original checkpoints are default SD1.5 after testing
 For those who downloaded the models that either exhibit blobs or don't use flow matching noise - my sincerest apologies. They are defective. Blobs are expected; standard noise is not.
 
 The CURRENT e8 has no CLIP or VAE, so it's just sitting there standalone. This is the newest valid one, and it functions as expected - by making blobs due to early pretraining.
@@ -151,11 +151,11 @@ If not, I'll just train it directly using a different technique without David.
 My sincerest apologies for all of the blunders and the problems. I didn't expect so many problems, but I did expect some.
 
 I ended up having to use debug to salvage epoch 8 so I wouldn't have to restart. The metrics appear corrupted as well.
-The safetensor outputs were saving the original
+The safetensor outputs were saving the original SD1.5 with silently mismatched keys, thanks to the diffusers script not operating as intended. Additionally, the subsystems I implemented never tripped the flags needed to ensure a backup, so the system was culling the PTs.
 Between a rock and a hard place, I figured out how to salvage it, and here we are - thanks to a combination of Gemini's information and Claude's code debugging and problem solving, the training can continue.
 
 # More faults, more problems - still managed to salvage the real one
-How absurd and difficult anything
+How absurd and difficult anything SD1.5 has been to debug.
 
 Okay, I am now correctly converting the evaluation and can properly test the UNet for diffusion testing.
 
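The "silently mismatched keys" failure described in the hunk above is the kind of bug a pre-save sanity check can catch. A minimal sketch follows; the function name and example keys are hypothetical and not from this repository - the idea is simply to diff the converted checkpoint's state-dict keys against a reference before writing the safetensors file, and abort on any mismatch.

```python
def diff_checkpoint_keys(reference_keys, converted_keys):
    """Compare two state-dict key sets and return (missing, unexpected),
    so a save can be aborted instead of silently writing a checkpoint
    whose weights no longer line up with the original architecture."""
    ref, conv = set(reference_keys), set(converted_keys)
    missing = sorted(ref - conv)      # keys the conversion dropped
    unexpected = sorted(conv - ref)   # keys the conversion invented/renamed
    return missing, unexpected

# Hypothetical example: one key renamed by a buggy conversion step.
missing, unexpected = diff_checkpoint_keys(
    ["unet.time_embed.0.weight", "unet.out.2.bias"],
    ["unet.time_embed.0.weight", "unet_out.2.bias"],  # typo'd key
)
```

With a check like this wired in before each epoch save, the culled-PT situation would at least have failed loudly instead of silently overwriting good checkpoints.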