```


You can see that Canny Edge is essentially an edge detector: it picks out the edge contours of the objects in an image.
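If you want more or fewer details in the extracted edge map, you can tune the two Canny thresholds used in the preprocessing step above. A minimal sketch (assuming cv2, numpy and PIL are available; the function name, the `original_image` variable and the threshold values are only illustrative):

```python
import cv2
import numpy as np
from PIL import Image

def make_canny_condition(pil_image, low_threshold=100, high_threshold=200):
    # Lower thresholds keep more fine detail; higher thresholds keep only strong contours.
    edges = cv2.Canny(np.array(pil_image), low_threshold, high_threshold)
    edges = edges[:, :, None]
    edges = np.concatenate([edges, edges, edges], axis=2)  # replicate to 3 channels
    return Image.fromarray(edges)

# detailed = make_canny_condition(original_image, 50, 100)   # busier edge map
# coarse = make_canny_condition(original_image, 200, 300)    # only the strongest edges
```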
Next, we load the runwayml/stable-diffusion-v1-5 model together with the ControlNet model that handles Canny Edge conditions. To save compute and speed up inference, we load both models in half precision (torch.float16).

```python
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel
import torch

controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
)
```
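
Depending on your GPU memory, you may also want to enable model CPU offloading and xformers memory-efficient attention before running the pipeline, just as we do in the Open Pose example later in this section. A minimal sketch (assuming xformers is installed):

```python
# Keep sub-models on the CPU and move them to the GPU only when they are needed,
# and switch to xformers' memory-efficient attention to reduce VRAM usage.
pipe.enable_model_cpu_offload()
pipe.enable_xformers_memory_efficient_attention()
```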

In this experiment we will try one of the fastest diffusion model schedulers currently available, UniPCMultistepScheduler. It speeds up inference considerably: about 20 denoising steps are enough to match the results the previous default scheduler needed around 50 steps to produce.
```python
from diffusers import UniPCMultistepScheduler

pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
```

Now we are ready to run the ControlNet pipeline. As in the regular Stable Diffusion workflow we used before, we still provide a text prompt to guide image generation.
ControlNet, however, lets us apply extra conditions to the generation process; here the Canny edge map controls the exact positions and outlines of the objects in the generated image.
The following code generates portraits of several celebrities, each striking the same pose as the girl in that famous 17th-century painting. With ControlNet and Canny Edge, all we need to do is mention their names in the prompt!

```python
def image_grid(imgs, rows, cols):
    # Tile a list of PIL images into a rows x cols grid.
    assert len(imgs) == rows * cols

    w, h = imgs[0].size
    grid = Image.new("RGB", size=(cols * w, rows * h))
    grid_w, grid_h = grid.size

    for i, img in enumerate(imgs):
        grid.paste(img, box=(i % cols * w, i // cols * h))
    return grid

prompt = ", best quality, extremely detailed"
prompt = [t + prompt for t in ["Sandra Oh", "Kim Kardashian", "rihanna", "taylor swift"]]
generator = [torch.Generator(device="cpu").manual_seed(2) for i in range(len(prompt))]

output = pipe(
    prompt,
    canny_image,
    negative_prompt=["monochrome, lowres, bad anatomy, worst quality, low quality"] * len(prompt),
    generator=generator,
    num_inference_steps=20,
)

image_grid(output.images, 2, 2)
```


Next, let's try another interesting way to use ControlNet: extract a body pose from one image and use it to generate a new image with exactly the same pose.
In the next example, we will teach superheroes how to do yoga using the [Open Pose ControlNet](https://huggingface.co/lllyasviel/sd-controlnet-openpose)!

First, let's find some pictures of people doing yoga:

```python
urls = "yoga1.jpeg", "yoga2.jpeg", "yoga3.jpeg", "yoga4.jpeg"
imgs = [
    load_image("https://hf.co/datasets/YiYiXu/controlnet-testing/resolve/main/" + url)
    for url in urls
]

image_grid(imgs, 2, 2)
```


Then we use the OpenPose preprocessor from controlnet_aux to extract the body poses from these yoga images.
```python
from controlnet_aux import OpenposeDetector

model = OpenposeDetector.from_pretrained("lllyasviel/ControlNet")

poses = [model(img) for img in imgs]
image_grid(poses, 2, 2)
```


Finally, the moment of magic! We use the Open Pose ControlNet to generate some images of superheroes doing yoga.

```python
controlnet = ControlNetModel.from_pretrained(
    "fusing/stable-diffusion-v1-5-controlnet-openpose", torch_dtype=torch.float16
)

model_id = "runwayml/stable-diffusion-v1-5"
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    model_id,
    controlnet=controlnet,
    torch_dtype=torch.float16,
)
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
pipe.enable_model_cpu_offload()
pipe.enable_xformers_memory_efficient_attention()

generator = [torch.Generator(device="cpu").manual_seed(2) for i in range(4)]
prompt = "super-hero character, best quality, extremely detailed"
output = pipe(
    [prompt] * 4,
    poses,
    negative_prompt=["monochrome, lowres, bad anatomy, worst quality, low quality"] * 4,
    generator=generator,
    num_inference_steps=20,
)
image_grid(output.images, 2, 2)
```


#### Summary

In the examples above we explored two ways of using the [`StableDiffusionControlNetPipeline`](https://huggingface.co/docs/diffusers/main/en/api/pipelines/stable_diffusion/controlnet),
showing how powerful the combination of ControlNet and diffusion models can be.
These two examples cover only a small fraction of the extra conditioning signals ControlNet supports. You can find more interesting ways to use ControlNet on the following model pages (a short sketch of the common usage pattern follows the list):

* lllyasviel/sd-controlnet-depth: https://huggingface.co/lllyasviel/sd-controlnet-depth
* lllyasviel/sd-controlnet-hed: https://huggingface.co/lllyasviel/sd-controlnet-hed
* lllyasviel/sd-controlnet-normal: https://huggingface.co/lllyasviel/sd-controlnet-normal
* lllyasviel/sd-controlnet-scribble: https://huggingface.co/lllyasviel/sd-controlnet-scribble
* lllyasviel/sd-controlnet-seg: https://huggingface.co/lllyasviel/sd-controlnet-seg
* lllyasviel/sd-controlnet-openpose: https://huggingface.co/lllyasviel/sd-controlnet-openpose
* lllyasviel/sd-controlnet-mlsd: https://huggingface.co/lllyasviel/sd-controlnet-mlsd
* lllyasviel/sd-controlnet-canny: https://huggingface.co/lllyasviel/sd-controlnet-canny
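
The usage pattern is the same for all of these variants: preprocess the input with the matching detector, swap the checkpoint name in `ControlNetModel.from_pretrained`, and pass the resulting condition image to the pipeline. A minimal sketch using the HED (soft-edge) variant (assuming controlnet_aux provides HEDdetector analogously to the OpenposeDetector used above; the prompt is only illustrative):

```python
from controlnet_aux import HEDdetector

# 1. Preprocess the condition image with the matching detector.
hed = HEDdetector.from_pretrained("lllyasviel/ControlNet")
hed_image = hed(imgs[0])  # soft-edge map of the first yoga image from above

# 2. Load the matching ControlNet checkpoint and reuse the same pipeline class.
controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-hed", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
)
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
pipe.enable_model_cpu_offload()

# 3. Generate, conditioned on the soft-edge map.
image = pipe("a robot doing yoga, best quality, extremely detailed", hed_image, num_inference_steps=20).images[0]
```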