Maximizing Your Workflow with Stable Diffusion: The Ultimate DeForum Cheat Sheet Guide

Master Stable Diffusion effortlessly: Unveil secrets with our guide and DeForum Cheat Sheet.

2024-02-28 Marius Jopen

Dear friends,

I am using Stable Diffusion's Deforum a lot. Sadly it does not run with ComfyUI, so it is not possible to really save workflows. You can export a JSON file with all the settings, but in my opinion that is not really user friendly.
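For reference, the exported file is a plain JSON dump of every setting in the UI. A few representative fields look roughly like this (just a sketch; exact field names and default values can differ between Deforum versions):

 {
   "W": 1024,
   "H": 576,
   "seed": -1,
   "sampler": "Euler a",
   "steps": 25,
   "max_frames": 900,
   "animation_mode": "3D",
   "diffusion_cadence": 3,
   "prompts": { "0": "XXX" }
 }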

I have my own preferred settings saved in a Notion document. But instead of keeping it private, I will share it with you here.

Upscaling

I do not recommend upscaling while generating the images. It is better to upscale with the Extras batch function once all the images have been generated.

The ScuNET PSNR upscaler, which is already integrated into the default SD, is pretty good and fast. But I am happy to learn more about upscaling and to write a separate article about that.

Strength, Cadence & Coherence

If you create a video with very fast movements, I recommend not using Cadence. For example, a video of a dancing person looks better and more fluid without Cadence. The downside is flickering. To avoid it, you can turn up the strength instead.

If you create a video with generally more fluid movements, like a 3D camera moving through a world, I recommend a Cadence of 3, combined with Optical Flow Cadence set to RAFT.
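In an exported settings file those knobs show up roughly like this (just a sketch; the strength value is only an example, and key names can vary between Deforum versions):

 {
   "strength_schedule": "0:(0.65)",
   "diffusion_cadence": 3,
   "optical_flow_cadence": "RAFT"
 }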

Paths

Two useful paths which you will need with Deforum are:

 outputs\img2img-images\Deforum_20240102231149\20240102231149_settings.txt
 uploads/controlnet/gradient.mp4

On Windows it is also possible to paste an absolute path into these fields. You can get it by double-clicking the address bar in Windows Explorer.
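For example, an absolute path could look like this (a hypothetical install location, adjust it to wherever your webui actually lives):

 C:\stable-diffusion-webui\outputs\img2img-images\Deforum_20240102231149\20240102231149_settings.txt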

Negative Prompts

Here are some negative prompts which I normally use:

 (worst quality, low quality, normal quality, lowres, low details, oversaturated, undersaturated, overexposed, underexposed, grayscale, bw, bad photo, bad photography, bad art:1.4), (watermark, signature, text font, username, error, logo, words, letters, digits, autograph, trademark, name:1.2), (bad hands, bad anatomy, bad body, bad face, bad teeth, bad arms, bad legs, deformities:1.3), morbid, ugly, mutated, malformed, mutilated, poorly lit, bad shadow, draft, cropped, out of frame, cut off, censored, jpeg artifacts, glitch, duplicate

Positive Prompts

Here are also some positive prompts which generally improve the quality of the image:

 hyperdetailed photography, soft light, masterpiece
 hyperdetailed photography, soft light, masterpiece, 8k, octane render, beautiful depth of field
 hyperdetailed photography, soft light, masterpiece, (film grain:1.3), (complex:1.2), (depth of field:1.4), (symmetry:1.2)
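These combine nicely with the time-based prompts below. A single keyframe entry could then look like this (just an illustration, mixing a quality prompt with one of the subjects used further down):

 "0": "hyperdetailed photography, soft light, masterpiece, (flowers:1.4)"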

Time-Based Prompts

I always end up copying these JSON snippets around. Here is an empty one:

 {
   "0": "XXX",
   "100": "XXX",
   "200": "XXX",
   "300": "XXX",
   "400": "XXX",
   "500": "XXX",
   "600": "XXX",
   "700": "XXX",
   "800": "XXX",
   "900": "XXX"
 }

And with increased strength:

 {
   "0": "(XXX:1.4)",
   "100": "(XXX:1.4)",
   "200": "(XXX:1.4)",
   "300": "(XXX:1.4)",
   "400": "(XXX:1.4)",
   "500": "(XXX:1.4)",
   "600": "(XXX:1.4)",
   "700": "(XXX:1.4)",
   "800": "(XXX:1.4)",
   "900": "(XXX:1.4)"
 }

I also use ChatGPT a lot to help me with these time-based prompts. For example, I give it the empty template:

 {
   "0": "XXX",
   "100": "XXX",
   "200": "XXX",
   "300": "XXX",
   "400": "XXX",
   "500": "XXX",
   "600": "XXX",
   "700": "XXX",
   "800": "XXX",
   "900": "XXX"
 }

Together with this instruction:

 Right now the steps are always 100:
 0, 100, 200.
 Change them to steps of 1000.
 Also add just the name of one tropical bird into each line instead of the XXX.

This is the output:

 {
   "0": "Scarlet Macaw",
   "1000": "Harpy Eagle",
   "2000": "Keel-billed Toucan",
   "3000": "Resplendent Quetzal",
   "4000": "Blue-and-yellow Macaw",
   "5000": "African Grey Parrot",
   "6000": "Cockatoo",
   "7000": "Sun Conure",
   "8000": "Hornbill",
   "9000": "Flamingo"
 }

Here is another one:

 {
   "0": "(flowers:1.4)",
   "100": "(trees:1.4)",
   "200": "(jungle:1.4)",
   "300": "(crystal:1.4)",
   "400": "(ice:1.4)",
   "500": "(water:1.4)",
   "600": "(sky with clouds:1.4)",
   "700": "(desert:1.4)",
   "800": "(sunset:1.4)",
   "900": "(fire:1.4)"
 }

And with ChatGPT you can change the strength from 1.4 to 1.7 if you want those prompts to be stronger.

Creative Prompts

I generate a lot of images and videos with these doll faces, and also a lot of dark stuff in the style of H. R. Giger. Here are some creative prompts which I use quite a lot:

 (innocent:1.4), (freckles:1.2), (doll face:1.6), white teeth, (porcelain skin:1.3)
 (liquid:1.4), water, white slime, (fluid:1.3), shiny, glossy
 (fetish:1.2), sexy, (erotic:1.3), (sensual:1.3)
 nylon stockings, dominatrix suit, high heels
 (red cables:1.4), machine, (style of h r giger:1.8), pipes, metal bars
 light blue background

ControlNet

Most of the settings in ControlNet I just leave at their defaults.

But I always turn on Pixel Perfect, and I play around with the Control Mode and the Resize Mode if needed.

Here is the path for the videos:

 uploads/controlnet/gradient.mp4
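In an exported settings file the first ControlNet unit ends up as a group of cn_1_* fields, roughly like this (just a sketch; the exact key names and option strings may differ between Deforum and ControlNet versions):

 {
   "cn_1_enabled": true,
   "cn_1_pixel_perfect": true,
   "cn_1_module": "depth_midas",
   "cn_1_model": "diffusers_xl_depth_full [2f51180b]",
   "cn_1_control_mode": "Balanced",
   "cn_1_resize_mode": "Inner Fit (Scale to Fit)",
   "cn_1_vid_path": "uploads/controlnet/gradient.mp4"
 }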

Models

For animating people, I think the Canny models are very good. You can even combine them with the Depth models to get more… depth.

For animating abstract 3D shapes, rooms, or objects, I think the Depth models are pretty good.

If you already export a depth map or a black-and-white 3D image or video from a 3D application, then you don't need the preprocessor for the depth models.

Some good depth models are:

 sargezt_xl_depth [0e5ed0e4]
 sai_xl_depth_128lora [f5ddde75]
 kohya_controllllite_xl_depth_anime [d52fdcf0]
 kohya_controllllite_xl_depth [9f425a8d]
 diffusers_xl_depth_full [2f51180b]

If you want to play around more artistically with the contrast of images, I recommend the qrpatternSdxl_v01256512eSdxl model. That's the one used to generate creative QR codes, but you can use it for so many cool things. I recommend playing around with it. No preprocessor needed.

Movements

I always switch the animation mode to 3D, even if I don't have any movement at all. It seems to work without errors.

Translation Z is the zoom:

 0:(0)
 0:(0.3)
 0:(-0.3)
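These values can also be keyframed over time. In the settings file the animation mode and the schedules are plain fields, roughly like this (just a sketch; the frame numbers and values are examples):

 {
   "animation_mode": "3D",
   "translation_z": "0:(0), 60:(0.3), 120:(0)"
 }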

I also recommend looking into Parseq, a tool which makes controlling the 3D movement much easier. It is still not as intuitive as a 3D computer game, though.

But it is possible to use Blender to export a 3D camera movement. Have a look here.

And I have already prepared some 3D animations which you can download here.

I hope that this helps some people.

I personally will use this document to copy and paste settings into my Stable Diffusion, because I tend to forget things and because I am lazy.

Much love!

Marius
