Rough Sketches, Real Results: How a Simple Drawing Teaches AI to Restyle Hair
AI Hairstyles in Stable Diffusion!
You don't need to be an artist to guide a masterpiece. Sometimes a wobbly white line on a black canvas is all it takes to reshape reality... pixel by pixel.
Sebastian Torres dropped a tutorial that hit different for me. Not because I'm suddenly a Stable Diffusion wizard... but because the core idea underneath the tech is something I've watched play out in human lives for decades.
A rough sketch can guide a beautiful outcome.
Let that sit for a second.
The Setup
Sebastian walks through two methods for changing hairstyles and hair color in Stable Diffusion using ControlNet and inpainting. The first method uses a hand-drawn sketch... white lines on a black background... to tell the AI what shape the new hair should take. The second uses a blocky color palette painted in Photoshop to control multi-colored results through the T2I Color Adapter.
Both methods start the same way: with something imperfect that points toward something whole.
Method One: The Sketch That Guides
Here's what stopped me. Sebastian opens Photoshop, creates two layers... one black, one for drawing. Then with a thin white brush, he sketches the hairstyle he wants. Not a masterpiece. Not a detailed rendering. Just a rough outline of where the hair should flow.
That sketch gets loaded into ControlNet's Canny model. The AI reads those wobbly lines and uses them as guardrails for generation.
The specific settings matter, so here are the receipts:
- Checkpoint: Realistic Vision V4.0 inpainting (from Civitai)
- Prompt: RAW photo, blonde hair, high detail
- Sampling: DPM++ SDE Karras, 40 steps
- CFG Scale: 3.5 (low... this keeps things natural)
- Denoising: 0.6 to 0.7
- ControlNet: Canny model, preprocessor set to none, control weight 1.1, "ControlNet is more important" mode
- Resolution: match your original aspect ratio, longest side capped at 1024 pixels
The key insight... setting the preprocessor to "none" forces the model to use your full reference image without downsampling. Sometimes the cleanest path is the most direct one.
For styles that basic prompting can't reliably produce, Sebastian pulls in LoRA files trained on specific hairstyles from Civitai. Load the LoRA, load the matching sketch, generate. The combination of a human-drawn guide and a style-specific model creates results that neither could achieve alone.
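If you'd rather drive these settings from a script than click through the UI, the Method One recipe above maps fairly directly onto the Automatic1111 img2img API. This is a sketch under assumptions: it assumes a local webui launched with the `--api` flag, and the ControlNet model string (`control_v11p_sd15_canny`) is a placeholder... match it to whatever Canny model is actually installed on your instance.

```python
import base64


def b64_image(path):
    """Read an image file and base64-encode it for the JSON payload."""
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("utf-8")


def build_method_one_payload(photo_path, sketch_path):
    """Mirror the tutorial's Method One settings as an
    /sdapi/v1/img2img payload (Automatic1111 API schema)."""
    return {
        "init_images": [b64_image(photo_path)],
        "prompt": "RAW photo, blonde hair, high detail",
        "sampler_name": "DPM++ SDE Karras",
        "steps": 40,
        "cfg_scale": 3.5,            # low... keeps results natural
        "denoising_strength": 0.65,  # tutorial range: 0.6 to 0.7
        "width": 768,                # match your source aspect ratio,
        "height": 1024,              # longest side capped at 1024
        "alwayson_scripts": {
            "controlnet": {
                "args": [{
                    "input_image": b64_image(sketch_path),
                    "module": "none",  # preprocessor off: use the raw sketch
                    # assumption: the Canny model name on your install
                    "model": "control_v11p_sd15_canny",
                    "weight": 1.1,
                    "control_mode": "ControlNet is more important",
                }]
            }
        },
    }


# Sending it (requires the `requests` package and a webui started with --api):
# import requests
# r = requests.post("http://127.0.0.1:7860/sdapi/v1/img2img",
#                   json=build_method_one_payload("photo.png", "sketch.png"))
```

The payload is built separately from the network call on purpose... you can inspect or log the exact settings before spending GPU time on a generation.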
Method Two: The Color Map
This is where it gets fun.
Sebastian paints a rough color palette in Photoshop... big blocky brush strokes mapping out where red goes, where black goes, where the background colors live. Not precise. Not polished. Just intentional placement of color.
That palette gets loaded into ControlNet's second unit using the T2I Color Adapter (T2IA color grid preprocessor, T2I adapter color SD14 V1 model). Processor resolution cranked to 2048 pixels. The prompt shifts to match... "red and black hair."
The result? Multi-colored hair that follows the painted guide while faithfully preserving the original background. The AI reads the rough map and translates it into something seamless.
BAM... a finger painting becomes a professional portrait edit.
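Method Two slots into the same payload idea as a second entry in the ControlNet unit list. Another hedged sketch: the `module` and `model` strings below (`t2ia_color_grid`, `t2iadapter_color_sd14v1`) are assumptions about how the T2I Color Adapter is registered in a typical install... verify the exact names your ControlNet extension exposes before relying on them.

```python
def add_color_adapter_unit(payload, palette_b64):
    """Append a T2I Color Adapter unit (Method Two) to an img2img
    payload's ControlNet unit list, and retarget the prompt."""
    unit = {
        "input_image": palette_b64,          # the blocky Photoshop color map
        "module": "t2ia_color_grid",         # assumption: preprocessor name
        "model": "t2iadapter_color_sd14v1",  # assumption: adapter model name
        "processor_res": 2048,               # cranked resolution, per the tutorial
    }
    # Create the nested structure if it isn't there yet, then append.
    payload.setdefault("alwayson_scripts", {}) \
           .setdefault("controlnet", {}) \
           .setdefault("args", []) \
           .append(unit)
    payload["prompt"] = "RAW photo, red and black hair, high detail"
    return payload
```

Because it appends rather than replaces, the Canny sketch unit from Method One can stay in slot zero while the color map rides along in slot one.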
The Bonus That Changes Everything
Sebastian's closing tip is the one I'd pin to every creative wall: none of these settings are set in stone. He demonstrates turning the color grid preprocessor completely off... forcing the model to interpret the raw image rather than a processed grid. Side-by-side comparison shows cleaner blending, higher quality results.
His advice applies to most ControlNet models. Play with the preprocessor set to none. See what happens. The tool that's supposed to help you might actually be standing between you and your best work.
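That experimentation habit is easy to automate. A minimal sketch of an A/B harness: given any payload that already carries a ControlNet unit, it yields one variant per preprocessor setting so you can generate and compare side by side. The default module names here are illustrative, not canonical.

```python
import copy


def preprocessor_ab_test(base_payload, unit_index=0,
                         modules=("t2ia_color_grid", "none")):
    """Yield (module_name, payload) pairs, one per preprocessor
    setting, leaving the base payload untouched."""
    for module in modules:
        variant = copy.deepcopy(base_payload)
        variant["alwayson_scripts"]["controlnet"]["args"][unit_index]["module"] = module
        yield module, variant
```

Run each variant through the same generation call with a fixed seed and the only difference in the output is the preprocessor... exactly the comparison Sebastian does by hand.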
Sound familiar? 🤷‍♂️
The Deeper Thread
Here's what I keep circling back to. Every method in this tutorial starts with something rough, imperfect, and human-made... then hands it to a powerful system that turns it into something extraordinary.
A wobbly sketch becomes flowing hair. A blocky color map becomes seamless multi-toned color. The human provides direction. The tool provides execution. Neither works without the other.
I've spent my life watching this pattern in people. A rough plan... a shaky first step... a barely-formed dream written on the back of a napkin. That's the sketch. And when you feed that imperfect beginning into the right environment... the right mentors, the right systems, the right community... something beautiful generates.
You don't need a perfect plan to create something stunning. You need a direction and the willingness to iterate. Set your denoising strength between 0.6 and 0.7... not zero (where nothing changes) and not one (where everything gets destroyed). Find the range where enough of the original remains while enough transformation happens.
That's not just an AI image editing principle. That's a life principle.
Tools Referenced
- Automatic1111 WebUI for Stable Diffusion
- ControlNet extension (Canny model + T2I Color Adapter)
- Realistic Vision V4.0 Inpainting checkpoint
- LoRA hairstyle models from Civitai
- Photoshop (or equivalent) for sketch and color map creation
Sebastian's tutorial is clean, efficient, and immediately actionable. But underneath the sliders and settings lives a truth worth carrying into whatever you're building today... your rough sketch is enough to start. Don't wait for the perfect plan. Draw the wobbly lines. Paint the blocky colors. Feed your imperfect beginning into the process and let generation happen. The AI doesn't need your perfection. It needs your direction. So does everything else worth building. ✨
--- Source: https://www.youtube.com/watch?v=kHZMzpw0Y2I
From TIG's Notebook
Thoughts that surfaced while watching this.
Living the lives we want not only requires doing the right things but also necessitates not doing the things we know we'll regret. — *Nir Eyal, Indistractable* (Core Principles)
Finding that special place where work and play intertwine is magical for creating deep neural connections. (New Captures)
Echoes
Wisdom from across the constellation that resonates with this article.
This shift towards financial accessibility isn't just changing individual lives. It's building an ecosystem where everyone has a fair shot at prosperity.
Nobody's ever read a comment and been like, 'You know what? I've changed my mind.' Said no one ever.
Scientists are using ultrafast lasers to permanently etch data inside glass that lasts 10,000 years, retrieved by robotic library systems and decoded by AI.