I saw a photo on Instagram and wanted to make one.
That's how this started. Someone had taken a wave photo and stretched a single row of pixels across the frame. The sky became smooth gradients, the water turned to glass, and the surfer stayed sharp in the middle of it all. It looked like a painting made from a photograph.
The technique has existed in Photoshop for years — select a 1-pixel row with the marquee tool, free transform, drag it across the image, mask the subject, flatten, repeat. It's tedious and there's no preview. You commit to each step blind and undo if it doesn't work. I wanted something where I could just drag a line, see what happens, and make decisions in real time.
So I built this tool. You pick which row of pixels to stretch, you define where the stretch applies, you paint a mask over whatever you want to protect, and you see the result immediately. The whole thing runs in your browser — your photos never leave your device, there's no server doing anything, no AI making decisions for you.
That last part matters to me. The algorithm is dead simple — it copies pixels. The interesting part is which pixels you choose, where you put the boundaries, and what you decide to protect. Those are human decisions that change everything about the result. Move the source strip two percent and the entire mood shifts. Protect the reflection instead of the subject and suddenly the image means something different.
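The core operation described above — copy one row of pixels across a region, skipping anything the mask protects — really is that simple. Here is a minimal sketch of the idea; the function name, parameters, and list-of-rows image representation are my own illustration, not the tool's actual code (which runs on browser canvas data):

```python
def pixel_stretch(image, source_row, zone_start, zone_end, mask=None):
    """Stretch one row of pixels across a vertical zone.

    image: a list of rows, each row a list of pixel values
           (a real implementation would use RGBA canvas data).
    source_row: index of the row to replicate.
    zone_start, zone_end: the rows the stretch applies to.
    mask: optional 2D list of the same shape; truthy cells are
          protected and keep their original pixels.
    """
    strip = image[source_row]
    out = [row[:] for row in image]  # copy so the input is untouched
    for y in range(zone_start, zone_end):
        for x in range(len(strip)):
            if mask is None or not mask[y][x]:
                out[y][x] = strip[x]
    return out
```

Everything interesting lives in the arguments, not the loop: `source_row` is the human choice of which strip to stretch, the zone bounds are the human choice of where, and the mask is the human choice of what to protect.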
The best results I've made with this tool came from flipping a photo upside down so the water reflection became the sky, and then stretching the "ground" into colored bands. It looked like a dream — two people standing on a pier floating above nothing. I couldn't have planned that. I just dragged things around until something felt right. That's what I wanted the tool to feel like.
Tools that keep the decisions yours.
h2eart is a small collection of tools built around a simple idea: the creative judgment should stay with the person using the tool. Not automated, not optimized, not "enhanced" by a model that thinks it knows what you want. You make the call, the tool does the math.
This isn't an anti-AI position. I use AI to build these tools. The code behind pixel-stretch was written with AI assistance. But there's a difference between using AI as a collaborator in building something and handing AI the creative controls of the thing you built. I'm interested in the first one.
Powerful creative techniques are often locked behind complex software. Pixel stretching has been possible in Photoshop since the 90s, but most people will never learn a marquee tool workflow. That doesn't mean they lack taste or ideas — it means the interface is a barrier. Remove the barrier without removing the decisions and you get something interesting: more people making things that reflect their own judgment, not a software default.
I think about this as simplifying without flattening. An AI that pixel-stretches your photo for you would produce competent results — it could detect the subject, pick a good source row, set reasonable zone boundaries. But it would also produce the same results for everyone. The version where you drag the lines yourself is messier and less optimized, but it's yours. The mistakes are yours. The happy accidents are yours. The moment where you flip an image upside down just to see what happens — that's not something an optimization function would ever try.
Every tool on h2eart follows this pattern. The execution is algorithmic — deterministic, repeatable, no black box. The taste is human. The gap between "anyone could use this" and "not everyone would make that" is where the interesting work happens. I want to make tools that widen access to the first part without closing down the second.
If you've been looking for tools like these, now you know where to find them.