Painting embodies a unique form of visual storytelling, where the creation process is as significant as the final artwork. Although recent advances in generative models have enabled visually compelling painting synthesis, most existing methods focus solely on final-image generation or patch-based process simulation, lacking explicit stroke structure and failing to produce smooth, realistic shading. In this work, we present a differentiable stroke reconstruction framework that unifies painting, stylized texturing, and smudging to faithfully reproduce the human painting–smudging loop. Given an input image, our framework first optimizes single- and dual-color Bézier strokes through a parallel differentiable paint renderer, followed by a style generation module that synthesizes geometry-conditioned textures across diverse painting styles. We further introduce a differentiable smudge operator to enable natural color blending and shading. Coupled with a coarse-to-fine optimization strategy, our method jointly optimizes stroke geometry, color, and texture under geometric and semantic guidance. Extensive experiments on oil, watercolor, ink, and digital paintings demonstrate that our approach produces realistic and expressive stroke reconstructions, smooth tonal transitions, and richly stylized appearances, offering a unified model for digital painting creation.
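To make the core idea concrete, the following is a minimal, self-contained sketch of fitting a single quadratic Bézier stroke to a target by gradient descent on a soft (sigmoid-edged) coverage mask. All names, constants, and the finite-difference gradients are illustrative stand-ins: the framework described above uses a parallel differentiable renderer with analytic gradients over many single- and dual-color strokes, not this toy single-stroke loop.

```python
import math

def rasterize(ctrl, size=16, radius=0.10, samples=16):
    """Soft coverage mask of one quadratic Bezier stroke on a size x size grid."""
    pts = []
    for i in range(samples):
        t = i / (samples - 1)
        u = 1.0 - t
        # quadratic Bezier: B(t) = (1-t)^2 P0 + 2(1-t)t P1 + t^2 P2
        pts.append((u*u*ctrl[0][0] + 2*u*t*ctrl[1][0] + t*t*ctrl[2][0],
                    u*u*ctrl[0][1] + 2*u*t*ctrl[1][1] + t*t*ctrl[2][1]))
    img = []
    for r in range(size):
        row = []
        for c in range(size):
            px, py = (c + 0.5) / size, (r + 0.5) / size
            d = min(math.hypot(px - x, py - y) for x, y in pts)
            # sigmoid edge profile keeps coverage smooth in the stroke parameters
            row.append(1.0 / (1.0 + math.exp((d - radius) / (0.25 * radius))))
        img.append(row)
    return img

def loss(ctrl, target):
    img = rasterize(ctrl)
    n = len(img)
    return sum((img[r][c] - target[r][c]) ** 2
               for r in range(n) for c in range(n)) / (n * n)

# Fit one stroke to the rendering of a known arc.
target = rasterize([(0.2, 0.2), (0.5, 0.9), (0.8, 0.2)])
ctrl = [[0.3, 0.3], [0.5, 0.5], [0.7, 0.7]]   # initial guess: a diagonal stroke
eps, lr = 1e-3, 0.05
for _ in range(60):
    base = loss(ctrl, target)
    grads = [[0.0, 0.0] for _ in range(3)]
    for i in range(3):
        for j in range(2):
            ctrl[i][j] += eps
            grads[i][j] = (loss(ctrl, target) - base) / eps   # numerical gradient
            ctrl[i][j] -= eps
    for i in range(3):
        for j in range(2):
            ctrl[i][j] -= lr * grads[i][j]
```

Because the coverage falls off smoothly rather than as a hard edge, the pixel loss carries gradient signal back to the control points, which is the property that makes joint optimization of stroke geometry and color possible.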
Our pipeline reconstructs the painting process from a finished artwork. Given a target painting, we decompose it into an ordered sequence of brushstrokes using our differentiable rendering framework. The method generates realistic strokes that progressively build up the final painting, capturing the artist's style and technique. Our approach supports multiple painting mediums, including oil, watercolor, ink, and digital art, enabling reconstruction across diverse artistic scenarios.
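The progressive, coarse-to-fine buildup can be illustrated with a deliberately simplified greedy loop: at each scale, repeatedly place a constant-color square "stroke" over the window with the largest reconstruction error. The function name, the square strokes, and the greedy error criterion are all hypothetical simplifications; the actual framework optimizes Bézier stroke parameters with gradients rather than searching windows.

```python
def paint_coarse_to_fine(target, scales=(8, 4, 2), strokes_per_scale=48):
    """Greedy stand-in for progressive stroke decomposition: at each scale,
    fill the square window with the largest remaining reconstruction error."""
    n = len(target)
    canvas = [[0.5] * n for _ in range(n)]           # neutral-grey canvas
    order = []                                        # recovered "painting process"
    for s in scales:                                  # coarse strokes first
        for _ in range(strokes_per_scale):
            best, br, bc = -1.0, 0, 0
            for r in range(n - s + 1):
                for c in range(n - s + 1):
                    e = sum(abs(target[r + i][c + j] - canvas[r + i][c + j])
                            for i in range(s) for j in range(s))
                    if e > best:
                        best, br, bc = e, r, c
            # stroke colour: mean of the target under the chosen window
            col = sum(target[br + i][bc + j]
                      for i in range(s) for j in range(s)) / (s * s)
            for i in range(s):
                for j in range(s):
                    canvas[br + i][bc + j] = col
            order.append((s, br, bc, col))
    return canvas, order

# Toy target: a horizontal luminance ramp on a 16 x 16 grid.
n = 16
target = [[c / (n - 1) for c in range(n)] for r in range(n)]
canvas, order = paint_coarse_to_fine(target)
```

The returned `order` list plays the role of the reconstructed stroke sequence: large strokes block in broad regions first, and smaller ones refine detail, mirroring how the pipeline's coarse-to-fine strategy builds up the final painting.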