Live webcam portrait via OpenFrameworks FluidFlow
Turn a webcam into a living portrait of currents and vortices.
Noir Donut
A single torus hangs in low light, its silhouette softened by drifting haze. The ring’s matte surface drinks the dim, catching only hints of rim light that outline volume and mystery. Atmospherics swell and thin—smudged smoke, a distant neon smear—suggesting a room just out of focus. No props, no context; the shape and the mood do the storytelling.
Fragment Flow sculpted the piece: layered noise and directional blur compose the fog, while subtle subsurface scattering and specular arrest on the torus betray materiality (ceramic? rubber?) without naming it. The palette is restrained: deep charcoal, bruised indigo, and a molten glare along the rim.
Dead animals on the ground become elevated to emblematic status using Fragment Flow. This illuminated squirrel was adorned with flowers when the original photo was taken.
Dead animals on the ground become elevated to emblematic status using Fragment Flow. A blended photo of a tapestry forms an embedded backdrop for our avian friend.
Playing with VVVV Gamma is addictive. Simple separation, alignment, and cohesion rules make boids move in lifelike ways, and they pair well with audio engines such as SuperCollider 3 linked via OSC to produce spatial sound and immersive interaction.
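The three rules are easiest to see in code. Here is a minimal boids sketch in plain Python (a stand-in for a VVVV Gamma patch, not Gamma code); all weights, the neighbor radius, and the 100x100 field are illustrative choices.

```python
import math
import random

class Boid:
    """A point agent with position and velocity in a 100x100 field."""
    def __init__(self):
        self.x, self.y = random.uniform(0, 100), random.uniform(0, 100)
        self.vx, self.vy = random.uniform(-1, 1), random.uniform(-1, 1)

def step(boids, radius=15.0, sep_w=0.05, ali_w=0.05, coh_w=0.01, max_speed=2.0):
    for b in boids:
        neighbors = [o for o in boids
                     if o is not b and math.hypot(o.x - b.x, o.y - b.y) < radius]
        if neighbors:
            n = len(neighbors)
            # Cohesion: steer toward the neighbors' average position.
            cx = sum(o.x for o in neighbors) / n
            cy = sum(o.y for o in neighbors) / n
            b.vx += (cx - b.x) * coh_w
            b.vy += (cy - b.y) * coh_w
            # Alignment: match the neighbors' average velocity.
            b.vx += (sum(o.vx for o in neighbors) / n - b.vx) * ali_w
            b.vy += (sum(o.vy for o in neighbors) / n - b.vy) * ali_w
            # Separation: push away from each nearby neighbor.
            for o in neighbors:
                b.vx += (b.x - o.x) * sep_w
                b.vy += (b.y - o.y) * sep_w
        # Clamp speed so the flock stays stable.
        speed = math.hypot(b.vx, b.vy)
        if speed > max_speed:
            b.vx, b.vy = b.vx / speed * max_speed, b.vy / speed * max_speed
    for b in boids:
        b.x += b.vx
        b.y += b.vy

boids = [Boid() for _ in range(30)]
for _ in range(100):
    step(boids)
```

In a patch, each of the three steering terms would be its own node chain, which is exactly why prototyping the weights interactively in Gamma feels so immediate.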
Add interactivity: map mouse, touch, or sensors to steering so audiences nudge flocks; use local density, speed, or nearest neighbors to trigger sound changes. Attach a granular synth to each boid that reacts to speed, or have the swarm open/close filters as it compresses. Gamma’s node-based flow makes prototyping fast.
Gamma’s .NET foundation (compared with Beta) improves performance and tooling, making it easier to integrate external libraries, custom nodes, and audio/MIDI/OSC. Better threading and async IO keep visuals and audio in sync even when many agents are running.
Ideas:
Map proximity to panning and reverb for cavernous clusters.
Use per-boid LFOs to modulate timbre, synced to flock oscillations or tempo.
Add audience-driven attraction points (depth cameras, beacons).
Implement predator/prey, goal-driven flocking, or “sleep” modes.
Record/buffer motion for generative loops.
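The first idea, proximity driving panning and reverb, can be sketched as a small mapping function. Everything here is an assumption for illustration: boids reduced to (x, y) tuples, a 100-unit-wide field, a 15-unit neighbor radius, and OSC-style addresses like "/boid/pan" that are placeholders, not a SuperCollider convention.

```python
import math

def sound_params(positions, index, radius=15.0, field_width=100.0):
    """Map one boid's state to (address, boid_index, value) messages.

    positions: list of (x, y) tuples for the whole flock.
    Pan follows the boid's x position; reverb mix grows with local
    density, so tight clusters sound more cavernous.
    """
    x, y = positions[index]
    # Pan: map x in [0, field_width] to [-1.0, 1.0], clamped.
    pan = max(-1.0, min(1.0, (x / field_width) * 2.0 - 1.0))
    # Density: fraction of the rest of the flock inside the radius.
    near = sum(1 for i, (ox, oy) in enumerate(positions)
               if i != index and math.hypot(ox - x, oy - y) < radius)
    reverb = near / max(1, len(positions) - 1)
    return [("/boid/pan", index, pan), ("/boid/reverb", index, reverb)]

# A boid at field center with one close neighbor and one distant one:
msgs = sound_params([(50.0, 50.0), (52.0, 50.0), (90.0, 10.0)], 0)
# -> [("/boid/pan", 0, 0.0), ("/boid/reverb", 0, 0.5)]
```

In practice these values would be sent as OSC bundles once per frame, with smoothing on the receiving synth so parameter jumps do not click.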
I can sketch a VVVV Gamma patch for boid behavior and audio routing, or suggest node sets and .NET APIs for audio integration. Which would you prefer?