Everything is a Signal

WEFT is a creative coding language where all media—images, audio, video, or anything else—work the same way. You give it a position, it gives you a value. An image takes an (x, y) coordinate and returns a color. Audio takes a moment in time and returns a loudness. Video takes both a position and a time and returns a color. WEFT calls this a signal, and because everything is a signal, different media are all just the same concept with different coordinates.
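To make that concrete, here is a minimal sketch in Python rather than WEFT syntax: each signal is just a function from coordinates to a value. The function names and the value ranges are illustrative assumptions for this sketch, not part of WEFT.

```python
import math

# A sketch of "coordinates in, value out" in plain Python (not WEFT syntax).
# Names and value ranges here are illustrative assumptions.

def checkerboard(x, y):
    """An image signal: (x, y) -> color, here a grayscale value in [0, 1]."""
    return 1.0 if (int(x // 8) + int(y // 8)) % 2 == 0 else 0.0

def sine_tone(t, freq=440.0):
    """An audio signal: time in seconds -> loudness, an amplitude in [-1, 1]."""
    return math.sin(2.0 * math.pi * freq * t)

def pulsing_checker(x, y, t):
    """A video signal: (x, y, t) -> color, the checkerboard fading over time."""
    return checkerboard(x, y) * (0.5 + 0.5 * math.sin(2.0 * math.pi * t))
```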

This uniformity is what lets you mix media freely. When audio and images are both just “coordinates in, value out,” there’s nothing stopping you from using one inside the other. You can use audio volume to move pixels, sample colors along a waveform, or let brightness values control a synthesizer. You don’t need to convert between formats or wire separate systems together; you can just use values where you need them.
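Here is a sketch of that kind of cross-wiring, again in plain Python rather than WEFT: the audio signal’s current volume displaces where the image signal gets sampled. Every name and constant here is invented for the example.

```python
import math

def volume(t):
    """Audio signal: a slow swell, time -> loudness in [0, 1]."""
    return 0.5 + 0.5 * math.sin(2.0 * math.pi * 0.5 * t)

def gradient(x, y):
    """Image signal: a horizontal gradient, assuming a 640-pixel-wide frame."""
    return x / 640.0

def wobble(x, y, t):
    """Video signal: the image shifted sideways by however loud the audio is."""
    shift = volume(t) * 50.0   # louder -> larger displacement, in pixels
    return gradient(x - shift, y)
```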

When you render a frame or play audio, every output point is asking your code a question: what value exists here? A pixel wants to know what color it should be given its position, and a sample wants to know how loud it should be at its moment in time. WEFT asks this question for every pixel and every sample, each time with different coordinates, so you never loop over pixels or iterate through buffers. You write the answer once, and WEFT asks it everywhere.
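One way to picture this evaluation model is a toy renderer that plays WEFT’s role: it owns the loops and calls your signal once per output point. This is only a conceptual sketch in Python, not WEFT’s actual runtime.

```python
def render_frame(signal, width, height, t):
    """Ask an (x, y, t) -> color signal its question at every pixel of one frame."""
    return [[signal(x, y, t) for x in range(width)] for y in range(height)]

def render_audio(signal, sample_rate, seconds):
    """Ask a time -> loudness signal its question at every sample."""
    count = int(sample_rate * seconds)
    return [signal(i / sample_rate) for i in range(count)]

# You only ever write the signal; the loops above belong to the runtime.
frame = render_frame(lambda x, y, t: x / 320.0, width=320, height=240, t=0.0)
clip = render_audio(lambda t: 0.0, sample_rate=44100, seconds=1.0)
```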

Working with signals means thinking a bit differently than you might be used to. Instead of moving data from one place to another, you’re describing where to look. To shift an image right by 10 pixels, you don’t move pixels; you say “when asked about position x, check what’s at x - 10.” The data stays where it is, but your question changes. Once you see it this way, most effects become variations on the same idea: blur averages nearby values, delay looks at an earlier time, and distortion samples from somewhere unexpected.
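The same “change the question, not the data” move can be sketched as coordinate transformations in Python. Each helper wraps a signal and asks the original about a different place or time; the names are invented for illustration, not WEFT built-ins.

```python
def shift_right(image, dx):
    """Shift right by dx pixels: when asked about x, look at x - dx."""
    return lambda x, y: image(x - dx, y)

def box_blur(image, radius=1):
    """Blur: answer with the average of nearby positions."""
    def blurred(x, y):
        offsets = range(-radius, radius + 1)
        samples = [image(x + i, y + j) for i in offsets for j in offsets]
        return sum(samples) / len(samples)
    return blurred

def delay(audio, seconds):
    """Delay: when asked about time t, look at the earlier time t - seconds."""
    return lambda t: audio(t - seconds)
```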