Wreckage Eternal Dev Log
Let's dig into the mechanics of Wreckage Systems.
Its fundamental approach to audio favours samples over synthesis. While this is primarily down to the way the software it was built with operates, my tired and noisy heart also tends to favour sampling over synthesis for a lot of things, especially when it comes to beat-making.
(For any non-musicians reading, 'sampling' is colloquially used to mean recording snippets of sound, often bits of other music, to make beats or loops of some description. But in the context of Wreckage Systems and this blog post, by sampling I really mean any 'recorded audio' as opposed to synthesised audio. So, for example, a five-minute field recording of some rain is sampling, as opposed to a software emulation of a Moog synthesiser running inside a computer. That would be 'synthesis'.)
Synthesis is an elegant and precise organisation of sound waves, a kind of domination over frequencies, imperious and mighty. It colonises noise, shapes it in its own image. Sampling meanwhile is the piracy of the high seas. To sample is to pull flotsam from the ocean of life and time, or else to seize it, disguise it, and turn it to your own ends. Synthesis weaves sonics from the air, whereas sampling captures the air, casts off its temporality, and creates collages of moments. Synthesis describes time; sampling reveals time travel.
In terms of finished music, I would choose analogue warmth and scruffy noise over caustic digital fidelity any day, but in terms of wielding sounds to make it in the first place, sampling will always be my true love.
WRECKAGE ETERNAL DEV LOG
Wreckage Eternal is now officially in development. As mentioned in the last post, at some point (unless it all comes tumbling down) I'll migrate this dev blog over to the 65LABS Patreon since, y'know, they are technically funding this. Hopefully that moment comes sooner rather than later, because that would mean I am feeling pretty confident this thing has legs.
For now though, here's a little round-up of progress.
WRECKAGE FM
There is now a Wreckage Eternal Test Broadcast Station. Not sure if this embed will work via email, but look:
This will mostly be broadcasting a holding pattern, but there have been a couple of test streams so far. As things start to come together, there will be more. It's hooked up to the Wreckage back-channel over on the 65Discord to send a notification whenever it goes live. (Although I think that channel is only for Wreckage Systems patrons?)
WRECKAGE INSTRUMENTS
As explained above, the current version of Wreckage Systems is entirely sample-based. This new open-source rebuild, based on the ChucK language, does allow for synthesis too and hopefully that will be a good way to eventually evolve this into something more than just a reboot of the same systems. (Really, I feel like this whole project is only worth pursuing if it can push Wreckage Systems somewhere new.)
For now though, the task is to get sample-based systems up and running. To that end, so far, here are the musical building blocks ('classes' in programming speak) that have been created:
SoundEvent - This is the fundamental building block. It is a single audio file. That might be a five-minute-long sample of some rain, it might be an entire song, or it could be a single kick drum. Each SoundEvent contains the audio file itself, an optional low-pass filter (more FX might be added here later), plus various bits of metadata that can be used by other classes.
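To make the shape of that a bit more concrete, here's a rough sketch in Python (the actual build is in ChucK; the field names and constructor here are my illustrative assumptions, not the real class):

```python
class SoundEvent:
    """One audio file, an optional low-pass filter, plus metadata.
    Sketch only: the 'audio' is just a file path and a duration here."""
    def __init__(self, path, duration, lowpass_hz=None, **metadata):
        self.path = path              # e.g. "samples/rain.wav"
        self.duration = duration      # length in seconds
        self.lowpass_hz = lowpass_hz  # cutoff in Hz; None = filter bypassed
        self.metadata = metadata      # tempo, key, tags... for other classes

# e.g. a long rain recording with the top end rolled off
rain = SoundEvent("samples/rain.wav", duration=300.0,
                  lowpass_hz=800, tags=["ambience"])
```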
SamplePool - This is a collection of SoundEvents. (It should probably be renamed to SoundEventCollection tbh). This can be filled up with lots of SoundEvents (imagine lots of cut-up breakbeats at the same tempo, or a bunch of piano variations for the same song). This collection can then be used by other classes to quickly get a random SoundEvent from a range of possible options.
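In sketch form, a SamplePool barely needs to do anything beyond holding events and handing one back at random (again a Python stand-in with assumed method names, not the ChucK original):

```python
import random

class SamplePool:
    """A bag of SoundEvents that other classes can draw from at random.
    Plain strings stand in for SoundEvents in this sketch."""
    def __init__(self, events=None):
        self.events = list(events or [])

    def add(self, event):
        self.events.append(event)

    def random_event(self):
        return random.choice(self.events)

breaks = SamplePool(["amen_01.wav", "amen_02.wav"])
breaks.add("amen_03.wav")
picked = breaks.random_event()
```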
Instruments - These are machines that do something with SoundEvents. So far there are five of them:
Scatterer - This instrument has a SamplePool, plus a 'minimum time' and a 'maximum time'. When it is running, it picks a random SoundEvent from its SamplePool, plays it, then pauses for a random duration between its minimum and maximum times, then does it again.
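One pass of that loop looks something like this (Python sketch; in the real system this would run forever on its own thread of execution, and the names are assumptions):

```python
import random

class Scatterer:
    """Pick a random event from the pool and report how long to
    pause before the next one. A plain list stands in for a SamplePool."""
    def __init__(self, pool, min_time, max_time):
        self.pool = pool
        self.min_time = min_time  # seconds
        self.max_time = max_time

    def step(self):
        event = random.choice(self.pool)
        pause = random.uniform(self.min_time, self.max_time)
        return event, pause

guitar = Scatterer(["strum_a.wav", "strum_b.wav"],
                   min_time=5.0, max_time=30.0)
event, pause = guitar.step()
```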
Looper - Even simpler than the Scatterer, this holds a single SoundEvent. When it is running, it plays that SoundEvent on a loop, with a few different options about the loop length, how it should crossfade at the loop point, things like that.
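The interesting bit of a Looper is the seam. A minimal way to think about it, sketched in Python with assumed parameters (real crossfading would happen in audio-rate code):

```python
class Looper:
    """Loop one event; near the loop point, fade out so the
    next pass of the loop can fade in over the top."""
    def __init__(self, duration, loop_length=None, crossfade=0.5):
        self.loop_length = loop_length or duration  # default: loop whole file
        self.crossfade = crossfade                  # seconds of overlap at seam

    def playhead(self, elapsed):
        # where we are inside the loop, given total elapsed time
        return elapsed % self.loop_length

    def gain(self, elapsed):
        # full volume mid-loop, linear fade-out approaching the seam
        remaining = self.loop_length - self.playhead(elapsed)
        if remaining < self.crossfade:
            return remaining / self.crossfade
        return 1.0

rain = Looper(duration=300.0, loop_length=10.0, crossfade=1.0)
```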
Seed - This instrument is a Wreckage Systems special. Very simple, surprisingly effective. It has a single SoundEvent, which is typically an entire song. It has a min/max 'pause time', and it also has a min/max 'planting speed'. When it is running, every X seconds (between its min/max pause time), it 'plants' between 3 and 5 seeds, with a very small pause (based on its min/max planting speed) between planting each one. And what is 'planting a seed'? It means playing an excerpt of the SoundEvent, around 4 seconds of it, starting from a random point, with a smooth fade in and fade out. All of these seeds are planted so close together they overlap, all of it is sent through a nice long reverb, and the result is a lovely kind of drone-y ambience. Seed systems are already up and running and since they are relatively simple, I hope before too long to have them on rotation over at the Test Broadcast station.
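The scheduling side of one round of planting can be sketched like this (Python; the exact timing numbers here are illustrative guesses, not the real system's values):

```python
import random

def plant_seeds(song_length, excerpt=4.0,
                min_speed=0.2, max_speed=0.8):
    """One round of 'planting': 3-5 overlapping ~4s excerpts, each
    starting at a random point in the song, each offset slightly
    from the last. Returns (start_in_song, when_to_play) pairs."""
    seeds = []
    when = 0.0
    for _ in range(random.randint(3, 5)):
        start = random.uniform(0.0, song_length - excerpt)
        seeds.append((start, when))
        when += random.uniform(min_speed, max_speed)  # 'planting speed'
    return seeds

seeds = plant_seeds(song_length=240.0)
```

Because the gaps between plantings are much shorter than the 4-second excerpts, the seeds always overlap, which (plus the long reverb) is where the drone comes from.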
Piano - This instrument is a very basic piano. It has seven SoundEvents (the C notes of a piano from C0 - C6). These are mapped across the full note range and make up a fine-sounding if not particularly high-fidelity sampled piano for use wherever it might be needed.
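One common way to stretch seven C samples across the whole keyboard is to repitch the nearest C by changing its playback rate. A sketch of that mapping (Python; this is my assumption about the approach, using the C4 = MIDI 60 convention):

```python
def nearest_c(midi_note):
    """Map any MIDI note to the nearest sampled C (C0-C6) and the
    playback rate needed to repitch that sample to the target note."""
    c_notes = [12 * octave + 12 for octave in range(7)]  # C0..C6
    base = min(c_notes, key=lambda c: abs(midi_note - c))
    rate = 2 ** ((midi_note - base) / 12)                # semitone ratio
    return base, rate
```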
Repeater - Mostly just a test instrument at the moment. Imagine a Scatterer, but instead of deliberately random timings designed for ambience, a Repeater is locked to a global BPM and plays random SoundEvents very precisely at various different divisions of a beat or bar. This instrument will likely be usurped, but I'm still thinking through ways to approach this.
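The BPM-locked grid itself is simple enough to sketch (Python; 4/4 assumed, and the function name is mine, not the project's):

```python
def division_times(bpm, division, bars=1, beats_per_bar=4):
    """Event times (in seconds) for every 1/division of each bar,
    locked to a global BPM."""
    bar_length = beats_per_bar * 60.0 / bpm
    step = bar_length / division
    return [bar * bar_length + i * step
            for bar in range(bars) for i in range(division)]
```

So at 120 BPM a bar lasts two seconds, and a division of 4 gives you hits every half second; a Repeater would fire a random SoundEvent at each of those grid points.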
AND THEN THE SYSTEMS
The idea is that a 'Wreckage System' will be a collection of some number of instruments like the ones described above. It customises those instruments, then sets them all running. For example the first test system 'Detext' has a looper which loops a field recording of rain with a low-pass filter on it, a scatterer that plays the occasional reverb-soaked guitar strum, a second scatterer that plays some odd clanky noise from time to time, and finally a piano that plays some procedurally-generated melodies every now and then.
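Glueing it together, a system is really just a named bundle of configured instruments that get set running as one. A sketch of that idea (Python, with stub instruments; the real Detext configuration obviously lives in ChucK):

```python
class WreckageSystem:
    """A named collection of configured instruments, started together.
    Anything with a start() method counts as an instrument here."""
    def __init__(self, name, instruments):
        self.name = name
        self.instruments = instruments

    def start(self):
        return [inst.start() for inst in self.instruments]

class StubInstrument:
    """Stand-in for a Looper / Scatterer / Piano etc."""
    def __init__(self, label):
        self.label = label

    def start(self):
        return f"{self.label} running"

detext = WreckageSystem("Detext", [
    StubInstrument("rain looper"),
    StubInstrument("guitar scatterer"),
    StubInstrument("clank scatterer"),
    StubInstrument("piano"),
])
started = detext.start()
```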
WHAT NEXT
I've smoothed over some details there, but that's the general gist so far. I'm still figuring out the best way to manage audio routing and gain-staging (mixing) so lots of things can be stacked on top of each other and still sound nice.
Next on the list of classes to design and then work out how to make:
- Pattern - some kind of class with many, many child classes, which will essentially create various different rhythms that instruments can draw upon.
- Sample - this will be distinct from SoundEvent, with the idea being that a sample can be more designed. Like, perhaps I want to make a really good kick drum from several layered SoundEvents - that would be a 'sample'.
- Kit - I guess then a kit would be a collection of samples that could be used in more than one place.
What else!? I dunno! Making it up as I go along!
Feel free to ask questions on Mastodon/ping me on the 65Discord.
Bye for now.