Wednesday, August 8, 2012

The World's 10 Highest Paid DJs

For any musician who's spent a lifetime practicing and learning a craft, this might not be something you'll want to read. Forbes recently compiled a list of the world's top 10 highest-earning DJs. While they still don't make as much as the current pop stars, some DJs sure are raking in a lot of cash, thanks to a global EDM industry that is now worth an estimated $4 billion.

Here's the list:

1. Tiesto - $22 million
2. Skrillex - $15 million
3. Swedish House Mafia - $14 million
4. David Guetta - $13.5 million
5. Steve Aoki - $12 million
6. Deadmau5 - $11.5 million
7. DJ Pauly D - $11 million
8. Kaskade - $10 million
9. Afrojack - $9 million
10. Avicii - $7 million

Keep in mind that their income is from touring (Tiesto makes about $250k a night), endorsements, and merch.

Introduction to Digital Audio

Digital audio, at its most fundamental level, is a mathematical representation of a continuous sound.

The digital world can get complicated very quickly, so it’s no surprise that a great deal of confusion exists.

The point of this article is to clarify how digital audio works without delving fully into the mathematics, but without skirting any information.

The key to understanding digital audio is to remember that what’s in the computer isn’t sound – it’s math.

What Is Sound?

Sound is the vibration of molecules. Mathematically, sound can accurately be described as a “wave” – meaning it has a peak part (a pushing stage) and a trough part (a pulling stage).

If you have ever seen a graph of a sound wave, it's always represented as a curve of some sort above a 0 axis, followed by a curve below the 0 axis.

What this means is that sound is “periodic.” All sound waves have at least one push and one pull – a positive curve and negative curve. That’s called a cycle. So – fundamental concept – all sound waves contain at least one cycle.

The next important idea is that any periodic function can be mathematically represented by a series of sine waves. In other words, the most complicated sound is really just a large mesh of sinusoidal sound (or pure tones). A voice may be constantly changing in volume and pitch, but at any given moment the sound you are hearing is a part of some collection of pure sine tones.

Lastly – and this part has been debated to a certain extent – people do not hear pitches higher than 22 kHz. So, any tones above 22 kHz are not necessary to record.

So, our main ideas so far are:

—Sound waves are periodic and can therefore be described as a bunch of sine waves,
—Any waves over 22 kHz are not necessary because we can’t hear them.

How To Get From Analog To Digital

Let’s say I’m talking into a microphone. The microphone turns my acoustic voice into a continuous electric current. That electric current travels down a wire into some kind of amplifier then keeps going until it hits an analog to digital converter.

Remember that computers don’t store sound, they store math, so we need something that can turn our analog signal into a series of 1s and 0s. That’s what the converter does. Basically it’s taking very fast snapshots, called samples, and giving each sample a value of amplitude.

This gives us two basic values to plot our points – one is time, and the other is amplitude.
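Here's a small sketch of those time/amplitude pairs in Python (the tone frequency and duration are just illustrative values, not anything from a real converter):

```python
import numpy as np

SAMPLE_RATE = 44_100   # snapshots per second (the common CD rate)
FREQ = 440.0           # an A4 tone, used here just as an example signal

# 10 ms of audio: each sample is a (time, amplitude) pair -
# the converter's "snapshots" of the continuous wave.
num_samples = int(0.01 * SAMPLE_RATE)          # 441 samples in 10 ms
times = np.arange(num_samples) / SAMPLE_RATE   # the time axis
amplitudes = np.sin(2 * np.pi * FREQ * times)  # the amplitude axis

print(num_samples, "samples in 10 ms")
```

Each entry in `amplitudes` is one of those rounded-off amplitude values the rest of this article talks about.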

Resolution & Bit Depth

Nothing is continuous inside the digital world – everything is assigned specific mathematical values.
In an analog signal, a sound wave will reach its peak amplitude – and every level of sound from 0 dB to that peak will exist along the way.

In a digital signal, only a designated number of amplitude points exist.

Think of an analog signal as someone going up an escalator – touching all points along the way, while digital is like going up a ladder – you are either on one rung or the next.

Dynamic range versus bit depth (resolution).
Let’s say you have a rung at 50, and a rung at 51. Your analog signal might have a value of 50.46 – but it has to be on one rung or the other – so it gets rounded off to rung 50. That means the actual shape of the sound is getting distorted. Since the analog signal is continuous, that means this is constantly happening during the conversion process. It’s called quantization error, and it sounds like weird noise.

But, let’s add more rungs to the ladder. Let’s say you have a rung at 50, one at 50.2, one at 50.4, one at 50.6, and so on. Your signal coming in at 50.46 is now going to get rounded off to 50.4. This is a notable improvement. It doesn’t get rid of the quantization error, but it reduces it’s impact.
Increasing the bit-depth is essentially like increasing the number of rungs on the ladder. By reducing the quantization error, you push your noise floor down.
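You can sketch that ladder in a few lines of Python: quantize a signal to a given bit depth and watch the worst-case rounding error shrink as the bits go up (the `quantize` function is illustrative, not any real converter API):

```python
import numpy as np

def quantize(signal, bits):
    """Round each sample to the nearest of the 2**bits evenly spaced
    'rungs' available at this bit depth (signal assumed in [-1, 1])."""
    step = 1.0 / 2 ** (bits - 1)          # distance between rungs
    return np.round(signal / step) * step

t = np.linspace(0.0, 1.0, 1000, endpoint=False)
x = 0.8 * np.sin(2 * np.pi * 5 * t)       # a clean analog-style curve

for bits in (8, 16, 24):
    worst_error = np.max(np.abs(x - quantize(x, bits)))
    print(f"{bits:2d}-bit worst-case quantization error: {worst_error:.2e}")
```

Each added bit doubles the number of rungs, which cuts the worst-case rounding error in half.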

Who cares? Well, in modern music we use a LOT of compression. It's not uncommon to peak limit a sound, compress it, sometimes add a third hit of compression, and then compress and limit the master buss before the final print.

Remember that one of the major artifacts of compression is bringing the noise floor up! Suddenly, the very quiet quantization error noise is a bit more audible. This becomes particularly noticeable in the quietest sections of a recording (i.e., fades, reverb tails, and pianissimo playing).

Recording at a higher bit depth lets you hit your converter with headroom to spare while still staying well above the noise floor – no compression required.
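As a rough rule of thumb, each bit of depth buys you about 6 dB of dynamic range in an ideal converter (real converters fall a little short of these figures):

```python
def dynamic_range_db(bits):
    """Approximate dynamic range of an ideal converter: ~6.02 dB per bit."""
    return 6.02 * bits

print(f"16-bit: ~{dynamic_range_db(16):.0f} dB")
print(f"24-bit: ~{dynamic_range_db(24):.0f} dB")
```

That extra ~48 dB at 24-bit is exactly the headroom cushion described above.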

Sample Rate

Sampling rate is probably the area of greatest confusion in digital recording. The sample rate is how fast the computer is taking those "snapshots" of sound.

Most people feel that if you take faster snapshots (actually, they’re more like pulses than snapshots, but whatever), you will be capturing an image of the sound that is closer to “continuous.” And therefore more analog. And therefore more better. But this is in fact incorrect.

Remember, the digital world is capturing math, not sound. This gets a little tricky, but bear with me.
Sound is fundamentally a bunch of sine waves. All you need is at least three point values to determine a sine wave function that crosses all three. Two will still leave some ambiguity – but three – there’s only one curve that will work. As long as your sample rate is catching points fast enough you will grab enough data to recreate the sine waves during playback.

In other words, the sample rate has to be more than twice as fast as the speed of the sine wave in order to catch it. If we don’t hear more than 22 kHz, or sine waves that cycle 22,000 times a second, we only need to capture snapshots more than 44,000 times a second.

Hence the common sample rate: 44.1 kHz.

But wait, you say! What if the function between those three points is not a sine wave? What if the function is some crazy-looking shape, and it just so happens that your A/D only caught three points that made it look like a sine wave?

Well, remember that if it is some crazy function, it's really just a further combination of sine waves. If those sine waves are within the audible realm, they will be caught, because the samples are being grabbed fast enough. If they are too fast for our sample rate, it's OK, because we can't hear them.
Remember, it's not sound, it's math. Once the data is in, the computer will recreate a smooth continuous curve for playback, not a really fast series of samples. It doesn't matter if you have three points or 300 along the sine curve – it'll still come out sounding exactly the same.

So what’s up with 88.2, 96, and 192 samples/second rates?

Well, first, it’s still somewhat shaky ground as to whether or not we truly don’t perceive sound waves that are over 22 kHz.

Secondly, our A/D uses a band-limiting filter at the edge of 1/2 our sampling rate. At 44.1 kHz, the A/D cuts off frequencies higher than 22 kHz. If not handled properly, this can cause a distortion called "aliasing" that affects lower frequencies.

In addition, certain software plug-ins, particularly equalizers, suffer from inter-modular phase distortion (yikes) in the upper frequencies. The reason is that phase distortion is a natural side effect of equalization – it occurs at the edges of the affected bands. If you are band-limited to 22 kHz and do a high-end boost, the high-end brickwall stops at 22 kHz.

Instead of the phase distortion occurring gradually over the sloping edge of your band, it occurs all at once in the same place. This is a subject for another article, but ultimately this leaves a more audible “cheapening” of the sound.

Theoretically, a 16-bit recording at 44.1 kHz will have the same fidelity as a 24-bit recording at 192 kHz. But in practice, you will get clearer fades, clearer reverb tails, smoother high end, and less aliasing working at higher bit depths and sample rates.

The whole digital thing can be very complicated – and in fact this is only touching the surface. Hopefully this article helped to clarify things. Now go cut some records!

Monday, May 28, 2012

Need to be invented

I got to thinking about the pro audio products I'd like to see invented after reading a similar story on home theater audio. When you think about it, we've all gotten pretty comfortable with technology that no one could ever consider as cutting edge. Even though core recording products exist in the following areas, there's plenty of room for growth. Let's take a look at a pie-in-the-sky wish list:

1. A new speaker technology. We've been listening to recorded and reinforced sound with the same technology for about 100 years now. Sure, the loudspeaker has improved and evolved, but it's still the weakest link in the audio chain. What we need is a new loudspeaker technology that improves the listening experience and takes sonic realism to the next level.

2. A new microphone technology. Something is seriously wrong when the best and most cherished microphones that we use today were made 50 years ago. Just like loudspeakers, the technology has improved and evolved over the years, but it's basically the same in that it's still based around moving a diaphragm or ribbon through a magnetic field or changing the electrical charge between two plates (that's a condenser mic, if you didn't know). There has to be a new technology that takes a giant leap to getting us closer to realism than what we have now.

3. Get rid of the wires. Studios have been pretty successful at reducing the amount of wiring in the last 10 years or so, but there's still too much. We need to eliminate them completely. Think how much different your studio would be with wireless speakers, microphones, connections to outboard gear, etc. Much of this is possible today, but the real trick is to make the signal transmission totally lossless with zero interference.

4. The ultimate work surface. Here's the problem: engineers love to work with faders and knobs, but faders and knobs take up space, which changes the room acoustics, and they're expensive to implement. When the faders and knobs are reduced to banks of 8, switching between all the banks needed during a large mix gets confusing. What we need is a work surface that takes this hybrid to the next level, giving the engineer enough faders and knobs to do the job while making it totally easy to see the banks underneath or above. I realize that the bank concept has been implemented on digital consoles for years, but there's no way to actually view what those other banks are unless you call one up. There has to be a better way.

5. The ultimate audio file format. I've done experiments recording the same instrument at 48k, 96k, and 192k, and I can tell you unequivocally that the 192kHz recording won hands down. It wasn't even close. Consider this – the ultimate in digital is analog! In other words, the higher the sample rate, the closer to analog it sounds. We need a universal audio format with a super high sample rate that can easily scale to a lower rate as needed. Yes, I realize it's a function of the hardware, but let's plan for the future, people.

6. The ultimate storage device. Speaking of the future, there are a lot of behind-the-scenes audio people who are quietly scared to death that the hard drives and SSDs of today won't be playable tomorrow. Just as Zip and Jaz drives had their brief day in the sun, how would you like to have your hit album backed up onto a drive that nobody can read? That's a more realistic possibility than you might think. We need a storage format that is not only robust and protected, but has a lifespan akin to analog tape (tapes from 60 years ago still play today; some sound as good as the day they were recorded). We just can't guarantee the same with the storage devices we use today.

The Quietest Room In The World

If you've never been in an anechoic chamber, it's literally an unreal experience. Things are quiet; too quiet. So quiet that it's disconcerting, since even in the quietest place you can think of, you can still at least hear reflections from your own movement.

I've always assumed that the quietest anechoic room belonged to either JBL (I was told that they have 3 of them) or the Institute for Research and Coordination in Acoustics/Music (IRCAM) in France, but according to Guinness World Records, it's actually at Orfield Laboratories in South Minneapolis. Supposedly the Orfield chamber absorbs 99.9% of all sound generated within, which results in a measurement of -9dB SPL. As a comparison, a typical quiet room at night where most people sleep is at 30dB SPL, while a typical conversation is at about 60dB SPL.

The Orfield chamber is so quiet that no one has been able to stay inside for more than 45 minutes, because you begin to hear your heart beating, your lungs working, and even the blood coursing through your veins. Some people even begin to hallucinate during the experience. In fact, you can't even stand after a half-hour, since you no longer get the audio cues you're used to hearing as reflections bounce off the floor, ceiling, and walls of the environment.

While it's easy to figure out what JBL does with their anechoic chamber, what goes on in an independent one like at Orfield? It seems that the chamber is used by companies like Harley Davidson and Maytag to test how loud their products are. NASA also uses it for astronaut training.

Here's a short video that describes the Orfield anechoic chamber.

Music is Life

I've always felt that being a musician was a profession of a higher calling than most others. When you're doing it well, especially with others, there's a metaphysical and spiritual lift that other professions, noble though they be, just can't compete with.

Now comes research showing that music, as we have suspected all along, has numerous rewards, from improving performance in school to dealing with emotional traumas to helping ward off aging. These come as a result of the brain biologically and neurologically enhancing its performance and protecting itself from some of the ravages of time, thanks to the active participation of the player in the act of producing music.

Nina Kraus's research at the Auditory Neuroscience Laboratory at Northwestern University in Evanston, Il. has already shown that musicians suffer less from aging-related memory and hearing losses than non-musicians. They also found that playing an instrument is crucial to retaining both your memory and hearing as you age, and how well you process all sorts of daily information as you grow older.

It turns out that just listening to music isn't enough though. You actively have to participate as a player in order to receive any of the benefits.

That's as good a reason as I can think of to learn how to play an instrument and keep on playing it for life. It's not only good for your spiritual health, but for your physical side as well.

We love vinyl

Why do we enjoy listening to vinyl so much more than a CD?

Here are a number of points to consider.

1) It's an analog format. Because vinyl is an analog medium, a record has a theoretical frequency response that goes to the moon. Seriously though, it goes way beyond what we consider our "textbook" hearing limits. It's easy to get into a debate as to whether that really matters or not. Some audio scientists will tell you that we can feel the harmonic detail beyond 20kHz and that it adds to the realism, while others will point to a mountain of data that shows that theory is rubbish.

That said, it sure does sound better than a CD, doesn't it? A CD's upper frequency response is theoretically limited to around 22kHz, thanks to the 44.1kHz CD sample rate and something called the Nyquist frequency – you can't capture frequencies beyond 1/2 of the sample rate (otherwise, you get some nasty digital artifacts). What actually happens in real life is that a filter is used to keep the frequency response below the Nyquist frequency, and that filter introduces its own set of artifacts. That's one of the reasons why some CD/DVD players are so expensive; they've got better filters.

What this all adds up to is there's something going on in the upper frequencies on vinyl that our ears seem to like. What that is can be debatable, but we do like it.

2) Was the master analog? Vinyl really helps the sound and feel of a digital master, especially one made at a higher sampling rate like 96 or 192kHz, but it sounds best of all if the source was originally an analog magnetic tape master. It still sounds pretty good if the source is a 44.1kHz CD master – yes, better than the CD itself – but it doesn't have nearly the depth and "air" that a hi-res digital or analog master has. This is why we tend to like the vinyl reissues of classic albums so much.

3) Vinyl is subject to sonic degradation. The big downside to vinyl is that from the first play onward, a vinyl record sonically degrades. Think about it. You have this diamond stylus (you know, the hardest natural substance known to man) constantly grinding against the soft plastic grooves and wearing them down. After the first 10 plays or so, you're never going to hear it that good again. After about 20, you'll be hearing a lot more of the noise floor, clicks, and pops, although it will happen so gradually that you'll have gotten used to them by then. Still, like magnetic tape losing oxide from the friction across the tape head, your first pass is always the best.

Those are just a few things to think about when it comes to vinyl. Now get out to your record store and buy some!


Zynaptiq UNVEIL
One of the dreams of post engineers has been to find a device or plugin that can strip the reverberation from a sound. Dialog or effects recordings with unwanted ambience eventually have to be replaced, costing time and money, and a music track swimming in too much reverb can turn into a muddy mess. Now comes an announcement from a company called Zynaptiq regarding a Mac plugin called UNVEIL, which they say accomplishes real-time de-reverberation and "signal focusing."

By using artificial intelligence, UNVEIL claims to be able to not only strip the ambience from a sound, but also to add more of the natural ambience back in, as well as attenuate some of the components that cause a sound to be "muddy" or masked.

If it works as claimed, it could immediately find a place in the plugin lists of DAWs everywhere. Not only would it be invaluable for post, but during a mix as well. Instead of adding artificial reverb to a mix element, it would be great to just be able to adjust the natural ambience of the element itself. That means, of course, that you have to record the sound well, complete with some natural ambience, in the first place – but that's a topic for a different discussion.
Check out UNVEIL at the Zynaptiq site. A free trial is available.

Is your Mix finished?

One of the tougher things to decide when you're doing a project is when the mix is finished. If you have a deadline, the decision is quickly made for you, but if you have a deep-pocket budget or unlimited time, a mix can drag on forever.

So when is a mix considered finished? Here are some guidelines, courtesy of The Mixing Engineer's Handbook:

1) The groove of the song is solid. The groove usually comes from the rhythm section, but it might come from an element like a rhythm guitar (as on the Police's Every Breath You Take) or just the bass by itself, like anything from Detroit Motown that James Jamerson played on (Marvin Gaye's What's Goin' On or The Four Tops' Reach Out, I'll Be There and Bernadette, for instance). Whatever element supplies the groove, it has to be emphasized so that the listener can feel it.

2) You can distinctly hear every instrument. Every instrument must have its own frequency range to be heard. Depending upon the arrangement, this is what usually takes the most time during mixing.

3) Every lyric, and every note of every line or solo can be heard. You don’t want a single note buried. It all has to be crystal clear. Use your automation. That’s what it was made for.

4) The mix has punch. The bass and drums are in the right proportion and work together to give the song a solid foundation.

5) The mix has a focal point. What’s the most important element of the song? Make sure it’s obvious to the listener.

6) The mix has contrast. If you have the same amount of the same effect on everything (a trait I hear from so many neophyte mixers), the mix will sound washed out. You have to have contrast between different elements, from dry to wet, to give the mix depth.

7) All noises and glitches are eliminated. This means any count-offs, singer’s breaths that seem out of place or predominate because of vocal compression, amp noise on guitar tracks before and after the guitar is playing, bad sounding edits, and anything else that might take the listener’s attention away from the track.

8) You can play your mix against songs that you love, and it holds up. Perhaps the ultimate test. If you can get your mix in the same ball park as many of your favorites (either things you’ve mixed or from other artists) after you’ve passed the previous seven items, then you’re probably home free.

In the end, it's best to figure at least a full day per song if you're mixing in the box, and a day and a half per mix if you're mixing in a studio with an analog-style console. Of course, if you're mixing every session as you go along recording, then you might be finished before you know it, with just a little tweaking left to do.

Thursday, May 24, 2012

Mastered for iTunes


"Mastered for iTunes" at is most basic is iTunes finally opening up to hi-res masters. This means a number of things:

1) iTunes now prefers that you supply the master audio files at 96kHz/24 bit, but a 24-bit file at any sample rate will still be considered "Mastered for iTunes." Music files supplied this way will have a "Mastered for iTunes" icon placed beside them to identify them as such.

The reason they're asking for 96/24 is both to start with the highest-resolution source material for a better encode, and for a bit of future-proofing in the event that iTunes later converts to a better format or a higher encode resolution (it's now 256kbps, but more on this in a second).

2) "Mastered for iTunes" doesn't mean that the mastering facility does anything special to the master except to check what it will sound like before they (or the record label) submit it to iTunes, and then check it later once again. All encoding for iTunes is still done by Apple, not by the mastering houses, record labels, or artists.

The reason for this is to keep the encodes consistent and to prevent anyone from gaming the system by hacking the encoder, but also to avoid any potential legal problems that might occur when a mastering house sends the files directly to iTunes instead of the label without their permission, or uses different specs, etc.

3) As stated above, the mastering house doesn't do any encoding directly, but Apple has provided a number of tools that they can use to hear what the final product will sound like when it's encoded. That way they can make any adjustments to the master to ensure a good encode.

One unique aspect of "Mastered for iTunes" is something that's not been publicized called a "test pressing." When Apple finally encodes the file, they'll send a copy back to the label/engineer/artist to check. If they sign off on it, the song then goes on sale in the iTunes store.

Among the few mastering houses currently participating in the program (all of the major ones), it was surprising that most of the time a test pressing was rejected not because of the audio quality, but because it was the wrong master. Yes, as record companies seem to do, someone would actually send the un-mastered file or a completely different song or version. Luckily, the problem can now be caught at the test pressing stage.

4) Speaking of sound quality, iTunes is now using a completely new AAC encoder with a brand new algorithm, and the sound quality it produces is stunning. It provides an excellent encode if you use a few common-sense guidelines (more on this in a bit), and if you do, the difference is almost impossible to hear (at least on the music we listened to). It certainly didn't sound anywhere near as bad as the typical MP3.

So what are the tricks to get the best sound quality from an iTunes encode? It turns out that the considerations are about the same as with MP3 encoding:
a) Turn it down a bit. A song that's flat-lined at -0.1dBFS isn't going to encode as well as something with some headroom. This is because the iTunes AAC encoder outputs a tad hotter than the source, there are intersample overs at that level that aren't detected on a typical peak meter, and all DACs respond differently. Something that isn't an over on your DAC may be an over on another playback unit.
If you back it down to -0.5 or even -1dB, the encode will sound a lot better, and your listener probably won't be able to tell much of a difference anyway.
b) Don't squash the master too hard. Masters with some dynamic range encode better. Masters that are squeezed to within an inch of their life don't. Simple as that. Listeners like it better too. 
c) Although the new encoder has a fantastic frequency response, sometimes rolling off a little of the extreme top end (16k and above) can help the encode as well.
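To put a number on guideline (a): dBFS is a logarithmic scale, so backing the peak down costs surprisingly little level. A small sketch (plain math; the function name is just illustrative):

```python
def dbfs_to_linear(dbfs):
    """Convert a dBFS peak value to linear amplitude (1.0 = digital full scale)."""
    return 10 ** (dbfs / 20)

hot = dbfs_to_linear(-0.1)    # a flat-lined master
safer = dbfs_to_linear(-1.0)  # with a little headroom for the encoder

drop_percent = (1 - safer / hot) * 100
print(f"backing off to -1 dBFS is only about a {drop_percent:.0f}% amplitude drop")
```

In other words, the headroom is nearly free as far as perceived loudness goes.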
5) "Mastered for iTunes" is only an indication that a hi-res master was supplied; it's not a separate product. There will always be only one version of the song on iTunes at the same price as before. "Mastered for iTunes" doesn't mean you get to charge more, or that iTunes charges you more. Everything is like it was before, you just supply a hi-res master so it sounds better.

6) So how do you supply that hi-res master? This is where it gets a bit tricky. If you're signed to a major label, they've been contacted by their Apple reps and everything is in place, so no problem there. If you're with an indie label, insist that they contact their Apple rep for instructions.

If you use CD Baby or Tunecore, at the moment they'll tell you they don't take 24 bit or high sample rate masters. Insist that they contact their Apple rep and don't take no for an answer (this is what the Apple iTunes guy told us). Apple is greatly encouraging everyone to get with the program, so the more pressure you put on them, the quicker it will become a standard. Of course, if you can find out who your local Apple rep is (ask the local label), that could expedite things too.

The bottom line is that "Mastered for iTunes" is a great thing for digital music. As far as I can see, there's no downside to it (except maybe for the initial hassle you may go through as an indie), and you'll be giving your fans a much better sounding product as a result.

Tuesday, January 31, 2012

Free Sound Design Tool Now Available

Music software developer Oli Larkin has released pMix (App Store), a free sound design tool for Mac OS X that allows you to morph between VST plugin presets using an intuitive graphical interface.

Presets are represented by coloured balls that are positioned on a 2D plane. The size of each ball and its proximity to the cursor affects the weight of the associated preset in the interpolation.

Morphing between presets often results in the discovery of interesting hybrid sounds. By constraining sound manipulations within a predesigned "interpolation space," complex transitions can be achieved that would otherwise be hard to manage.
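pMix's exact interpolation formula isn't published here, but the idea can be sketched with a common scheme – inverse-distance weighting – where each preset "ball" pulls on the cursor in proportion to its size and nearness (all names below are illustrative, not pMix's actual code):

```python
import math

def preset_weights(cursor, presets):
    """Blend weights for presets laid out on a 2D plane.
    `presets` is a list of (x, y, size) tuples; weights sum to 1."""
    raw = []
    for x, y, size in presets:
        dist = math.hypot(cursor[0] - x, cursor[1] - y)
        raw.append(size / (dist + 1e-9))   # nearer and bigger -> heavier weight
    total = sum(raw)
    return [w / total for w in raw]

# Cursor sits close to the first preset, so that preset dominates the blend.
weights = preset_weights((0.5, 0.5), [(0.4, 0.5, 1.0), (0.9, 0.9, 1.0)])
print([round(w, 2) for w in weights])
```

Each plugin parameter would then be set to the weighted average of its values across the presets.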

pMix can load four VST2 audio plugins. It comes with a suite of specially designed plugins which cover a range of experimental DSP techniques (noise generators, FM synthesis, formant filtering, frequency shifting etc). These plugins can also be used in other VST host applications.


Realtime VST2 plug-in chainer.
Live processing of audio input, file playback or instrument plug-ins.
Unique multi-layered interpolation approach.
Rich visual control interface with real-time feedback.
Includes a suite of 9 high-quality audio FX and generators.
Many options for randomisation.
Break-point function and “freehand” automation modes.
Audio file player & recorder built in.
Controllable via OSC and MIDI.
Can be used with other applications via Rewire or Soundflower.
Changes from v0.7 (released in 2008):
Redesigned interface.
Supports instrument plug-ins.
Supports tempo synchronisation.
Now includes a suite of specially designed plug-ins.