Comments
Using a delay with a short delay time (around 15-50 ms) and hard panning the original and delayed signals.
Using a stereo chorus
Using a stereo reverb
Using two mics on the cab at different distances and hard panning them.
Copying the track (this would have been to tape; now you can just copy the track in a DAW) and using different EQ settings on each track. This can make things sound a bit weird and phasey when taken too far, but it can work.
Now you can just use a stereo widening plugin- Waves do a good one called S1.
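The first trick in the list (short delay, hard pan) is easy to sketch outside a DAW. Here's a minimal numpy sketch; `haas_widen` is a made-up name, and the 20 ms delay and 44.1 kHz sample rate are just illustrative values in that 15-50 ms window:

```python
import numpy as np

def haas_widen(mono, sr=44100, delay_ms=20.0):
    """Hard-pan a mono signal left and a short-delayed copy right (Haas effect)."""
    delay_samples = int(sr * delay_ms / 1000.0)
    delayed = np.concatenate([np.zeros(delay_samples), mono])  # right: delayed copy
    padded = np.concatenate([mono, np.zeros(delay_samples)])   # left: dry, zero-padded to match
    return np.stack([padded, delayed], axis=1)                 # shape (n, 2)

# Example: widen one second of a 440 Hz tone
t = np.arange(44100) / 44100.0
stereo = haas_widen(np.sin(2 * np.pi * 440 * t))
```

Below roughly 30-40 ms the ear fuses the two copies into one wide source rather than hearing a distinct echo, which is why the short values work.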
What makes something wide and spatial is the contrast of frequencies within the stereo field. That's why double-tracking the same part, even with the same sound, sounds bigger: you can never play the part exactly the same twice, and it's those timing differences that draw your ears to the panned extremities of your guitars. A note arriving slightly ahead in the left speaker compared to the guitar in the right makes your ears hear them as different sounds and tells your brain where each one sits.
Now take that further and alter the actual frequencies not just their time alignment. Change the guitar sound in the left speaker, make it bassier or brighter than the one in the right speaker. You now have a bigger, fully spread stereo guitar sound.
In Zep's case, they probably miked two cabs (even three) and panned them. Each speaker in a cab sounds different, and a different mic on each would capture its own sound based on the mic's frequency response.
If I had to guess, it's multiple amps/cabs and lots of different mics open on the stage and in the room. There's a lot to the science of it (google phase coherency if you're interested), but basically if you have one mic close on the cab and another mic far enough back, you can get a signal that's different enough between the two that when you pan them apart you get two distinct, usable sounds left and right rather than a weird phasey wash.
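The close/far mic idea comes down to arrival-time differences. A quick back-of-the-envelope sketch (speed of sound taken as ~343 m/s, sample rate assumed 44.1 kHz; both function names are made up) shows the delay a distant mic picks up, and where the first comb-filter notch would land if you summed the two mics to mono instead of panning them apart:

```python
SPEED_OF_SOUND = 343.0  # m/s at roughly room temperature

def mic_delay_samples(extra_distance_m, sr=44100):
    """Samples of delay a mic picks up for being extra_distance_m further from the cab."""
    return int(round(extra_distance_m / SPEED_OF_SOUND * sr))

def first_notch_hz(extra_distance_m):
    """First comb-filter notch frequency if close and far mics are summed to mono."""
    return SPEED_OF_SOUND / (2.0 * extra_distance_m)

# A mic 1 m further back arrives ~129 samples (~2.9 ms) late, and summing
# the two mics to mono would notch out content starting around 171 Hz --
# the "phasey wash". Panning them apart sidesteps that cancellation.
```

That's why engineers either pan the pair hard or nudge the close mic's track by the delay before summing.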
Beyond that, such an effect could be replicated artificially in the mix using short delays, reverb, ADT, etc.
1. That awful thing commercial radio stations do with jingles that kind of makes your ears jangle and sounds super wide: I think they split the waveform into slices and pan them alternately left and right - you hear the full waveform, but it's artificially throwing different bands of it left and right. It kind of fucks with your head but as a result really jumps out. I suppose you could do a subtler version of that.
2. Mid-side processing. Two ways to do this - at capture time, or later by forcing a mono mix of the low frequencies:
- capturing - two mics - one aimed down the middle goes to a mono (mid) track, and a figure-of-8 mic facing sideways captures the side signal. Then you can EQ the sides and the mid separately, which increases the apparent width of the sound.
- force mono - you collapse anything below a certain frequency in a stereo recording into its own mono track, and the rest of the recording becomes the stereo (sides) track - again, process each differently and that material stands out more in the stereo image
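The mid-side idea above can be sketched in a few lines. This is the generic textbook encode/decode (not any particular plugin's algorithm); boosting the side signal before decoding is the classic width trick, and a `side_gain` of 1.0 is a perfect round trip:

```python
import numpy as np

def ms_encode(left, right):
    """Split L/R into mid (what's common) and side (what differs)."""
    mid = 0.5 * (left + right)
    side = 0.5 * (left - right)
    return mid, side

def ms_decode(mid, side):
    """Recombine mid/side back into left/right."""
    return mid + side, mid - side

def widen(left, right, side_gain=1.5):
    """Boost the side signal to increase apparent width, then decode back."""
    mid, side = ms_encode(left, right)
    return ms_decode(mid, side_gain * side)
```

Note the limit case: a dead-mono source has zero side signal, so no amount of side gain widens it - which is why the thread's other tricks (delays, dual EQ, two mics) exist to create left/right differences in the first place.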
no idea what technique was used on these recordings but there are a number of ways to increase apparent width. I think I remember reading that the more you process stereo width into something (that wasn't there initially), the more phase problems you get and it can start to sound 'off'
you would not believe how much effect a multi band compressor has
I used to use TC Electronic's Master X3, but I am moving over to the Waves C4
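Neither plugin's algorithm is public, but the core idea of a multiband compressor - split the signal into bands, compress each independently, sum them back - can be sketched crudely in numpy. Both functions are illustrative; the one-pole band split and static (no attack/release) gain curve are deliberately simplistic:

```python
import numpy as np

def compress_band(x, threshold=0.5, ratio=4.0):
    """Static compression: samples over the threshold are reduced by the ratio."""
    mag = np.abs(x)
    over = mag > threshold
    gain = np.ones_like(x)
    gain[over] = (threshold + (mag[over] - threshold) / ratio) / mag[over]
    return x * gain

def two_band_compress(x, alpha=0.1, **kwargs):
    """Crude two-band compressor: one-pole low band + complementary high band."""
    low = np.zeros_like(x)
    acc = 0.0
    for n in range(len(x)):
        acc += alpha * (x[n] - acc)
        low[n] = acc
    high = x - low
    # Compress each band on its own, then sum back to full range
    return compress_band(low, **kwargs) + compress_band(high, **kwargs)
```

The reason this matters for width: because each band's gain rides independently, loud low end no longer ducks the highs (and vice versa), so the sides of the mix stay energetic and present instead of pumping with the kick.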
"According to Shirley, there were small but significant differences between the stereo mixes for the How The West Was Won, recorded in two shows in Los Angeles, and the stereo for the DVD. “For the Los Angeles and Long Beach shows I tried to enhance the stereo imagery for the guitar,” explains ‘Caveman’, “so that it would sound fuller. For instance, I programmed panning in Pro Tools on the electric guitar in ‘Dazed And Confused’, to make it more interesting. On the CD, I didn’t want the risk of it sounding like early Van Halen, with the guitar on one side and you lose it on the other side. I wanted it to sound a bit more cohesive. Because the music for the DVD is sync’ed to images, I wanted the position of the guitar to be very clear. Jimmy was always on stage left, for the audience on the right, so his guitar is predominantly in that location"
"On Jimmy Page’s guitar Shirley used a Peavey Kosmos Pro, a rackmounted processor that adds definition and energy to low frequencies. “Sometimes the recording of the guitar had a kind of nasal twang, and the Peavey helped, giving the guitar a real thump on the bottom. It gave the feeling of being much closer to the amp than just from that nasal effect. When you stand in front of a guitar cabinet you get this thump in the bottom end that you can feel in your belly. The Peavey helped me recreate that. I had just a little Aural Exciter on the acoustic guitar to bring out the sparkle, and a little valve compression.”
I can't find the interview I was thinking of though. I may be confusing Shirley's involvement with the TSRTS mix. I remember them directly referring to the vocal fixes on "What Is And What Should Never Be" and the guitars in general. There's a killer pro-recorded multitrack show from Southampton University 1973 which would respond to this treatment, but by then Robert's voice had gone - the reason Jimmy wanted to get a show from 1972 out despite the need to fix some instrumental issues. I loves me some live Zep.