How to add MIDI drums to a Cubase track

A quick guide to adding MIDI drums to a track in Cubase. Works with or without an external MIDI controller.

Add an Instrument track, and select HALion Sonic SE.

Choose a drum kit using the dropdown in the first channel.

Close this window.

If you’re using a MIDI controller (an AKAI LPD8 in my case), make sure that it’s plugged in and that the track’s input is set to your MIDI input.

(If you’re not using a MIDI controller you can still record drums, either with the on-screen keyboard or by drawing the notes in: create a blank part using the pencil tool, and then click on it to open up the note editor. But do that after you’ve set up a drum map, to make your life easier.)

Now set a drum map on the left-hand panel – GM Map.

This replaces the generic MIDI piano roll with named parts of the drum kit – much more useful. It also shows which MIDI note they correspond to.
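For reference, the GM Map follows the General MIDI percussion standard, in which each drum sound sits on a fixed MIDI note. A few of the most common assignments, sketched as a Python dictionary (the note numbers come from the GM spec; the names may differ slightly from the labels Cubase shows):

```python
# A handful of General MIDI percussion assignments (channel 10).
# Note numbers are from the GM standard; names may differ slightly
# from the labels Cubase shows in its GM Map.
GM_DRUM_NOTES = {
    35: "Acoustic Bass Drum",
    36: "Bass Drum 1",
    38: "Acoustic Snare",
    40: "Electric Snare",
    42: "Closed Hi-Hat",
    46: "Open Hi-Hat",
    49: "Crash Cymbal 1",
    51: "Ride Cymbal 1",
}

for note, name in sorted(GM_DRUM_NOTES.items()):
    print(f"MIDI note {note}: {name}")
```

This is also why channel 10 matters: General MIDI reserves channel 10 for percussion.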

Before setting a drum map:


Open the HALion Sonic window again.

Add the same drum instrument to channel 10 of HALion Sonic. If you don’t do this you won’t get any sound after adding the drum map. (I have no idea why.)

Now you can either record live through your MIDI input, or program some percussion with your mouse.

Bonus – change which notes your external MIDI device sends to Cubase

The following instructions are for my AKAI LPD8 but are probably quite generic.

Open the LPD8 Editor (a separate programme).

Create a new preset with the notes you’re interested in from the drum map.

Then save it and press ‘Commit – Upload’.

It’ll now be on your LPD8, ready to use in Cubase.

How to record electric guitar using a Steinberg UR12 and Cubase AI

A quick guide to recording guitar through a Steinberg UR12 in the Cubase AI DAW. I produced this to help other people, and to remind myself in case I ever forget, because the Cubase software is not very intuitive.

I’ll assume that you’ve plugged in and installed the Steinberg, installed the Cubase AI software, and opened up Cubase AI.

Go to the ‘Devices’ menu dropdown at the top of the screen, then select ‘VST Connections’.

Go to the ‘Inputs’ tab and create a mono input using “Yamaha Steinberg USB ASIO” and “UR12 Input 2”.

Cubase AI VST Connections controls - Input tab

Keep output as default on “Yamaha Steinberg USB ASIO” – i.e. Stereo speakers and Device Ports “UR12 Output L” and “UR12 Output R”.

Steinberg UR-12 with guitar lead and headphones plugged in

Plug your lead into the “Hi Z” input on the UR12. (This is “Input 2”. The other one is for microphones.) Plug your headphones into the “Phones” socket on the right. (If you’re using 3.5mm headphones you’ll need a 3.5mm to 6.35mm adapter, as pictured above.)

Turn the Input 2 gain knob up to about 1/4 of the max.

Strum a bit, and you should see the Cubase monitor show the input levels increase as you strum. If it’s flat-lining like in the picture below, check that you’re plugged in and that you’ve set up the input correctly. Play some low, loud palm-muted chords, and turn up the Input 2 gain until a red square appears above the blue input monitor. Click on the red square to make it disappear, and turn down the Input 2 gain a little. Your goal is to have it as high as possible without clipping.

Cubase AI monitor showing no input
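That “as high as possible without clipping” goal amounts to maximising level while keeping some headroom below 0 dBFS (digital full scale). A rough sketch of the arithmetic – nothing to do with Cubase’s actual metering, just an illustration:

```python
import math

def peak_headroom_db(samples):
    """Headroom in dB below digital full scale (0 dBFS),
    for float samples in the range [-1.0, 1.0]."""
    peak = max(abs(s) for s in samples)
    return math.inf if peak == 0 else -20 * math.log10(peak)

# A signal peaking at a quarter of full scale has about 12 dB of headroom;
# one peaking at full scale has none, and is on the verge of clipping.
print(peak_headroom_db([0.25, -0.2, 0.1]))  # ≈ 12.04
print(peak_headroom_db([1.0, -0.7]))        # 0.0
```

Cubase’s red square is essentially this check: it lights up when the peak reaches full scale.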

Now left click in the audio panel, and select “Add audio track”.

menu for adding new tracks in Cubase AI

A popup with a load of options will appear. Ignore them and add the track.

Click on the speaker symbol next to this track. This will allow you to hear the input as you play. It’ll turn orange to show that the “Monitor” option is enabled.

Audio track with Monitor option selected

Make sure to turn the ‘Output’ knob on the UR12 up a bit so you can actually hear it. At this point you should be able to play something on your guitar and hear it through your headphones from the UR-12.

If you’d like to add some amplifiers or other effects, go to “Inserts” on the left-hand “Inspector” and choose one. There are loads of free VST amplifiers and effects available.

Now let’s record something. To record audio from a track (e.g. “Audio 01” in the above image) you need to make sure that the record icon for that track is red. Cubase treats each audio track as either record-enabled or not, and simply ignores any track that isn’t enabled when you record.

Before you start recording, turn off the orange “Monitor” option. It’s useful for experimenting with a tone, but adds latency when recording so can mess up your timing. Use the “Direct Monitor” button on the front of the UR-12 box to send a dry (non-processed) signal back to you as you play.

To start recording, click on the recording circle near the top of the screen.

Bonus – fixing audio distortion in Windows 10

There’s a bug with Windows 10 whereby you hear horribly distorted output.
The two most useful threads on the topic (first and second) suggest that it’s been an issue for a couple of years. The fix is downgrading a key driver from version 1.96 to 1.95. To do this, uninstall “Yamaha Steinberg USB Driver” via Add/Remove Programs (not via Device Manager – it won’t be fully removed if you go that route), then install version 1.95 of the driver.

Bonus – gain screen space by removing VST Instruments and Media Bay

I don’t know what these do, so I’ve hidden them. To do this, click on the button of three rectangles, just under the “File” menu, then uncheck “racks”.


Play with your music – Module 5: Air Traffic Remix

For this module in the Play With Your Music course, I’ve been remixing ‘Air traffic’ by Clara Berry and Wooldog. Here’s the original track:

And here’s my remix:

I started by listening through the original mix and picking out some samples I liked.
Because we have access to the master mix, it’s possible to isolate a single voice or instrument.
I’d be interested to learn techniques for doing this sort of sampling when you don’t have the luxury of access to the original mix – i.e. where you can’t just isolate the instrument you want.

Once I’d identified the samples I wanted to use, I cut them up in Soundation. I made sure that all of my samples were 4 bars long – this made it easier to work with them later.
If they’d been of different lengths, it would have been much harder to coordinate them and use them concurrently.

My next move was to stitch these into a single song. One way of doing this would have been to record the music “live”, and simply turn on and off the different channels over the course of the song.
I played around with doing this, but couldn’t find a way in Soundation to record “live”.

But doing things more laboriously meant that I had more control – I could take my time applying effects to the mix, in a way that I wouldn’t be able to do live.

So I copy-pasted the 4 bar samples I’d created, and stitched them together into a full song. There was lots of copy-pasting, and it was a bit fiddly to get things to line up exactly. I found that zooming in very closely, and clicking on exactly the start of the bar before pasting, helped.
Something I didn’t realise at first was that clicking and dragging a single clip from its top right hand corner loops it over and over. So I should have combined all my cut up 4 bar samples, looped them, and then cut down from there, rather than doing lots of cutting and pasting.

To build up the track, I started with the bass, then added in some piano and then drums.
I wanted the track to start with the rhythm section first, to depart from the original. The intro might be my favourite part of this remix.

Shortly after I’d started adding tracks, I began working on the effects and the positioning in the mix.
I wanted the bass to be prominent but not too loud, and I wanted the backing vocals to feel drifty and ethereal, but also still very present.

I used Distortion, then Equalizer, then Compressor on the vocals. The distortion had very high gain and low volume, using distortion type WS1 (no idea what that means). The low end of the equalizer was maxed out; Mid was set to around a third, and High was set to nothing. The Compressor was pretty gentle: very low attack, low release, high ratio and threshold, and medium gain.

I gave the backing vocals some reverb and delay. I used a cutoff filter on the drums towards the end of the track, to give a descending feel. I then removed the drums completely, before bringing them back at the same volume, and then fading out the volume, allowing me to close the remix with the bass alone, after the piano and then the drums have faded out.

I enjoyed picking out different parts of the original track to use in the remix. I might have liked a bit more raw material to work with – probably I should have used some additional instrumentation from elsewhere – but I quite liked the challenge of working with just the original track. I found it a bit tricky to line up some of the timing – and I’m still not quite sure if the piano is right.

I think I’d enjoy working on a remix of a different track – or set of tracks – but I’d want to think it through very carefully. I’d also need to think about how best to take samples from tracks when you don’t have access to the masters. So I may give the final module of Play With Your Music a miss for now, but I’ve definitely enjoyed the course, and found that it’s improved my critical listening ability and given me a taste of using a digital audio workstation and mixing and remixing audio.

Play with your music – Module 4: Creative Audio Effects and Automation

This week in Play with your music we’ve been looking at a few basic effects that you can apply to individual tracks of audio while mixing: delay, reverb, EQ and compression. This week’s assignment was to create a version of “Air Traffic” with some effects applied. An optional task was to use this to create a new dynamic mix of air traffic. The optional task sounded like a fun extension so I tackled this too.

Here are some things that I wanted to do with my mix:

  • Keep the piano as the dominant backing for the first verse, as it’s really the only frame for the vocals at this point in the song. I like how the faster, plonky texture goes with the soft flowing vocals. But for the rest of the song, I wanted to make the soundscape a little less cluttered, and decided that the piano could be removed.
  • Highlight some of the lovely drumming and bass work. I mixed the volume of the bass quite high, and increased it a little for one key passage. For the drums, I used EQ so that the drumming is initially very muted and bassy – a bit like it’s being played next door. I then change the EQ part way through to bring out the rimshot section. (I’m not a drummer, so I could be incorrect here, but it’s the bit that sounds more clickety-clackety). In the final section of the song I set the EQ to gradually change, giving a strange, mechanical swooshy descent.
  • Make the baritone guitar a little smoother. To do this, I applied a low pass filter. I made sure not to set the cutoff of the low pass filter too low – e.g. 308 Hz makes the sound too bassy and loses too much of the attack at the edge of the note. 1477 Hz seemed like a good value, but upon listening to the track in the context of the mix it lacked definition and became a bit insignificant now that I’d dialled back the twang.
  • Strip the drums down for all but the section where I want to highlight them. So I applied a low pass filter with a very low cutoff.
  • Make the steel guitar more shimmery. So I put some reverb on it, with a fair amount of wet signal.
  • Change the lead vocals somehow. I love the solitary vocals and piano in the first verse, so I didn’t really feel any need to change anything here. But for the sake of experimentation, I wanted to see if I could make the vocals more ethereal, or give them more presence somehow. I started off by applying a very small delay. When set to under 30ms, this just made the sound feel a bit wider, rather than adding distinct or overlapping voices. I settled on a slightly longer delay time, keeping it about the same for both channels. I wanted the final line of the song to be free from reverb, so I used the automation setting in Soundation to drop the wet signal for the very last line. Having played around with this for a while, on listening through I decided that actually the extra lead vocals didn’t work so well and I got rid of this effect.
  • Give the backing vocals a bit more presence. I added some compressor to do this.
Along the way I found a few artifacts or clicks in the original wav files. It didn’t seem like I could directly edit the audio files that made up each track, so instead I went in and dropped the volume on the relevant track to nothing for the split second where the distracting noise appeared. (I used Soundation’s automation setting here, as shown below.) This took a bit of work soloing each instrument to check which track needed fixing.

removing artifacts in soundation

For anyone else working on this mix, I found artifacts at the following locations: bar 27 on the bass track (2 clicks), bar 28 in EFX (I’d muted this track anyway), bars 34 and 39 on the bass, bar 40 on the backing vocals, bar 42 on the steel guitar, bar 46 on the lead vocals, bar 56 on the steel guitar, and bars 68 and 73 on the bass.

What was my overall experience of adding effects to this song? Overall I found that effects were a bit unnecessary, to be honest. I enjoyed being able to highlight the bass and drums, but that was mainly down to volume and EQ rather than anything else. I also enjoyed using compression to bring a bit more prominence to the backing vocals. But I probably enjoyed using the pan and volume settings in last week’s static mix more, as it felt like I was homing in on the best elements of an existing expression, rather than trying to put Christmas lights onto an already beautiful tree.

I think that in some cases effects can sound really cool – I’m particularly looking forward to playing around with an electric guitar in a digital audio workstation – but in other cases they might be an unhelpful veneer. In this track it felt that a lot of the time effects were cluttering or obscuring. Maybe that’s more to do with my skill in employing them than their utility in general. From this week’s module I’ve gained an initial basic understanding of effects, and I’m looking forward to learning when and how best to use effects in the future.

Play with Your Music – Module 3: Reverse Engineering a Multitrack Mix

The third module of “Play with your Music” is about the mixing process: taking individual tracks of recorded instruments and vocals, and arranging them together into a cohesive whole.

The first part of this week’s assignment was to reproduce Clara Berry & Wooldog’s “Air traffic” by setting the volume and pan (left–right) for each track. This kind of reproduction is called a “convergent” mix. The task was to make a static mix, so we just had to find a single setting for each track, rather than adjusting things over the course of the track.

There was a video on multi-track mixing and a useful interview with the musicians, producer and engineer involved in the creation of “Air traffic”. Alex Ruthmann has a good interview technique, and challenged the interviewees to unpack things that were obvious to them but not to mixing novices.

We were doing our mixing in-browser in a tool made for the occasion. It basically loads a 128kbps mp3 file for each track, and then uses JavaScript to handle the volume and panning of each track. The mix information gets stored as a string at the end of the URL generated by the web page, which is how we can share our mixes with each other. Listen to my convergent mix of “Air Traffic”. The string at the end of the URL is pretty neat: a dictionary with global information identifying the song and its length, followed by a list of dictionaries for each track. I actually just used the SoundCloud version of the track, rather than the left and right channels in the mix deck, as I found this a bit easier to navigate. Hopefully the versions aren’t too different!
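To illustrate the idea, here’s a sketch of how a mix-share string like that might be built. The field names, values and base URL here are all invented for illustration – the actual format the PWYM tool uses will differ:

```python
import json
import urllib.parse

# Hypothetical reconstruction of the mix-share idea described above:
# one dict of global song information, then a list of per-track dicts.
# All field names and the base URL are invented for illustration.
mix = {
    "song": "air-traffic",
    "length_seconds": 240,
    "tracks": [
        {"name": "bass", "volume": 0.8, "pan": 0.0},
        {"name": "lead vocals", "volume": 0.9, "pan": 0.3},
        {"name": "piano", "volume": 0.0, "pan": 0.0},  # muted track
    ],
}

# Serialise to JSON and percent-encode it so it survives inside a URL.
fragment = urllib.parse.quote(json.dumps(mix))
share_url = "https://example.com/mixer#" + fragment

# Anyone opening the link can recover the full mix from the URL alone:
decoded = json.loads(urllib.parse.unquote(share_url.split("#", 1)[1]))
assert decoded == mix
```

The nice property is that the URL itself is the saved mix – no server-side storage needed to share it.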

The most fun part of this week was being given the chance to come up with a different version of the mix.

This is the type of “playing with my music” that I’d been waiting for, and having listened to this track a couple of dozen times I had an idea of how I wanted to refashion it: I wanted to focus the track around the lovely vocals and driving bass. While I love multi-layering complexity, I also love uncluttered, focused soundscapes. That’s what I’m aiming for with my mix. Here’s what I did:

  • I removed the piano
  • I moved background vocals to the left channel, and lead vocals to the right channel, so that they could have more of a duet feel.
  • Initially I made the background vocals louder than the lead vocals, to see what that would sound like. But this sounded a bit too strong, so I turned the volume of these down, so that left and right vocals were now equal.
  • I love the drum entrance, but keeping the drums at the same level for the whole mix, as we’re doing in this static mix exercise, made things feel too abrupt when certain elements appeared. And I didn’t want them vaguely-present, but not loud enough to listen to closely. So I stripped the drums out entirely.
  • Without the piano there’s actually nothing to listen to for the first few bars, so I changed the min time in the URL to 6, to trim the silence.
  • I then tried to focus in on the timpani, which I’d set to about half of max volume and forgotten about while I worked on the more important elements. I couldn’t hear it very clearly, so I maxed out its volume and listened to its contribution to the mix. It only makes a few appearances – in a way, it’s a frame for the drums – and sounded a bit disjointed without some of the other embellishments that I’d removed, so I got rid of it too.
  • I gave the mix another few spins, and decided that the bass was set a bit loud. So I turned the bass down a bit.

Have a listen to my divergent mix of “Air Traffic”.

divergent mix of air traffic, by martin lugton

Play with Your Music – Module 1: Analyse your favourite tune and share it

I’m currently taking a 6-week open online course in mixing and remixing music, called Play With Your Music. The first week’s assignment is to describe the sonic landscape of a track I like.

Why I’ve chosen this track: Alcest is one of my favourite bands, if not my favourite. Alcest’s music sounds graceful, elegiac and transcendent. It feels like it’s coming from a context beyond every-day worries, concerns and feelings.

I’ve chosen a track from Alcest’s second album, Écailles de lune, called Sur l’océan couleur de fer. I particularly like its sorrowful and slightly mysterious lyrics (they work better in French, but I’ve pasted a translation below). I think the track is about 30% longer than it needs to be, but I still really like it.


On The Iron-Coloured Ocean

On the iron-coloured ocean
Cried an immense choir
And those long screams whose insanity
Seem to pierce through Hell

And then death and silence,
Rising just like a black wall
…Sometimes, in the distance, could be seen
A swaying fire

Translation from Lyrics Translate.

The instruments and their location in the mix:

The main instrument is a slightly shimmering, mellow, clean electric guitar, played gently.

Just behind it in the mix sit the distant, tranquil, male vocals; wavering slightly but with some lovely sustain and slight vibrato on these longer notes.

There are other instruments too – a smooth, firm and unobtrusive bass guitar part in the centre of the mix, additional guitar and vocal parts in the left and right channels, vocal harmonies, an acoustic guitar, backing vocals later on, and a cymbal – but the track is defined by this interplay of vocal and guitar over the bass. The parts where the guitar climbs up one channel, and the vocals climb up another, and the two intertwine, are particularly pretty.

I enjoyed this exercise, and it’s helped me to be more attentive to what I’m listening to. I hadn’t noticed the acoustic guitar that is used towards the end of the track, for example.