Roland MT-32 Emulation Still Beating (May 10, 2013)

KrisV

Administrator
Back in the '80s and '90s, supported games sounded a lot better if you had an expensive Roland MIDI card for your PC. Now, it's possible to enjoy nearly the same audio quality through software emulation. We first reported on the munt Roland MT-32 emulator two years ago. The emulator has since been updated to version 1.2, getting even closer to the real thing. Best of all, it's DOSBox compatible. Taewong has posted a custom build of DOSBox that includes the munt 1.2 emulator as well as an OpenGlide patch from gulikoza.

The video below starts off using standard Sound Blaster music and switches to Roland MT-32 around 4:26. The difference is huge. More pre-recorded audio samples are available on YouTube. We have compiled a short list here.

--
Original update published on May 10, 2013
 
Use BASSMIDI and set the MIDI synth to munt. That way you can use Windows Media Player to play the MIDI directly through munt. Unfortunately, Windows 8 took the ability to change the default MIDI synth away, so now you have to play the MIDI through munt's own player. :-(
 
If you play the MIDI through Windows, DOSBox doesn't record it along with the video (you can make it record separately, but then you have to spend ages syncing them up). That's fine for usual purposes, but extremely annoying when you want to upload something to YouTube.
 
And I should point out that Mac users can use Boxer, which also includes munt and handles the MT-32's screen messages in a neat, snazzy graphic. The last released version hasn't been updated yet, though; munt 1.2 is already in the source for the next version.
www.boxerapp.com
 
I notice that the missile launch and explosion sounds are better on the MT-32. These always seemed to be a weak point in WC2 on Sound Blaster - if it could do digitised speech, why not digitised explosions? I can hazard a guess: WC2 didn't seem to have the ability to mix digitised sounds, so it could probably only play one at a time. A common trigger for speech is something blowing up, so playing a digitised explosion would mean no speech right when you need it.

On the other hand, using the music card meant setting aside at least one music channel for sound effects. This probably wasn't seen as a problem with 16-channel MIDI, but as Mark Knight observes over here, it certainly constrained the Amiga music where there were only four channels to begin with.
 
It's because the sound effects aren't actually digitized. They're AdLib-style Frequency Modulation (FM) synthesis, a standard the Sound Blaster incorporated into itself. I don't remember the specs, but yes, it could generate more than one sound at a time - music and sound effects, even. It was just rather sketchy in the latter role by today's standards; the AdLib really was designed as a music card. You also must remember that digitized speech was really ground-breaking stuff at the time.
 
I don't remember the specs, but yes, it could generate more than one sound at a time - music and sound effects, even. It was just rather sketchy in the latter role by today's standards; the AdLib really was designed as a music card. You also must remember that digitized speech was really ground-breaking stuff at the time.

Yes, on the programming side AdLib music synthesis was really nice, given the right libraries. You could start a tune playing and it would keep going, and even repeat, all on its own. This meant your program could spend all its precious CPU cycles on graphics and simulation. I've even seen games crash unrecoverably while the music kept playing without trouble.

I never worked much with digitised sound libraries under DOS (let alone interacting with the hardware directly), so I'd welcome corrections from anyone with more knowledge. However, I'm pretty sure that the rules for the Sound Blaster 1.0, 1.5 and 2.0 were:
  1. You could queue up one digitised sound effect for the Sound Blaster and have it play. The duration you could queue would be determined by the amount of system memory you were willing to give up.
  2. The Sound Blaster could happily mix that one digitised sound with FM synthesised music without complaint.
  3. When CD audio became possible, the hardware also combined that with a single digital input.
  4. However, if you wanted to play two or more digitised sounds simultaneously, your program had to spend CPU cycles doing the mixing, ending with a single waveform for the sound card to play.
I do remember that playing module files (made popular on the Amiga and Atari, these contained multiple digitised instrument samples instead of FM synthesis) was programmatically complex, and required ongoing work by the CPU at very close intervals. Several DOS games did include such music in the early 90s, but at first they only did so during the title screen and other non-gameplay moments. Star Control II (November 1992) is the oldest game in my own collection that plays digitised module music - and mixes it with digitised sound effects - during gameplay. However Star Control II has programmatically simple 2D graphics. While they rotate, the rotations are pre-drawn, and scaling is in multiples of two.

There probably are older games - maybe even some that predate Wing Commander II (September 1991). However, I think it's safe to say that trying to mix multiple digitised sounds while displaying 3D graphics of WC2's complexity would have made an already bleeding-edge game unplayable. The programmers made the right decision to concentrate on digitised speech, but after a decade of configuring Sierra games, it's weird to hear a Roland do the sound effects.
 
Well, that's really interesting, and it would answer a lot of questions I had about why digitized sound wasn't so prevalent before 1994 even though the Sound Blaster was available.

And the Roland wouldn't use any CPU either - just a MIDI command to the box saying "play the explosion patch at such and such a level". My question is: did the MT-32 handle which channel to play the sound on, or was that decided by the application sending the signal? Munt's Qt interface gives a very good view of what's going on inside the MT-32, and with a game like Wing Commander it really has to switch instruments left and right to stay under its 32-instrument maximum. It's really something to see - you wouldn't guess how easy that limit is to hit once sound effects are added on top of the music.
 
The Sound Blaster was a Yamaha OPL2 (an FM synth, the same chip as the AdLib) plus a DAC, and the two were combined in an analog mixer prior to the output amplifiers.

Sound cards are basically very simple devices - they're just the audio chips wired straight to the ISA bus, with a tiny amount of logic gates to handle DMA, because the DAC needs samples fed to it at a regular pace. The MIDI port that devices like the MT-32 hooked up to was just a basic UART. The "drivers" you installed basically let the programmer deal with things at a higher level (like "play this PCM data on the DAC" and "play these MIDI notes on the OPL") rather than having to actually program the registers and DMA manually (which some games also did).

Mixing a lot of sound together into one channel is very complex - complex enough that even today it requires a lot of CPU to downmix 32+ tracks to 2-channel stereo in real time. (Then again, these days you're easily talking 48 or 96 kHz at 24-bit resolution, so mixing 32+ channels is a lot of data being shoveled around the CPU. Back in the Sound Blaster era, it was basically 8 kHz audio.)

As for playback - I'm fairly certain the channel assignments are static: you tell it which channel plays which note from which bank with which envelope. That's better for the application anyway, because you want some channels assigned to music and others to effects. You assign banks to the channels and then play them. Banks can be switched quickly (it's just a MIDI message), so it's not really a big deal.
 