
(From Stereophile, October 1992)

His background may have been in tubed audio product design, but Digital’s Mike Moffat is now at the forefront of computer-based digital processor development. His D/A processors are among a handful of products that use Digital Signal Processing (DSP) chips and custom filtering software instead of commercial off-the-shelf (COTS) filter chips.

[The software in a DSP-based processor is the list of instructions that tell the DSP chips what to do to the audio data. It is contained in one or more Erasable Programmable Read-Only Memory (EPROM) chips.]

[Mike Moffat is now with Schiit Audio]

I recently [ca. mid 1992] visited Mike at the factory to get his current ideas on digital audio reproduction and what goes into designing a good-sounding processor. I began by asking Mike if he had always been an audiophile.

Mike Moffat:

I’ve been an audiophile since high school. I went to UCLA, but got drafted in the middle of it and ended up graduating [Electrical Engineering] two years late. I’m giving away my age. [laughs]. I then worked briefly at Douglas Aircraft and decided I didn’t like a desk job. So I got a more adventurous job with Texas Instruments working on experimental digital tape recorders. These were 8-bit machines that sampled at 50Hz to pick up low-frequency information that geologists used. It was overseas employment and paid well. When I came back from overseas with all that money, I could afford to be in audio professionally. In 1977 I built a preamp called the Preamp. I sold it to my partner in ‘82, did interconnects for a while, and got involved in import and export.


All the old analog stuff was tubed—a power amp, preamp, and tubed head-amp. It was kind of a novel design in its day: no feedback and 6DJ8 tubes—and there were no 12AX7s in it. My campaign at the time was against the 12AX7; it was a two-bit tube designed for cheap phonograph players.

The power amp was a 75W monoblock thing that sold for $700 or $800 apiece, the preamp was $700, and the head-amp was $500—this was 1977, now. The head-amp had two Western Electric 417As in it—the quietest tube ever made. It was fun getting the head-amp non-microphonic. We built about a hundred of those.

Robert Harley:

Was the old company successful?

MM: It was doing fine at the time I sold it. My partner sort of lost interest in it, I guess. He got married, but still, to this day, he’ll work on the old stuff. He’s down in San Diego and still takes care of whoever has them out there running.

RH: You seem nostalgic about tube gear. Do you think tubes are inherently better sonically than transistors?

MM: My whole system at home is tubed—but I wouldn’t want to build a product with tubes in the 1990s. The worst thing is when they come back.

RH: Do you mean reliability problems?

MM: I have heard that reliability is a problem from dealers who sell tube equipment in general. If it’s got tubes in it, it will be more problematical. That’s what I hear.

RH: But do you think they sound better?

MM: Oh, yeah. So why don’t I put tubes in my own products? The answer is, I don’t want to see it back. Also, just putting tubes in a $4000 DS Pro would take it up to probably $5500 or $6000—more than that if you do the power supplies right.

But I love them. Neil [Neil Sinclair, Mike’s partner in Digital] still has tubes. He probably won’t admit to using a tubed system in 1992, but I know he hasn’t sold his tube amps.

RH: All your designs—even the $1250 DS Pro Prime—use computer-based DSP chips running custom filtering software. Do you think DSP-based digital filters are a requirement for state-of-the-art digital playback?

MM: I believe so, yes. That’s because no digital filters you can buy [ca. 1992] optimize time-domain performance. They are designed for best frequency-domain performance—minimum ripple in the passband and maximum attenuation in the stopband. They are frequency-domain devices only.

One of the purposes of an oversampling system, where you add dots [samples] between the existing dots, is to add more information. In the captive filter design [commercial off-the-shelf filter chip], that translates to improvements you see on spectrum analyzers—lower ripple and better stopband characteristics. But there is no enhancement of the time domain. So you’re constrained to whatever information is in the original recording. Whereas in a time-domain-optimized filter you can improve [time-domain characteristics] the way you would improve the frequency-domain characteristics of a captive filter. With DSP filters you get the best of both worlds.
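[The "dots between the dots" are conventionally created by oversampling: zeros are stuffed between the original samples, then a low-pass digital filter makes the new samples fall on a band-limited curve through the old ones. A minimal sketch of that mechanism — a generic windowed-sinc FIR, not the proprietary time-domain algorithm Mike describes:]

```python
import numpy as np

def oversample_4x(x, taps=63):
    """4x oversampling: zero-stuff, then windowed-sinc low-pass FIR."""
    L = 4
    # Insert 3 zeros between each original sample.
    up = np.zeros(len(x) * L)
    up[::L] = x
    # Windowed-sinc low-pass at the original Nyquist frequency.
    n = np.arange(taps) - (taps - 1) / 2
    h = np.sinc(n / L) * np.hamming(taps)
    h *= L / h.sum()          # unity passband gain after zero-stuffing
    return np.convolve(up, h, mode="same")

# Band-limited test tone: the oversampled stream should pass (near) the
# original samples and fill in smooth values between them.
t = np.arange(64)
x = np.sin(2 * np.pi * t / 16)
y = oversample_4x(x)
err = np.max(np.abs(y[::4][8:-8] - x[8:-8]))
print(err)   # small interpolation error away from the edges
```

The interpolated values carry no information that was not already implicit in the original samples — which is exactly the point Mike makes about the time domain.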

RH: How much of your processors’ spatial qualities are a result of DSP-based filters and custom software?

MM: Almost all of it. Having done a number of experiments with captive filters—the COTS variety from NPC, Philips (e.g. SAA7220), Sony, etc.—the variations of the algorithm they all run don’t do anything to optimize the time-domain performance. The algorithm we run is a specific time-domain enhancer. That’s why I build processors that are Motorola-based [the Motorola 56001 DSP chip] as opposed to captive filter-based; there is a substantial difference in imaging and sense of space.

RH: So soundstage size and image focus are primarily functions of the digital filter’s time-domain performance?

MM: Absolutely. Not only theoretically, but also empirically—I’ve done the experiments.

RH: How much can you change the sound of a processor with the software?

MM: Quite a bit. You can put a Wadia-type [filtering] algorithm into a DS Pro Basic—we’ve done it—and it totally changes the sound of the processor. We’ve put in a frequency-domain optimization that completely changes the sound of the processor, particularly the imaging.

[Note that a 2006 Stereophile report by Keith Howard on programmed filters purports very different results: that switching between software filters reveals only very subtle, subjective sonic differences]

RH: When you’re designing a processor, how do you decide which filtering algorithm is the best? What’s the point of reference? And do you make editorial decisions to try to make it sound “better?”

MM: Well, there are lots of ways to optimize digital filters. I would be bullshitting if I told you we’ve tried them all—we haven’t. But they sound very different, even with the same parameters. Given the same passband, stopband, and transition-band parameters, they do sound different. I picked the filter we’re using because it does it more than any other algorithm I’ve found. We get good frequency-domain and good time-domain performance. I picked it over filters with identical performance—it sounded better in the spaciousness aspect.

RH: Is the spatial presentation heard from your processors inherent in the recording, or does the software create some of it? You used the word “enhancement” earlier in describing your digital filters’ time-domain performance.

MM: The software creates that sense of space based on information that’s in the recording. In other words, it works mathematically in the time domain the same way frequency-domain optimization works. It takes a weighted average of a group of samples. It’s very similar to the video algorithms JPL did for the Mars Lander to enhance surface detail.

So the answer is really yes to both; the sense of space is created by the software, but it’s based on information that’s originally there in the recording. It’s not making things up. It’s not re-creating some artificial image; the image it creates is based on information in the original samples.
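[The "weighted average of a group of samples" is, mechanically, FIR interpolation: each new value is a dot product of nearby original samples with a fixed set of weights. A toy illustration using truncated-sinc weights — the actual weighting used in these processors is not published:]

```python
import numpy as np

def midpoint(x, n, span=8):
    """Estimate the value halfway between x[n] and x[n+1] as a
    weighted average of the 2*span surrounding samples."""
    k = np.arange(n - span + 1, n + span + 1)   # indices of neighbors used
    w = np.sinc((n + 0.5) - k)                  # sinc weights at t = n + 0.5
    return float(np.dot(w, x[k]))               # weighted average

t = np.arange(64)
x = np.sin(2 * np.pi * t / 16)                  # band-limited test signal
est = midpoint(x, 32)
true = np.sin(2 * np.pi * 32.5 / 16)
print(est, true)   # the estimate tracks the underlying waveform
```

The estimate is imperfect only because the weight set is truncated; nothing in it is invented — every output value is determined by the recorded samples, as Mike says.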

RH: So the additional time-domain information created in your DSP filter is analogous to the additional samples generated by an oversampling digital filter in the frequency domain?

MM: Absolutely.

RH: Do you attend much live music?

MM: Every year we’ve been buying season tickets to the LA Opera, and a partial season to the LA Philharmonic. And finally this year our seats are in the Founder’s Circle, which are the best seats in the house for sound at the opera because the orchestra is in the pit.

That’s the reference for my home system—that tells me how it sounds. At the opera, unlike Broadway shows or lighter theater, if anybody came out with a microphone taped on, tomatoes would be coming out of the audience. That’s just not okay. The only thing they ever mike in any opera, and only in some of the older operas, is the harpsichord, because it’s a feeble instrument acoustically.

(above: Classic DS Pro Generation V multi-bit D/A processor from early/mid 1990s)

RH: Is it important to have live, unamplified music as a reference?

MM: Oh, absolutely. And that’s the only place you can get it any more. You can’t go to a jazz club without listening to amps. You can’t go to a Broadway show—everybody has microphones taped on. You can’t go to any light opera—Phantom of the Opera—and make judgments because you’re listening to their sound system. But if you go to the real opera and you go to the real classical concerts, there are no mikes. And that’s what tells me. That’s how I know when [the design] is right.


RH: Your processors have a characteristic sound—apparently reflecting your sonic priorities. How does a designer translate to circuit design what he thinks the product should sound like?

MM: That’s a complex question. For the tonal aspects, the [filter’s] frequency-domain performance must be accurate. But there are other factors. The longer I stay in [processor design], the more I realize how much I don’t know.

For example, if you have a lot of RF near the analog stages, the RF causes the analog stages to perform in a less than optimal manner. You take anybody’s good preamp and load it up with RF—RF that may be only 20 to 30dB below the audio signal—and it will affect the sonic qualities of the preamp to a greater or lesser degree, depending on how well it’s RF-protected. That gets to be complex in itself: the more you RF-protect something and roll it off, the more it can intrude into the treble presentation of the electronics. We try to keep the RF as low as possible. In the worst case—our least expensive product—we have RF rejection in the high 60s of dB out to 100MHz. Most digital audio products are only quoted on signal/noise ratio across the audioband.
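[The tradeoff Mike describes — heavier RF filtering versus intrusion into the treble — falls out of basic filter math: a single-pole low-pass that attenuates usefully at 100MHz already has a measurable phase shift at 20kHz. A quick illustration with a hypothetical corner frequency, not a value from any actual product:]

```python
import math

def first_order_response(f, fc):
    """Magnitude (dB) and phase (degrees) of a one-pole RC low-pass."""
    ratio = f / fc
    mag_db = -10 * math.log10(1 + ratio ** 2)
    phase_deg = -math.degrees(math.atan(ratio))
    return mag_db, phase_deg

fc = 300e3  # hypothetical 300kHz corner, chosen for RF suppression
for f in (20e3, 100e6):
    mag, ph = first_order_response(f, fc)
    print(f"{f/1e3:>8.0f} kHz: {mag:7.1f} dB, {ph:6.1f} deg")
```

One pole with this corner buys only about 50dB at 100MHz while already shifting phase a few degrees at 20kHz; reaching rejection in the high 60s of dB means a lower corner or more poles, each intruding further on the audioband.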

The time-domain optimization is a very important part of getting the image right—hearing exactly where everything is. There are other factors as well. The grounding becomes much more critical in digital designs, particularly when they’re mixed with analog designs, which of course occurs in every D/A converter. But the sonic goal is being tonally wonderful, wonderful from the standpoint of how it images, and quiet and free from noise and other spuriae.

The term “jitter” is overused. What’s quoted primarily is a single figure generally measured in the picoseconds, and, in the poorer-performing units, nanoseconds. But what we’re finding is that not only is the total amount of jitter important, but the frequency components of the jitter as well. Different jitter frequencies affect the sound in different ways.
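[That jitter is more than one number can be shown with a small simulation: sample a tone with the same peak timing error applied at two different jitter frequencies, and the error energy lands in different places — sidebands at the signal frequency plus and minus the jitter frequency. A sketch with made-up figures, not measurements of any product:]

```python
import numpy as np

fs = 48_000                         # assumed sample rate
f_sig = 10_000                      # recorded test tone
n = np.arange(8192)
t_ideal = n / fs
clean = np.sin(2 * np.pi * f_sig * t_ideal)

def jittered_capture(f_jit, peak_jitter=2e-9):
    """Sample the tone with the clock displaced sinusoidally by +/-2ns."""
    t = t_ideal + peak_jitter * np.sin(2 * np.pi * f_jit * t_ideal)
    return np.sin(2 * np.pi * f_sig * t)

# Same 2ns peak jitter, two jitter frequencies: the error appears as
# sidebands at f_sig +/- f_jit, i.e. in different parts of the spectrum.
for f_jit in (100, 4000):
    err = jittered_capture(f_jit) - clean
    spec = np.abs(np.fft.rfft(err * np.hanning(len(err))))
    peak_hz = np.argmax(spec) * fs / len(err)
    print(f_jit, round(peak_hz))
```

Low-frequency jitter puts the error right beside the tone, where it may be masked; higher-frequency jitter throws it well away from the tone — one plausible reason equal jitter totals can sound different.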


I’m sure there are more things we’ll find out about, but my thing is frequency-domain- and time-domain-optimized filters. You make it as quiet as possible and as jitter-free as possible. That translates to the best sound.

RH: The sound of many products reflects their designers’ sonic priorities. You, for example, seem to go for three-dimensionality and deep bass. How do a designer’s sonic preferences end up in his product?

MM: My preferences emerge as various prototypes are developed. Again, I’m shooting for the same kind of rush I get listening to live music.

The goal has always been the Peter Walker thing—to re-create live music in the listening room. You learn things along the way, then refine other aspects of the design. When we learn how to do something, like bass—we learned how to do bass early on—we just keep on doing it and improve other aspects of the design.

I’m not trying to color it or take artistic license, I’m just trying to make it sound like the real thing. How you do that in the analog stages is to do analog circuits that are horrendously fast, particularly at the summing junction of the DAC. You try to keep it as fast as it can be and keep the distortion as low as possible. The analog stage [of a digital processor] is very different from the line stage you would use in a preamp. You have different priorities—settling time, glitch reduction—because the smaller the area under the curve of the glitches, the better it sounds.


RH: Do you find much correlation between sound quality and measurements?

MM: Not all the time. If you rely on measurement only, you can design products that measure well but don’t sound so good. If I improve a specification—and I’m talking about specifications other than the usual harmonic distortion and audioband S/N ratio—if I improve the RF S/N ratio to 100MHz, I know it’s going to sound better. That’s not a commonly measured specification. We’re finding things to measure that aren’t commonly measured. We’ve found empirically that, as you reduce the RF noise, you get nothing but better sound. As you reduce the settling time in the current-to-voltage converter and reduce the area under the curve in the glitching, you also get better sound.

RH: Your latest project uses a laser-driven interface between transport and processor instead of a conventional LED-driven interface.

MM: The unit we brought [to the 1992 SCES] is an all-out design. The difference between that unit’s interface and the AT&T—and the Toslink, for that matter—is that the AT&T and Toslink are LED-driven and ours is laser-driven. The AT&T interface is some 10 times faster than Toslink; the Toslink has about a 5MHz bandwidth and the AT&T has about a 50MHz bandwidth. The Toslink is way too slow.
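[The bandwidth figures can be put in perspective with the common rise-time rule of thumb t_r ≈ 0.35/BW, compared against the shortest pulse in an S/PDIF stream at 44.1kHz (64 bits per frame, biphase-mark coding, so the shortest cell is half a bit). The arithmetic, with the rule of thumb assuming a simple single-pole response:]

```python
# Shortest pulse in a 44.1kHz S/PDIF stream:
# 64 bits/frame, biphase-mark coding -> shortest cell = half a bit period.
bit_rate = 44_100 * 64                 # 2.8224 Mb/s
shortest_pulse = 1 / (2 * bit_rate)    # roughly 177 ns

for name, bw in (("Toslink ~5MHz", 5e6),
                 ("AT&T ~50MHz", 50e6),
                 ("1GHz laser link", 1e9)):
    t_r = 0.35 / bw                    # rise-time rule of thumb
    print(f"{name:16s} rise time {t_r*1e9:6.2f} ns "
          f"({100 * t_r / shortest_pulse:5.1f}% of shortest pulse)")
```

A 5MHz channel spends a large fraction of the shortest cell just slewing between levels, smearing the edge timing the receiver's clock recovery depends on; at 50MHz and beyond the edges occupy only a few percent of the cell.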

As an experiment, we built a gigahertz-bandwidth device that was laser-driven. It was also “single mode,” which means that if you look at it on a spectrum analyzer, you see one pip instead of a series of pips. Both the AT&T and the Toslink systems are multi-mode. They work over a broad range of frequencies.

The laser system we developed involves some focusing and some critical distancing between parts. It has to be assembled in production to an accuracy of one to ten microns. We’re now building an interface in the vicinity of 150MHz that is single-mode. The plan is to build a non-single mode system with a Gigahertz bandwidth and determine which is really important—the single mode or the wide bandwidth. Then we’ll take a look at what to offer as a product.

We will offer the all-out design. We’re building a jig to assemble the thing to the tolerances required. We’re also looking at offering a lower-cost, single-mode interface if that works out well. I’m optimistic.

RH: So this is a practical system we’re likely to see in products in the near future? [ca. late 1992]

MM: Oh, yes. Certainly by sometime in 1993.

RH: Any idea what it will cost?

MM: The target on the 1GHz system—a complete system including the transmitter and fiber link, and a detector that would mount in a Generation III—is $1200 to $1500.

RH: What do you hear when you widen the interface bandwidth?

MM: It’s stunning. Layers of haze and veiling disappear, revealing nuances. Not just in a sense of more detail to the music, but there is more spatial information as well. When I first heard it my jaw hit the floor. I couldn’t believe the differences it was making. It was a lot like the difference between a veiled and slow inexpensive preamplifier and a preamplifier that is faster and smoother at the same time and has a lot more detail. It’s shocking.

RH: That reminds me of something you once said about moving a Toslink optical cable and hearing the images shift.

MM: I do that at dealer demos. I’ll find whatever processor’s hooked up optically, and get all the listeners there, and just take the cable and wind it around—curl it up—and then ask them to note where the images are. And I’ll be sitting there holding the cable, and they’ll notice where everything is, and then I let the cable go. When it springs back to its normal position, they say, “Wow, it really moved.” It does.

RH: Why do you think there is so little recognition of the sonic limitations of digital audio among professionals?

MM: Well, I think it’s obvious that many of them don’t give a shit as long as their CDs are selling. It’s the people like us who care about good sound. We [audiophiles] have been given short shrift. Just look over the last 20 or 30 years. Going back to the 12AX7 [tube] we were talking about earlier: the 12AX7 was basically a commercial compromise to allow low-cost record players to be built. The kind of record player with an on/off switch—you turn it on, it starts spinning, you lift up the arm—had a 12AX7 in it. Companies are always going to be looking at cost.

Japan’s a big market for us. They love our stuff because it’s an alternative to the stuff that was designed to perform as well as possible for as few dollars as possible. Actually, when you look at a Magnavox Bitstream player that you can buy for $200-$300, even at $300, that’s a lot of electronics for the money. So the guy who’s buying the average cheap system is getting a better deal.

RH: Some would argue that differences in digital processors are academic—loudspeakers have a much greater effect on the reproduced sound.

MM: A friend of mine named John Koval believes that the only reason that any two speakers sound different is because they have different frequency responses. He will equalize one speaker to the other and then prove that no difference can be heard between the speakers. But I’ll be over there and he’ll be absolutely swearing that I can’t tell them apart—but I know.


So one day, I took over an early prototype of the DS Pre. He said, “Okay, we’re going to do a blind test.” He’s very emphatic about what he believes in. He hooks up the DS Pro and a cheap stock CD player through his A/B box that has relays and junk interconnects—throwaway cables that come with a $100 cassette player. He hooks it all up, gives me this little box, and I don’t know which is which. I’m sitting there going back and forth. I was the only person in the room who picked out my processor ten out of ten times. After ten times he finally gave up.

But he couldn’t hear the difference. Some people just don’t hear the differences, but I have to think the reason they don’t hear the differences is they don’t want to hear the differences.

The guy who sells the exotic Vishay resistors came in [to Digital office HQ]. He lives, talks, eats, and breathes resistors. He asked what we made. When we showed him, he asked what it did. We said, “It makes digital audio sound better.” He said, “Well, digital audio’s perfect, isn’t it?” And we played it for him—compared a Generation II to a generic CD player—and he was just absolutely amazed. He left babbling.

RH: One of your updates to a processor was replacing the short piece of wire that carries the digital audio signal from the input jack to the PCB. Can a 3” piece of wire carrying digital audio make such a difference to the sound?

MM: The wire we use is very fast wire, made for the internal wiring harness in supercomputers. It’s relatively expensive; but it’s blindingly, screamingly fast. And since the first product we put it in—all the little wires to the jacks, and then on the larger units where we had the boards separated under different compartments—we use it for all the digital wiring between boards. It just sounds much better.

I don’t want to sound like a nonscientific type, but I have to totally admit that that’s just an empirical sort of decision which is normally not the way we do things here. Normally we try to have organized R&D programs. Because hopefully we’re going to learn something about these things—other than vague generalities.

