Barry Stephen Goldfarb (BSG) Technologies QOL “Signal Completion Stage”

Friday, September 14, 2012

BSG Technologies is quick to stress that QOL isn’t a signal-processing technology, in that it doesn’t add anything to the signal that’s not already present. Moreover, it differs from the myriad techniques used in the past (extensively referenced in the QOL patent application’s “prior art” citations), most of which involve extracting the difference channel from a stereo signal (left minus right), equalizing, delaying, filtering, inverting, or performing other psychoacoustically based manipulations of that difference signal, and then mixing it back into the L and R channels. The BSG Technologies Web site offers this explanation: “Instead of ‘adding’ a host of processing techniques intended to create ‘effects,’ we have simply found a way to extract information already present in the recordings, but otherwise hidden in conventional reproduction.”

  • Left_Out = (1.618 x Left_In) – (0.618 x Right_In)
  • Right_Out = (1.618 x Right_In) – (0.618 x Left_In)
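The two equations above can be sketched in a few lines of Python. This is a hypothetical illustration built only from the quoted coefficients, not BSG's actual circuit; it also shows a property worth noticing: the matrix leaves the mono (L+R) content at unity gain while boosting the difference (L−R) content by a factor of about 2.236.

```python
import numpy as np

PHI = 1.618  # golden-ratio coefficient as quoted in the equations above

def qol_matrix(left_in: np.ndarray, right_in: np.ndarray):
    """Apply the published QOL-style matrix to a stereo pair.

    Left_Out  = 1.618*L - 0.618*R
    Right_Out = 1.618*R - 0.618*L
    """
    left_out = PHI * left_in - (PHI - 1.0) * right_in
    right_out = PHI * right_in - (PHI - 1.0) * left_in
    return left_out, right_out

# Mono content (L == R) passes at unity gain: 1.618x - 0.618x = 1.0x.
# Difference content (L == -R) is boosted: 1.618x + 0.618x = 2.236x.
```

In mid/side terms this is simply a side-channel boost of 1.618 + 0.618 ≈ 2.236 (≈ √5) with the mid channel untouched, which is exactly the kind of difference-channel manipulation the prior-art discussion above describes.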

An analog box that sits in the stereo signal path, between the source and the preamp, and the claim is that it makes everything sound better!

Read the text above again: this is not a digital system where we could just say "fine, it's DSP." It's an analog system, and to me that is a very, very strange claim.

From the phase information of the two stereo channels, two new signals are extracted that supposedly sound better. FM Acoustics also has something similar, though I couldn't tell whether it does the same thing or something entirely different. All in all, it's an interesting component.

Reply: The phase of the two stereo channels carries the soundstage information, and by manipulating it this company delivers a better image to the listener, at least according to the reviewers' claims.

Notice that the golden ratio, 1.618, is used in the sum and difference of the two signals.

http://www.bsgt.com/

http://www.google.com/patents

Romy The Cat

TAS Robert Harley

Stereo Times

Soundstage

6moon

Positive Feedback

http://www.whatsbestforum.com/showthread.php?4384-Stereo-Field-Processing

http://forum.audiogon.com/cgi-bin/frr.pl?rprea&1338187442&openfrom&2&4

QOL Inventor Barry Stephen Goldfarb Talks with Robert Harley

Barry Stephen Goldfarb is the prototypical inventor who seeks novel solutions to seemingly intractable problems. Goldfarb became self-educated in music, science, technology, and art. In addition to teaching audio engineering and acoustics at the university level, Goldfarb, alone or with others, has been awarded more than 50 patents in acoustics and audio electronics. He is the principal of BSG Laboratory and the inventor of the technologies marketed by BSG Technologies, LLC.
Goldfarb rarely leaves his Florida lab, but recently visited Southern California to oversee the installation of his QOL (rhymes with “coal,” and represented by the trademarked acronym “qøl”) technology at the Segerstrom Center for the Arts in Costa Mesa, California. I sat down with him to get some insight into the man and how QOL works.

Robert Harley: Tell me about your background and what led you to create QOL.
Barry Stephen Goldfarb: My background is more on the artistic side. Specifically, I played music at a very young age and grew up in a creative environment. I became a professional musician and actually used music to support my greater goals in art. That gave me the time, the money, and the freedom to do the things I wanted to do and still remain creative.
I’ve been working on a project all my life—a multi-sensory project. Think of it this way: If music is fundamentally organized sound, what if we could organize light, and the molecules that enable us to smell, and structure all these different variables to build a whole system out of these ethereal, ephemeral elements that are really in a sense non-material materials? I’ve spent my life working toward building a place made out of light and sound and color and other sensory experience.
Sound for this project became extremely important from the standpoint of creating a reference. If the reference that I was going to work with was off, then everything else would be off. When creating music and then recording music, I noticed dramatic losses between the sound of an acoustic instrument and the recording of that instrument. This was particularly true of a wonderful pipe organ. After hearing that pipe organ played back through any kind of loudspeaker, it just didn’t sound the same.
The overwhelming, all-encompassing sounds that fill space were gone, and the space-and-time relationship was totally lost. I found a great discrepancy between the real world of acoustics and the played-back world of audio electronics. I began a search as to what the heck was missing.
My goal was to find out what was missing and see if I could correct it. I didn’t know what my limitations were, because I didn’t come at this from an acoustician’s standpoint or an engineering standpoint, or even a science standpoint. I came at it from just listening and being a musician and knowing what things should sound like.

RH: How did you get the insight that led to QOL?
BSG: I was not afraid to take things apart and fiddle with them, particularly loudspeakers. I love loudspeakers. They’re like living things to me. Everything else is just stuff. But a loudspeaker is the heart and soul. The revelation that led me to QOL was building an automobile audio system with 68 speakers and trying to get this all-engulfing, all-encompassing sound. I wasn’t trying to make it big and loud and powerful—in fact, quite the opposite. It was going to be quite subtle. The technician helping me accidentally wired one of the speakers out of phase and I said, “Wait, I like this better—it’s not supposed to be [this way], but I like it better.”
So I end up taking two loudspeakers on the left channel—that’s where I began—and two separate amplifiers, and I put one loudspeaker in phase and one loudspeaker out of phase, and power balanced the differences so that, to my ear, it sounded just right. I was getting the in-phase and the out-of-phase signals simultaneously, and voilà! For me, sort of a whole color palette came on, and I was getting the sound, the tone, the color that I was looking for.
I was also beginning to get some very unusual radiation patterns. Now, of course, you have so much cancellation, not only in the out-of-phase loudspeaker, particularly the lower-frequency cancellation, but you’re also getting a wonderful radiation pattern.
I was really creating a dipole using two separate drivers rather than one driver reflecting. I extended that to every point in the car. That’s why I ended up with 68 loudspeakers. Each one radiated differently. In the world of acoustics and music, sound radiates omnidirectionally, but why doesn’t the loudspeaker do that? It’s got mechanical and electronic limitations. The crossovers and filters change the phase, which changes the time and the space, and on and on and on.
I got into the idea that what we need is a single loudspeaker that would produce all the frequencies and it would radiate omni-directionally. Of course, I didn’t know enough to know that that was impossible. Not to sound boastful or anything, but I built one, and that’s my pride and joy. And it works, and it sounds absolutely real.
The idea of accurate tonal color is based in multiple areas. You have to be able to radiate in all directions. You have to be able to extract all of the spatial and temporal information that is locked in the signal and restore it. And I think I figured it out.

RH: How long did it take from that initial insight until you had an actual circuit that worked?
BSG: About seven or eight years.

RH: Can you explain the theory behind QOL?
BSG: Essentially the idea was to get out of a single signal both the in-phase information and the out-of-phase information, the way I was getting it out of an omnidirectional loudspeaker.
Now, how do you do that in a signal? I just began experimenting. Of course, if you have complete phase reversal of a signal that’s equal in amplitude and frequency you have complete cancellation. I wondered if that energy really disappeared, or if it was being smothered out. In fact, my partner in this, who also helps me with my patents, Rob Clark, is an expert in this field [Dr. Robert L. Clark is Professor and Dean of Engineering and Applied Sciences at the University of Rochester—Ed.]. He’s written a book on active and adaptive noise control, which deals with active noise filters.
But I began to try to figure out a way of tricking the signal so that part of it would play and another part might be cancelled. I then tried layering different frequency paths. Let’s say I took a limited frequency band up to, say, 3kHz. I’d let that play. Then I would take another band-limited signal from 3kHz to 6kHz and put it in the opposite phase. Now they’re playing together. They’re not interfering because the two are not really playing the same frequency simultaneously. If you keep doing that with other frequency bands, it’s like weaving frequencies. A group of frequencies will be in-phase to a limited bandwidth; another group of a different bandwidth will be out-of-phase; and I would add these layers until the entire audio bandwidth from 20Hz to 20kHz was covered. That technique produces a whole audio signal.
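The "weaving" Goldfarb describes can be sketched naively in the frequency domain: split the signal into bands and flip the polarity of every other band before summing. This is only a minimal illustration of the idea as stated in the interview; the band edges (3 kHz, 6 kHz) are the examples he quotes, not BSG's actual layer structure, and a real implementation would use analog or FIR band filters rather than a brick-wall FFT.

```python
import numpy as np

def phase_layer(signal: np.ndarray, sample_rate: float, band_edges_hz):
    """Invert the polarity of alternating frequency bands.

    Brick-wall band split via the real FFT; bands alternate between
    in-phase (+1) and out-of-phase (-1), as described in the interview.
    """
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    edges = [0.0] + list(band_edges_hz) + [sample_rate / 2.0]
    out = np.zeros_like(spectrum)
    for i in range(len(edges) - 1):
        band = (freqs >= edges[i]) & (freqs < edges[i + 1])
        sign = 1.0 if i % 2 == 0 else -1.0  # flip alternate bands
        out[band] = sign * spectrum[band]
    return np.fft.irfft(out, n=len(signal))

# A tone inside an in-phase band passes unchanged; a tone inside an
# inverted band comes back polarity-flipped.
```

Because each frequency lives in exactly one band, the two groups never fight each other at the same frequency, which is the point Goldfarb makes about the layers "not interfering."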

RH: Certain frequency bands have some inverted polarity components added back into the signal?
BSG: We call it in our patent “Phase Layering.” The number of layers could be infinite. In theory it has a minimum of two and a maximum of infinity. But the most important point was that I was trying to get out what was already in a signal.
There’s a great discrepancy between the way sound operates in the real world and the way the audio industry is building equipment. My number one goal was understanding what was locked inside an audio signal. “What was getting lost? Can we find that information or has it really been cancelled out?” It seemed to me that the information is in the signal but is dramatically lost when it is being retrieved.

RH: Is the correction algorithm the same, no matter how the signal was recorded? That is, is one set of frequency bands and phase shifts ideal for simply miked recordings as well as for multitracked recordings?
BSG: Yes, and let me answer you in two ways. I started working with mono because I didn’t want the confusion of multichannel signals. The primary work was done with a single-microphone recording, played back through one loudspeaker. Stereo was another issue. While stereo is a wonderful window into a virtual world, it doesn’t exist in nature. We don’t hear in stereo. We don’t have phantom images in life. It doesn’t require two birds to hear the bird in the middle. We hear thousands of points in space. But with stereo blending, the circuit is a little different; however, it’s still the same algorithm.

RH: How simple or complex is the decoding circuitry?
BSG: Actually, ridiculously simple once we get it down. In the patent application we have examples that show how many layers you can have. One is an electronic circuit with perhaps six layers. One is a simple passive version.

RH: Where in the signal path from the actual acoustic event to what the listener hears in the reproduction is that phase information hidden?
BSG: Let’s go back to that concert hall, and let’s take a look at the way most recordings and most audio equipment works. If we make a recording in the back of a concert hall, what we hear sounds great, but when we play the recording at home it sounds terrible—it’s too wet. What happened?
Your ears and the microphones are in the same positions, but the microphone isn’t connected to the brain. There’s a whole process that occurs between the radiation pattern and its positioning in space and in time relative to where we are. It’s just an amazing mechanism.
The microphone is going to simply pick up the loudest elements that make up the sound; whatever is loudest in amplitude is what dominates the pickup.

RH: It’s purely pressure activated.
BSG: It’s pressure activated. I’ve taken recordings that were really wet and horrible, and I went to the other side [the out-of-phase information] and the signal was, in fact, dry. That information is also in there. That’s not something you could do with QOL now because the ratio [of direct-to-phase-layered components] is set up for 99 percent of recordings out there.

RH: Is there an advantage to digital implementation of QOL rather than analog implementation as in the box that I have?
BSG: There’s no advantage to me whatsoever. I am not a digital fan, other than its elegance and speed. But the digital implementation will be our biggest market because the world is running on digital.

RH: So presumably the circuitry can be small and be integrated into, say, a preamplifier or a digital-to-analog converter?
BSG: Absolutely. The analog version can be a very simple circuit about this big (holds two fingers apart). We have created the digital algorithm and loaded it into three different chip platforms.

RH: What applications do you see beyond music reproduction?
BSG: We have already built prototypes for cell phones and the human voice. The voice is much more natural. I see QOL going where any audio signal is being used: AM, FM, speech and voice dialogue, motion picture theaters, music, musical instruments.

RH: Why didn’t someone think of this before? Why did you think of this in the 21st century?
BSG: I think it’s because I didn’t know enough to know what I don’t know. I didn’t come at it from the standpoint of an engineer. And I think musicians, for the most part, find that serious world of engineering very intimidating. I was fearless! Fearless, in the sense that I’m a border-breaker by nature, and I’m looking for something. I wanted—I demanded—that I would be able to get through this larger project [the multi-sensory art installation mentioned earlier], and the sound was such an important aspect of it.
It’s an acoustic process. It’s not your typical processor because we’re not adding or subtracting anything. We’re relating the principles of acoustics to the way the brain would interpret that information as real once it exists in the air. And no one had done that before.
