So what other reasons could have caused the decline in interest? Was there nothing that could be improved upon? Were there improvements on the software side that made hardware redundant and/or useless? Is there any other company besides Creative, however large or small, still holding the torch for innovating in this space?
When DVDs and HDMI were becoming popular and Windows Vista was launched, a lot of restrictions were put on drivers. I saw many people defending them, claiming it was for better stability, avoiding blue screens and so on.
But a major thing the restrictions did was restrain several of the sound cards' features, most notably their 3D audio calculations, which were then just starting to take off. People were making 3D audio APIs that intentionally mirrored 3D graphics APIs, with the idea that you would have both a GPU and a 3D audio processor, and you would have games where the audio was calculated with reflections, refractions and diffractions...
After that, the only use of sound cards became whatever the drivers still allowed you to do, which was mostly playing sampled audio, so sound cards became kinda pointless.
Gone are the days of 3D audio chips, or of sound cards full of synthesizers that could create new audio on the fly.
Yamaha still manufactures sound card chips, and their current ones have far fewer features than the ones they made during the sound card era.
EDIT: I also forgot to point out that the same restrictions kinda killed analog video too. Before the restrictions, nothing prevented people from sending arbitrary data to analog monitors, so you could have monitors with non-standard resolutions, non-square pixels, unusual bit depths (for example, SGI made some monitors that happily accepted 48 bits of color), or no pixels at all (think Vectrex), and so on. All this died, and in a sense it also affected video development: some features that video cards were getting at the time were removed, and hardware design moved down a narrower path, more compatible with MS rules.
As for what the restrictions have to do with DRM: the point was to not allow people to intercept audio and video over analog signals at perfect quality, since that would be an easy way to get around the DRM built into HDMI.
They're not necessary for consumer apps. Consumer audio applications got "good enough" with mass-produced built-in motherboard "soundcard on a chip" solutions that basically replicated the function of the old soundcards at a much lower price point.
If you want to, say, connect 16 microphones at once and record to 16 separate tracks, or you plan to apply a bunch of digital effects and therefore want a much higher sample rate than what your consumer audio chip can do, you can buy an audio interface [1].
[1] https://www.sweetwater.com/shop/studio-recording/audio-inter...
Now every computer has a little chip that plays back at least CD quality audio from an infinite pool of storage and RAM. Nobody wants to hear MIDI in their games anymore. I'm not even sure what a better sound card could even do for me - reduce line noise, drive high impedance headphones or something. Boring!
This is a massive bugaboo in the audio industry in my opinion.
I have always been a PCI soundcard user - and still am to this day - but industry trends are stopping this. I think a big part of this is due to laptops/iPads and the like becoming more popular devices, as well as usability: companies optimise for successful adoption into a user's system rather than for technical specifications.
I started my DAW setup with a Terratec soundcard with MIDI + stereo audio ins and outs, roughly 20 years ago.
Fast-forward 7 years - I bought an early USB interface, the NI Audio Kontrol 1, to use with a laptop. I could run everything on it - take it out and about - cool!!
Fast-forward another few years - I got more serious about audio and bought a Lynx PCIe AES card (now without MIDI) to use with an Apogee Rosetta 800 (8 in / 8 out). Now we're getting there. But - not an all-in-one solution.
In 2022 - surprisingly - the only (?) companies doing full PCIe audio solutions are Lynx and RME. In a fresh session in FL Studio or Ableton, with a sample buffer size of 64 (the lowest), I enjoy latency of 0.72 ms. This can't be beaten by USB. However - that's not a deal breaker for most people, sadly.
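For reference, one buffer's worth of latency is just buffer size divided by sample rate; the quoted 0.72 ms figure is consistent with a 64-sample buffer at an assumed 88.2 kHz session rate (the session rate isn't stated above, so that's an inference):

```python
def latency_ms(buffer_samples, sample_rate_hz):
    """One buffer's worth of latency, in milliseconds."""
    return buffer_samples / sample_rate_hz * 1000

print(latency_ms(64, 88200))   # ~0.726 ms
print(latency_ms(64, 44100))   # ~1.451 ms
```

The total round-trip latency is higher in practice (converter delay, driver queuing, often more than one buffer in flight), but the per-buffer figure is the knob you trade against CPU load.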
It greatly saddens me that audio in general is a second-class citizen with regards to tech advancement. It still blows my mind that the Atari STE, with MIDI built onto its circuit board, still beats a brand-new, fully specced, blazing machine for MIDI timing tightness. We need more development for real-time OSes in the MIDI world.
I think another factor is MP3 players and phone audio; people stopped using their computer as the (interface to) media source when other things took that function over for them.
Add-on sound devices still exist, but they are simple, because they don't include extensive hardware acceleration anything like what a GPU has. In fact, if you want hardware acceleration for audio processing algorithms today, like really fancy 3D sound propagation or something, GPUs would actually be great at that, and they support digital audio output too.
- During the MS-DOS era, there wasn't really a standard API for sound, so using a cheap, off-brand sound chip (including anything that might be integrated) often meant compatibility problems. Even though it might not necessarily have offered the highest quality sound, Creative's Sound Blaster line was the gold standard for compatibility during this time. Standardized sound APIs have largely eliminated this issue.
- Throughout the '90s, music for games (and a number of other applications) was distributed as MIDI (or MIDI-like) instructions to be generated by a synthesizer, and the quality of the music was very much dependent on the synthesizer used. The Roland Sound Canvas series was the gold standard at the time (in part due to its quality, and in part because that's what the composers themselves used), but it was very expensive and out of reach of the mass market. Software synthesizers were either too slow, or the quality sucked. That gave an opportunity for sound card manufacturers like Creative to offer higher-quality hardware synthesizers on their sound cards than what cheap/integrated cards could do. These days, most audio is PCM, and CPUs are perfectly capable of high-quality software sound synthesis, so hardware synthesis has become a non-issue and modern consumer sound hardware doesn't even have synthesis capabilities anymore.
- During the '00s, sound cards began to offer accelerated environmental and positional audio (e.g., Aureal A3D, Creative EAX), which games quickly adopted to improve the sense of immersion. However, changes in the Windows audio architecture introduced with Windows Vista broke this functionality without a replacement. Advances in CPU hardware have since allowed this type of processing to be done on the CPU (e.g., X3DAudio, OpenAL Soft) with acceptable performance.
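At its simplest, CPU-side positional audio is just gain math per channel. As a rough illustration (not how EAX or OpenAL Soft actually implement it - real engines add HRTFs, distance attenuation, and reverb), a constant-power pan law collapses a source's azimuth into left/right gains:

```python
import math

def pan_gains(azimuth_deg):
    """Constant-power pan law: azimuth -90 (hard left) .. +90 (hard right)
    is mapped to an angle in [0, pi/2]; left/right gains are cos/sin, so
    left**2 + right**2 == 1 and perceived loudness stays constant."""
    theta = (azimuth_deg + 90.0) / 180.0 * (math.pi / 2)
    return math.cos(theta), math.sin(theta)

left, right = pan_gains(0)   # centered source: both gains ~0.707
```

Multiplying each mono source's samples by these gains before mixing is cheap enough that even '00s-era CPUs could have done it for dozens of sources; the hardware cards earned their keep on the heavier reverb and occlusion processing.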
In the current era, we do have dedicated soundcards, although not in the form of PCIe add-in boards. External DACs (either dedicated USB, or integrated into a display or AV receiver) are popular, as are the DACs used by wireless/USB headphones. Also, there has been some work done to utilize the computational capability of GPUs for real-time audio ray tracing.
There was also Aureal. Both Creative and Aureal had their own specific APIs to try to create a moat similar to Glide from 3dfx, but failed. And then Realtek took over.
Creative could have competed in onboard audio as well. But they were too worried about losing their Sound Blaster revenue, so they diverged into other things like GPUs (3DLabs), MP3 players, speakers, etc. And every single one of them failed.
If you are looking for modern Audio Engineering, you could look at PS5. But powerful DSP isn't exactly rocket science anymore. A lot of the improvement has to do with software.
Creative used to be the pride of Singapore. It is sad the company was badly managed and never made the leap to the next stage.
This is also around the time it started to be common for pre-built systems to integrate functionality into the motherboard, such as VGA, audio, USB, and in some cases even AGP video all as part of a chipset.
The peak of PC audio probably matches the peak of the "HTPC" wave that happened in the first half of the 2000's - PCs designed to be put under your TV and replace your stereo.
But also, laptops started getting cheaper and more popular as the late 90's turned into the 2000's and beyond - where integration of components was even more valued. Then smartphones started to take over in the 2010's.
The culture is different now. These days, young people don't have stereos anymore; they might at best have a TV soundbar, some really good wireless speakers, or a couple of Bluetooth speakers, and the phone is the centerpiece of the personal audio experience now.
Hi-Fi that's not dedicated to making your car rattle or being blasted at 500W-per-channel volume over a bar/club PA speaker is dead.
Desktop PCs are for businesses which need only good enough audio for business purposes, and gamers who probably want to spend money on a GPU over audio.
It forced the dedicated soundcard vendors to justify the add-on price by pushing features like multichannel audio, surround sound codecs, hardware controls, etc., but none of those features were of mainstream interest.
Total sales volume for dedicated soundcards dropped, economies of scale shrank, prices had to increase, pushing the products even further into a niche...
If you don't like the DAC in the headphones, you can also find a high-quality USB DAC and use the audio cable from there.
If you're serious about audio, you just plug a cable into your breakout box and have your interfaces, converters, and preamps there. Your sound hardware can be anything from pure I/O to an elaborate instrument under computer control. You can do audio synthesis and compositing on the CPU, the GPU (not so different from a DSP), or external hardware.
Soundcards are only 'gone' in the sense that PCI cards are less important because many people use laptops and the audio built into motherboards is more than Good Enough for everyday purposes.
And while 3d audio is/was a cool concept, most people don't really have a sound system that will really take advantage of it. Even most "serious gamers" that I know use headphones or stereo speakers... now that I think about it I'm pretty sure I'm the only person in my friend group with a 5.1 speaker setup on my PC.
There is no point to using an add-in card if the facility is now on the main board and can do the task to the user's wishes.
The same thing can be said about many previously modular components where more and more is now simply a function of the main board itself. Take all the legacy I/O which used to be various chips, often on various add-in boards. They were all condensed into one single Super IO chip that can do all of it, but at a fraction of the size, cost and energy usage.
A lot of peripherals used to be implemented in separate chips and sometimes even discrete logic. If we were to try to do that today we'd either have to cut 90% of the currently available standard features or make the mainboard ten times as big to be able to implement it the old school way.
For example if you produce music, you probably are good with an external USB audio interface like a Focusrite Scarlett.
1) Most people are happy with good enough. To most people's ears, speaker quality makes a bigger difference than the audio output stage, and people already settle there. Furthermore, when iTunes was a big deal, it turned out people got so accustomed to low bit rates and mediocre equipment that they thought it sounded better than the good stuff, because it's how they expected their music to sound.
2) With most computing moving to laptops and then to mobile, people generally don't have a choice about the audio processing technology inside their computer.
For what it's worth, if you have a small discretionary budget, I would recommend a "top of the low-end" DAC to anyone who listens to a lot of music. I "did my own research" and concluded that for me, the Topping D10s USB DAC was the correct amount of gadget. It has RCA outputs and supports 384kHz audio, which needs to be enabled in your sound settings.
When I got it set up, it was as if my previously disappointing desk stereo speakers and preamp combo took a great sigh of relief and the sound opened up. Everything sounds more defined, I can hear where the instruments are positioned in the soundstage, and I am now one of those people who appreciates Bandcamp allowing FLAC downloads. For me, this was worth CAD$139.
https://www.amazon.ca/gp/product/B08CVBKHFX/ <- currently "unavailable" but likely easy to locate via search
It's still fun today, 25-30 years later, to crack open a '90s MIDI file in a DAW and route the channel outs through virtual instruments to see how they sound.
I use PC audio to test analog audio circuits that I make, so I've made it my business to know the quality of my measurements.
I've also checked out the specs on the chips used in those devices. The delta sigma conversion technique is one of the wonders of the modern world.
The fact is that the audio quality coming out of those devices is stunning, and probably doesn't need to be better. I can see where a recording studio might want to spend more on "overkill" to make the artifacts of their digital interface a non-issue, but for the rest of us, we're living in a golden age of audio.
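For the curious, the core idea behind delta-sigma conversion can be shown in a toy model: a 1-bit quantizer inside a feedback loop, where an integrator accumulates the quantization error so that the error ends up concentrated at high frequencies, and simply low-pass filtering (averaging) the 1-bit stream recovers the signal. This is a deliberately simplified first-order sketch, not any real chip's design:

```python
def delta_sigma_1bit(signal):
    """Toy first-order delta-sigma modulator: turns samples in [-1, 1]
    into a 1-bit (+1/-1) stream whose running average tracks the input."""
    integrator = 0.0
    feedback = 0.0
    bits = []
    for x in signal:
        integrator += x - feedback              # accumulate quantization error
        feedback = 1.0 if integrator >= 0 else -1.0
        bits.append(feedback)
    return bits

stream = delta_sigma_1bit([0.5] * 1000)
# low-pass filtering (here: plain averaging) the bitstream recovers ~0.5
print(sum(stream) / len(stream))
```

Real converters run this loop at megahertz oversampling rates with higher-order loops and digital decimation filters, which is how a handful of analog components yields 100+ dB of resolution.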
You answered it yourself there. Sound hardware started being integrated into the motherboard and/or southbridge/PCH.
(Although a minor quibble... on-board sound *started existing*. Back in the days of sound cards the only on-board sound would've been a PC Speaker which... well, it can do beeps of various frequencies, but that's about it.)
Old sound cards also had various synthesis and MIDI features, instead of just playing sampled audio, which is great in theory, but... then your audio sounds different on every piece of hardware. Also, these days CPUs are fast enough to do a lot of the synthesis in software (and have extra cores, so you're not stealing cycles from something else). That way, even if you really wanted synthesis, not only do you not need extra hardware, it also sounds the same everywhere.
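As a minimal sketch of that software-synthesis point (the note choices and single sine voice are just illustrative, not any particular synth's design), here's equal-temperament MIDI-note-to-frequency conversion plus a sine oscillator rendered straight to a WAV file, using only the Python standard library - the same output on any machine, no synth chip involved:

```python
import math
import struct
import wave

def midi_to_hz(note):
    """Equal temperament: MIDI note 69 = A4 = 440 Hz."""
    return 440.0 * 2 ** ((note - 69) / 12)

def render_note(note, seconds=0.5, rate=44100, amp=0.3):
    """Render one sine-wave note as a list of 16-bit PCM samples."""
    freq = midi_to_hz(note)
    n = int(seconds * rate)
    return [int(amp * 32767 * math.sin(2 * math.pi * freq * i / rate))
            for i in range(n)]

# A short C-major arpeggio (C4, E4, G4), written to a mono WAV file.
samples = []
for note in (60, 64, 67):
    samples += render_note(note)

with wave.open("arpeggio.wav", "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)       # 16-bit samples
    w.setframerate(44100)
    w.writeframes(struct.pack("<%dh" % len(samples), *samples))
```

A real softsynth adds envelopes, polyphony, and wavetables, but it's all the same arithmetic - which is exactly why a modern CPU core barely notices the work that once justified a dedicated chip.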
In the 90s, CD audio could handle some soundtrack work but without looping, and it would obviously block disc reads.
However, you now have significant storage and hardware decoders for compressed audio. So soundtracks are shipped as compressed waveforms.
All decoders will play them the same way. There is no differentiation.
There's this trusty Creative Audigy Rx that I nabbed from a closing down sale at an electronics retailer. Poetically, both are facing the same fate.
I built a Ryzen 7 machine and installed Windows 10. Whilst installing the CD drivers for the soundcard, Windows 10 BSOD'd and rebooted. Not to be deterred, I tried the latest downloadable drivers (marked as Windows 8 compatible) - but it's all WDM, so surely OK? Not so. Another BSOD.
There was no help to be found online so I very reluctantly gave up. Now it lives in the retail packaging somewhere in my house, the bright and elaborate box promising an audible experience that exists only in my mind.
As for people who just listen to music, IMO, built-in audio interfaces or just digital Bluetooth headphones have been good enough at digital-to-analog conversion for a very long time.
This is the unit I have and I'm quite happy with it. It's on the "low end" of "good". It has a surprisingly low noise floor. It lacks sophisticated routing and switching. Latency is really good at 192 kHz - good enough for live mixing, as low as you have the CPU oomph to handle.
They only take off in price from here, up to thousands of $.
This is right until you want something a little bit out of the ordinary: a USB DAC with 5.1 that supports hardware decoding of DSD/etc. Most of the options have a $x,000 price tag. The next best thing is to use an A/V receiver via HDMI.
For your average consumer, built-in audio is plenty good. Not much reason to get any extra hardware at all.
For audio enthusiast consumers who want better than what onboard offers for some reason, an internal card is only useful for desktop computers, whereas external DACs are usable on mobile devices and laptops - and that's what most people use.
For audio producers, an internal soundcard can't physically fit the I/O you need. 1/4" jacks, XLR, that sort of thing, so older professional cards all had breakout boxes. If you're going to have a breakout box anyway, might as well have the whole device be self-contained and plugged in through Firewire (or, more recently, USB or TB). Depending on your setup, you might even have these things rack-mounted.
In the early days of PCs, almost every peripheral was provided by an I/O expansion card, save for the keyboard. My 386 had a 16-bit multi-I/O ISA board that provided the essential ports: ATA, floppy, serial, parallel. You purchased a VGA card and then a sound card. You had at least two or three ISA cards, because your motherboard was taken up by the CPU, FPU, RAM, and essential control logic chips. My second 486, a DX2 66MHz, had the ATA, serial, parallel and floppy ports on-board, which amazed me, as it eliminated a whole ISA board. (Now everything fits on a single silicon die...)
Early on-board audio was usually a sound card soldered to the motherboard. Then Intel developed AC'97, which integrated standard audio into the south bridge. Coincidentally, that made Microsoft's life easier, as all the PCs they were running on would use this standard, meaning all they had to do was provide AC'97 drivers and everyone with an Intel machine had sound. No more competing third-party audio APIs from Creative et al.; it was all Wintel. PC builders could now provide multimedia PCs at cheaper prices, with audio by default. Also, USB happened, which allowed people to plug things in without opening cases and fiddling with circuit boards, which is alien to many.
And as CPUs became faster, the need for dedicated DSPs to handle audio processing or synthesis was eliminated. You can now run a whole DAW, complete with synths, sample playback, effects and mixing, in real time on a cheap general-purpose off-the-shelf computer with no special hardware.
There are still "cards" being made for pro-audio users, that embed DSPs for computing plugin algorithms [1]. Not quite the same application, but an interesting parallel.
1. https://www.phoronix.com/review/590
2. https://www.modders-inc.com/razer-barracuda-hp-1-8-channel-g...
Lack of marketing, maybe? A myth that you need to fork over thousands for a complicated audiophile setup to get very incremental improvements over the baseline? Probably a combination of these is to blame.
I have a Scarlett attached to the underside of my desk with a pair of Sennheiser HD600 headphones and a Monoprice Stage Right condenser mic on a desk stand.
I'm guessing the inner-PC card space has a bit too much of an EM noise problem for people who care about quality higher than integrated sound can provide (which is pretty good these days anyway), and external devices have room for the inputs people actually want.
Well, the internal ones still exist [0]. However, with higher bus speeds, external interfaces are more practical: you can connect more devices to them, and you can move them - a lot of music today is done on laptops.
While good studio-grade headphones are available for less than $200 (Beyerdynamic DT990 Pro, for example), I don't see what a soundcard offers that an external DAC doesn't. If anything, most soundcards I find are lacking some much-needed features compared to most DACs, which is probably the answer to your question.
Desktop PCs have become a niche, more people have laptops than Desktop PCs, so that already reduces the market for dedicated internal sound cards significantly.
Desktop PC motherboards now come with integrated DACs that are "good enough", and if you care enough that they aren't, you'd have a hard time arguing for an internal solution over an external one anyway.
10 years ago, even many budget sound cards outperformed the capabilities of the average human ear and the average pair of headphones.
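To put a rough number on "outperformed the human ear": the theoretical dynamic range of ideal PCM grows about 6 dB per bit, so even 16-bit playback already spans more range than most rooms and headphones can reveal. A back-of-the-envelope calculation (ideal quantization only; real converters land a little below this):

```python
import math

def dynamic_range_db(bits):
    """Theoretical dynamic range of ideal n-bit PCM quantization."""
    return 20 * math.log10(2 ** bits)

print(dynamic_range_db(16))   # ~96 dB, CD audio
print(dynamic_range_db(24))   # ~144 dB, beyond any listening room
```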
There’s not much room left for growth.
As opposed to much more complex and dimensional video data.
In the Mac/Amiga/non-DOS world sound was good enough very early. As soon as we said "we need sound," it was there, as long as you weren't on DOS.
The Amiga Video Toaster card was a thing, too. Your Android/iOS/PalmOS device surpassed that long ago.
When Realtek chips became good enough for audio consumers in the '00s, the market for dedicated sound cards no longer existed there. Of course, they still exist for audio producers and professionals.
The actual advantages of doing things in hardware quickly faded as CPUs became ever more powerful. But just as much, the various APIs didn't really ascend, or their technologies (head-related transfer functions) got swallowed/mainstreamed. Namely DirectSound3D (1996), which grew various hardware offloads (EAX) and eventually became DirectX Audio. The pressure to compete deflated under this mainstreamification... few people wanted to go the extra 100 miles to support bespoke fancy hardware capabilities that 0.01% of gamers might have, when they got 90% of the way there using the common denominator.
I don't actually know what folks like gamedevs do now. Both Xbox and PlayStation say they tick a lot of really good 3D audio boxes, and they support an array of common 3D audio standards that I don't really know that much about. I'd love to know more about how the gamedevs' feature sets & capabilities have evolved over time... most coverage is, alas, ultra consumer-facing & abstract; getting some intimacy with what is technically possible now vs. then would be fascinating.
I do think Creative has continued doing some refinement on their dedicated cards, but, like, in general, I think the other answer to the question is just that we are damned near perfection. We have really smart folks telling us we're making things worse by having too high a sample rate (>=192kHz, see Monty's https://people.xiph.org/~xiphmont/demo/neil-young.html & newer excellent audio-myth videos), there's so little noise left to chase out to drive SNR higher, and THD is tiny. Many laptops & gaming motherboards have really, really good audio outputs, which cost like $4 more for the person making the system for nearly flawless output, and even cheap stuff has gotten quite fine (but boy, can engineers still cut corners & create trashfire designs, especially on low-end systems - though it's gotten harder!).
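One way to see the sample-rate point: sampling only misbehaves when content above the Nyquist frequency (fs/2) reaches the converter, where it folds back into the audible band as an alias. A toy folding calculation (illustrative only, and not taken from Monty's demo - real converters prevent this with anti-aliasing filters):

```python
def alias_frequency(f, fs):
    """Frequency an ideal sampler at rate fs reports for a pure tone
    at f: the image folded back into the band [0, fs/2]."""
    f = f % fs
    return min(f, fs - f)

# A 25 kHz tone sampled at 44.1 kHz folds back to 19.1 kHz:
print(alias_frequency(25_000, 44_100))   # 19100
```

Since 44.1/48 kHz already covers everything below ~20 kHz, raising the playback rate to 192 kHz buys no audible headroom - which is the thrust of the argument linked above.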
What's kind of exciting in the past half decade is that Intel realized that on-card processing needed effective commodification if it was to survive. Each chip/driver maker making their own bespoke solutions, like back in the '90s, was going nowhere, and the only effective pushback against forever doing more & more on the CPU was to make the sound hardware more usable, easier to implement consistently & well. To that end, they did the amazing work of making SOF (Sound Open Firmware), which is an implementable reference firmware anyone can use & ship for sound devices. It's a community effort now, to make an orchestrator/controller that implements common driver interfaces & figures out how to use a slate of DSPs effectively to do the job - which, under the hood, is what soundcards have been. Now everyone can work together to use these DSPs effectively & well, whatever chips with whatever DSPs you happen to have. AMD is one other noted user; I forget who else.
https://thesofproject.github.io/latest/introduction/index.ht...
True, you can get some of the best sound cards available on the market if you are willing to spend the money, but one thing most people don't realize is that, outside a handful of niches, sound cards are not being bought by many people. This has convinced us to explore whether sound cards are still relevant or not. Honestly, there could be a lot of reasons why they are being phased out. However, is the situation bad to the point that sound cards will soon be considered relics of the past?
This is what we are going to explore in today's opinion.

Onboard Audio is Getting Better

Okay, let's be honest. Part of the reason why sound cards were created in the first place was that onboard audio had a lot of distortion issues, mainly because the components were placed close to each other. For the longest time, this was a huge issue that resulted in sub-par audio, and that is why companies like Creative banked on it and created a long range of soundcards, from cheaper options to ones that cost a lot of money.
However, as time progressed, onboard audio only improved - so much so that many companies, like Asus, started shielding the audio components on a separate layer of the board. This technique reduced distortion by a great deal, and onboard audio kept improving, too.
Wireless Headphones are Taking Over

Back in the old days, wireless headphones - or any wireless peripheral - were simply not good enough to match the quality and fidelity provided by their wired counterparts. However, things have changed drastically, to the point that wireless technology has improved enormously. With most gamers and general users now preferring wireless technology over wired, there is no denying that wireless headphones are taking over. They are convenient, introduce minimal input delay, the battery life is great, and most importantly, they come with built-in audio processing technology.

External DAC/Amp Combos Are Now Becoming the Choice

Another reason why sales of sound cards are declining is that people are now going for options like external DAC/amp combos a lot more than they are going for sound cards. Sure, these combos are certainly expensive, but the good news is that the performance they deliver is actually a lot better than some might expect.
For starters, a Schiit Magni and Modi combo is going to be good enough to beat pretty much every single sound card available on the market. I know that people might want to invest money in something like a sound card, but when you are getting better overall quality from DAC/amp combos, you do not really see the reason.
Sound Cards are Not as Versatile

This actually ties into the previous point. Simply put, sound cards are not as versatile - they never were, to begin with. However, back then, the needs were not as widespread. Internal sound cards, the kind most people would go with, can only be used plugged into a PCI Express slot on your motherboard.
However, with the DAC and amp combos that are widely available on the market, you really do not need to do that. They are plug-and-play, most importantly they are driverless, and they can work on pretty much any device that comes with the required ports.
Needless to say, sound cards are simply not as versatile, and that gives a lot of people a reason to stop using them and opt for something that actually serves them properly.
Hope this helps.