PSA: AI Audio Features Are Trading Sound Quality for Hands-Free Convenience

AI is changing how we listen to music, but at what cost?


AI is picking up where Bluetooth left off when it comes to convenience over quality.

AI is reshaping our relationship with music, with the recent Made by Google 2024 event showing how far AI-powered audio has come.

Our smart devices now claim to make perfect playlists, guess our music moods, and change our audio based on where we are or what we’re doing.

But as AI becomes more deeply integrated into our audio devices and music platforms, we need to ask: what are we gaining, and what might we be losing?

The Convenience of AI in Audio

Made by Google Event 2024. (From: Google)

AI is quickly improving our audio devices with features that seemed like science fiction just a few years ago.

Take, for instance, the Pixel Buds Pro 2’s Tensor A1 chip, Google’s first audio-specific processor.

This chip enables noise cancellation that adapts to your surroundings up to 3 million times per second, letting users enjoy a bubble of silence no matter where they are.

But noise cancellation is just the beginning.

Pixel Buds Pro 2 are the first earbuds built with Gemini AI. (From: Google)

The Pixel Buds Pro 2 also let users control their music, get information, and even have conversations without touching their phone. Such hands-free features prove particularly useful during workouts, commutes, or household chores.

AI is also making our music experiences more aware of what’s happening around us.

Your earbuds can now automatically adjust the volume based on your surroundings. Or, they can switch to a workout playlist when they detect you’ve started running. All that’s possible with AI.

But it’s not all about our devices. Music platforms are now also experimenting with more AI features.

For example, Spotify’s AI Playlist feature lets listeners make custom playlists using simple text prompts. YouTube Music and Amazon Music are also testing similar features.

These AI systems try to understand not just what we like, but why we like it. This could introduce us to new artists and genres we might never have found on our own.

The Quality Question

While AI is making music more accessible and personal, we need to think about how it might affect the quality of our listening experience.

Here are some of the key concerns:

Impact on sound quality

In many ways, AI is becoming the new Bluetooth when it comes to balancing convenience and sound quality.

On one hand, AI could make music sound better.

The Tensor A1 chip in Google’s Pixel Buds Pro 2 uses AI to process audio in real-time, potentially making it clearer and less distorted.

Qualcomm’s new sound platforms also use AI to adjust audio settings automatically based on your environment, activity, or hearing ability.

The Qualcomm S3 Gen 3 and S5 Gen 3 Sound platforms revealed. (From: Headphonesty)

This makes it super convenient because you get the best sound without needing to tweak the settings yourself. However, this ease of use might come at a cost. AI might simplify the sound too much, focusing more on consistency than on delivering the highest quality audio.

Just like Bluetooth can sometimes reduce audio quality for the sake of wireless convenience, AI-driven audio processing might introduce its own changes to the original sound.

Aggressive noise cancellation can remove some of the ambient sounds that add to a live recording’s atmosphere. AI-powered adjustments could also flatten out the subtle changes that make a carefully mastered track special.

This could lead to a situation like the loudness wars of the CD era.

Back then, music was often compressed to sound better on first listen, but it lost dynamic range and caused listening fatigue over time.

Impact on musical quality

The growing involvement of AI in music creation also raises questions about what fundamentally makes music good.

A study from the University of York found that people rated human-composed music higher than AI-generated music in areas like enjoyment and emotional depth.

However, tools like UDIO, which can create entire songs from simple prompts in under five minutes, are on the rise. This makes us wonder if we’ll soon be flooded with AI-generated tracks that lack the subtlety and emotion of human-created music.

Impact on music exposure

AI artists are allegedly already dominating platforms like Spotify.

Recommendation algorithms might steer listeners toward AI tracks that are likely to be popular, making it more challenging for human-created music to break through.

In fact, music analyst Rick Beato has already noticed a trend towards simpler music over time.

These songs rely on simpler chord progressions and melodies, which make them catchy, but also make them easier for AI to imitate.

As a result, we might see many AI-generated tracks on listening platforms that sound familiar and polished, but ultimately lack the depth and emotional impact of human-created music.

This flood of AI-produced content could make it harder for listeners to find truly meaningful and innovative music among all the computer-crafted tracks.

The Hidden Costs

Beyond the immediate impacts, there are less obvious costs to consider.

Privacy is a big concern, as AI systems need lots of personal data to work well. Every song we skip, every playlist we make, and every mood we choose becomes data that can be analyzed and potentially misused.

We also need to think about how AI will affect the music industry and the artists we love.

A study by Goldmedia predicts that music creators could lose up to 27% of their income by 2028 if AI-generated music becomes common without proper payment systems in place.

Forecast of the growth of Generative AI in music. (From: Goldmedia)

While the AI-generated music market is expected to reach $3.1 billion by 2028, it’s not clear how much of this will benefit human artists.

But, perhaps most worrying is the potential loss of the human touch in our music experiences.

Music has always been a way for artists to connect emotionally with listeners.

The growing number of AI middlemen risks cutting that direct connection, along with the chance discoveries that often come from human curation and shared experiences.

Finding Balance

Looking to the future, the challenge will be finding a balance between using AI’s abilities and keeping what makes music meaningful to us.

Perhaps the solution is to use AI as a tool to enhance human creativity rather than replace it.

For example, Adam Neely suggested the idea of a Musical Turing Test. This could help set standards for AI music quality and make sure that machine-generated compositions meet certain levels of musicality and emotional impact.

We could even extend this idea to how we listen to music.

As AI becomes more common in our music world, listeners might need to develop new skills to critically evaluate AI-generated or AI-curated music.

Just as we’ve had to learn to tell the difference between high- and low-quality digital audio, we might need to train our ears to recognize the subtle differences between AI-produced content and human-created music.

This could lead to a more discerning and engaged audience, potentially pushing both human artists and AI systems to create higher-quality content.
