AI-Generated Music: Copyright in the Music Industry’s AI Age

Ron Pye


The rapid advancements in artificial intelligence are affecting all industries, and the music industry is no exception as it finds itself at a turning point. AI-generated music is changing how we think and feel about music creation. Machines can now make songs with skill and creativity.

The blanket adoption of these new technologies raises significant copyright questions for everyone in the music industry. Who owns the rights to music created by machines? How much human input is required for algorithmic output to be considered human-created? And if it can’t be considered a human creation, who owns it? The old rules of copyright protection, originally designed for physical works, fit poorly with music made by algorithms trained on vast databases, especially when existing law treats substantial similarity as the test for copyright infringement.

As AI and copyright issues grow, so does the need for new rules. AI innovation is becoming easier for people everywhere to use, which means we urgently need to talk about how to handle machine-made music.

It’s key to understand how tech and law work together in music today. We must find fair ways to deal with copyright issues. This will help music keep growing and changing.

The Current Landscape of AI Music Generation

Today, AI generative music platforms are changing how we make music. They use AI technology to create ‘original’ songs with increasingly surprising quality. These generative and assistive AI systems are changing the way professionals, hobbyists and everyone in between are making music.


As AI companies develop more sophisticated platforms with AI integrations and terms and conditions of use, the market is growing incredibly fast. These systems learn from music patterns and styles, enhancing their ability to create music content that resonates with end listeners. Some would argue that this is proof of the homogenisation or ‘sounds like’ effect of what is perceived as modern music. The AI companies market themselves as being able to make songs that feel as good as ones made by humans.

Leading AI Generation Music Platforms and Technologies

Some platforms are leading the way in using generative AI to enhance user experiences. AIVA is one of the top names in generative AI music. Its deep-learning systems compose across genres, from classical and rock to cinematic music.

Amper Music is another AI tool that makes music extremely quickly. The easy-to-use interface lets you choose the mood and style of your music from a few simple prompts. It then produces professional-sounding music in minutes.

OpenAI’s MuseNet is another big step in generative AI models. It can also make music in many different styles, from classical to pop. MuseNet shows how well AI models understand musical composition.

| Platform | Primary Focus | Key Features | Target Users |
| --- | --- | --- | --- |
| AIVA | Classical & Cinematic | Deep learning algorithms, score analysis | Film composers, classical musicians |
| Amper Music | Commercial Soundtracks | Mood-based generation, quick turnaround | Content creators, advertisers |
| MuseNet | Multi-genre Composition | Style blending, genre versatility | Experimental musicians, researchers |
| Soundraw | Royalty-free Music | Customisable parameters, instant download | Video producers, podcasters |

These platforms show how much generative AI has grown in recent years. Modern AI models understand not only music theory and composition but also emotion: how the end result will make the listener feel. The tech is improving fast, with new features added all the time.

Commercial Applications in the Music Sector

AI use in music is widespread. Ad agencies use AI tools for custom soundtracks, which saves money compared to hiring composers or buying tracks.

Film and TV allegedly use generative AI music for scores and mood. These AI platforms make music that fits the theme of the shows or the mood of the scene. It’s considered great for indie filmmakers with small budgets who can’t afford licensing fees.

Gaming companies also use AI technology for music. They need soundtracks that change with the game. Generative AI models make music that fits the game perfectly.

Streaming services use AI tools for playlist music recommendations. They make music based on what you like. This means you get music that’s just for you.

Independent artists and authors also benefit from AI in music. It makes music-making easier, without needing to be a pro. Many start with AI and then add their own touch.

Now, studios mix generative AI systems with traditional music-making. This blend of tech and human touch arguably makes music better.

Understanding How AI Systems Learn from Music

The starting point for AI generative music begins with complex algorithms and huge musical datasets. These systems need lots of training data to learn patterns and styles in music. They learn by processing huge amounts of music data, training their models to understand what makes music appealing to humans. And, crucially, how the impact of generative AI shapes these preferences.


Today’s AI learning tools and applications use advanced data preparation. Engineers turn audio into numbers for algorithms to assimilate and study. This helps the learning machines grasp rhythm, melody, and harmony, much as a human does.

Training Data Collection and Processing

Companies use many methods to source music for their AI. Web scraping is common practice, using tools to ‘scrape’ songs from the internet. Although everyone is aware this happens, no one admits to it, as it violates current copyright law. Companies also work with record labels for quality music.

The training AI systems process is extremely detailed. First, audio is broken down into musical elements like tempo and pitch. Then, these are turned into numbers for AI models.

Checking the data quality is also key. Engineers remove bad files and duplicates. This step is crucial for AI to create good music.
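
The audio-to-numbers step described above can be sketched in a few lines. The snippet below is a rough, generic illustration (an assumption about the general approach, not any platform’s actual pipeline): it slices a synthetic tone into overlapping frames and converts each frame to FFT magnitudes, producing the kind of numeric time-frequency grid a model can learn rhythm and pitch from.

```python
import numpy as np

def stft_magnitude(signal, frame_size=1024, hop=512):
    """Slice audio into overlapping windowed frames and take the FFT
    magnitude of each, giving a time-frequency grid of numbers."""
    window = np.hanning(frame_size)
    frames = [
        signal[start:start + frame_size] * window
        for start in range(0, len(signal) - frame_size + 1, hop)
    ]
    return np.abs(np.fft.rfft(np.asarray(frames), axis=1))

# Synthetic stand-in for a decoded audio file: one second of a
# 440 Hz tone sampled at 22,050 Hz.
sr = 22050
t = np.arange(sr) / sr
audio = np.sin(2 * np.pi * 440 * t)

spec = stft_magnitude(audio)
# Each row is one time step; each column is the energy in one frequency bin.
print(spec.shape)
peak_bin = int(spec.mean(axis=0).argmax())
print(peak_bin * sr / 1024)  # close to 440 Hz, the pitch of the tone
```

In practice engineers layer much more on top (mel scaling, tokenisation, deduplication), but training starts from a numeric representation along these lines.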

The Scale of Copyrighted Material Usage

AI generative music systems use a huge amount of copyrighted material. Big platforms handle millions of songs without asking producers. This raises big questions about rights and fair pay.

Studies show companies use music from many artists over decades. This wide range helps AI learn all about music.

The table below shows how many copyrighted works are used in the production of AI generative music:

| Platform Type | Dataset Size | Copyrighted Works | Training Duration |
| --- | --- | --- | --- |
| Commercial AI Platforms | 1.2M songs (OpenAI Jukebox) to tens of millions (Suno) | Web-scraped (admitted by Suno/Udio) | 2-4 weeks (documented commercial models) |
| Research Institutions | 200 hours (MAESTRO) to 280K hours (MusicLM) | Mixed: licensed (Meta) to web-scraped (Google) | 1-2 weeks (typical research timeframes) |
| Independent Developers | 5K-100K tracks (typical indie projects) | Typically copyright-cleared datasets | Days to weeks (smaller datasets) |
| Open Source Projects | 15K tracks (GCX) to 250K+ scores (PDMX) | 100% copyright-cleared/public domain | Days to weeks (copyright-cleared datasets) |

Training AI on such big datasets costs a tremendous amount. Companies spend millions on computers and storage. They believe bigger datasets make better AI-generated music.

Legal issues are growing about using so much protected content. Some say it should need a license like traditional music. This debate is about fair use and rights.

How Does AI Training on Music Affect Artist Rights

AI systems trained on copyrighted music change the game for creators’ rights. Pertinent questions are raised about how tech companies use existing music for their products. Music producers, composers, singers and songwriters are also in a difficult situation. Their works can become training materials for systems that might eventually end up competing with them on prized platforms.


The issues that are raised go beyond basic copyright. Artists’ rights cover many different areas, like making sure creators get remunerated fairly and protecting their artistic vision. These rights have grown over time and vary greatly from country to country, which makes them difficult to implement.

“The fundamental question is whether AI companies can build billion-dollar businesses on the backs of artists without permission or compensation.”

Tracy Chan – former SoundCloud, Twitch, and Spotify exec.

Direct Threats to Creator Revenue Streams

AI in the production of music poses big financial risks to music authors and rights holders. AI can make music that sounds like existing works, taking market share from human artists. This affects many ways musicians make money.

Streaming platforms now play AI-generated music alongside human music. Their algorithms can’t tell the difference between original works and those that have been influenced by copyrighted works used for training. This means music producers and artists might lose chances to get their music heard.

Licensing markets are also hit hard. Background music for ads, films, and commercials is a big source of income. AI can make music for these uses quickly and cheaply, threatening the jobs of composers.

AI makes passable music much faster than humans ever could. While artists might spend months on a song, AI can make hundreds of different tracks, or iterations of a single track, in a matter of hours. This increasing flood of music data could, over time, make human creativity appear less valuable or necessary.

Attribution and Recognition Challenges

Music that has been generated with AI algorithms ignores the originator of the music. When AI learns rhythms and tempo via copyrighted music, it takes on those styles and sounds without giving credit to the authors. This means artists’ work becomes difficult to trace, almost impossible to attribute and part of a process that ignores any recognition.

Copyright holders might only find out their work has been used to influence AI-created music much later. The training process uses millions of tracks without consent or credit. This makes artists’ contributions anonymous.

Recognition is not just about remuneration and legal rights but also cultural respect. Artists can build their careers on unique styles and sounds, which sometimes may be geographically specific. AI replicating these without credit undermines the bond between musicians and their fans and the territory of the original sounds.

New artists face a tough time. Established musicians can fight for their rights, but newcomers often can’t as they do not have the financial means. This makes artists’ rights depend on how much money they have.

Moral Rights and Artistic Integrity

Moral rights are key protections for creators in the UK. They include the right to be credited and the right to protect their work’s integrity. AI music generation challenges these rights.

The right of integrity stops artists’ work from being changed in ways that harm their reputation. AI trained on copyrighted music can create unexpected or inappropriate uses of recognisable elements, linking musicians to content they never intended to be associated with.

Attribution rights exist to ensure musicians get credit for their work. AI training processes often remove or completely ignore this credit, as metadata is not scraped and attributed in the creation of generative music. The resulting music may show clear influences from specific artists without any formal recognition given. This raises concerns about the rights of artists in the modern AI age.

Violating moral rights can seriously hurt artists’ careers. Musicians can spend years building their unique voice and reputation. AI uses their work without the correct permissions, which can damage these foundations.

Original creators argue that they feel violated when they find out their work was used to train AI algorithms, without their consent. This emotional impact is very often overlooked.

Artists struggle to prove these violations and get justice. Traditional copyright laws can’t handle the vast scale and complexity of AI training. This leaves many without basic ways to protect their rights.

UK Copyright Law and AI Music Generation

British intellectual property legislation faces new challenges with the rise of algorithmically created music. The law was made for human creators, not AI systems, which can quickly create music from huge databases of tracks, drawing on the latest AI advances and existing copyright-protected material.

The Copyright, Designs and Patents Act 1988 is the main law for copyright in the UK. But, it was made before AI existed. Now, courts and experts must apply old laws to new AI scenarios.

UK law gives strong copyright protection to musical works. Original creations and compositions get copyright from the moment of creation. This protection lasts for 70 years after the creator or composer’s death.

AI-created music raises significant questions for these protections. When AI trains its algorithms on thousands of copyright-protected songs, it is still unclear whether generative or assistive AI is allowed to do so. Traditionally, permissions must be sought, and the question only sharpens as the technology advances. The law must decide whether large-scale AI training is fair dealing or copyright infringement.

The courts are starting to set rules, but many questions remain. The intellectual property law community continues to debate whether AI-generated music can infringe existing copyright. This will affect how music creators protect their IP rights in an AI world.

UK law also talks about who owns AI-generated content. It says the person who arranged for its creation owns the copyright. This could apply to music created with AI programs, but as the laws currently stand, it’s complex to apply.

Text and Data Mining Exceptions

The UK made text and data mining exceptions to copyright law in 2014, and updated them again in 2021. These adapted rules were introduced primarily to help with modern research and innovation, under certain conditions.

For AI enterprises, the 2021 changes allow more researchers to use these exceptions. However, AI developers must have proper legal access to the works they intend to analyse. Just downloading music to train an algorithm isn’t enough under the research guidance.

There are also limits to these exceptions. Rights holders can use contracts or technological measures to restrict how their works are used, keeping AI training within the bounds of copyright law. Many existing music companies include clauses in their deals that strictly block the use of AI.

| Legal Provision | Scope of Protection | Limitations for AI | Commercial Impact |
| --- | --- | --- | --- |
| Copyright, Designs and Patents Act 1988 (Section 1) | Automatic copyright for original literary, dramatic, musical, artistic works, plus sound recordings, films & broadcasts. | Any unlicensed copying (including model training) is prima-facie infringement unless an explicit statutory exception applies. | High – foundational right governing every stage of music-AI development (licensing, enforcement, litigation). |
| Text-and-Data Mining Exception (Section 29A CDPA, 2014) | Permits computational analysis for non-commercial research where the user already has lawful access. | Strictly non-commercial; no sharing of copies; rightsholders’ contracts cannot override, but commercial AI training is excluded. | Low – negligible direct benefit to commercial AI firms; proposed 2022 expansion was withdrawn in 2023 after industry push-back. |
| Fair-Dealing Provisions (Sections 29–30 CDPA) | Narrow exceptions for non-commercial research, private study, criticism/review & reporting current events. | Must be “fair”; research must be non-commercial; excludes sound recordings for research; scope too narrow for industrial-scale AI training. | Low – limited to academic or journalistic uses; offers virtually no safe harbour for commercial generative-AI workflows. |
| Computer-Generated Works Provision (Section 9(3) CDPA) | Confers 50-year copyright where no human author; authorship vests in the entity making the “arrangements necessary”. | Originality threshold unclear; identifying the “arrangement maker” is fact-specific; academic and Court of Appeal critiques note doctrinal uncertainty. | Medium – governs ownership of AI outputs yet remains legally unsettled, creating deal-making friction and litigation risk. |

The current laws create a lot of uncertainty for AI engineers and music creators. The text and data mining exceptions help, but they don’t cover all variations of algorithmically created music generation. Many AI systems might use copyrighted data without legal permission.

Legal experts think new laws will come as AI technology grows. The UK government plans to look at copyright laws again. These changes could greatly affect how AI organisations work and how artists protect their work in the digital world.

Legal battles continue to rage between artists and AI makers, which suggests that new laws are needed to protect music. If the laws are not adapted and no changes are made, the courts will continue to struggle to decide when AI-generated content breaks the existing rules. These cases are challenging a legal system designed both to protect the rights of artists and to support technological progress.

Copyright Infringement and Legal Disputes

The music world has seen many key cases that show how copyright is changing with technology, yet it is also lagging behind. Each case helps us understand how the laws need to keep up with the rate of new technology development.

High-Profile Cases and Precedents

Recent legal disputes have sparked debate about AI and music copyright. Anthropic is facing lawsuits from big publishers over using copyrighted lyrics, highlighting the importance of copyright protection in the age of AI. Publishers say this is a clear copyright breach.

In the US, lawsuits against Stability AI and others are setting new rules for AI and copyright. These cases examine whether using intellectual property to train AI is fair use. The results will likely affect UK law too.

The European Union’s rules on AI copyright are also important worldwide. New decisions stress the need for clear data sources in AI algorithmic training. They say developers must show how they use copyrighted materials.

Determining Substantial Similarity in AI Outputs

To the human ear, AI output can bear a substantial similarity to its training data, which leads to legal disputes over whether the output constitutes copyright infringement. However, courts have an extremely difficult time determining whether AI-generated music actually infringes copyright under current laws and their amendments. As mentioned earlier, the amount of human input or prompting also determines whether the AI output counts as a unique creation.

Traditional copyright checks look for direct copying or obvious similarities in melody, composition and song structure. Or perhaps the use of an unlicensed sample. But AI-created music, which uses micro fragments of data, is fundamentally different. This makes any direct comparison difficult.

Legal experts must consider multiple angles when assessing a supposed infringement: melodic patterns, harmonic progressions, and rhythmic structures in any generative AI output. The biggest challenge is distinguishing coincidental similarity from actual copying of the original work.

Technical proof and evidence is key in legal cases. Courts use expert opinions to understand how AI has been applied to create the music. This helps to show if AI-generated works originate from specific copyrighted sources.

As AI advances, it has become increasingly difficult for plaintiffs to prove copyright violations in legal cases. The burden of proof is on the plaintiff to show that an infringement happened. This requires time, money and a deep, intricate analysis of the training and the final output.

Linking specific training materials to final outputs is also a significant challenge. AI mixes micro elements from many sources, making it very hard to identify direct influences. Courts need to create new ways and means to deal with these increasingly complex issues.

Music Industry Organisations and Rights Management

Rights management bodies are leading the way in fair compensation for artists in algorithmic AI training. They understand that old licensing models need a big change for AI. Music industry organisations are working hard to adapt.


It’s not just about individual actions. Collective rights societies, governments and tech companies are teaming up to navigate the challenges posed by the intersection of AI and music. They aim to protect creators while encouraging new ideas.

Collective Rights Societies’ Response

PRS for Music is a key player in pushing for clear AI licensing rules. They are involved in copyright consultations with lawmakers on how copyrighted works can be used to train AI models. They want AI engineers to get the right licences before using music in training.

PPL has set up new systems to track AI content on streaming sites. These systems spot when original music influences AI output and make sure AI-related remuneration goes to the right people.

The Musicians’ Union is very active. They guide members on how to reserve their rights with AI companies. They also offer legal help when artists’ work is used without permission.

These collective rights societies share info and work together. This ensures the same protection everywhere, no matter where AI is made.

Industry-Wide Protection Initiatives

The music sector has started many protection programmes. The Music Rights Awareness Initiative teaches artists about the copyright issues that arise when copyrighted works are used to train AI models. It helps them know their rights and legal options.

A new AI licensing framework is being made. It will deal with AI’s special needs while keeping fair pay. Many rights organisation experts from different countries are helping make it.

International cooperation is getting stronger. UK groups are teaming up with EU and US ones for global standards. This industry-wide protection stops AI organisations from choosing weak laws.

The sector has set up fast ways to handle AI copyright issues. These systems help solve problems quickly. AI’s impact on music in 2025 is driving these efforts.

Legal steps are being backed and reinforced by technology solutions. As an example, blockchain-based systems can track how music is used in AI, from the source. They offer clear, immutable records for enforcing rights when needed.
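
The property that makes such records trustworthy can be sketched with a simple hash chain. This is a toy illustration only, not a real blockchain, and the track and model names are hypothetical: each entry’s hash covers the previous entry, so any retroactive edit breaks verification.

```python
import hashlib
import json

def add_record(chain, record):
    """Append a usage record whose hash covers the previous entry, so
    tampering with any earlier record invalidates everything after it."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps({"record": record, "prev": prev_hash}, sort_keys=True)
    chain.append({"record": record, "prev": prev_hash,
                  "hash": hashlib.sha256(payload.encode()).hexdigest()})
    return chain

def verify(chain):
    """Recompute every hash; any edited entry breaks the chain."""
    prev_hash = "0" * 64
    for entry in chain:
        payload = json.dumps({"record": entry["record"], "prev": prev_hash},
                             sort_keys=True)
        if (entry["prev"] != prev_hash or
                hashlib.sha256(payload.encode()).hexdigest() != entry["hash"]):
            return False
        prev_hash = entry["hash"]
    return True

ledger = []
add_record(ledger, {"track": "song_a", "used_by": "model_x", "licence": "granted"})
add_record(ledger, {"track": "song_b", "used_by": "model_x", "licence": "pending"})
print(verify(ledger))                      # True: untampered
ledger[0]["record"]["licence"] = "denied"  # a retroactive edit...
print(verify(ledger))                      # False: ...which the chain exposes
```

A production rights-tracking system would add distributed consensus and signed identities; the hash chain alone is just what makes the record tamper-evident.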

AI Developer Responsibilities and Licensing

The music industry’s changing relationship with AI developers is based on clear licensing and data use. As AI becomes increasingly more sophisticated, it will be crucial to have new rules for the use of protected music. The key will be to balance creators’ rights, technological progress and fair remuneration.


AI developers’ responsibilities must extend beyond simply getting permissions. They must include ethics, law, and collaboration with the music industry. Companies must actively work with creators, showing that they have creators’ interests at heart before legal issues arise, since the vast majority of artists do not have the financial means to mount a challenge.

Innovation and artists’ rights are at the forefront of discussions about the use of AI in music creation. These ongoing talks will guide how developers handle their duties in music development.

Licensing Models and Industry Agreements

Current licensing practices vary widely. Some companies make deals with labels and/or publishers directly. Others use blanket licenses for large music catalogues. These agreements set rules for fair data use and output restrictions.

Some systems need creators to allow AI companies to use their music. This gives creators control but limits the access to data. Other systems assume permission unless creators specifically say no, offering more access but raising consent issues around the use of music data and personal data.

Big streaming services and labels are making deals with AI organisations to set rules. These deals include sharing profits and giving credit to creators. But smaller artists might struggle to get these deals.

Getting licenses for lots of music costs a lot. Developers say these costs might slow down innovation. But creators say they need fair pay for their work.

Data Source Disclosure and Accountability

Transparency in data sources is now key to ensuring that AI-generated music does not use music without permission. Many AI developers are under pressure to reveal their data sources. This lets creators see how their work is used and get paid.

But, making this transparent is hard. Big datasets have millions of tracks from many places. Tracking and attributing this data is a big task for companies.

Some groups are publishing reports on their data use and licences. These reports state what content they allow AI to process and under what rules. But some companies may not want to share all the details for fear of handing an edge to their competitors.

Now, the focus is on setting clear transparency rules for AI. Many groups are pushing for standards that ensure AI is developed and used ethically. This includes making sure all apps able to create or use AI-generated content adhere to the same universally accepted rules.

Checking if companies follow these rules is also increasingly important. Regulation and independent audits could prove whether companies are honest about their practices and activities. This would give creators more confidence and help good developers show they’re following the laws.

The future of AI developer responsibilities will likely mean stricter rules and self-regulation. As AI gets better, the need for ethics and transparency will grow. This will shape how companies work with music AI in the future.

Impact on Music Streaming and Commercial Distribution

AI-generated music is changing how we get music. It’s making old ways of deciding what music we hear less important. Now, algorithms can make thousands of songs every day.


This change is making music streaming services rethink how they work in light of the rights of artists and the rise of AI. They need to handle a lot more music than before.

Big streaming services are dealing with a huge amount of music data created with AI. This is both good and bad for them. It’s harder to tell if a song is made by a human or AI.

“More than 20,000 AI-generated tracks are being delivered to our platform every day – around double the 10,000 daily AI uploads Deezer reported in January,”

Aurelien Herault, Chief Innovation Officer at Deezer

Platform Policies for AI-Generated Content

Music streaming services have independently made rules for algorithmically created music. They ask creators to declare if AI helped make the resulting song. Being open about AI use is now the key to transparency.

They use special tools during the upload process to check whether a song has been made by or with AI. These tools analyse the audio, the metadata, and how the song was distributed to flag suspected discrepancies.

How to label AI music also varies greatly between services. Some require clear labels, while others rely on creators to self-declare. It’s hard to keep things consistent everywhere. Because of this, there are calls for a standardised approach, so it is clearly noted and visible to all when a song has been made by AI.

Services have changed significantly in how they judge the quality of the music they accept. They appear to want to support AI creators while keeping the quality level high for listeners.

Revenue Sharing and Monetisation Models

AI-created music is changing how services make money. The old way of paying for each stream doesn’t work well with AI. They need new ways to share money.

Services are trying new payment systems. They pay differently for music made by humans and AI. It’s hard to figure out how much to pay for algorithmically generated music, and who exactly to pay.

AI companies are now part of the music business. They work with services in special ways. New ways to make money are coming because of AI.

AI is being touted as making playlists fairer. It can be used to make sure music by humans and AI gets played equally. This also changes how we find new music. It’s considered good for some artists but bad for others.

AI-created music is being used more and more in business and media. It’s cheaper to develop and easier to use than music made by humans. This is creating new ways to make money, but also hurting some creators.

The future of music streaming depends on fair remuneration models. As AI gets more sophisticated, the music industry needs to find ways to pay everyone fairly. It’s a big challenge.

Protecting Artists in the AI Era

Keeping music creative in the digital world needs new tech and strong laws. The rise of algorithmically made music has driven the development of advanced copyright protection systems, which combine the latest technology with existing laws to protect artists’ work.


The music world knows that stopping the use of copyrighted material without permission is key. Artists, labels, and tech firms are working together. They aim to protect creative rights while AI changes.

Digital Fingerprinting and Content Authentication

Technological solutions are the first defence against AI misuse. Digital watermarking puts invisible marks in audio files. These marks stay even after AI changes the file.

Blockchain-based systems keep an immutable record of who owns what. Even fragments, or granular registrations of specific elements, can be registered. They can then be used to show who has rights and where they’ve been used. This makes it clear who owns what and how to remunerate for that.

AI can also now accurately spot when music is copied without permission. It checks new songs against vast databases of music. If it finds a match, it flags it as possibly copied.

Content authentication certificates add even more transparency and protection. They can prove that music comes from human creation, humans and AI assistance or entirely AI. This stops music made without permissions and enables potential unauthorised usage to be traced.
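
The matching idea behind these detection tools can be illustrated with a deliberately crude fingerprint: reduce each track to its dominant frequency bin per frame and compare the sequences. This is a toy sketch under simplified assumptions; production fingerprinting systems use far more robust acoustic features.

```python
import numpy as np

def fingerprint(signal, frame=1024, hop=512):
    """Reduce audio to the dominant frequency bin per frame --
    a crude, compact signature unaffected by changes in volume."""
    window = np.hanning(frame)
    bins = []
    for start in range(0, len(signal) - frame + 1, hop):
        spectrum = np.abs(np.fft.rfft(signal[start:start + frame] * window))
        bins.append(int(spectrum.argmax()))
    return bins

def similarity(fp_a, fp_b):
    """Fraction of aligned frames whose dominant bin matches."""
    n = min(len(fp_a), len(fp_b))
    return sum(a == b for a, b in zip(fp_a[:n], fp_b[:n])) / n

sr = 22050
t = np.arange(2 * sr) / sr
track = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 660 * t)
copy_quiet = 0.3 * track                 # same music, lower volume
unrelated = np.sin(2 * np.pi * 523 * t)  # a different note entirely

database = {"registered_track": fingerprint(track)}
print(similarity(database["registered_track"], fingerprint(copy_quiet)))  # 1.0
print(similarity(database["registered_track"], fingerprint(unrelated)))   # 0.0
```

The quieter copy matches perfectly because scaling the signal doesn’t change which frequency bin dominates, while the unrelated tone matches nowhere; real systems apply the same database-lookup principle with hashes robust to compression, pitch shifts and noise.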

Legal remedies help artists when their work is taken without permission. Cease and desist orders stop misuse quickly. They can stop AI from using protected music without licensing agreements.

Artists can also claim damages for lost money due to copyright infringement. Courts now see the value of music used in AI. This means artists can get fairly remunerated for their unique creations.

Courts can also order AI developers to remove protected music that has been used for training, through the enforcement of legal injunctions. They also make sure they don’t use it again.

Enforcement mechanisms are growing to be able to handle AI’s global reach. Rights groups are working together globally to fight copyright violations. They use automated protection systems to watch for misuse and act fast.

Global, industry-wide opt-out databases also let artists refuse the use of their creations in AI. These databases register and show who does not want their work used in AI training. They help artists keep control while respecting legal, moral and ethical guidelines.

So, What Do We All Do?

The link between AI and the music world is changing fast. Generative AI and the use of AI in music bring both new chances and big hurdles for artists, as they navigate the complexities of copyright. Policy makers need to think carefully about these changes.

Artists worry about how AI might change the music making process and their jobs. The use of AI raises fundamental questions about fair pay and who gets the correct credit. Some big music companies, like Universal Music, are trying to universally stop the use of their songs in AI training methods without the required permissions.

AI training methods require vast amounts of data from protected tracks. This has led to debates about the need to request proper permission to use this content. AI can assist in making music, but it must respect existing copyright laws, even if that respect has to be built into the algorithm itself.

The future of AI in music depends on finding fair solutions for artists. Everyone in the industry must work together to address the rise of AI and the effect it is having on every aspect of the music industry. They need to make regulations that protect and respect artists but also allow new ideas to grow.

The music world is at a turning point where the advancements in technology are now meeting creativity. Some argue that we do not need generative AI in creative spaces, some argue that it assists in the creative process. Success will come from finding ways that help both human artists and new technology. It will become more important than ever to keep the heart of music, creativity, at its core.



Written by

Ron Pye

Ron Pye is a Music Industry executive with MA in Music Industry Studies and BA in Music Business and Finance from the University of Liverpool.