Regulation for AI has typically not addressed the copyright issues that arise from the training and development of models. Greater legislative clarity and balance are needed to offer creators stronger protections than are currently available while still supporting the growth of AI.
The issue of copyright infringement in the development and training of AI models has largely not been addressed by policymakers despite its potential to impact a range of creative industries. As such, a number of rightsholders have filed lawsuits against large AI firms such as Anthropic, Meta and OpenAI alleging breaches of copyright.
The EU and South Korea have established robust legislative frameworks for AI, but these laws have paid limited attention to the technology’s potential implications for copyrighted works. Where AI and copyright issues have been recognised, such as in the UK and Japan, governments tend to favour supporting the AI sector’s growth at the expense of copyright holders.
Brazil’s pending AI legislation is the first to comprehensively tackle the relationship between AI and copyright in a way that places greater emphasis on protecting creators, for instance by providing them with clearer routes to remuneration. Australia looks likely to pursue a similar approach, with priorities that include improving transparency from AI firms and opt-out options for rightsholders.
Further clarity from policymakers is needed on how the rise of AI affects copyright protections. Regardless of their general approach to regulating AI, governments can provide certainty through the publication of guidance or codes of practice to help AI firms to avoid copyright breaches and rightsholders to better protect their works.
Legislators could look to Brazil when adopting creator-favoured copyright regulation, such as opt-in mechanisms for rightsholders. Enforcing a more balanced copyright regime can encourage competition and innovation among AI firms at the model training stage, and can boost the economic growth of the creative sector by providing better protections to copyright holders.
The development and training of AI models raises concerns relating to copyright infringement
As AI continues to develop at an unprecedented pace, determining the appropriate way of regulating the technology has become a priority for governments globally. Policymakers are taking varying approaches, with some focused on protecting society from the potential harms of AI while others treat the technology as a unique opportunity for economic growth. As debate around the most effective means of regulation persists, the scope for copyright infringement in the development of AI models and tools is an issue that has only recently started to receive significant attention despite the potential impacts on a number of creative industries.
As AI firms look to train their models on wide-ranging datasets in order to improve their accuracy and foundational intelligence, concerns have been raised that they are using copyrighted works without the explicit permission of rightsholders. A number of lawsuits alleging breaches of copyright have already been filed against firms such as Anthropic, Meta and OpenAI, including by the media and music industries. A key enabler of this behaviour is the overall lack of clarity on how copyright legislation applies in the context of AI, an issue that is now increasingly on the radars of governments and regulators around the world.
Most approaches to AI-related copyright issues focus on enabling the growth of the AI sector over protecting rightsholders
The most common approach to regulating AI-related copyright issues currently tends to favour AI firms over copyright holders, such as artists and writers, by offering few protections for copyrighted works – see Figure 1. The EU’s AI Act briefly addresses the use of copyrighted works by stipulating that firms that develop generative AI models must publish a detailed summary of the content used in the training process. The aim of this requirement is to provide creators with the information they may need to exercise their rights of ownership over the content being used in AI training. Previously, the EU’s 2019 Directive on Copyright in the Digital Single Market (the DSM Directive) established some protections for creators by allowing them to reserve their rights over their works to prevent text and data mining – two key processes in the training of AI models. While this does create some level of protection for copyright holders, it still places the burden on them to opt out of their works being used in training AI.
Similar to the AI Act, South Korea’s AI Basic Act (passed in December 2024) pays limited attention to the issue of AI and copyright, but states that the Government may support projects relating to the protection of fundamental rights, body and property of the people in the development and use of AI. However, the Ministry of Culture, Sports and Tourism (MCST) pre-empted this law with the publication of a comprehensive guide to AI and copyright. This recognises that the Korean Copyright Act’s (KCA) provisions are unclear about copyright infringement linked to the development of AI, although it does provide some useful advice for both rightsholders and AI firms. The guide encourages AI firms to enter into clear and precise contracts with rightsholders for the use of their works in AI training in order to avoid potential copyright issues. It also suggests that these firms use works that are already in the public domain. The advice to creators is similar to the EU’s copyright regulations in the DSM Directive in that it requires creators to take steps to prohibit the use of their works in data mining for AI development purposes.
On 17 December 2024, the UK Government launched a consultation on copyright and AI in which it proposed measures to support rightsholders’ control over their content, boost the development of world-leading AI models in the UK and promote greater trust and transparency between the creative and AI sectors. Again, these proposals largely reflected provisions contained within the AI Act and the DSM Directive in the EU. Specifically, the consultation proposed that AI firms provide greater transparency about the works they are using and that creators be given a route to opt out of their works being used in the training of AI. It also emphasised the importance of regulation continuing to enable innovation in the AI industry. In the run-up to the consultation’s close on 25 February 2025, the creative industries and the news media ran a number of campaigns urging the Government to reconsider its proposed approach and provide better protections to rightsholders.
On 13 January 2025, the UK Government announced its AI Opportunities Action Plan, in which the issue of AI and copyright received little attention beyond a brief promise that a “clear and trusted copyright regime” would be introduced. Similar to the MCST’s guidance encouraging AI firms to use public domain data for training, the UK’s plan recommends – and the Government intends to pursue – the creation of a copyright-cleared media asset training dataset, which could be formed by partnering with bodies that hold valuable cultural data, including the National Archives, the British Library, the BBC and the Natural History Museum. The plan also suggests that this dataset could be licensed internationally at scale. Although this could provide AI firms with useful training data that does not infringe any copyright, it still falls short of establishing clear protections for creators and rightsholders whose works are often used without their approval or knowledge, with the Government’s focus remaining on AI development and the regulatory framework that will best enable it.
Other jurisdictions are relatively more hands-off, offering even less protection for rightsholders. In May 2024, Japan’s Agency for Cultural Affairs published guidance on AI and copyright outlining how the Japanese Copyright Act permits the use of copyrighted material without the permission of the rightsholder in the development of practical technology, data analysis and computer processing. The Act sets out that these uses are permissible on the grounds that the exploitation of copyrighted works is not for the “enjoyment of the thoughts or sentiment expressed” in those works. The issue with these permissions is that, although the copyrighted works themselves may not be used for enjoyment during the development of AI models, the works that those models go on to produce may well be. Any output generated by these models will be influenced by the copyrighted works on which it was trained.
Brazil’s AI bill is the first major legislation to propose an alternative, creator-favoured approach
The alternative approach to regulating AI and copyright is to focus on better protections for rightsholders, or creators. Rather than only enabling creators to opt out of their works being used, more creator-oriented regulation may provide clearer routes to remuneration and refusal rights for those whose works are used in the training of AI.
In Australia, the Attorney General established the Copyright and AI Reference Group (CAIRG) in December 2023 to better prepare for future copyright challenges that could emerge from AI. The CAIRG’s focus so far has been to explore the use of copyrighted materials as inputs for AI systems. Although the CAIRG is yet to publish any official guidance, its report from September 2024 gives a good insight into its likely priorities, such as remuneration for copyright holders, opt-out options, transparency from AI firms and greater legislative and regulatory clarity on AI-related copyright issues, indicating that the country’s approach will look to bolster protections for copyright holders – see Table 1.
Brazil’s AI bill (now in its final stages before completion) goes a step further than the Australian Government's approach: it is one of the first pieces of legislation that not only provides a clear framework for copyright rules related to AI but also establishes robust copyright protections for creators. The bill sets out three key protections:
AI developers who use copyright-protected content would be required to disclose the protected content used in the development of their AI tools through a summary published on an easily accessible website;
The bill gives creators a right of opposition: if a creator does not want their works to be used in the development of AI, they can prohibit their use. This appears to be a more absolute version of the opt-out options offered by the EU, UK and South Korea; and
The bill establishes a system of remuneration. Under the bill, an AI agent utilising protected content in the development of AI systems would be required to compensate the creators who hold the rights to that content.
By some margin, the bill is therefore the most progressive and pro-creator approach to legislating AI and copyright. Under these protections, creators will have clearer control over their works and will be able to access new, fairer paths to remuneration, which is expected to bolster Brazil’s creative sector.
There is an urgent need for clarity from governments on the legal relationship between AI and copyright
Analysis of the approaches taken across the six markets reveals a clear lack of distinct legislative frameworks accounting for the relationship between AI and copyright. Even where new, comprehensive AI legislation has been introduced, policymakers have largely tended to steer clear of including provisions for copyright, remaining reliant upon legislation already in force (in some instances supplemented with guidance on how it should be interpreted). For now, it appears more likely that existing copyright regulation will be amended to include AI-related provisions than that such provisions will appear in the latest AI legislation itself.
It is also notable that the adoption of a clear copyright policy for AI does not seem to hinge on whether a country or region’s AI legislation is more safety or innovation oriented. The EU takes a risk-based approach to AI, focusing on safety and security, while the UK’s approach so far indicates a preference for promoting innovation. The South Korean AI Basic Act also focuses on innovation and the establishment of a competitive AI sector, but goes further than the UK or the EU by also ensuring a risk-based approach to safety is taken. Despite these differences, all three markets have taken similar approaches to AI and copyright, favouring larger tech firms over creators. Even though South Korea’s guidance on AI and copyright most keenly recognises the issues that are arising for creators, its attempt to mitigate these issues fails to take any significant steps to better protect creators.
It is apparent that, in order to effectively regulate the use of copyrighted works in AI development, clearer guidelines and regulation are necessary to help both copyright holders and AI firms navigate the legal landscape. However, this has so far only been recognised by a select few governments, such as those of Australia, Japan and South Korea. While the UK Government has not published guidance, Parliament’s Science, Innovation and Technology and Culture, Media and Sport Committees have both published reports on AI. The latter’s report is particularly focused on AI and creative technology, and urges the Government to reconsider exemptions from copyright infringement rules for text and data mining. Though this appears unlikely, with the Government set to largely favour the interests of AI firms moving forward, more clarity on its approach is expected as the AI Opportunities Action Plan is implemented. This will be welcome news to speakers at two recent Parliamentary hearings who called for greater clarity from the Government on the issue of AI and copyright, as well as an improved pace of action in this space.
More countries could improve legislative clarity on AI and copyright issues by publishing guidance similar to that from the MCST in South Korea or by creating industry standards and codes of practice for AI firms to follow – something which the CAIRG in Australia is also currently considering. In the UK, Ofcom already publishes codes of practice for stakeholders in the sectors it oversees. Having fairly recently taken on responsibility for enforcing the Online Safety Act, Ofcom has again sought to provide clear guidance on how providers can comply with their specific legal duties under the act. A similar approach could be taken elsewhere, providing a better understanding of and greater certainty about how the emergence of AI plays into existing copyright law. Ahead of policy debates over whether copyright holders need better protections or whether large tech firms should be given more freedom to encourage growth in the AI sector, both creators and AI firms would benefit from a clearer set of guidelines on an emerging issue where awareness of the potential for copyright infringement is growing.
An economic balance must be found between creators and firms in legislating for AI and copyright
As more countries consider how to regulate for AI and copyright, a balance will have to be struck between protecting creators’ rights over their works and enabling AI firms to grow. Although many currently appear to be leaning towards the latter, policymakers should consider the benefits of an approach akin to Brazil’s. By providing creators with clear routes to remuneration for the use of their copyrighted works, creative sectors that rely on these streams of income could see significant growth. The protections currently offered by most governments place a heavier burden on creators than on AI firms, often requiring creators to opt out of their copyrighted works being used in the training of AI models. This creates a default position in which AI firms can use copyrighted materials without the permission of rightsholders. In contrast, a regime similar to Brazil’s could help to create a mutually beneficial framework into which creators and copyright holders are financially incentivised to opt in, while AI firms would benefit from a richer and more diverse range of works to leverage in their training processes without risking breaching copyright law.
The rapid growth of the AI sector has the potential to open an entirely new market for the creative sectors. This would also align with the aims of countries such as the UK that wish to place AI at the heart of economic growth across all industries. Though this change would increase the costs of AI development for firms, it could also encourage competition in the AI sector by driving firms to compete to find the lowest-cost ways of developing high-capability AI models. Alternatively, if most firms were pushed towards using a similar cultural dataset already in the public domain, as encouraged in the South Korean Government’s guidance, they would be competing on a more open and level playing field in developing AI models, further incentivising innovation.