This year, Artificial Intelligence (AI) has dominated headlines. Generative AI, such as deepfakes and ChatGPT, has caused excitement and controversy in equal measure, as society wrestles with both its potential and the new legal and ethical questions it raises. As a renowned leader in this space, Marks & Clerk has been at the forefront of the European Patent Office's (EPO's) engagement with the patent profession as it tries to bring some legal certainty and consistency to the examination of AI patent applications within this rapidly evolving environment.
Growing public attention is mirrored by strong growth in AI patent filings, with the number of AI patent publications at the EPO increasing by around 17% between 2021 and 2022, according to Marks & Clerk's annual AI Report. The report also reveals that practical applications of AI are an increasing focus for industry, demonstrating significant investment that reflects AI's growing impact on the global economy. As AI's usefulness increases, it is clear that companies are eager to protect their innovations in this area. Key findings also include:
• The US is filing more AI applications at the EPO than any other country, whilst the Republic of Korea has the highest number of AI applications per capita
• The med-tech and other life sciences sector filed more AI patent applications at the EPO than any other industry
• The allowance rate varied between 23% and 62% across sectors, depending on whether the EPO deemed the subject matter “technical”
Whilst it’s too early to see the effects of the very latest AI developments in the data, it seems clear that patent applications will continue to surge. As applications continue to rise, lawmakers’ efforts to balance legal and ethical considerations will also increase, which will make some firms uneasy about how they can protect their interests while leveraging these new technologies.
The report shows that, with the right expertise, companies can protect their AI innovations, and it explores some of the most common concerns arising from the assimilation of generative AI into businesses.
1. Data quality concerns
Both regulators and commentators have raised concerns about the data which fuels Large Language Models (LLMs) and how it can lead to biased outputs. Generative AI is also prone to ‘hallucinations’, whereby the text it generates is factually incorrect.
To minimise this risk, firms need to take time to consider how and when they should be using LLM programs, and to make sure the information they feed into them is factually correct and complete.
2. Copyright infringement
To further fuel their outputs, generative AI programs have likely been trawling through and leveraging third-party proprietary materials. The scale and sophistication of this activity makes it hard for firms to keep track of what has and hasn’t been used and has given rise to a number of cases alleging unauthorised use.
As part of this, it’s likely that personal data may also have been used, meaning firms could be exposed to privacy claims as well as copyright infringements. We have already seen the perils of deepfake pictures and film footage play out across the media, and the impact this has had on those whose images have been used.
Part of the solution here appears to be regulation, with regulators around the world seeking ways to ensure that they can protect privacy without stifling innovation. In the EU for example, the draft EU AI Act anticipates that the use of all third-party material be disclosed, including sources, which would be cumbersome for AI users across the continent.
3. Ownership
One of the most heated debates in the industry is whether AI can be deemed an ‘inventor’, and consequently, whether solely AI-generated works can be patented or have copyright protection. To date, the stance from most jurisdictions has been that human input is required for copyright or patentability to arise, leaving AI-generated works unprotected.
It remains to be seen whether this approach will continue. However, any protection is better than no protection, so firms should seek to protect what they can. For example, in the US we have seen a series of partial copyright registrations, with the human-generated elements of a creative work being granted protection and the AI-generated ones being denied it.
At least one thing is undisputed: generative AI is set to continue generating controversy. At its crux, this is yet another example of law and regulation being outstripped by technological progress. A way forward will emerge once the dust has settled, but a region-by-region approach seems most likely. This means firms will need to invest time and secure the right expertise to ensure they have the necessary protections in place on every continent.
Mike Williams, lead partner in AI at Marks & Clerk, looks ahead to incoming AI regulations:
“The impact of AI regulation on patent filing trends is worth exploring. The draft EU AI Act could pose a significant burden to AI development in the EU – similar to the issues posed by GDPR – which could create divergence between countries if other jurisdictions apply a softer-touch approach to regulation.
“Given the 18-month delay between patent filing and publication, the recent surge in generative AI is not yet reflected in the current patent data. But given the disruption caused by the widespread uptake of Large Language Models (LLMs), it will be interesting to see whether this has an effect in next year’s data.”
This is Marks & Clerk’s third annual AI Report [hyperlink], which uses EPO data to identify global trends in AI patent applications and provides analysis from the firm’s team of world-leading AI specialists.