Should an artificial intelligence be allowed to get a patent?

Guest post by Ronald Yu

Whether or not an artificial intelligence (AI) ought to be granted patent rights is a timely question given the increasing proliferation of AI in the workplace. AI technology has been applied effectively in fields ranging from medicine and psycholinguistics to tourism and food preparation. A film written by an AI recently debuted online, and AI has even sneaked into the legal profession.

In 2014, the US Copyright Office updated its Compendium of US Copyright Office Practices with, inter alia, a declaration that the Office “will not register works produced by a machine or mere mechanical process that operates randomly or automatically without any creative input or intervention from a human author.”

To grant or not to grant: A human prerequisite?

One might argue that Intellectual Property (IP) laws and IP rights were designed exclusively to benefit human creators and inventors, and would thus exclude non-humans from holding IP rights. The US Copyright Office’s December 2014 update to the Compendium of US Copyright Office Practices, which added a human-authorship requirement, certainly adds weight to this view.

However, many IP laws were drafted well before the emergence of AI and, in any case, do not explicitly require that a creator or inventor be ‘human.’ The World Intellectual Property Organization’s (WIPO’s) definition of IP speaks of creations of the mind, but does not specify that it must be a human mind. Similarly, provisions in laws promoting innovation and IP rights, such as the so-called Intellectual Property Clause of the US Constitution, do not explicitly mention a ‘human’ requirement.

Finally, it ought to be noted that while the US Copyright Office declared it would not register works produced by a machine or mere mechanical process without human creative input, it did not explicitly state that an AI could not hold copyright.

Legal Personhood

One might argue that an AI is not human, therefore not a legal person, and thus not entitled to apply for, much less be granted, a patent. New Zealand’s Patents Act, for example, refers to a patent ‘applicant’ as a ‘person’. Yet this line of argument could be countered by the assertion that a legal ‘person’ need not be ‘human’, as is the case with a corporation, and there are many examples of patents assigned to corporations.

The underlying science

To answer the question of patent rights for an AI, we need to examine how modern AI systems work and, as an example, consider how machine translation applications such as Google Translate function.

While such systems are marketed as if they were ‘magic brains that just understand language’, the problem is that there exists no definitive scientific description of language or language processing. Such translation systems therefore cannot function by mimicking the processes of the brain. Rather, they employ a scheme known as Statistical Machine Translation (SMT), whereby online systems search the Internet for documents that have already been translated by human translators: books, materials from organizations such as the United Nations, or websites. The systems then scan these texts for statistically significant patterns and, once a pattern is found, use it to translate similar text in the future.
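
To make the statistical idea concrete, the following is a minimal sketch, not Google’s actual system, of the core mechanism: counting how often phrases are paired in a human-translated parallel corpus and reusing the most frequent pairing. The toy corpus and the translate function are invented purely for illustration.

```python
from collections import defaultdict

# Toy parallel corpus: (source phrase, human translation) pairs, standing in
# for the human-translated documents an SMT system would mine online.
parallel_corpus = [
    ("good morning", "bonjour"),
    ("good morning", "bonjour"),
    ("good morning", "bon matin"),
    ("thank you", "merci"),
]

# Count how often each target phrase appears for each source phrase.
counts = defaultdict(lambda: defaultdict(int))
for src, tgt in parallel_corpus:
    counts[src][tgt] += 1

def translate(phrase: str) -> str:
    """Return the statistically most frequent translation seen for a phrase."""
    candidates = counts.get(phrase)
    if not candidates:
        return phrase  # unseen phrase: no pattern to reuse, fall back to the input
    return max(candidates, key=candidates.get)

print(translate("good morning"))  # "bonjour" - the most common observed pairing
print(translate("break a leg"))   # falls back verbatim: no idiom handling
```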

Many modern AI systems are essentially big data models. They operate by defining a real-world problem to be solved, then conceiving a conceptual model to solve it, typically a statistical analysis that falls into one of three categories: regression, classification, or missing data. Data is then fed into the model and used to refine and calibrate it. As the model is increasingly refined, it is used to guide the collection of further data and, after a number of rounds of refinement, it finally yields some predictive capability.
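
A minimal sketch of this refine-and-calibrate loop might look like the following, under the simplifying assumption that the ‘model’ is nothing more than a least-squares line and that the data is synthetic; all names and numbers here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit(x, y):
    """Calibrate the model: here, just a least-squares slope and intercept."""
    slope, intercept = np.polyfit(x, y, deg=1)
    return slope, intercept

def predict(model, x):
    slope, intercept = model
    return slope * x + intercept

# Round 1: an initial data set produced by some real-world process.
x = rng.uniform(0, 10, size=20)
y = 3.0 * x + 2.0 + rng.normal(scale=1.0, size=20)
model = fit(x, y)

# Further rounds: more data is gathered and the model is re-calibrated,
# gradually improving its predictions within the model's own assumptions.
for _ in range(3):
    new_x = rng.uniform(0, 10, size=50)
    new_y = 3.0 * new_x + 2.0 + rng.normal(scale=1.0, size=50)
    x, y = np.concatenate([x, new_x]), np.concatenate([y, new_y])
    model = fit(x, y)

print(predict(model, 5.0))  # a prediction from the refined model
```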

Big data models can be used to discover patterns in large data sets and can also, as in the case of translation systems, exploit statistically significant correlations in data. None of this, however, suggests that current AI systems possess inventive or creative capacity.

Patentability?

This is an important consideration because, in order to get a patent, an invention must:

  • Be novel in that it does not form part of the prior art
  • Have an inventive step in that it is not obvious to a person skilled in the art
  • Be useful

and it must not fall into an excluded category that can include discoveries, presentations of information, and mental processes or rules or methods for performing a mental act.

Why discoveries are not inventions is tied to the issue of obviousness, as noted by Buckley J. in Reynolds v. Herbert Smith & Co., Ltd, who stated:

Discovery adds to the amount of human knowledge, but it does so only by lifting the veil and disclosing something which before had been unseen or dimly seen. Invention also adds to human knowledge, but not merely by disclosing something. Invention necessarily involves also the suggestion of an act to be done, and it must be an act which results in a new product, or a new result, or a new process, or a new combination for producing an old product or an old result.

Therefore, in order to get a patent, an AI must first be capable of producing a patentable invention. But, given current technology, is this even possible?

A thought exercise

Consider the following:

  • You believe that as a person exercises more, he/she consumes more oxygen. You have therefore tasked your AI with analyzing the relationship between oxygen consumption and exercise.
  • You provide the AI with a model suggesting that oxygen consumption increases with physical exertion, and data that shows oxygen consumption among people performing little, moderate, and heavy exercise.
  • The AI reviews the data, refines the model, collects more data, and comes up with a predictive model (e.g. when a person exercises X amount, he/she consumes Y amount of oxygen, and when the person doubles his/her exertion, his/her oxygen consumption rate triples); a minimal sketch of such a model follows this list.
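
Purely for illustration, and assuming (as a simplification of the example above) that the AI’s ‘model’ is a simple least-squares fit of oxygen consumption against exertion, the sketch might look like this; the data and function names are invented:

```python
import numpy as np

# Hypothetical training data: exertion level (arbitrary units) and
# measured oxygen consumption (litres per minute) for a group of subjects.
exertion = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
oxygen = np.array([0.8, 1.7, 2.9, 4.2, 5.8, 7.5])

# 'Refining the model' here amounts to nothing more than re-fitting a line.
slope, intercept = np.polyfit(exertion, oxygen, deg=1)

def predict_oxygen(x):
    """Predict oxygen consumption for a given exertion level."""
    return slope * x + intercept

print(predict_oxygen(3.5))   # interpolation within the observed range
print(predict_oxygen(50.0))  # extrapolation: the fitted line extends without limit
```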

As this is essentially a statistical regression, the model will not always be completely accurate in its predictions due to differences between individuals (i.e. for some persons the model will predict oxygen consumption fairly accurately, for others its results will be far off).

However, this particular model has another, more fundamental limitation: It fails to consider that a human cannot exercise beyond a certain point because his/her heart would be incapable of sustaining such levels of exertion or because over-exercise may trigger an unexpected reaction, like death.

If one were to feed this model data from persons who have collapsed or died during exercise (and who thus, in the latter case, consume no oxygen at all), would the AI be able to ‘think outside its box’ and:

  • Question the cause of these data discrepancies and have the initiative to conduct further investigation?
  • Note and correct the limitation in the original model (which would require a significant amendment)?
  • Or would it simply alter the existing model by changing the slope of the regression line (as the sketch below illustrates)?
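
Continuing the hypothetical sketch above, if the anomalous records are simply appended to the training data and the same fitting procedure is re-run, nothing in the procedure questions the data; the only effect is a shift in the fitted coefficients.

```python
import numpy as np

# The same hypothetical data as before, plus two anomalous records:
# subjects who collapsed at high exertion and consumed no further oxygen.
exertion = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 9.0, 10.0])
oxygen = np.array([0.8, 1.7, 2.9, 4.2, 5.8, 7.5, 0.0, 0.0])

# Re-running the identical fit never asks why the new points look so different;
# it merely flattens the regression line.
slope, intercept = np.polyfit(exertion, oxygen, deg=1)
print(slope, intercept)  # a different slope, not a revised model of exertion limits
```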

SMT and other AI systems have similar limitations. In the case of SMT, once the system is built, linguistic knowledge becomes necessary to achieve perfect translation at all grammatical levels, and SMT systems presently cannot translate cultural components of the source text into the target language. They lack creativity and provide literal, word-for-word translations that do not recognize idioms, slang, or terms that are not in the machine’s memory. Correcting this would require a change to the underlying machine translation model, and the question arises whether this would have to be done by the human creators of the SMT, or whether the SMT itself would be able to make the necessary corrections and adjustments to the model.

Should the SMT or, in the earlier example, the AI be unable to improve and innovate on the existing model, does it have the creative or inventive capacity to conceive something truly inventive? And if either the SMT or the AI can produce something that appears novel and inventive, then, given the nature of how AI presently operates (i.e. as big data models), would such a product merely be the result of an analysis of existing data that uncovers hitherto unseen relationships: in other words, a discovery?

Returning to the original question about patent rights for an AI: Perhaps the question we should ask is not whether an AI should be able to get a patent, but whether an AI, given current technology, can create a patentable invention in the first place. And if the answer to that question is ‘no’, then the question of granting patent rights to an AI is moot.
