Q&A with MaryAnne Armstrong: The future of AI in the life sciences

by MaryAnne Armstrong, PhD | LSIPR

Question: Does the use of AI in pharma change the routes that people will go down to protect their IP?


Dr. Armstrong: Right now, people view AI as being at such an early stage of development and use that they’re not too concerned about relying on traditional patent protection for a compound that may have been developed using AI. There was the Dabus case, where the patent owner named the AI as the inventor, and the courts in both the EU and the US said, no, you can’t do that, you can’t get a patent; you have to name the inventor, and the AI cannot be the inventor. But that was done deliberately by the patent owner. Other people generally take the view that AI is not so sophisticated and developed at this point in time that it can itself be an inventor, so that’s not as much of a concern.

And, similarly, the article dealt with obviousness and inventive step – it’s the same situation at this point in time. People view it as there being so much human input and innovation still needed that they aren’t especially concerned; if the compound itself is not obvious, the fact that AI was used as a tool in its development probably isn’t going to be an issue. The concern people are thinking about, though, is what happens down the road when AI becomes more sophisticated, more commonplace and does more of the work; then it could be an issue.

Question: Is there going to be a trade-off in the future between your decision to use an AI solution and your ability to patent those solutions?

Dr. Armstrong: So, I would say that at this point in time people aren’t so concerned, because the AI is not sophisticated to the point that it is wholly developing the invention; an enormous amount of human input is still needed. And, as a result, I think people view it as: if a newly invented compound is not obvious over the known art, even if AI was used, then it’s still going to be patentable, and patent protection is still probably the best protection there is.

The concern would be down the road, you know, years from now (who knows how quickly AI will develop), but as AI becomes more and more sophisticated, a point may be reached where the AI is doing a lot of the work and is really becoming a common tool, a publicly available tool perhaps, just like cloning techniques: it’s pretty straightforward, everybody knows how to do it, and it’s just something that’s done in the ordinary course of drug development.

Now, at that point in time, there may be a concern that, well, is the bar for inventive step suddenly going to be much higher, because now we have this very sophisticated tool that makes things much easier? And, I think, at that point, companies may start looking at what can be done with other forms of protection. You know, obviously, the compound itself is going to be out in the public domain if it becomes a pharmaceutical, so you’re still going to have to try to use traditional patent protection for that, in addition to regulatory exclusivity.

But what they may be doing is trying other alternatives for protecting the AI that was used to develop the compound, keeping it proprietary and a trade secret. That way, the specific AI that was used is not a common tool; it is something known only to the company and, therefore, can’t be regarded as just a commonplace, general technique.

Question: Do we need to start thinking about separate kinds of rights for AI solutions that cannot be completed by people?

Dr. Armstrong: That’s a good question, and it is being tossed around: do we need to have two different standards, or even alternative forms of protection, if AI is used to develop an invention? Because the work that was done is still of value; developing the AI, which in turn developed the compound, is incredibly valuable to the world. And protection needs to be given to the innovator who developed the AI and the compound using the AI.

But you can’t equate an AI with a person of ordinary skill; they’re not the same. So you run into a dilemma: if you’re evaluating inventive step, obviousness, enablement, things like that, from the perspective of a person, it doesn’t equate to the AI. People are already starting to think about whether we need two different standards to evaluate things, one if the invention was made by AI and one if it was made by a person, or whether we need some kind of alternative protection. I mean, we have FDA exclusivity; does there need to be some kind of other exclusivity given if it’s an AI-developed invention?

Question: Who do you believe will decide what happens to AI long-term?

Dr. Armstrong: If there’s going to be some kind of alternative protection, it’s going to have to come from the legislatures of the various countries; the courts can’t decide that. I mean, that has already been the position in the Dabus case in the US, where there was recently a summary judgment hearing in the Virginia district court to consider the Dabus applicant’s attempt to have the application examined with the AI named as an inventor.

And the judge in the summary judgment hearing essentially said that the owner of the Dabus application was asking the court to make legislative changes, which the court cannot do. If AI is going to be an inventor, that has to come from the various legislatures; it cannot be done in a court.

Question: In practice, how are your pharmaceutical clients dealing with this issue?

Dr. Armstrong: At the moment, as I mentioned earlier, AI is regarded as being at an early enough stage of development that an enormous amount of human input is still needed. So, right now, the view is that the AI itself can’t be an inventor, because you still need human input, human conception, to really develop the invention. The AI is at this point really just a tool being used by the inventors.

And, similarly, even if AI aids in developing the invention, inventive step and obviousness are still assessed against what’s known in the art. The AI isn’t really a factor in determining whether or not the invention is obvious, because it’s not a common enough tool at this point and it’s not sophisticated enough. So, at this point, I think there’s not a lot of worry about these issues, but, like other technologies, it’s fast moving and it is becoming a part of reality.

So, the concern is not today but what happens down the road: where are things going to be ten years from now, when AI is a lot more common and a lot more sophisticated? I think the industry is trying to look at some of these issues proactively and pre-emptively, rather than having to scramble to deal with them down the road, when they’ve got a new invention and suddenly someone says, yes, but it’s not patentable because AI really developed it. So, it’s better to think about these things now.

Question: Because something can become obvious by virtue of an AI solution looking at a lot of data, do you think that people need to look at their data more carefully?

Dr. Armstrong: I think that is the case. I think that as AI becomes more sophisticated, data protection will become more important, assuming the courts follow the analysis that’s already being used, at least in the US, in determining obviousness: is it just a matter of varying known parameters, known variables? If the dataset is maintained as proprietary and a trade secret, then you can’t say, well, it’s just varying known variables and parameters, because they’re not known; they’re not in the public domain.

And so, I think that would give a stronger argument, a stronger position on inventive step, if you can say, well, none of what we worked on is in the public domain. So, it’s not just a case of the AI using a computer algorithm to vary and analyse known data to come up with an answer.

 
