Leading up to DLA Piper's Global Technology Summit on October 9 – 10, 2018, Larissa Bifano, Co-Chair of DLA Piper's Patent Prosecution Practice, outlines the implications for legal counsel and clients as artificial intelligence takes root in non-traditional industries.

As AI proliferates in sectors such as finance and healthcare, Bifano discusses the data privacy, liability and intellectual property considerations for organizations investing in the technology.

What are the most significant ways you see your clients interacting with AI?

Nearly all sectors have become focused on using artificial intelligence to provide better results to customers, or, in my case, clients. That could be a life sciences company looking to predict patient treatments or a financial company seeking to better serve its customers through products that predict financial market behavior. It could be a software company wanting to improve the performance of the algorithms in its software. AI is permeating all industries.

AI is also becoming a focus for many of our clients. What is interesting in my practice is that AI, as well as the software used to create it, is viewed by the United States Patent and Trademark Office as patentable subject matter. Patent filings are increasing in industries that you wouldn't expect, especially among life sciences, fintech and financial organizations.

Clients themselves are implementing AI throughout their product lines, and as a result we've seen an increased willingness to pursue patents in areas they previously wouldn't have − simply because they are implementing novel and interesting artificial intelligence techniques.

Do you think any specific sectors have a more difficult time implementing AI?

The two sectors that have the biggest issues with AI are life sciences − specifically, companies dealing with patient data − and financial organizations handling customers' financial information. These companies have to navigate regulatory questions about how gathered data can be handled and used. In those areas, you might see less focus on AI solutions due to the regulatory climate. For media or software organizations that have access to data – marketing data to predict customer behavior, for example – there is less likely to be a concern about how they can use that data to improve their products.

Technology companies and well-known software companies have the skills to implement AI effectively. These industries are advancing the development of AI, whereas some non-traditional technology companies might simply be using AI as window dressing. There is a lot of technical firepower around AI, and you have to be careful about adopting it for adoption's sake if it hasn't actually moved the needle for business initiatives. This is similar to what we're seeing with blockchain – just using the term and attempting to implement the technology isn't enough.

How have regulations influenced the way companies and organizations implement AI technology?

Patents, as I mentioned before, are one big consideration.

The other is privacy regulations. One well-known example of this is what we've all seen happening with Facebook and concerns about its users' data privacy. The question is: if you use people's data to make predictions using AI, is that invading their privacy? How that question is answered may prove to be a roadblock for artificial intelligence.

Another consideration is the liability associated with machines making decisions on behalf of humans. For example, instead of having a person analyzing data and making decisions, there are computers that have been trained to do that − to act independently. Who is liable for the outcome of these decisions or actions by a computer? The company? The coder? It is unclear. This is a regulatory issue that will need to be resolved as the algorithms become more powerful.

What are the risks involved with AI that businesses need to consider?

Liability, as I mentioned, is a major one. For a life sciences company, what if an algorithm makes a recommendation on a dose of a patient's medicine, it is implemented and the patient dies?

It comes down to determining checks and balances. Did the decision have to go through a doctor? Are there checks in the process so that a computer is not directly affecting a patient's well-being? On the privacy side, it is also important to consider how to blind data so that personal identities will not be disclosed. These are among the risks to consider as AI moves forward.