News & Views

Data Security and the Rise of AI

What questions should you ask when considering an AI or machine learning-based solution? Our legal counsel has the answer.

IMTC’s Legal Counsel offers perspective on key questions related to the emergence, and growing prevalence, of AI.

Machine learning, commonly referred to as “Artificial Intelligence” (AI), has been a hot topic for decades, but until recent years the conversation was largely relegated to the realm of science fiction. Now, nearly every industry is exploring how AI can be integrated into its current operations to automate certain business processes. Scholars and technology experts still debate which computer-automated processes should be considered AI. The term has taken on a life of its own, being applied across a vast spectrum of computer-enabled applications.

Ultimately, machine learning/AI can be boiled down to four key features: the ability of a program to scan and interpret large swaths of data; the processing of that data to recognize patterns; the application of those patterns to generate rules and forecast future outcomes; and the dynamic revision of the program’s underlying rules in response to new data.
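These four features can be sketched with a deliberately simple online linear predictor. All class names, parameters, and data below are illustrative inventions, not a reference to any particular product:

```python
# A minimal sketch of the four features: scan data, recognize a pattern,
# apply the learned rule to forecast, and revise the rule on new data.

class OnlineLearner:
    """Learns the rule y ≈ w * x and revises w as new data arrives."""

    def __init__(self, learning_rate=0.01):
        self.w = 0.0                  # the program's current "rule"
        self.lr = learning_rate

    def predict(self, x):
        # (3) Apply the learned pattern to forecast a new case.
        return self.w * x

    def update(self, x, y):
        # (4) Dynamically revise the underlying rule given new data.
        error = self.predict(x) - y
        self.w -= self.lr * error * x

    def fit(self, observations, epochs=200):
        # (1) Scan the observations and (2) process them to extract a pattern.
        for _ in range(epochs):
            for x, y in observations:
                self.update(x, y)


learner = OnlineLearner()
# Toy data whose hidden pattern is y = 2 * x.
learner.fit([(x, 2.0 * x) for x in range(1, 6)])
print(round(learner.predict(5.0), 1))  # 10.0 once the rule converges
```

The point of the sketch is the last feature: the rule `w` is not fixed by a programmer but keeps shifting as each new observation arrives, which is exactly why the data-quality questions below matter.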

Use cases range from automating administrative processes to photo and image recognition and financial forecasting/price discovery. Despite these advancements, early adopters and regulators need to be vigilant about AI deployment, weighing factors such as the quality of the data, the soundness of the programming, and the integrity of the data processing. Regulators and early adopters of AI should ask themselves the following questions before deploying an AI solution to the commercial market:

Is the data from a reliable/secure source? How are you ensuring the integrity of this data?

These questions must be answered first, because data is what drives AI. If the data is inaccurate, corrupted, or otherwise incomplete, the output of the AI program will be unreliable, and the program may improperly reprogram itself around false patterns.
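In practice, ensuring data integrity often means gating data before it ever reaches the model: verifying a download against the checksum published by the source, and rejecting records that are incomplete or implausible. The checks below are a hypothetical sketch; the field names and ranges are invented for illustration:

```python
# Hypothetical intake checks run before data reaches a model: verify the
# file against the source's published digest, then reject records that
# are missing fields or outside plausible ranges.

import hashlib

def verify_source(raw_bytes, expected_sha256):
    """Confirm the download matches the digest published by the data source."""
    return hashlib.sha256(raw_bytes).hexdigest() == expected_sha256

def validate_record(record):
    """Return a list of integrity problems; an empty list means the record passes."""
    problems = []
    price = record.get("price")
    if price is None:
        problems.append("missing price")
    elif not 0 < price < 1_000_000:
        problems.append("price outside plausible range")
    if record.get("timestamp") is None:
        problems.append("missing timestamp")
    return problems

records = [
    {"price": 101.5, "timestamp": "2024-01-02"},
    {"price": -4.0,  "timestamp": "2024-01-03"},   # corrupted value
    {"price": 99.0,  "timestamp": None},           # incomplete record
]
clean    = [r for r in records if not validate_record(r)]
rejected = [r for r in records if validate_record(r)]
print(len(clean), len(rejected))  # 1 2
```

Only the records that pass both gates should ever be used to train or update the model; everything else is quarantined for review rather than silently learned from.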

If AI is powering a decision that impacts an individual, how can decision-making be audited?

When AI is applied to high-stakes situations, such as criminal identification or financial decision-making, it is necessary to be able to see how the AI program reached its conclusion. This is difficult because many AI programs operate as a “black box”: observers can see only the outputs, not how a decision was made. For example, in finance, how can an asset manager be confident that an AI program is accurately identifying and applying a client’s risk preferences if all the manager sees is a recommended trade? If this “black box” problem is not addressed, it is the asset manager who will be liable for any improper investment decision.
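One common mitigation is to wrap each automated decision in an audit record that captures the inputs, the model version, and each factor’s contribution to the result, so a human reviewer can reconstruct why a recommendation was made. The model, weights, and identifiers below are a deliberately simple, made-up example, not a real scoring rule:

```python
# Sketch of an auditable decision wrapper: a linear score keeps every
# factor's influence visible, and each decision is logged with its inputs.

import json
from datetime import datetime, timezone

MODEL_VERSION = "risk-model-0.1"                       # hypothetical identifier
WEIGHTS = {"volatility": -0.6, "risk_tolerance": 0.9}  # toy, inspectable rule

def recommend(features, audit_log):
    # Per-factor contributions let a reviewer see *why*, not just *what*.
    contributions = {name: WEIGHTS[name] * features[name] for name in WEIGHTS}
    score = sum(contributions.values())
    decision = "buy" if score > 0 else "hold"
    audit_log.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": MODEL_VERSION,
        "inputs": features,
        "contributions": contributions,
        "decision": decision,
    })
    return decision

log = []
decision = recommend({"volatility": 0.2, "risk_tolerance": 0.5}, log)
print(decision)                             # buy
print(json.dumps(log[0]["contributions"]))  # the per-factor reasoning
```

A genuinely opaque model (a deep neural network, say) cannot be made transparent this simply, but the same principle applies: log enough about each decision that an auditor can trace the output back to specific inputs and a specific model version.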

What are you asking the AI program to accomplish? Is the program designed to appropriately complete this task?

As previously mentioned, AI programs are primarily designed to answer a specific question by assessing data points and recognizing patterns. However, finding a correlation between two variables is not the same as showing that one causes the other. Additionally, the AI program may be missing other key data points needed to reach a sound result. AI developers must design and test their programs carefully to ensure they consider all relevant factors and account for them appropriately.

To read more about the trends shifting the investment landscape, download our State of the Industry whitepaper.