Transactions in the Age of Artificial Intelligence: Risks and Considerations

Artificial Intelligence (AI) has become a major focus of, and the most valuable asset in, many technology transactions, and the competition for top AI companies has never been hotter. According to CB Insights, there have been over 1,000 AI acquisitions since 2010. The COVID-19 pandemic interrupted this trajectory, causing acquisitions to fall from 242 in 2019 to 159 in 2020. However, there are signs of a rebound, with over 90 acquisitions in the AI space as of June 2021, according to the latest CB Insights data. With tech giants helping drive the demand for AI, smaller AI startups are becoming increasingly attractive acquisition targets.

AI companies carry their own set of specialized risks that may not be addressed if buyers approach the transaction with their standard playbook. AI's reliance on data and the dynamic nature of its insights highlight the shortcomings of standard agreement language and the risks of not tailoring agreements to address AI-specific issues. Sophisticated parties should consider crafting agreements specifically tailored to AI and its unique attributes and risks. Such agreements give the parties a more accurate picture of an AI system's output and predictive capabilities and can help them assess and address the risks associated with the transaction. These risks include:

Freedom to use training data may be curtailed by contracts with third parties or by other limitations on open-source or scraped data.

Ownership of training data can be complex and uncertain. Training data may be subject to ownership claims by third parties, be subject to third-party infringement claims, have been improperly obtained, or raise privacy issues.

To the extent that training data is subject to use limitations, a company may be restricted in a variety of ways, including (i) how it commercializes and licenses the training data, (ii) the types of technology and algorithms it is permitted to develop with the training data, and (iii) the purposes to which its technology and algorithms may be applied.

Standard representations on ownership of IP and IP improvements may be insufficient when applied to AI transactions. Output data generated by algorithms, and the algorithms themselves once trained on supplied training data, may be vulnerable to ownership claims by data providers and vendors. Further, a third-party data provider may contract that, as between the parties, it owns IP improvements. Companies may then struggle to distinguish ownership of their algorithms as they existed before using such third-party data from ownership of the improved algorithms after such use, and to establish their ownership of, and ability to use, model-generated output data to continue to train and improve their algorithms.

Inadequate confidentiality or exclusivity provisions may leave an AI system's training data inputs and material technologies exposed to third parties, enabling competitors to use the same data and technologies to build similar or identical models. This is particularly the case when algorithms are developed using open-source or publicly available machine learning processes.

Additional maintenance covenants may be warranted because an algorithm's competitive value may atrophy if the algorithm is not designed to permit dynamic retraining, or if the user of the algorithm fails to maintain and retrain it with updated data feeds.

In addition to the above, legislative protection in the AI space has yet to fully mature. Until it does, companies should protect their IP, data, algorithms, and models by ensuring that their transactions and agreements are specifically designed to address the unique risks presented by the use and ownership of training data, AI-based technology, and any output data generated by such technology.
