Ash Fontana, a managing director at Zetta Ventures, is the author of The AI-First Company: How to Compete and Win with Artificial Intelligence.
Investors in AI-first technology companies serving the defense industry, such as Palantir, Primer and Anduril, are doing well. Anduril, for one, reached a valuation of over $4 billion in less than four years. Many other companies that build general-purpose, AI-first technologies such as image labeling receive large (undisclosed) portions of their revenue from the defense industry.
Investors in AI-first technology companies that aren't even intended to serve the defense industry often find that these firms eventually (and sometimes inadvertently) help other powerful institutions, such as police forces, municipal agencies and media companies, prosecute their duties.
Most do a lot of good work, such as DataRobot helping agencies understand the spread of COVID, HASH running simulations of vaccine distribution or Lilt making school communications available to immigrant parents in a U.S. school district.
However, there are also some less positive examples: technology made by Israeli cyber-intelligence firm NSO was used to hack 37 smartphones belonging to journalists, human rights activists, business executives and the fiancée of murdered Saudi journalist Jamal Khashoggi, according to a report by The Washington Post and 16 media partners. The report claims the phones were on a list of over 50,000 numbers based in countries that surveil their citizens and are known to have hired the services of the Israeli firm.
Investors in these companies may now be asked challenging questions by other founders, limited partners and governments about whether the technology is too powerful, enables too much or is applied too broadly. These are questions of degree, but are sometimes not even asked upon making an investment.
I've had the privilege of talking to a lot of people with lots of perspectives, from CEOs of big companies to founders of (currently!) small companies to politicians, since publishing The AI-First Company and investing in such firms for the better part of a decade. I've been getting one important question over and over again: How do investors ensure that the startups in which they invest responsibly apply AI?
Let's be frank: It's easy for startup investors to hand-wave away such an important question by saying something like, "It's so hard to tell when we invest." Startups are nascent forms of something to come. However, AI-first startups are working with something powerful from day one: tools that allow leverage far beyond our physical, intellectual and temporal reach.
AI not only gives people the ability to put their hands around heavier objects (robots) or get their heads around more data (analytics), it also gives them the ability to bend their minds around time (predictions). When people can make predictions and learn as they play out, they can learn fast. When people can learn fast, they can act fast.
Like any tool, these can be used for good or for bad. You can use a rock to build a house or throw it at someone. You can use gunpowder for beautiful fireworks or for firing bullets.
Substantially similar AI-based computer vision models can be used to figure out the moves of a dance group or a terrorist group. AI-powered drones can aim a camera at us while going off ski jumps, but they can also aim a gun at us.
This article covers the basics, metrics and politics of responsibly investing in AI-first companies.
Investors in and board members of AI-first companies must take at least partial responsibility for the decisions of the companies in which they invest.
Investors influence founders, whether they intend to or not. Founders constantly ask investors about what products to build, which customers to approach and which deals to execute. They do this to learn and improve their chances of winning. They also do this, in part, to keep investors engaged and informed because they may be a valuable source of capital.
Investors can think that they're operating in an entirely Socratic way, as a sounding board for founders, but the reality is that they influence key decisions even by just asking questions, let alone giving specific advice on what to build, how to sell it and how much to charge. This is why investors need their own framework for responsibly investing in AI, lest they influence a bad outcome.
Board members have input on key strategic decisions, both legally and practically. Board meetings are where key product, pricing and packaging decisions are made. Some of these decisions affect how the core technology is used: for example, whether to grant exclusive licenses to governments, set up foreign subsidiaries or get personal security clearances. This is why board members need their own framework for responsibly investing in AI.
The first step in taking responsibility is knowing what on earth is going on. It's easy for startup investors to shrug off the need to know what's going on inside AI-based models. Testing the code to see if it works before sending it off to a customer site is sufficient for many software investors.
However, AI-first products constantly adapt, evolve and spawn new data. Some consider monitoring AI so hard as to be basically impossible. However, we can set up both metrics and management systems to monitor the effects of AI-first products.
We can use hard metrics to figure out if a startup's AI-based system is working at all or if it's getting out of control. The right metrics to use depend on the type of modeling technique, the data used to train the model and the intended effect of using the prediction. For example, when the goal is hitting a target, one can measure true/false positive/negative rates.
Sensitivity and specificity may also be useful in healthcare applications to get some clues as to the efficacy of a diagnostic product: Does it detect enough diseases enough of the time to warrant the cost and pain of the diagnostic process? The book has an explanation of these metrics and a list of metrics to consider putting in place.
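As an illustration, the confusion-matrix rates and the sensitivity/specificity metrics described above can be computed in a few lines of Python. This is a minimal sketch: the function names and the sample data are invented for the example, not taken from any particular product.

```python
# Hypothetical sketch of the hit-rate metrics discussed above,
# computed from (predicted, actual) boolean pairs.

def confusion_counts(pairs):
    """Count true/false positives/negatives from (predicted, actual) pairs."""
    tp = sum(1 for p, a in pairs if p and a)
    fp = sum(1 for p, a in pairs if p and not a)
    tn = sum(1 for p, a in pairs if not p and not a)
    fn = sum(1 for p, a in pairs if not p and a)
    return tp, fp, tn, fn

def sensitivity(tp, fn):
    """True positive rate: the share of real positives the model catches."""
    return tp / (tp + fn) if (tp + fn) else 0.0

def specificity(tn, fp):
    """True negative rate: the share of real negatives correctly cleared."""
    return tn / (tn + fp) if (tn + fp) else 0.0

# Example: a diagnostic model's predictions vs. ground truth.
pairs = [(True, True), (True, False), (False, False),
         (False, True), (True, True), (False, False)]
tp, fp, tn, fn = confusion_counts(pairs)
print(round(sensitivity(tp, fn), 3))  # 0.667
print(round(specificity(tn, fp), 3))  # 0.667
```

In a healthcare setting, the sensitivity figure answers the question in the text directly: what fraction of actual disease cases does the diagnostic catch?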
We can also implement a machine learning management loop that catches models before they drift away from reality. Drift is when the model is trained on data that is different from the currently observed data and is measured by comparing the distributions of those two data sets. Measuring model drift regularly is imperative, given that the world changes gradually, suddenly and often.
We can measure gradual changes only if we receive metrics over time, sudden changes only if we get metrics in near real time, and regular changes only if we accumulate metrics at consistent intervals. A machine learning management loop therefore needs to measure the same things, constantly and consistently, at every step of the process of building, testing, deploying and using models.
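One common way to compare the distributions of the training data and the currently observed data is a population stability index (PSI). The sketch below is a simplified, assumed implementation: the bin edges, sample values and the roughly-0.25 rule-of-thumb threshold are illustrative conventions, not prescriptions.

```python
import math

def psi(expected, observed, bins):
    """Population stability index between a training sample ('expected')
    and a live sample ('observed'), using shared bin edges."""
    def fractions(values):
        counts = [0] * (len(bins) - 1)
        for v in values:
            for i in range(len(bins) - 1):
                if bins[i] <= v < bins[i + 1]:
                    counts[i] += 1
                    break
        total = sum(counts) or 1
        # Floor each fraction at a small epsilon to avoid log(0).
        return [max(c / total, 1e-6) for c in counts]

    e, o = fractions(expected), fractions(observed)
    return sum((oi - ei) * math.log(oi / ei) for ei, oi in zip(e, o))

# Training-time feature values vs. values observed in production.
train = [0.1, 0.2, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7]
live = [0.5, 0.6, 0.6, 0.7, 0.8, 0.8, 0.9, 0.9]
score = psi(train, live, bins=[0.0, 0.25, 0.5, 0.75, 1.0])
# A common heuristic treats PSI above ~0.25 as significant drift.
print(score > 0.25)  # True
```

Running a check like this at the same interval, every time, is what makes gradual, sudden and regular changes all measurable.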
The issue of bias in AI is a problem both ethical and technical. We deal with the technical part here and summarize management of machine bias by treating it in the same way we often manage human bias: with hard constraints. Setting constraints on what the model can predict, who can access those predictions, what feedback data it accepts and how the predictions may be used requires effort when designing the system, but it ensures appropriate alerting.
Additionally, setting standards for training data can increase the likelihood of it considering a wide range of inputs. Speaking to the designer of the model is the best way to reach an understanding of the risks of any bias inherent in their approach. Consider automatic actions such as shutting down or alerting after setting these constraints.
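As a concrete illustration of hard constraints with automatic actions, here is a hypothetical guard that clamps out-of-range predictions, alerts on each violation and shuts the model down after repeated ones. Every name and threshold in it is invented for the example.

```python
class ConstraintGuard:
    """Wraps a model with hard constraints on its predictions.
    Out-of-range predictions trigger an alert; repeated violations
    disable the model entirely. All names here are illustrative."""

    def __init__(self, model, low, high, max_violations=3, alert=print):
        self.model = model
        self.low, self.high = low, high
        self.max_violations = max_violations
        self.alert = alert
        self.violations = 0
        self.active = True

    def predict(self, x):
        if not self.active:
            raise RuntimeError("model disabled after repeated constraint violations")
        y = self.model(x)
        if not (self.low <= y <= self.high):
            self.violations += 1
            self.alert(f"constraint violated: {y!r} outside [{self.low}, {self.high}]")
            if self.violations >= self.max_violations:
                self.active = False  # automatic shutdown, as suggested above
            return max(self.low, min(y, self.high))  # clamp to the allowed range
        return y

# Usage: wrap a toy scoring model whose output must stay within [0, 1].
guard = ConstraintGuard(model=lambda x: x * 0.5, low=0.0, high=1.0)
print(guard.predict(1.0))  # 0.5
print(guard.predict(4.0))  # alert fires, prediction clamped to 1.0
```

The design choice here is that the guard, not the model, owns the shutdown decision, which is what makes the constraint "hard" in the sense used above.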
Helping powerful institutions by giving them powerful tools is often interpreted as direct support of the political parties that put them into power. Alignment is often assumed, rightly or wrongly, and carries consequences. Team members, customers and potential investors aligned with different political parties may not want to work with you. Media may target you. This should be expected, and the decision whether to work with such institutions should therefore be made explicitly and communicated internally.
The primary, most direct political issues arise for investors when companies do work for the military. We've seen large companies such as Google face employee strikes over the mere potential of taking on military contracts.
Secondary political issues such as personal privacy are more a question of degree in terms of whether they catalyze pressure to limit the use of AI. For example, when civil liberties groups target applications that may encroach on a person's privacy, investors may have to consider restrictions on the use of those applications.
Tertiary political issues are generally industrial, such as how AI may affect the way we work. These are hard for investors to manage, because the impact on society is often unknowable on the timeline over which politicians can operate, i.e., a few years.
Responsible investors will constantly consider all three areas of political concern (military, privacy and industry) and set the internal policy-making agenda for the short, medium and long term according to the proximity of each political risk.
Arguably, AI-first companies that want to bring about peace in our world may take the view that they eventually will have to pick a side to empower. This is a strong point of view to take, but one that's justified by certain (mostly utilitarian) views on violence.
The responsibilities of AI-first investors run deep, and investors in this field rarely know how deep when they're just getting started, often failing to fully appreciate the potential impact of their work. Perhaps the solution is to develop a strong ethical framework to consistently apply across all investments.
I haven't delved into ethical frameworks because, well, they take tomes to properly consider, a lifetime to construct for oneself and what feels like a lifetime to construct for companies. Suffice it to say, my belief is that philosophers could be better utilized by AI-first companies in developing such frameworks.
Until then, investors that are aware of the basics, metrics and politics will hopefully be a good influence on the builders of this most powerful technology.
Disclaimer: The author is an investor in two companies mentioned in this article (HASH and Lilt) through a fund (Zetta), where he is a managing partner.
Read more: The responsibilities of AI-first investors - TechCrunch