As AI is adopted across more industries, the companies that use it must strike a delicate balance: making efficient use of the technology while protecting the privacy of their customers. A common best practice is to be transparent about when AI is used and how it reaches certain outcomes. However, this transparency has both an upside and a downside. Here is what you should know about the pros and cons of AI transparency, along with possible ways to achieve this difficult balance.
AI increases efficiency, drives innovation, and streamlines processes. Being transparent about how it works and how it calculates results can lead to several societal and business advantages, including the following:
The number of uses for AI has continued to expand over the last several years. AI has even extended into the justice system, doing everything from fighting traffic tickets to being considered as a potentially fairer alternative to a jury.
When companies are transparent about their use of AI, they can increase users' access to justice. People can see how AI gathers key information and reaches certain outcomes, and they gain access to more technology and information than they typically would without AI.
One of the original drawbacks of AI was the possibility of discriminatory outcomes when it was used to detect patterns and make assumptions about users based on the data it gathered.
However, AI has become much more sophisticated and has even been used to detect discrimination. AI can help ensure that all users' information is included and that every voice is heard. In this regard, AI can be a great equalizer.
When companies are upfront about their use of AI and explain it to their client base, they are more likely to instill trust. People want to know how companies reach certain results, and transparency can help bridge the gap between businesses and their customers.
Customers are willing to embrace AI: 62% of consumers surveyed for Salesforce's State of the Connected Consumer report said they were open to AI that improves their experience, and businesses are willing to meet this demand.
According to a recent Accenture survey, 72% of executives say they seek to gain customers' trust and confidence in their product or service by being transparent about their use of AI. Companies that can clearly explain both their use of AI and the security measures they have put in place to protect users' data stand to benefit from this increased transparency.
When people know that they are interacting with an AI system instead of being tricked into believing it is a human, they can often adapt their own behavior to get the information they need.
For example, people may type keywords into a chat box instead of complete sentences. Users may better understand the benefits and limitations of these systems and make a conscious decision to interact with the AI.
While transparency can bring about some of the positive outcomes discussed above, it also has several drawbacks, including the following:
A significant argument against AI and its transparency is the potential lack of privacy. AI often gathers big data and uses a unique algorithm to assign a value to this data.
However, to obtain results, AI often tracks online activity, including keystrokes, searches, and use of the business's website. Some of this information may also be sold to third parties.
Additionally, AI is often used to track people's online behavior, from which critical information about a person may be discerned.
Even when people choose not to give anyone online this sensitive information, they may still experience its loss due to AI capabilities.
Additionally, AI may track publicly available information. Without a human to check the accuracy of this information, however, one person's data may be confused with another's.
When companies publish their explanations of AI, hackers may use this information to manipulate the system. For example, hackers may be able to make slight changes to the code or input to achieve an inaccurate outcome.
When hackers understand the reasoning behind an AI system, they may be able to influence its algorithm. The system is not typically equipped to detect this kind of fraud, so it may be easier to manipulate when stakeholders do not put additional safeguards in place.
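As a toy illustration of this risk (not from the article, with entirely made-up numbers and feature names), suppose a published transparency report reveals that a fraud detector is a simple weighted scoring model. An attacker who knows the weights can nudge a single input feature just far enough to slip a transaction under the flagging threshold:

```python
def fraud_score(features, weights, bias):
    """Hypothetical linear scoring model: a score >= 0 means 'flag as fraud'."""
    return sum(w * x for w, x in zip(weights, features)) + bias

# Hypothetical coefficients an attacker learned from a transparency disclosure.
weights = [0.8, -0.5, 1.2]
bias = -1.0

flagged = [1.5, 0.2, 0.5]   # original transaction: score 0.7, flagged as fraud
# Knowing that feature 1 carries a negative weight, the attacker inflates it
# to push the score below the threshold without changing anything else.
evaded = [1.5, 1.8, 0.5]    # manipulated transaction: score ~ -0.1, not flagged

print(fraud_score(flagged, weights, bias))  # positive: caught
print(fraud_score(evaded, weights, bias))   # negative: evades detection
```

This is why the article suggests pairing disclosure with additional safeguards: once the decision boundary is public, evading it becomes an arithmetic exercise rather than guesswork.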
Another potential problem with being transparent about AI is that proprietary trade secrets or intellectual property may be stolen. Hackers may be able to study a company's explanations and recreate its proprietary algorithm, to the detriment of the business.
With so much information readily available online, 78 million Americans say they are concerned about cybersecurity. When companies spell out how they use AI, they may make it easier for hackers to access consumers' information or cause a data breach that leads to identity theft, as in the notorious Equifax breach that compromised the private records of 148 million Americans.
Disclosures about AI may bring additional risks, such as more stringent regulation. When AI is confusing and inaccessible, regulators may not understand it well enough to regulate it. However, when businesses are transparent about the role of AI, this may invite a more significant regulatory framework governing how AI can be used. In this manner, innovators may be punished for their innovation.
When businesses explain how they protect consumers' data in the interest of transparency, they may unwittingly make themselves more vulnerable to legal claims from consumers who allege that their information was not used properly. Clever lawyers can carefully review AI transparency disclosures and then develop creative legal theories about the business's use of AI.
They may focus, for example, on what the business did not do to protect a consumer's privacy, and then use this information to allege that the business was negligent in its actions or omissions.
Additionally, many AI systems operate on relatively simple models. Companies that are transparent about their algorithms may rely on less sophisticated models that omit certain information or produce errors in certain situations.
Experienced lawyers may be able to identify additional problems that the AI causes to substantiate their legal claims against the business.
Anyone who has seen a Terminator movie, or almost any apocalyptic film, knows that even technology developed for the noblest of reasons can be weaponized or turned into something that ultimately damages society.
Due to the potential for harm, many laws already require certain companies to be transparent about their use of AI. For example, financial services companies must disclose the major factors used in determining a person's creditworthiness and the reasons for an adverse action in a lending decision.
Proposed laws, if passed, may establish new obligations regarding how businesses collect information, how they use AI, and whether they must first obtain express consent from a consumer.
In 2019, an executive order was signed directing federal agencies to devote resources to the development and maintenance of AI. It calls for guidelines and standards that would allow federal agencies to regulate AI technology in a way that protects privacy and national security.
Even if a business is not yet required to be transparent about its use of AI, the time may soon come when it does not have a choice in the matter. In response to this likely outcome, some businesses are being proactive and establishing internal review boards that test the AI and identify ethical issues surrounding it.
They may also collaborate with their legal department and developers to create solutions to problems they identify. By carefully assessing their potential risk and establishing solutions to problems before disclosure becomes mandatory, businesses may be better situated to avoid the risks associated with AI transparency.
Image Credit: cottonbro; Pexels
Ben is a Web Operations Director at InfoTracer who takes a wide view of the whole system. He authors guides on overall security posture, both physical and cyber, and enjoys sharing best practices.
Excerpt from:
AI and Privacy Line: AI as a Helper and as a Danger - ReadWrite