Microsoft Pix Camera imitates Prisma with its AI-powered filters – Engadget

These artsy filters may sound a lot like what the standalone app Prisma does, but Microsoft's implementation was developed by its Asia research lab in collaboration with Skype. According to a company blog post, Pix Styles use texture, pattern, and tones learned by deep neural networks from famous works of art, instead of altering the photo uniformly as similar apps do. Microsoft researcher Josh Weisberg told Engadget that the app uses two different techniques, run in tandem to save time, to produce these effects. "Our approach lends itself to styles based on source images (that are used to train the network) that are not paintings, such as the fire effect," he said in an email.

The initial 11 Styles filters are named Glass, Petals, Bacau, Charcoal, Heart, Fire, Honolulu, Zing, Pop, Glitter and Ripples -- more will be added in the coming weeks. Pix Paintings creates a timeline of your picture as if it were being painted in real time, giving you a short video of its creation. The Paintings feature is accessed with a button that shows up when you apply a new Style, and you can share or save the resulting short video (or GIF) it makes, too.

"These are meant to be fun features," said Microsoft's Josh Weisberg in a blog post. "In the past, a lot of our efforts were focused on using AI and deep learning to capture better moments and better image quality. This is more about fun. I want to do something cool and artistic with my photos."

All this AI magic works right on your iPhone or iPad and won't access the cloud, saving your data plan and decreasing your wait time. You can still use Pix's other features with the new styles, adding frames and cropping your still photos. Microsoft Pix Camera is available now in the App Store and as a free update to existing owners, as well.

Read more from the original source:

Microsoft Pix Camera imitates Prisma with its AI-powered filters - Engadget

Artificial Intelligence breaks barriers where policymakers may go wrong – The Nation

The COVID-19 outbreak has highlighted the importance of working on public health and technology together in order to fight the crisis. Countries across the world are opting for different measures, with several technologies at play to identify positive COVID-19 cases and stop the further spread of the virus.

China was the first country to report COVID-19 cases and is now witnessing the return of normalcy, but it also had to resort to technology to contain the spread. China used technologies such as smart imaging, drones and mobile apps to trace virus-carrying individuals.

The US and Europe, however, took a slightly different approach, using data derived via artificial intelligence to stop the spread of the virus. One such data provider is US-based Mobilewall, which supplies governments with data to support public health efforts.

In an interview with Sputnik, Anind Datta, the CEO and chairman of Mobilewall, a consumer intelligence platform that is working with US task forces and other municipalities to fight the coronavirus, reflects on the importance of artificial intelligence technologies in dealing with the present-day crisis, especially in densely populated regions like South Asia.

Question: Where has Mobilewall successfully carried out data distribution?

Anind Datta: Mobilewall data is being used by health services organizations and governmental entities around the world to better predict the spread of the novel coronavirus at both the macro (city/county/state/country) and micro (predicting patients at a hospital) level. Mobilewall is working with various businesses and municipalities, providing data around individual mobility that acts as a proxy for social distancing. We can provide both a social isolation score and separate data attributes, features that can be used to build a custom score. Such data includes individual mobility metrics (daily distance traveled and unique locations visited), cluster identification (gatherings of a high number of devices) and individual device data at both the micro and macro levels. These are all foundational inputs that can be used in COVID-19 prediction models.
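The mobility metrics described here (daily distance traveled and unique locations visited) can be computed from raw location pings. The following is a minimal sketch under stated assumptions: it is not Mobilewall's actual method, the `Ping` structure and grid-cell approximation are illustrative, and a real pipeline would also handle timestamps, noise, and privacy constraints.

```python
import math
from collections import namedtuple

Ping = namedtuple("Ping", ["lat", "lon"])

def haversine_km(a, b):
    """Great-circle distance in km between two (lat, lon) points."""
    r = 6371.0  # Earth radius, km
    phi1, phi2 = math.radians(a.lat), math.radians(b.lat)
    dphi = math.radians(b.lat - a.lat)
    dlmb = math.radians(b.lon - a.lon)
    h = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(h))

def mobility_metrics(pings, cell=0.01):
    """Daily distance traveled and count of unique locations for one device.

    Unique locations are approximated by snapping pings to a lat/lon grid
    of `cell` degrees (roughly 1 km at the equator for 0.01)."""
    distance = sum(haversine_km(p, q) for p, q in zip(pings, pings[1:]))
    unique = {(round(p.lat / cell), round(p.lon / cell)) for p in pings}
    return distance, len(unique)

# A device that never leaves home scores low on both metrics.
home = [Ping(28.6139, 77.2090)] * 10
print(mobility_metrics(home))  # (0.0, 1)
```

Snapping pings to a coarse grid is one simple way to count "unique locations" without retaining exact coordinates.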

Question: In a country where a huge population resides in rural areas, how can AI be implemented?

Anind Datta: The purpose of AI is to support decision making by revealing patterns that emerge from large amounts of data. AI is particularly useful in scenarios where (a) data can be collected at a scale allowing reliable patterns to emerge, and (b) manual efforts to both collect and analyse data do not work well.

In remote rural areas, manual data collection is challenging, and even if possible, such data is reliability-challenged due to the social barriers against honest disclosures on questions perceived as personal. In the current COVID-19 crisis, where data collection involves gathering information about personal habits and symptoms related to infection, these impediments only increase. Yet a lot of this information can be gathered from behaviour exhibited on mobile phones, which have spread well into India's rural areas. Mobile data, accumulated at scale, can allow for inferences to be made to help critical decision-making in both urban and rural areas.

Question: Please describe the ways in which AI and data can be used to battle COVID-19.

Anind Datta: In the context of COVID-19, data and AI technologies are being used in new ways, particularly in countries that adopt a scientific approach to public health. Data scientists are creating machine learning models to predict infection and mortality rates and to determine resource needs and allocation based on these predictions.

AI can be used to power two key tasks of pandemic mitigation: infection tracking and infection spread prediction. Done correctly, AI can help uncover three foundational pieces of information crucial to tracking and predicting the spread: measuring social isolation by observing individual mobility; identifying clusters of more than a certain number of individuals, along with the corresponding locations; and assessing the risk of individuals and locations, at scale, by understanding the movement of infected individuals.
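Cluster identification, one of the foundational inputs mentioned above, can be illustrated with simple grid binning: count distinct devices per grid cell and flag cells above a threshold. This is a hypothetical sketch, not the interviewee's actual system; the cell size and `min_devices` threshold are made-up parameters.

```python
from collections import Counter

def find_clusters(device_pings, cell=0.001, min_devices=50):
    """Flag grid cells where an unusually high number of distinct devices
    appear, a rough proxy for gatherings. `device_pings` maps a device id
    to its list of (lat, lon) fixes; `cell` is the grid size in degrees."""
    cells = Counter()
    for device, fixes in device_pings.items():
        seen = {(round(lat / cell), round(lon / cell)) for lat, lon in fixes}
        for c in seen:  # count each device at most once per cell
            cells[c] += 1
    return {c: n for c, n in cells.items() if n >= min_devices}

# 60 hypothetical devices at one street corner, 5 scattered elsewhere:
pings = {f"dev{i}": [(40.7128, -74.0060)] for i in range(60)}
pings.update({f"far{i}": [(41.0 + i, -74.0)] for i in range(5)})
print(find_clusters(pings, min_devices=50))  # one cell containing all 60 devices
```

Cells that repeatedly exceed the threshold could then feed the early-warning systems the interview describes.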

Question: Do you have any suggestions for governments regarding the use of AI in slums and other high-density areas?

Anind Datta: AI is particularly suited for analysing large amounts of data collected via machines. In slums and other high-density areas, in the context of the COVID-19 crisis, it is difficult to both maintain and track social distancing. For this reason, these regions can be triggers of infection waves that could prove deadly for the entire country. AI offers a mechanism to both collect and track behavioural signals from these areas, which can then inform early-warning and alert systems that drive tactical pandemic management activities.

AI, particularly big data and machine learning techniques, can be used to identify the infection risk of individuals, which can then be projected to those individuals and others in the geographic locations they have visited. Data scientists are creating models to track the spread of the virus and to determine resource needs and allocation based on the prediction of hard-hit areas. AI is an enabler; it identifies patterns and provides insights at speeds well beyond what humans can do manually.

But the key to the successful use of AI relies on the data that is being fed into the models. If this data is inaccurate or lacks scale, the ability of the model to predict outcomes will be negatively impacted. Data can be obtained in various ways, either by requesting information directly from individuals (such as what populous countries are attempting to do with the Arogya Setu app) or by seeking data from other available sources.

Question: Governments have been advocating apps, which are also mobile platforms, to fight against COVID-19. How useful are apps in terms of contact tracing?

Anindya Datta: The Arogya Setu app is a worthy effort and could serve as a useful consumer tool to minimise risky behaviour and receive current COVID-19 information. However, it is important to understand that the app by itself is simply a front end to information delivery. The effectiveness of the app is only as good as the information it has access to, but the app itself is not producing that information.

The quality of the risk information, and therefore the usefulness of the app, depend on a number of variables outside the control of the app, including the magnitude of infection detection, which depends on testing. It is easy to see that the less the testing, the lower the value of the information disseminated via the app. What also matters are the risk models being used to build risk scores for geographies and sub-geographies. If the risk models are ineffective, even with adequate testing, the information delivered will be of little value.

In South Asia, where social stigma still plays a key part in social interaction, one might question the likelihood of truthful disclosures at scale.

Another, perhaps more reliable, option is to use other available data sources that can model the activities of the population at scale. In many cases, location data and behavioural data can be used as inputs to COVID-19 predictive models.

Question: Certain groups have been opposing the medics. Can AI help medics find ways to track them without going to the location?

Anind Datta: Yes, location data of these groups can help doctors to track them. Location-based data can be used to track individual mobility without in-person engagement. Depending on the source of the data, it is also possible to use this data to communicate risk of infection in an anonymous manner, using digital identification or communication through mobile devices.

Go here to read the rest:

Artificial Intelligence breaks barriers where policymakers may go wrong - The Nation

New AI Tool GPT-3 Ascends to New Peaks, But Proves How Far We Still Need to Travel – JD Supra

If you want a glimpse of the future, check out how developers are using GPT-3.

This natural language processor was trained with ten times more parameters than its most sophisticated rival, and it can be used to answer questions and write astoundingly well. Creative professionals everywhere, from top coders to professional writers, marvel at what GPT-3 can produce even now, in its relative infancy.

Yesterday, New York Times tech columnist Farhad Manjoo wrote that the short glimpse the general public has taken of GPT-3 is at once amazing, spooky, humbling, and more than a little terrifying. GPT-3 is capable of generating entirely original, coherent, and sometimes even factual prose. And not just prose: it can write poetry, dialogue, memes, computer code, and who knows what else. Manjoo speculated on whether a similar but more advanced AI might replace him someday.

On the other hand, a recent Technology Review article describes the AI as "shockingly good and completely mindless." After describing some of the GPT-3 highlights the public has seen so far, it concedes, "For one thing, the AI still makes ridiculous howlers that reveal a total lack of common sense. But even its successes have a lack of depth to them, reading more like cut-and-paste jobs than original compositions."

Wired noted in a story last week, "GPT-3 was built by directing machine-learning algorithms to study the statistical patterns in almost a trillion words collected from the web and digitized books. The system memorized the forms of countless genres and situations, from C++ tutorials to sports writing. It uses its digest of that immense corpus to respond to a text prompt by generating new text with similar statistical patterns." The results can be technically impressive, and also fun or thought-provoking, as the poems, code, and other experiments attest. But the article also stated that GPT-3 "often spews contradictions or nonsense, because its statistical word-stringing is not guided by any intent or a coherent understanding of reality."
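The "statistical word-stringing" Wired describes can be shown in miniature with a bigram model, a vastly simpler relative of GPT-3's approach. This toy sketch (the tiny corpus and all names are invented for illustration) strings words together purely from observed co-occurrence statistics, with no intent or understanding behind them:

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """Tally which word follows which: the crudest possible version of
    'studying statistical patterns' in a corpus."""
    model = defaultdict(list)
    words = text.split()
    for w, nxt in zip(words, words[1:]):
        model[w].append(nxt)
    return model

def generate(model, seed_word, length=8, rng=None):
    """String words together by sampling from observed continuations.
    No goal, no meaning: just statistics, which is why such models can
    emit fluent-sounding nonsense."""
    rng = rng or random.Random(0)
    out = [seed_word]
    for _ in range(length - 1):
        choices = model.get(out[-1])
        if not choices:
            break
        out.append(rng.choice(choices))
    return " ".join(out)

corpus = ("the model writes text the model reads text "
          "the text sounds fluent but the model has no intent")
model = train_bigrams(corpus)
print(generate(model, "the"))
```

GPT-3's transformer architecture is enormously more capable than a bigram table, but the underlying principle of predicting likely continuations from training statistics is the same.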

GPT-3 is the latest iteration of the language-processing machine learning program from OpenAI, an enterprise funded in part by Elon Musk, and its training is orders of magnitude more complex than either OpenAI's previous offering or the closest competitor. The program is currently in a controlled beta test where whitelisted programmers can make requests and run projects on the AI. According to Technology Review, "For now, OpenAI wants outside developers to help it explore what GPT-3 can do, but it plans to turn the tool into a commercial product later this year, offering businesses a paid-for subscription to the AI via the cloud."

GPT-3 provides a staggering glimpse of what the future can be. Simple computer tasks can be built and then confirmed in the AI, so it will know how to create custom buttons on your webpage. Developer Sharif Shameen built a layout generator with GPT-3 so he could simply ask for a button that looks like a watermelon, and the AI would give him one.

This outcome shouldn't surprise anyone: a good natural language processor develops capabilities to translate natural English into action or into another language, and computer code is little more than an expression of intent in a language that the computer can read. So translating simple English instructions into Python should not be impossible for a sophisticated AI that has read multiple Python manuals.

Of course, some of the coding community is freaking out at the prospect of being replaced by this AI. Even legendary coder John Carmack, who pioneered 3D computer graphics in early video games like Doom and is now consulting CTO at Oculus VR, was unnerved: "The recent, almost accidental, discovery that GPT-3 can sort of write code does generate a slight shiver."

OK, so GPT-3 has been trained on countless coding manuals and instruction sets. But freaketh not: while GPT-3 can sometimes generate usable code, it still applies no common sense, and therefore non-technical types can't rely on it to produce machine-readable language that can perform sophisticated tasks.

For any of you who have taken a coding course, you know that coaxing the right things out of a computer requires coders to be literal and precise in ways that are difficult for an AI to approximate. So a non-coder is likely to be frustrated with AI-generated code at this point in the process. If anything, gpt-3 is a step in the process toward easier coding, requiring a practiced software engineer to develop the right sets of questions for the AI to produce usable code quickly.

I talked about the hype cycle in one of last week's posts, and GPT-3 is worth the hype as an advance in AI training, where more (the model has 175 billion parameters) is clearly better. But it is only an impressive step in the larger process. OpenAI and its competitors will find useful applications for all of this power and continue to work toward a more general intelligence.

There are many reasons to be wary. Like others before it, this AI picks up biases in its training, and it was trained on the internet, so expect some whoppers. Wired observed, "Facebook's head of AI accused the service of being unsafe and tweeted screenshots from a website that generates tweets using GPT-3 that suggested the system associates Jews with a love of money and women with a poor sense of direction." GPT-3 has not been trained to avoid offensive assumptions.

But the AI still has the power to astonish and may permit some incredible applications. It hasn't even been officially released as a product yet. Watch this space. As developers, writers, business executives, and artists learn to do more amazing tasks with GPT-3 (and GPT-4 and 5), we will continue to report on it.

[View source.]

See the original post:

New AI Tool GPT-3 Ascends to New Peaks, But Proves How Far We Still Need to Travel - JD Supra

For the Third Consecutive Year, Verint AI and Analytics Solutions Receive Perfect Customer Satisfaction Scores in New Interaction Analytics Report -…

MELVILLE, N.Y.--(BUSINESS WIRE)--Verint (NASDAQ: VRNT), The Customer Engagement Company, today announced its artificial intelligence (AI) and analytics solutions achieved perfect scores in all 24 customer satisfaction categories for vendor satisfaction, product capabilities and product effectiveness in DMG Consulting LLC's new 2021/2022 Interaction Analytics (IA) Product and Market Report.* In addition, Verint represents the largest market share by number of customers and achieved the greatest year-over-year increase in number of customers among vendors named in the report's market activity analysis.

DMG's report focuses on contact center and service-related uses of interaction analytics. The report highlights the increasing value of operationalizing the findings from interaction analytics for voice of the customer (VoC), quality management (QM), customer journey analytics and the customer experience. It also explores how the value and benefits of IA increase substantially when this technology is embedded in third-party applications to enrich their outputs and findings.

"Interaction analytics follows conversations as customers pivot from one channel to another, providing necessary insights into all touchpoints in the customer journey," said Donna Fluss, president, DMG Consulting. "These solutions enable companies to alter the outcome of customer conversations, responding with real-time alerts and next-best-action guidance to agents, regardless of where they are located."

Verint Analytics solutions received:

The report reviewed Verint's Speech and Text Analytics, Contextual Real-Time Guidance, Analytics-Enabled Quality Management and Experience Management applications. Verint Speech and Text Analytics automatically analyzes and identifies trends, themes, emotion, sentiment and the root causes driving customer interactions, including voice calls and unstructured text such as chat, in order to proactively respond to issues and act on opportunities that enhance the customer experience and support business objectives.

"Speech and text analytics solutions provided critical insights over the past year, enabling customers to adapt to the dynamics of interactions and respond to issues in real time," says Verint's Celia Fleischaker, chief marketing officer. "We are committed to continually innovating to help drive insights and value across the organization to create better customer experiences."

Visit Verint Speech Analytics and Verint Text Analytics.

About Verint Systems Inc.

Verint (Nasdaq: VRNT) helps the world's most iconic brands, including over 85 of the Fortune 100 companies, build enduring customer relationships by connecting work, data and experiences across the enterprise. The Verint Customer Engagement portfolio draws on the latest advancements in AI and analytics, an open cloud architecture, and The Science of Customer Engagement to help customers close the engagement capacity gap.

Verint. The Customer Engagement Company. Learn more at Verint.com.

*Source: DMG Consulting LLC, 2021/2022 Interaction Analytics Product and Market Report, May 2021

This press release contains forward-looking statements, including statements regarding expectations, predictions, views, opportunities, plans, strategies, beliefs, and statements of similar effect relating to Verint Systems Inc. These forward-looking statements are not guarantees of future performance and they are based on management's expectations that involve a number of risks, uncertainties and assumptions, any of which could cause actual results to differ materially from those expressed in or implied by the forward-looking statements. For a detailed discussion of these risk factors, see our Annual Report on Form 10-K for the fiscal year ended January 31, 2021, and other filings we make with the SEC. The forward-looking statements contained in this press release are made as of the date of this press release and, except as required by law, Verint assumes no obligation to update or revise them or to provide reasons why actual results may differ.

VERINT, THE CUSTOMER ENGAGEMENT COMPANY, BOUNDLESS CUSTOMER ENGAGEMENT, THE ENGAGEMENT CAPACITY GAP and THE SCIENCE OF CUSTOMER ENGAGEMENT are trademarks of Verint Systems Inc. or its subsidiaries. Verint and other parties may also have trademark rights in other terms used herein.

More:

For the Third Consecutive Year, Verint AI and Analytics Solutions Receive Perfect Customer Satisfaction Scores in New Interaction Analytics Report -...

Artificial Intelligence (AI) in Cyber Security Market Type, Application, Specification, Technology and Forecast to 2025 – 3rd Watch News

Updated Research Report of Artificial Intelligence (AI) in Cyber Security Market 2020-2025:

Summary:

Wiseguyreports.Com Adds Artificial Intelligence (AI) in Cyber Security Market Demand, Growth, Opportunities, Manufacturers and Analysis of Top Key Players to 2025 To Its Research Database.

Overview

Artificial Intelligence (AI) in Cyber Security Market by Service Type (Network Security, Data Security, Identity & Access Security, Cloud Security), by Technology (Machine Learning, Natural Language Processing, Speech Recognition, Image Processing), by Application (Anomaly Detection, Firewall, Intrusion Detection, Distributed Denial of Services, Data Loss Prevention, Web Filtering), by Geography (U.S., Canada, U.K., Germany, France, Italy, China, Japan, India, Indonesia, U.A.E., Saudi Arabia, Qatar, Brazil, Mexico) Global Market Size, Share, Development, Growth and Demand Forecast, 2013-2025

Artificial intelligence is playing a crucial role in cyber security by identifying threats and protecting organizations' data from lethal cyber-attacks. It speeds up the detection of attacks and enables organizations to adopt predictive measures to combat cyber-crime.

According to this study, over the next five years the Artificial Intelligence (AI) in Cyber Security market will register a 27.3% CAGR in terms of revenue; the global market size will reach $23,100 million by 2025, up from $8,802.5 million in 2019. In particular, this report presents the global revenue market share of key companies in the Artificial Intelligence (AI) in Cyber Security business, shared in Chapter 3.
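For readers who want to sanity-check endpoint and growth figures like these, the compound annual growth rate (CAGR) relating two values over a number of years is straightforward to compute. This generic sketch is not taken from the report, and the sample numbers are illustrative only:

```python
def cagr(start_value, end_value, years):
    """Compound annual growth rate implied by two endpoint values."""
    return (end_value / start_value) ** (1 / years) - 1

def project(start_value, rate, years):
    """Project a value forward at a constant compound rate."""
    return start_value * (1 + rate) ** years

# A market doubling over five years grows at roughly 14.9% per year:
print(round(cagr(100.0, 200.0, 5) * 100, 1))  # 14.9
```

The two functions are inverses: projecting the start value forward at the computed rate recovers the end value.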

This report presents a comprehensive overview, market shares and growth opportunities of Artificial Intelligence (AI) in Cyber Security market by type, application, key companies and key regions.

This study considers the Artificial Intelligence (AI) in Cyber Security value generated from the sales of the following segments:

Segmentation by type (breakdown data from 2015 to 2020 in Section 2.3; forecast to 2025 in Section 10.7):

- Machine Learning
- Natural Language Processing
- Other

Machine learning takes the largest share of the market, at over 69%.

Get a Free Sample Report of Artificial Intelligence (AI) in Cyber Security [emailprotected] https://www.wiseguyreports.com/sample-request/5098081-global-artificial-intelligence-ai-in-cyber-security-market

Segmentation by application (breakdown data from 2015 to 2020 in Section 2.4; forecast to 2025 in Section 10.8):

- BFSI
- Government
- IT & Telecom
- Healthcare
- Aerospace and Defense
- Other

The BFSI, government and IT & telecom segments occupied the largest market share, while the healthcare, aerospace and defense, and other industries are expected to grow at a steady pace in future.

This report also splits the market by region (breakdown data in Chapters 4, 5, 6, 7 and 8):

- Americas: United States, Canada, Mexico, Brazil
- APAC: China, Japan, Korea, Southeast Asia, India, Australia
- Europe: Germany, France, UK, Italy, Russia, Spain
- Middle East & Africa: Egypt, South Africa, Israel, Turkey, GCC Countries

The report also presents the market competition landscape and a corresponding detailed analysis of the major vendors/manufacturers in the market. The key manufacturers covered in this report (breakdown data in Chapter 3): BAE Systems, Palo Alto Networks, Cisco, FireEye, Check Point, Fortinet, Symantec, IBM, Juniper Networks, RSA Security.

Enquiry About Report @https://www.wiseguyreports.com/enquiry/5098081-global-artificial-intelligence-ai-in-cyber-security-market

In addition, this report discusses the key drivers influencing market growth, opportunities, the challenges and the risks faced by key players and the market as a whole. It also analyzes key emerging trends and their impact on present and future development.

Research objectives:

- To study and analyze the global Artificial Intelligence (AI) in Cyber Security market size by key regions/countries, type and application, with history data from 2015 to 2019 and forecast to 2025.
- To understand the structure of the Artificial Intelligence (AI) in Cyber Security market by identifying its various subsegments.
- To focus on the key global Artificial Intelligence (AI) in Cyber Security players: to define, describe and analyze their value, market share, market competition landscape, SWOT analysis and development plans for the next few years.
- To analyze the Artificial Intelligence (AI) in Cyber Security market with respect to individual growth trends, future prospects, and their contribution to the total market.
- To share detailed information about the key factors influencing the growth of the market (growth potential, opportunities, drivers, industry-specific challenges and risks).
- To project the size of Artificial Intelligence (AI) in Cyber Security submarkets, with respect to key regions (along with their respective key countries).
- To analyze competitive developments such as expansions, agreements, new product launches and acquisitions in the market.
- To strategically profile the key players and comprehensively analyze their growth strategies.

Table of Contents

1 Scope of the Report

2 Executive Summary

3 Global Artificial Intelligence (AI) in Cyber Security by Players

4 Artificial Intelligence (AI) in Cyber Security by Regions

5 Americas

6 APAC

7 Europe

8 Middle East & Africa

9 Market Drivers, Challenges and Trends

10 Global Artificial Intelligence (AI) in Cyber Security Market Forecast

11 Key Players Analysis

Continued

Contact US:
NORAH TRENT
Partner Relations & Marketing Manager
[emailprotected]
Ph: +1-646-845-9349 (US)
Ph: +44 208 133 9349 (UK)

ABOUT US: Wise Guy Reports is part of Wise Guy Consultants Pvt. Ltd. and offers premium progressive statistical surveying, market research reports, analysis & forecast data for industries and governments around the globe. Wise Guy Reports features an exhaustive list of market research reports from hundreds of publishers worldwide. We boast a database spanning virtually every market category and an even more comprehensive collection of market research reports under these categories and sub-categories.

Note: Our team is studying Covid-19 and its impact on various industry verticals, and wherever required we will be considering Covid-19 footprints for a better analysis of markets and industries. Cordially get in touch for more details.

The rest is here:

Artificial Intelligence (AI) in Cyber Security Market Type, Application, Specification, Technology and Forecast to 2025 - 3rd Watch News

Global Artificial Intelligence of Things (Technology & Solutions) Markets 2020-2025: Market will Reach $65.9 Billion by 2025, Growing at 39.1%…

DUBLIN, Oct. 30, 2020 /PRNewswire/ -- The "Artificial Intelligence of Things: AIoT Market by Technology and Solutions 2020 - 2025" report has been added to ResearchAndMarkets.com's offering.

This AIoT market report provides an analysis of technologies, leading companies and solutions. The report also provides quantitative analysis including market sizing and forecasts for AIoT infrastructure, services, and specific solutions for the period 2020 through 2025. The report also provides an assessment of the impact of 5G upon AIoT (and vice versa) as well as blockchain and specific solutions such as Data as a Service, Decisions as a Service, and the market for AIoT in smart cities.

Many industry verticals will be transformed through AI integration with enterprise, industrial, and consumer product and service ecosystems. It is destined to become an integral component of business operations including supply chains, sales and marketing processes, product and service delivery, and support models.

We see AIoT evolving to become more commonplace as a standard feature from big analytics companies in terms of digital transformation for the connected enterprise. This will be realized in infrastructure, software, and SaaS managed service offerings. More specifically, we see 2020 as a key year for IoT data-as-a-service offerings to become AI-enabled decisions-as-a-service solutions, customized on a per-industry and per-company basis. Certain data-driven verticals, such as the utility and energy services industries, will lead the way.

As IoT networks proliferate throughout every major industry vertical, there will be an increasingly large amount of unstructured machine data. The growing amount of human-oriented and machine-generated data will drive substantial opportunities for AI support of unstructured data analytics solutions. Data generated from IoT supported systems will become extremely valuable, both for internal corporate needs as well as for many customer-facing functions such as product life-cycle management.

The use of AI for decision making in IoT and data analytics will be crucial for efficient and effective decision making, especially in the area of streaming data and real-time analytics associated with edge computing networks. Real-time data will be a key value proposition for all use cases, segments, and solutions. The ability to capture streaming data, determine valuable attributes, and make decisions in real-time will add an entirely new dimension to service logic.
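As a toy illustration of making decisions over streaming data in real time, the following sketch flags sensor readings that deviate sharply from a rolling window of recent history. It is a hypothetical stand-in for edge analytics, not any vendor's implementation; the window size and deviation threshold are arbitrary choices.

```python
from collections import deque
from statistics import mean, pstdev

class StreamMonitor:
    """Flag readings that deviate sharply from a rolling window, a minimal
    stand-in for real-time analytics at the network edge."""

    def __init__(self, window=20, threshold=3.0):
        self.values = deque(maxlen=window)  # bounded history, O(1) memory
        self.threshold = threshold

    def observe(self, x):
        """Return True if `x` is anomalous relative to recent history."""
        if len(self.values) >= 5:  # need a little history first
            mu, sigma = mean(self.values), pstdev(self.values)
            if sigma > 0 and abs(x - mu) > self.threshold * sigma:
                self.values.append(x)
                return True
        self.values.append(x)
        return False

monitor = StreamMonitor()
readings = [20.0, 20.1, 19.9, 20.2, 20.0, 20.1, 95.0]  # last reading spikes
alerts = [r for r in readings if monitor.observe(r)]
print(alerts)  # [95.0]
```

Because the decision uses only a small bounded window, this kind of check can run on the device itself rather than in the cloud, which is the essence of the edge-computing argument above.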

In many cases, the data itself, and actionable information will be the service. AIoT infrastructure and services will, therefore, be leveraged to achieve more efficient IoT operations, improve human-machine interactions, and enhance data management and analytics, creating a foundation for IoT Data as a Service (IoTDaaS) and AI-based Decisions as a Service.

The fastest-growing 5G AIoT applications involve private networks. Accordingly, the 5G NR market for private wireless in industrial automation will reach $4B by 2025. Some of the largest opportunities in the AIoT market will be IoTDaaS solutions. We see machine learning in edge computing as the key to realizing the full potential of IoT analytics.

Select Report Findings:

Key Topics Covered:

1.0 Executive Summary

2.0 Introduction
2.1 Defining AIoT
2.2 AI in IoT vs. AIoT
2.3 Artificial General Intelligence
2.4 IoT Network and Functional Structure
2.5 Ambient Intelligence and Smart Lifestyles
2.6 Economic and Social Impact
2.7 Enterprise Adoption and Investment
2.8 Market Drivers and Opportunities
2.9 Market Restraints and Challenges
2.10 AIoT Value Chain
2.10.1 Device Manufacturers
2.10.2 Equipment Manufacturers
2.10.3 Platform Providers
2.10.4 Software and Service Providers
2.10.5 User Communities

3.0 AIoT Technology and Market
3.1 AIoT Market
3.1.1 Equipment and Component
3.1.2 Cloud Equipment and Deployment
3.1.3 3D Sensing Technology
3.1.4 Software and Data Analytics
3.1.5 AIoT Platforms
3.1.6 Deployment and Services
3.2 AIoT Sub-Markets
3.2.1 Supporting Device and Connected Objects
3.2.2 IoT Data as a Service
3.2.3 AI Decisions as a Service
3.2.4 APIs and Interoperability
3.2.5 Smart Objects
3.2.6 Smart City Considerations
3.2.7 Industrial Transformation
3.2.8 Cognitive Computing and Computer Vision
3.2.9 Consumer Appliances
3.2.10 Domain Specific Network Considerations
3.2.11 3D Sensing Applications
3.2.12 Predictive 3D Design
3.3 AIoT Supporting Technologies
3.3.1 Cognitive Computing
3.3.2 Computer Vision
3.3.3 Machine Learning Capabilities and APIs
3.3.4 Neural Networks
3.3.5 Context-Aware Processing
3.4 AIoT Enabling Technologies and Solutions
3.4.1 Edge Computing
3.4.2 Blockchain Networks
3.4.3 Cloud Technologies
3.4.4 5G Technologies
3.4.5 Digital Twin Technology and Solutions
3.4.6 Smart Machines
3.4.7 Cloud Robotics
3.4.8 Predictive Analytics and Real-Time Processing
3.4.9 Post Event Processing
3.4.10 Haptic Technology

4.0 AIoT Applications Analysis
4.1 Device Accessibility and Security
4.2 Gesture Control and Facial Recognition
4.3 Home Automation
4.4 Wearable Device
4.5 Fleet Management
4.6 Intelligent Robots
4.7 Augmented Reality Market
4.8 Drone Traffic Monitoring
4.9 Real-time Public Safety
4.10 Yield Monitoring and Soil Monitoring Market
4.11 HCM Operation

5.0 Analysis of Important AIoT Companies
5.1 Sharp
5.2 SAS
5.3 DT42
5.4 China Tech Giants: Baidu, Alibaba, and Tencent
5.4.1 Baidu
5.4.2 Alibaba
5.4.3 Tencent
5.5 Xiaomi Technology
5.6 NVidia
5.7 Intel Corporation
5.8 Qualcomm
5.9 Innodisk
5.10 Gopher Protocol
5.11 Micron Technology
5.12 ShiftPixy
5.13 Uptake
5.14 C3 IoT
5.15 Alluvium
5.16 Arundo Analytics
5.17 Canvass Analytics
5.18 Falkonry
5.19 Interactor
5.20 Google
5.21 Cisco
5.22 IBM Corp.
5.23 Microsoft Corp.
5.24 Apple Inc.
5.25 Salesforce Inc.
5.26 Infineon Technologies AG
5.27 Amazon Inc.
5.28 AB Electrolux
5.29 ABB Ltd.
5.30 AIBrian Inc.
5.31 Analog Devices
5.32 ARM Limited
5.33 Atmel Corporation
5.34 Ayla Networks Inc.
5.35 Brighterion Inc.
5.36 Buddy
5.37 CloudMinds
5.38 Cumulocity GmBH
5.39 Cypress Semiconductor Corp
5.40 Digital Reasoning Systems Inc.
5.41 Echelon Corporation
5.42 Enea AB
5.43 Express Logic Inc.
5.44 Facebook Inc.
5.45 Fujitsu Ltd.
5.46 Gemalto N.V.
5.47 General Electric
5.48 General Vision Inc.
5.49 Graphcore
5.50 H2O.ai
5.51 Haier Group Corporation
5.52 Helium Systems
5.53 Hewlett Packard Enterprise
5.54 Huawei Technologies
5.55 Siemens AG
5.56 SK Telecom
5.57 SoftBank Robotics
5.58 SpaceX
5.59 SparkCognition
5.60 STMicroelectronics
5.61 Symantec Corporation
5.62 Tellmeplus
5.63 Tend.ai
5.64 Tesla
5.65 Texas Instruments
5.66 Thethings.io
5.67 Veros Systems
5.68 Whirlpool Corporation
5.69 Wind River Systems
5.70 Juniper Networks
5.71 Nokia Corporation
5.72 Oracle Corporation
5.73 PTC Corporation
5.74 Losant IoT
5.75 Robert Bosch GmbH
5.76 Pepper
5.77 Terminus
5.78 Tuya Smart

6.0 AIoT Market Analysis and Forecasts 2020 - 2025
6.1 Global AIoT Market Outlook and Forecasts
6.1.1 Aggregate AIoT Market 2020 - 2025
6.1.2 AIoT Market by Infrastructure and Services 2020 - 2025
6.1.3 AIoT Market by AI Technology 2020 - 2025
6.1.4 AIoT Market by Application 2020 - 2025
6.1.5 AIoT in Consumer, Enterprise, Industrial, and Government 2020 - 2025
6.1.6 AIoT Market in Cities, Suburbs, and Rural Areas 2020 - 2025
6.1.7 AIoT in Smart Cities 2020 - 2025
6.1.8 IoT Data as a Service Market 2020 - 2025
6.1.9 AI Decisions as a Service Market 2020 - 2025
6.1.10 Blockchain Support of AIoT 2020 - 2025
6.1.11 AIoT in 5G Networks 2020 - 2025
6.2 Regional AIoT Markets 2020 - 2025

7.0 Conclusions and Recommendations
7.1 Advertisers and Media Companies
7.2 Artificial Intelligence Providers
7.3 Automotive Companies
7.4 Broadband Infrastructure Providers
7.5 Communication Service Providers
7.6 Computing Companies
7.7 Data Analytics Providers
7.8 Immersive Technology (AR, VR, and MR) Providers
7.9 Networking Equipment Providers
7.10 Networking Security Providers
7.11 Semiconductor Companies
7.12 IoT Suppliers and Service Providers
7.13 Software Providers
7.14 Smart City System Integrators
7.15 Automation System Providers
7.16 Social Media Companies
7.17 Workplace Solution Providers
7.18 Enterprise and Government

For more information about this report visit https://www.researchandmarkets.com/r/ruog0x

Research and Markets also offers Custom Research services providing focused, comprehensive and tailored research.

Media Contact:

Research and Markets
Laura Wood, Senior Manager
[emailprotected]

For E.S.T. Office Hours Call +1-917-300-0470
For U.S./CAN Toll Free Call +1-800-526-8630
For GMT Office Hours Call +353-1-416-8900

U.S. Fax: 646-607-1907 Fax (outside U.S.): +353-1-481-1716

SOURCE Research and Markets

http://www.researchandmarkets.com

Read more from the original source:

Global Artificial Intelligence of Things (Technology & Solutions) Markets 2020-2025: Market will Reach $65.9 Billion by 2025, Growing at 39.1%...

Laura’s Digital disruptors: healthcare gets smart on AI – Digital Health

Digital Health News reporter Laura Stevens explores how the brave new world of artificial intelligence is now being applied to healthcare, the huge potential opportunities and the new ethical and privacy challenges it raises.


The unsettling yet fascinating power of artificial intelligence is a favourite dystopian trope for film-makers. From robots taking over the world to falling in love with an operating system, the future seems to be disconcertingly jam-packed with this particular technology.

However, stepping back from Hollywood into the world of the NHS, how much do these fantastic scenarios relate to healthcare reality?

Firstly, while it may not be a mature technology, AI is definitely not a tool from the future; it's in use right now, allowing researchers to compute vast amounts of data and replicate clinicians' professional opinions.

The computational power of AI has been demonstrated in dermatology, cardiology and cancer research, where its analysis has provided unbiased support to clinical opinion.

Secondly, there are huge challenges facing the introduction of this cutting-edge technology into the health service, from creaky IT infrastructure and unverified data to patient data confidentiality.

The altruistic power of AI

Nature recently published the results of a Stanford University study that found algorithms matched dermatologists at identifying skin cancer in photographs. The machine-learning system was trained on 129,450 images of 2,032 different diseases, and when tested against 21 clinicians it achieved performance on par with all tested experts.

Roberto Novoa, a clinical assistant professor at the university and co-author of the study, said that while further research was needed as it was a proof of concept study, there is significant potential for AI within dermatology. The possibilities chiefly lie with smartphones being able to dramatically improve access to life-saving medical care.

The study said the technology could potentially provide low-cost universal access to vital diagnostic care through mobiles, profoundly expanding access to medical care.

Brett Kuprel, a fellow co-author on the study, described the automated diagnosis of skin cancer as having the power to help people in rural communities and poor countries who may not have access to premium healthcare.

Predicting when you die

Another AI trial that made headlines recently was the MRC London Institute of Medical Sciences' research into computers predicting with 80% accuracy when a patient with a heart disorder will die.

The software used advanced image processing to build up a virtual 3D heart which, when combined with eight years' worth of patient data, could predict survival rates.

Declan O'Regan from the institute led the research and explained that the team studied patients with pulmonary hypertension, which often affects young people and rapidly leads to heart failure. "For treatment, what is important to know is the risk that an individual patient won't survive 12 months," he explained.

However, these predictions can be difficult given the number of tests available and the question of what weight to give to each; that was the motivation for using this AI approach, as many different tests could be interpreted simultaneously and very rapidly.

AI doing research humans could never do

The sheer power of AI to process vast quantities of information is something also noted by Chris Bakal, a team leader at the Institute of Cancer Research. While for decades decision making and interpretation have been done by humans, "now AI allows us to take this information and make decisions using an unbiased way and using quantitated information," he said.

"I think that information is going to have to be processed by AI because it's literally so much information, so complicated, that humans can't do it."

But while the processing can be done by technology, and it is likely to be an aid to decision making shortly, Bakal is clear that "for at least a long time, the clinician is going to have the final decision."

Artificial barriers for AI in healthcare

As AI relies on learning from huge amounts of data, it needs to have access to said data. For O'Regan, this is where the challenges lie, as you have to link confidential information to companies who can analyse it.

"We need to break down some of the artificial barriers that might prevent machine learning being used more in clinical work," O'Regan said. "There are issues around confidentiality which are important to maintain, but it's finding smart solutions that can enable machine learning to be used in healthcare."

The DeepMind and Royal Free debacle

Mention AI and the NHS, and you can't miss the controversy that's stalked the Royal Free London NHS Foundation Trust and Google DeepMind's work on its acute kidney injury app, Streams, despite the app being billed as not using AI. New Scientist revealed in May last year that the partnership had involved giving the company a huge haul of patient data.

As a result there was a huge public backlash and an ongoing investigation by the Information Commissioner's Office. However, the Royal Free stuck to its guns and confirmed a five-year deal with DeepMind in December last year.

DeepMind has also been involved with other NHS trusts. These include Imperial College Healthcare NHS Trust to deploy Streams; University College London Hospitals NHS Foundation Trust in a research partnership for head and neck cancer; and at Moorfields Eye Hospital NHS Foundation Trust to apply machine learning algorithms to automatically detect and segment eye scans.

Dodgy data and shaky infrastructure

There are not only patient confidentiality issues. Owen Johnson, a senior teaching fellow in computing at the University of Leeds, said there is a huge problem with implementing this technology in the NHS.

"The NHS has underinvested in its core infrastructure, and it needs to invest in its core infrastructure, as it cannot keep putting smart technology on top of shaky technology," he said.

The fragility of IT infrastructure is a common refrain across the NHS. Just this month, St George's University Hospitals NHS Foundation Trust reported that a lack of investment has resulted in an end-of-life infrastructure that is likely to fail and result in catastrophic implications for the trust in terms of corporate and clinical systems failures.

In December, Johnson's local hospital, Leeds Teaching Hospitals NHS Trust, said it believed 30 of its 300 most critical IT systems and archived records may fail without warning because they are held on old systems with insufficient data storage and computers.

Bakal agreed with Johnson's concerns about infrastructure. "For AI, the computational infrastructure is quite heavy and so there's no way it's going to be at most clinics in the NHS," he said. To counter this, Bakal said the power of cloud computing could be utilised.

Johnson also pointed out that the data AI bases its work on is fallible through ordinary human error and practice. While the data may be safe for clinical practice, "that doesn't necessarily mean that reusing that data for an AI engine can be done safely or reliably," he said.

An extra pair of belts and braces

For most of the AI experts I spoke to, the conclusion was that AI will shortly be in use in a clinical setting, but as an aid to decision making. As Johnson described it: "an additional pair of belts and braces."

But there is an inescapability to the impact of AI on healthcare, says O'Regan: "To be able to fully exploit the increasingly rich information that's available about patients' health, I think it's inevitable really, we're going to have to use computers more often to make better sense of the data."

Excerpt from:

Laura's Digital disruptors: healthcare gets smart on AI - Digital Health

An Indian politician used AI to translate his speech into other languages – The Verge

As social media platforms move to crack down on deepfakes and misinformation in the US elections, an Indian politician has used artificial intelligence techniques to make it look like he said things he didn't say, Vice reports. In one version of a campaign video, Manoj Tiwari speaks in English; in the fabricated version, he speaks in Haryanvi, a dialect of Hindi.

Political communications firm The Ideaz Factory told Vice it was working with Tiwari's Bharatiya Janata Party to create positive campaigns using the same technology used in deepfake videos, dubbing in an actor's voice to read the script in Haryanvi.

"We used a lip-sync deepfake algorithm and trained it with speeches of Manoj Tiwari to translate audio sounds into basic mouth shapes," said Sagar Vishnoi of The Ideaz Factory, adding that it allowed the candidate to target voters he might not have otherwise been able to reach as directly (while India has two official languages, Hindi and English, some Indian states have their own languages and there are hundreds of different dialects).

The faked video reached about 15 million people in India, according to Vice.

Even though most deepfake videos are used to create nonconsensual pornography, the now-infamous 2018 deepfake video of President Obama raised concerns about how false or misleading videos could be used in the political arena. Last May, faked videos were posted on social media that appeared to show House Speaker Nancy Pelosi slurring her words.

In October, however, California passed a bill making it illegal to share deepfakes of politicians within 60 days of an election. And in January, the US House Ethics Committee informed members that posting deepfakes on social media could be considered a violation of House rules.

Social media companies have announced plans to try to combat the spread of deepfakes on their platforms. Twitter's deceptive media ban takes effect in March. Facebook banned some deepfakes last month, and Reddit updated its policy to ban all impersonation on the platform, which includes deepfakes.

How and when intentional use of altered videos might affect the 2020 US elections is anyone's guess, but as one expert told Vice, even though the Tiwari video was meant to be part of a positive effort, "the genie is out of the bottle now."

Read the original here:

An Indian politician used AI to translate his speech into other languages - The Verge

How AI detectives are cracking open the black box of deep learning – Science Magazine

By Paul Voosen, Jul. 6, 2017, 2:00 PM

Jason Yosinski sits in a small glass box at Uber's San Francisco, California, headquarters, pondering the mind of an artificial intelligence. An Uber research scientist, Yosinski is performing a kind of brain surgery on the AI running on his laptop. Like many of the AIs that will soon be powering so much of modern life, including self-driving Uber cars, Yosinski's program is a deep neural network, with an architecture loosely inspired by the brain. And like the brain, the program is hard to understand from the outside: It's a black box.

This particular AI has been trained, using a vast sum of labeled images, to recognize objects as random as zebras, fire trucks, and seat belts. Could it recognize Yosinski and the reporter hovering in front of the webcam? Yosinski zooms in on one of the AI's individual computational nodes, the neurons, so to speak, to see what is prompting its response. Two ghostly white ovals pop up and float on the screen. This neuron, it seems, has learned to detect the outlines of faces. "This responds to your face and my face," he says. "It responds to different size faces, different color faces."

No one trained this network to identify faces. Humans weren't labeled in its training images. Yet learn faces it did, perhaps as a way to recognize the things that tend to accompany them, such as ties and cowboy hats. The network is too complex for humans to comprehend its exact decisions. Yosinski's probe had illuminated one small part of it, but overall, it remained opaque. "We build amazing models," he says. "But we don't quite understand them. And every year, this gap is going to get a bit larger."

Each month, it seems, deep neural networks, or deep learning, as the field is also called, spread to another scientific discipline. They can predict the best way to synthesize organic molecules. They can detect genes related to autism risk. They are even changing how science itself is conducted. The AIs often succeed in what they do. But they have left scientists, whose very enterprise is founded on explanation, with a nagging question: Why, model, why?

That "interpretability problem," as it's known, is galvanizing a new generation of researchers in both industry and academia. Just as the microscope revealed the cell, these researchers are crafting tools that will allow insight into how neural networks make decisions. Some tools probe the AI without penetrating it; some are alternative algorithms that can compete with neural nets, but with more transparency; and some use still more deep learning to get inside the black box. Taken together, they add up to a new discipline. Yosinski calls it "AI neuroscience."

Loosely modeled after the brain, deep neural networks are spurring innovation across science. But the mechanics of the models are mysterious: They are black boxes. Scientists are now developing tools to get inside the mind of the machine.

GRAPHIC: G. GRULLÓN/SCIENCE

Marco Ribeiro, a graduate student at the University of Washington in Seattle, strives to understand the black box by using a class of AI neuroscience tools called counterfactual probes. The idea is to vary the inputs to the AI, be they text, images, or anything else, in clever ways to see which changes affect the output, and how. Take a neural network that, for example, ingests the words of movie reviews and flags those that are positive. Ribeiro's program, called Local Interpretable Model-Agnostic Explanations (LIME), would take a review flagged as positive and create subtle variations by deleting or replacing words. Those variants would then be run through the black box to see whether it still considered them to be positive. On the basis of thousands of tests, LIME can identify the words, or parts of an image or molecular structure, or any other kind of data, most important in the AI's original judgment. The tests might reveal that the word "horrible" was vital to a panning or that "Daniel Day-Lewis" led to a positive review. But although LIME can diagnose those singular examples, that result says little about the network's overall insight.
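The perturbation idea is easy to sketch. Below is an illustrative toy, not the real LIME library: the keyword-based sentiment_score stands in for a black-box neural net, and important_words deletes one word at a time to rank words by how much their removal lowers the score.

```python
def sentiment_score(words):
    """Toy black-box classifier: higher score = more positive review."""
    positive = {"brilliant", "moving", "masterful"}
    negative = {"dull", "horrible"}
    return sum(w in positive for w in words) - sum(w in negative for w in words)

def important_words(review):
    """Rank each word by the score drop caused by deleting it."""
    words = review.lower().split()
    base = sentiment_score(words)
    drops = {}
    for i, w in enumerate(words):
        variant = words[:i] + words[i + 1:]   # the review with one word removed
        drops[w] = base - sentiment_score(variant)
    return sorted(drops, key=drops.get, reverse=True)

# The word whose removal hurts the positive score most comes first.
print(important_words("a brilliant and moving film")[0])
```

Real LIME samples thousands of random perturbations and fits a simple local model to the results, but the core signal is the same: which pieces of the input the black box's judgment actually depends on.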

New counterfactual methods like LIME seem to emerge each month. But Mukund Sundararajan, another computer scientist at Google, devised a probe that doesn't require testing the network a thousand times over: a boon if you're trying to understand many decisions, not just a few. Instead of varying the input randomly, Sundararajan and his team introduce a blank reference, a black image or a zeroed-out array in place of text, and transition it step-by-step toward the example being tested. Running each step through the network, they watch the jumps it makes in certainty, and from that trajectory they infer features important to a prediction.

Sundararajan compares the process to picking out the key features that identify the glass-walled space he is sitting in, outfitted with the standard medley of mugs, tables, chairs, and computers, as a Google conference room. "I can give a zillion reasons. But say you slowly dim the lights. When the lights become very dim, only the biggest reasons stand out." Those transitions from a blank reference allow Sundararajan to capture more of the network's decisions than Ribeiro's variations do. But deeper, unanswered questions are always there, Sundararajan says, a state of mind familiar to him as a parent. "I have a 4-year-old who continually reminds me of the infinite regress of 'Why?'"
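Sundararajan's baseline-interpolation probe (published as integrated gradients) can be sketched in a few lines. The quadratic toy model and its analytic gradient below are stand-ins for a real network and backpropagation; only the structure of the method is faithful.

```python
import numpy as np

# Walk from an all-zero baseline (the "blank reference") toward the real
# input, average the model's gradient along that path, and scale by the
# input. The toy model f(x) = (w.x)^2 stands in for a network; w is an
# assumed set of weights and grad_f is its analytic gradient.
w = np.array([3.0, -1.0, 0.0])

def f(x):
    return float(np.dot(w, x) ** 2)

def grad_f(x):
    return 2.0 * np.dot(w, x) * w

def integrated_gradients(x, steps=200):
    baseline = np.zeros_like(x)
    alphas = (np.arange(steps) + 0.5) / steps       # midpoints of the path
    avg_grad = np.mean([grad_f(a * x) for a in alphas], axis=0)
    return (x - baseline) * avg_grad                # per-feature attribution

x = np.array([1.0, 2.0, 5.0])
attr = integrated_gradients(x)
# Completeness: attributions sum to f(x) - f(baseline).
print(attr, attr.sum(), f(x))
```

Note that the third feature gets zero attribution even though its input value is the largest: the path-integrated gradient, not the raw magnitude, decides what mattered.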

The urgency comes not just from science. According to a directive from the European Union, companies deploying algorithms that substantially influence the public must by next year create explanations for their models' internal logic. The Defense Advanced Research Projects Agency, the U.S. military's blue-sky research arm, is pouring $70 million into a new program, called Explainable AI, for interpreting the deep learning that powers drones and intelligence-mining operations. The drive to open the black box of AI is also coming from Silicon Valley itself, says Maya Gupta, a machine-learning researcher at Google in Mountain View, California. When she joined Google in 2012 and asked AI engineers about their problems, accuracy wasn't the only thing on their minds, she says. "I'm not sure what it's doing," they told her. "I'm not sure I can trust it."

Rich Caruana, a computer scientist at Microsoft Research in Redmond, Washington, knows that lack of trust firsthand. As a graduate student in the 1990s at Carnegie Mellon University in Pittsburgh, Pennsylvania, he joined a team trying to see whether machine learning could guide the treatment of pneumonia patients. In general, sending the hale and hearty home is best, so they can avoid picking up other infections in the hospital. But some patients, especially those with complicating factors such as asthma, should be admitted immediately. Caruana applied a neural network to a data set of symptoms and outcomes provided by 78 hospitals. It seemed to work well. But disturbingly, he saw that a simpler, transparent model trained on the same records suggested sending asthmatic patients home, indicating some flaw in the data. And he had no easy way of knowing whether his neural net had picked up the same bad lesson. "Fear of a neural net is completely justified," he says. "What really terrifies me is: What else did the neural net learn that's equally wrong?"

Today's neural nets are far more powerful than those Caruana used as a graduate student, but their essence is the same. At one end sits a messy soup of data, say, millions of pictures of dogs. Those data are sucked into a network with a dozen or more computational layers, in which neuron-like connections fire in response to features of the input data. Each layer reacts to progressively more abstract features, allowing the final layer to distinguish, say, terrier from dachshund.

At first the system will botch the job. But each result is compared with labeled pictures of dogs. In a process called backpropagation, the outcome is sent backward through the network, enabling it to reweight the triggers for each neuron. The process repeats millions of times until the network learns, somehow, to make fine distinctions among breeds. "Using modern horsepower and chutzpah, you can get these things to really sing," Caruana says. Yet that mysterious and flexible power is precisely what makes them black boxes.
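The loop described above, guess, compare with the labels, send the error backward, reweight, fits in a short NumPy sketch. The two-layer network, toy regression task, and hyperparameters here are illustrative choices, not from the article.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(64, 1))
y = X ** 2                               # the target the net must learn

# Small two-layer network with randomly initialized weights.
W1, b1 = rng.normal(0, 0.5, (1, 8)), np.zeros(8)
W2, b2 = rng.normal(0, 0.5, (8, 1)), np.zeros(1)

lr = 0.1
history = []
for step in range(500):
    h = np.tanh(X @ W1 + b1)             # forward pass: hidden layer
    pred = h @ W2 + b2                   # forward pass: output layer
    history.append(float(np.mean((pred - y) ** 2)))   # mean squared error
    # Backpropagation: push the error backward through each layer.
    d_pred = 2 * (pred - y) / len(X)
    d_W2, d_b2 = h.T @ d_pred, d_pred.sum(0)
    d_h = d_pred @ W2.T * (1 - h ** 2)   # tanh derivative
    d_W1, d_b1 = X.T @ d_h, d_h.sum(0)
    # Reweight every connection a little against its error signal.
    W1 -= lr * d_W1; b1 -= lr * d_b1
    W2 -= lr * d_W2; b2 -= lr * d_b2

print(f"loss: {history[0]:.4f} -> {history[-1]:.4f}")
```

Nothing in the loop says what the hidden units should represent; the error signal alone shapes them, which is exactly why the result is hard to interpret afterward.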

Gupta has a different tactic for coping with black boxes: She avoids them. Several years ago Gupta, who moonlights as a designer of intricate physical puzzles, began a project called GlassBox. Her goal is to tame neural networks by engineering predictability into them. Her guiding principle is monotonicitya relationship between variables in which, all else being equal, increasing one variable directly increases another, as with the square footage of a house and its price.

Gupta embeds those monotonic relationships in sprawling databases called interpolated lookup tables. In essence, they're like the tables in the back of a high school trigonometry textbook where you'd look up the sine of 0.5. But rather than dozens of entries across one dimension, her tables have millions across multiple dimensions. She wires those tables into neural networks, effectively adding an extra, predictable layer of computation, baked-in knowledge that she says will ultimately make the network more controllable.
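A one-dimensional sketch of such an interpolated lookup table: store values at a few keypoints, force them to be non-decreasing, and interpolate at query time. The keypoints and values below are invented for illustration; Gupta's real tables span millions of entries across multiple dimensions.

```python
import numpy as np

keypoints = np.array([500.0, 1000.0, 2000.0, 3000.0])   # e.g. square footage
raw_values = np.array([120.0, 180.0, 175.0, 400.0])     # learned, not yet monotone
values = np.maximum.accumulate(raw_values)              # enforce monotonicity

def predict(sq_ft):
    """Interpolate a price (in $1000s) from the monotone table."""
    return float(np.interp(sq_ft, keypoints, values))

# The guarantee the table buys: a bigger house never predicts a lower price.
assert predict(1500) <= predict(2500)
print(predict(1500), predict(2500))
```

The dip in the raw learned values (180 down to 175) is exactly the kind of unpredictable wiggle the monotonicity constraint irons out before the table is wired into the network.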

Caruana, meanwhile, has kept his pneumonia lesson in mind. To develop a model that would match deep learning in accuracy but avoid its opacity, he turned to a community that hasn't always gotten along with machine learning and its loosey-goosey ways: statisticians.

In the 1980s, statisticians pioneered a technique called a generalized additive model (GAM). It built on linear regression, a way to find a linear trend in a set of data. But GAMs can also handle trickier relationships by finding multiple operations that together can massage data to fit on a regression line: squaring a set of numbers while taking the logarithm for another group of variables, for example. Caruana has supercharged the process, using machine learning to discover those operations, which can then be used as a powerful pattern-detecting model. "To our great surprise, on many problems, this is very accurate," he says. And crucially, each operation's influence on the underlying data is transparent.
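The additive structure is what makes a GAM inspectable, and it can be sketched directly. The example below uses a small polynomial basis per feature (real GAMs typically use splines) and synthetic data; everything here is illustrative. The point is that each feature's fitted shape function can be examined on its own.

```python
import numpy as np

rng = np.random.default_rng(1)
x1, x2 = rng.uniform(-1, 1, 200), rng.uniform(-1, 1, 200)
y = np.sin(3 * x1) + x2 ** 2 + rng.normal(0, 0.05, 200)   # additive ground truth

def basis(x):
    """Polynomial basis expansion for one feature's shape function."""
    return np.column_stack([x, x ** 2, x ** 3, x ** 4])

# Stack an intercept plus both features' bases, then solve least squares.
X = np.column_stack([np.ones_like(x1), basis(x1), basis(x2)])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

shape1 = basis(x1) @ coef[1:5]    # f1(x1): a transparent per-feature effect
shape2 = basis(x2) @ coef[5:9]    # f2(x2)
pred = coef[0] + shape1 + shape2
print(f"residual std: {np.std(y - pred):.3f}")
```

Because the prediction is just f1(x1) + f2(x2) plus a constant, plotting shape1 against x1 shows exactly how that feature moves the output, the transparency a deep net lacks.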

Caruana's GAMs are not as good as AIs at handling certain types of messy data, such as images or sounds, on which some neural nets thrive. But for any data that would fit in the rows and columns of a spreadsheet, such as hospital records, the model can work well. For example, Caruana returned to his original pneumonia records. Reanalyzing them with one of his GAMs, he could see why the AI would have learned the wrong lesson from the admission data. Hospitals routinely put asthmatics with pneumonia in intensive care, improving their outcomes. Seeing only their rapid improvement, the AI would have recommended the patients be sent home. (It would have made the same optimistic error for pneumonia patients who also had chest pain and heart disease.)

Caruana has started touting the GAM approach to California hospitals, including Children's Hospital Los Angeles, where about a dozen doctors reviewed his model's results. They spent much of that meeting discussing what it told them about pneumonia admissions, immediately understanding its decisions. "You don't know much about health care," one doctor said, "but your model really does."

Sometimes, you have to embrace the darkness. That's the theory of researchers pursuing a third route toward interpretability. Instead of probing neural nets, or avoiding them, they say, the way to explain deep learning is simply to do more deep learning.


Like many AI coders, Mark Riedl, director of the Entertainment Intelligence Lab at the Georgia Institute of Technology in Atlanta, turns to 1980s video games to test his creations. One of his favorites is Frogger, in which the player navigates the eponymous amphibian through lanes of car traffic to an awaiting pond. Training a neural network to play expert Frogger is easy enough, but explaining what the AI is doing is even harder than usual.

Instead of probing that network, Riedl asked human subjects to play the game and to describe their tactics aloud in real time. Riedl recorded those comments alongside the frog's context in the game's code: "Oh, there's a car coming for me; I need to jump forward." Armed with those two languages, the player's and the code, Riedl trained a second neural net to translate between the two, from code to English. He then wired that translation network into his original game-playing network, producing an overall AI that would say, as it waited in a lane, "I'm waiting for a hole to open up before I move." The AI could even sound frustrated when pinned on the side of the screen, cursing and complaining, "Jeez, this is hard."

Riedl calls his approach "rationalization," which he designed to help everyday users understand the robots that will soon be helping around the house and driving our cars. "If we can't ask a question about why they do something and get a reasonable response back, people will just put it back on the shelf," Riedl says. But those explanations, however soothing, prompt another question, he adds: How wrong can the rationalizations be before people lose trust?

Back at Uber, Yosinski has been kicked out of his glass box. Uber's meeting rooms, named after cities, are in high demand, and there is no surge pricing to thin the crowd. He's out of Doha and off to find Montreal, Canada, unconscious pattern recognition processes guiding him through the office maze, until he gets lost. His image classifier also remains a maze, and, like Riedl, he has enlisted a second AI to help him understand the first one.

Researchers have created neural networks that, in addition to filling gaps left in photos, can identify flaws in an artificial intelligence.

PHOTOS: ANH NGUYEN

First, Yosinski rejiggered the classifier to produce images instead of labeling them. Then, he and his colleagues fed it colored static and sent a signal back through it to request, for example, "more volcano." Eventually, they assumed, the network would shape that noise into its idea of a volcano. And to an extent, it did: That volcano, to human eyes, just happened to look like a gray, featureless mass. The AI and people saw differently.

Next, the team unleashed a generative adversarial network (GAN) on its images. Such AIs contain two neural networks. From a training set of images, the generator learns rules about image-making and can create synthetic images. A second adversary network tries to detect whether the resulting pictures are real or fake, prompting the generator to try again. That back-and-forth eventually results in crude images that contain features that humans can recognize.
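The generator/discriminator back-and-forth can be sketched on one-dimensional numbers instead of images. Everything below is a deliberately tiny stand-in: the "generator" learns only the mean of its output distribution, the "discriminator" is a single logistic unit, and all constants are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))

REAL_MEAN = 4.0      # the "training set": samples from N(4, 1)
a, b = 1.0, 0.0      # discriminator parameters: D(x) = sigmoid(a*x + b)
mu = 0.0             # generator parameter: fake sample = mu + noise
lr = 0.05

for step in range(2000):
    real = rng.normal(REAL_MEAN, 1.0, 32)
    fake = mu + rng.normal(0.0, 1.0, 32)

    # Discriminator step: ascend log D(real) + log(1 - D(fake)).
    d_real, d_fake = sigmoid(a * real + b), sigmoid(a * fake + b)
    a += lr * (np.mean((1 - d_real) * real) - np.mean(d_fake * fake))
    b += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator step: ascend log D(fake), i.e. try to fool the critic.
    fake = mu + rng.normal(0.0, 1.0, 32)
    d_fake = sigmoid(a * fake + b)
    mu += lr * np.mean((1 - d_fake) * a)

# After the back-and-forth, the fakes should cluster near the real data.
print(f"generator mean after training: {mu:.2f} (real data mean {REAL_MEAN})")
```

Real GANs replace both scalar parameters with deep networks and both gradient lines with backpropagation, but the adversarial loop, critic improves, generator counter-improves, is the same.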

Yosinski and Anh Nguyen, his former intern, connected the GAN to layers inside their original classifier network. This time, when told to create "more volcano," the GAN took the gray mush that the classifier learned and, with its own knowledge of picture structure, decoded it into a vast array of synthetic, realistic-looking volcanoes. Some dormant. Some erupting. Some at night. Some by day. And some, perhaps, with flaws, which would be clues to the classifier's knowledge gaps.

Their GAN can now be lashed to any network that uses images. Yosinski has already used it to identify problems in a network trained to write captions for random images. He reversed the network so that it can create synthetic images for any random caption input. After connecting it to the GAN, he found a startling omission. Prompted to imagine a bird sitting on a branch, the network, using instructions translated by the GAN, generated a bucolic facsimile of a tree and branch, but with no bird. Why? After feeding altered images into the original caption model, he realized that the caption writers who trained it never described trees and a branch without involving a bird. The AI had learned the wrong lessons about what makes a bird. "This hints at what will be an important direction in AI neuroscience," Yosinski says. It was a start, a bit of a blank map shaded in.

The day was winding down, but Yosinski's work seemed to be just beginning. Another knock on the door. Yosinski and his AI were kicked out of another glass-box conference room, back into Uber's maze of cities, computers, and humans. He didn't get lost this time. He wove his way past the food bar, around the plush couches, and through the exit to the elevators. It was an easy pattern. He'd learn them all soon.

See the original post here:

How AI detectives are cracking open the black box of deep learning - Science Magazine

Google’s AI Learns Betrayal and "Aggressive" Actions Pay Off | Big … – Big Think

As the development of artificial intelligence continues at breakneck speed, questions about whether we understand what we are getting ourselves into persist. One fear is that increasingly intelligent robots will take all our jobs. Another fear is that we will create a world where a superintelligence will one day decide that it has no need for humans. This fear is well-explored in popular culture, through books and films like the Terminator series.

Another possibility is maybe the one that makes the most sense: since humans are the ones creating them, machines and machine intelligences are likely to behave just like humans, for better or worse. DeepMind, Google's cutting-edge AI company, has shown just that.

The accomplishments of the DeepMind program so far include learning from its memory, mimicking human voices, writing music, and beating the best Go player in the world.

Recently, the DeepMind team ran a series of tests to investigate how the AI would respond when faced with certain social dilemmas. In particular, they wanted to find out whether the AI is more likely to cooperate or compete.

One of the tests involved 40 million instances of playing the computer game Gathering, during which DeepMind showed how far it's willing to go to get what it wants. The game was chosen because it encapsulates aspects of the classic Prisoner's Dilemma from game theory.

Pitting AI-controlled characters (called agents) against each other, DeepMind had them compete to gather the most virtual apples. Once the amount of available apples got low, the AI agents started to display "highly aggressive" tactics, employing laser beams to knock each other out. They would also steal the opponent's apples.

Here's how one of those games played out:

The DeepMind AI agents are in blue and red. The apples are green, while the laser beams are yellow.

The DeepMind team described their test in a blog post this way:

We let the agents play this game many thousands of times and let them learn how to behave rationally using deep multi-agent reinforcement learning. Rather naturally, when there are enough apples in the environment, the agents learn to peacefully coexist and collect as many apples as they can. However, as the number of apples is reduced, the agents learn that it may be better for them to tag the other agent to give themselves time on their own to collect the scarce apples.

Interestingly, what appears to have happened is that the AI systems began to develop some forms of human behavior.

This model... shows that some aspects of human-like behaviour emerge as a product of the environment and learning. Less aggressive policies emerge from learning in relatively abundant environments with less possibility for costly action. The greed motivation reflects the temptation to take out a rival and collect all the apples oneself, said Joel Z. Leibo from the DeepMind team to Wired.
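The scarcity-dependent incentive the researchers describe can be sketched as a toy payoff calculation. Everything here, the collection capacity, the tagging cost, and the payoff rules, is a hypothetical illustration, not DeepMind's actual environment or reward function:

```python
# Toy payoff model of the Gathering dilemma (illustrative numbers only).

def best_action(apples, capacity=10.0, tag_cost=2.0):
    """Return the higher-payoff action for one agent in a single round.

    capacity -- max apples an agent can collect per round
    tag_cost -- collection time lost while firing the tagging beam
    """
    # Both agents gather peacefully: the apples are split evenly, but
    # each agent can never collect more than its own capacity.
    peaceful = min(capacity, apples / 2)
    # Tag the rival: lose some collection time, but keep the remaining
    # apples to yourself while the rival is knocked out.
    aggressive = min(capacity - tag_cost, apples)
    return "tag" if aggressive > peaceful else "gather"

# Abundant apples: splitting costs nothing, so aggression doesn't pay.
print(best_action(30.0))  # -> gather
# Scarce apples: removing the rival doubles your share, so tagging wins.
print(best_action(6.0))   # -> tag
```

With abundant apples an agent already collects at full capacity, so time spent tagging is pure loss; once apples are scarce, removing the rival roughly doubles an agent's share, and aggression becomes the rational policy.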

Besides the fruit gathering, the AI was also tested via a Wolfpack hunting game. In it, two AI characters in the form of wolves chased a third AI agent - the prey. Here the researchers wanted to see if the AI characters would choose to cooperate to get the prey because they were rewarded for appearing near the prey together when it was being captured.

"The idea is that the prey is dangerous - a lone wolf can overcome it, but is at risk of losing the carcass to scavengers. However, when the two wolves capture the prey together, they can better protect the carcass from scavengers, and hence receive a higher reward," wrote the researchers in their paper.
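Under the incentive structure the researchers describe, cooperation can dominate even for a purely self-interested agent. A minimal expected-value sketch, with entirely hypothetical numbers for the carcass value, scavenger risk, and protection bonus:

```python
# Toy expected-reward comparison for the Wolfpack game (numbers are
# illustrative assumptions, not values from the DeepMind paper).

def per_wolf_reward(wolves_at_capture, carcass=10.0, scavenger_risk=0.6,
                    protected_bonus=1.5):
    if wolves_at_capture == 1:
        # A lone wolf keeps the whole carcass but risks losing it.
        return (1.0 - scavenger_risk) * carcass
    # Cooperating wolves split a better-protected, so effectively more
    # valuable, capture between them.
    return protected_bonus * carcass / wolves_at_capture

lone = per_wolf_reward(1)  # 0.4 * 10 = 4.0 expected per wolf
pair = per_wolf_reward(2)  # 1.5 * 10 / 2 = 7.5 expected per wolf
print(pair > lone)         # cooperation pays
```

So long as the protection bonus outweighs the cost of sharing, each wolf individually earns more by hunting together, which is exactly the incentive the test rewarded.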

Indeed, the incentivized cooperation strategy won out in this instance, with the AI choosing to work together.

This is how that test panned out:

The wolves are red, chasing the blue dot (prey), while avoiding grey obstacles.

If you are thinking Skynet is here, perhaps the silver lining is that the second test shows how AI's self-interest can include cooperation rather than the all-out competitiveness of the first test. Unless, of course, it's cooperation to hunt down humans.

Here's a chart of the game-test results, showing a clear increase in aggression during "Gathering":

Movies aside, the researchers are working to figure out how AI can eventually control complex multi-agent systems such as the economy, traffic systems, or the ecological health of our planet, all of which depend on our continued cooperation.

One near-term AI application where this could be relevant is self-driving cars, which will have to choose the safest routes while keeping the objectives of all the parties involved under consideration.

The warning from the tests is that if the objectives are not balanced out in the programming, the AI might act selfishly, probably not for everyone's benefit.

What's next for the DeepMind team? Joel Leibo wants the AI to go deeper into the motivations behind decision-making:

Going forward it would be interesting to equip agents with the ability to reason about other agents beliefs and goals, said Leibo to Bloomberg.

See the article here:

Google's AI Learns Betrayal and "Aggressive" Actions Pay Off - Big Think

To achieve AI-based fully automated driving: R&D project on elemental technologies at DENSO – Automotive World

On June 9, 2020, DENSO Tech Links Tokyo #7, an event organized by DENSO Corporation, was held as a webinar. The theme was Human Drivers and AI: Automated Driving from the Viewpoint of Human Characteristics. DENSO employees who play a key role in advanced technology talked about the development of automated driving technology and research on AI, taking human characteristics into account, to realize new mobility. The next speaker was Naoki Ito, General Manager of the Applied AI R&I Dept., AI R&I Div., Advanced Research and Innovation Center. He introduced an R&D project using tracking, free-viewpoint image synthesis, and a DNN accelerator to achieve AI-based automated driving.

Speaker: Naoki Ito, General Manager of the Applied AI R&I Dept., AI R&I Div., Advanced Research and Innovation Center

Four Automated Driving Levels and Future Mission

Naoki Ito: I will talk about DENSO's initiatives in automated driving from the viewpoint of R&D on AI.

The theme of my presentation is DENSO's initiatives on advanced driver assistance systems (ADAS) and automated driving (AD) systems. These will become a global trend in the future.

The vertical axis shows the target vehicles, including passenger cars or so-called privately owned vehicles, commercial vehicles such as trucks, and shared and service cars including taxis and small buses. The horizontal axis shows the AD/ADAS levels. The level increases from left to right.

Automated driving is categorized into four levels (Level 1 to 4). Active safety, which is shown on the left, and about half of the ADAS/AD functions fall under Level 1 or 2. On Levels 1 and 2, the driver is basically responsible for driving, and the system supports the driver. On Levels 3 and 4, the system performs the driving task. On Level 3, the driver must take over driving when the system cannot handle it.

On Level 4, the fully automated driving system achieves driverless driving of small vehicles. Automated valet parking will also be required, though this may fall outside the definition of automated driving.

At present, Level 1 and Level 2 active safety technologies, including collision avoidance braking, adaptive cruise control (ACC), and lane keep assist (LKA) on expressways, are spreading.

Level 2 and Level 3 technologies will need to be developed for arterial and general public roads, and Level 4 technologies for service cars in operational design domains (ODDs). Our department serves as an R&D team, so our mission is to develop elemental technologies for Level 3 and Level 4 and contribute to the company's business.

Current Issues in Automated Driving

In research on automated driving, we must address various difficult issues. For example, the sensing performance must be maintained to cope with the backlight at the exit of a tunnel, as well as very poor weather conditions such as heavy rain and dense fog.

Even if the sensing performance is maintained, objects must be detected by the system. Ordinary traffic participants include vehicles, pedestrians, and bicycles. Sometimes there may be unfamiliar objects on the road, and they must be detected. Collision with animals that may enter the road must also be avoided. If a road which was open yesterday is under construction today, it is necessary to make a detour. There are many difficult situations to cope with.

Human drivers can correctly understand the situation based on appropriate judgment, but this is a very difficult problem for fully automated driving.

As I explained, automated driving systems are expected to evolve gradually from simple systems to more complicated systems. Applications will spread from expressways, where it is relatively easy to achieve automated driving, to arterial and general public roads.

There are various challenging issues from the viewpoint of R&D, so this is an exciting field of research.

What Can Be Achieved by Using AI

AI is one solution to address these challenging issues. Here, AI refers to deep learning and machine learning technologies. We have been working to apply AI technologies to vehicles.

You may be familiar with the above figure. Image recognition technologies undergo benchmarking each year. The lower the point on the graph, the higher the accuracy. The recognition accuracy used to be improved only by a few percentage points annually, but deep learning technology, which is one type of AI technology, improved the accuracy by nearly 10%. Deep learning and machine learning technologies have attracted much public attention and many researchers.

Today, the accuracy of image recognition, which is one of the specific tasks, is higher than that of recognition by humans.

We have been working to apply deep learning and machine learning technologies to ADAS/AD.

The figure below is a simple block diagram for automated driving. Information obtained from a vehicle's various sensors is used to recognize the surrounding environment and understand the scene. The recognition block outputs the results of recognizing objects, and the trajectories of those objects can then be predicted. The vehicle's behavior and trajectory are then planned using the prediction results to ensure proper control of the steering, braking, and acceleration.
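The block-diagram flow above, sensing to recognition to prediction to planning, can be sketched as a chain of simple stages. This is an illustrative toy pipeline, not DENSO's implementation; the 1-D geometry, thresholds, and type names are all assumptions:

```python
# Toy sense -> recognize -> predict -> plan pipeline (illustrative only;
# a real AD stack uses sensor fusion, DNN recognition, and motion planning).

from dataclasses import dataclass

@dataclass
class Detection:
    kind: str        # e.g. "pedestrian", "vehicle"
    position: float  # distance ahead in metres (1-D for simplicity)
    speed: float     # metres/second toward our vehicle

def recognize(sensor_frame):
    # Stand-in for the recognition block: turns raw frames into objects.
    return [Detection(**obj) for obj in sensor_frame]

def predict(detections, horizon_s=2.0):
    # Constant-velocity trajectory prediction for each detected object.
    return [(d, d.position - d.speed * horizon_s) for d in detections]

def plan(predictions, safety_margin_m=10.0):
    # Brake if any predicted position enters the safety margin.
    if any(future_pos < safety_margin_m for _, future_pos in predictions):
        return "brake"
    return "cruise"

frame = [{"kind": "pedestrian", "position": 18.0, "speed": 5.0}]
print(plan(predict(recognize(frame))))  # -> brake (18 - 5*2 = 8 m < 10 m)
```

Each stage consumes only the previous stage's output, which is what lets the recognition, prediction, and planning blocks in the diagram be researched and swapped independently.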

We are attempting to use deep learning and other AI technologies to predict trajectories and plan paths, in addition to recognition.

DENSO's Initiatives to Apply AI to Vehicles

This figure outlines DENSO's approach to applying AI to vehicles.

Algorithms are the main focus of R&D on AI. To improve the accuracy of AI algorithms, a large amount of data will be required, but the larger the amount of data, the more time the learning process takes. We will need technologies to efficiently use computers for the learning process.

R&D on algorithms and computer technologies is actively being conducted around the world, so I do not believe that DENSO is the global leader in these technologies.

As various technologies and papers are presented around the world, we try to identify technologies that may be used for vehicles, apply them to vehicle systems, and determine their effectiveness. As described here, we are committed to developing technologies to achieve speedy implementation to systems.

Even if there is an algorithm which seems to have a high recognition performance, it may not work in real time in a vehicle system. We develop technologies to quickly attain real-time performance.

These three technologies alone are not enough to apply AI to vehicles. Specifically, the calculation resources in a vehicle are limited. Embedded technologies and semiconductor technologies are also required to operate AI properly.

Quality is a very important factor in vehicle production, so it is essential to assure the quality of AI.

Regarding the bottom two items of the pentagon, we have well-established embedded technologies as well as quality assurance technologies and expertise. It is essential to harness these capabilities and work on the five elemental technologies in a comprehensive manner in order to apply AI to vehicles.

Now, I would like to briefly introduce the development process of respective elemental technologies.

Tracking Algorithms

First, I will talk about algorithms, which are the main topic. The block diagram that I showed earlier is indicated (in the upper part of the slide). First, lets take a look at the recognition process.

Please click here to view the full press release

SOURCE: DENSO

Continued here:

To achieve AI-based fully automated driving: R&D project on elemental technologies at DENSO - Automotive World

Netflix CEO: Our future audience may be AI lifeforms – USA TODAY

Founder and CEO of Netflix Reed Hastings smiles during a keynote at the Mobile World Congress in Barcelona, Spain, Monday, Feb. 27, 2017.(Photo: AP Photo/Manu Fernandez)

Twenty to fifty years from now, when you're starting to get into some serious AI, Reed Hastings isn't sure whether Netflix is going to be entertaining you or entertaining the artificially intelligent bots.

The Netflix founder and CEO opined on the future during an interview on stage at Mobile World Congress here.

What's amazing about technology is it's really hard to predict, he says. What we do is try to learn and adapt rather than try to commit to one particular view of what's going to happen. And if virtual reality takes off we'll adapt to that, if it becomes contact lenses that have amazing powers we'll adapt to that.

Hastings' appearance at the mobile industry's signature trade shindig was largely focused on Netflix's experience globally. Last January, Netflix expanded to 130 countries and is now just about everywhere, with one big exception: China. About half of Netflix's nearly 100 million streaming members are international.

MORE:

Robots will outnumber humans in 30 years, Softbank says

Nokia 3310 relaunched with classic game Snake

Amazon, Netflix elbow into Oscars with 4 wins

The company has been on a roll lately. In January, Netflix (NFLX) easily beat Wall Street earnings forecasts, boosted by international subscribers. The stock is trading at $143.41, not far off its 52-week high.

Hastings discussed the appeal behind the companys international push.

If you're a filmmaker in Spain or Italy you're excited about Netflix because it can give global reach for your film. Hastings said he's seen strong mobile usage throughout Africa, the Middle East, and Asia.

Indeed, Hastings doesn't think there's anything uniquely American about the Netflix viewer. I don't know if I would call it an American experience. It's on a mobile phone, it's on a Samsung TV. Fundamentally, the Internet is the most global medium we've ever seen and we're trying to continue to learn how to do things well on the Internet.

Hastings says Netflix is investing heavily around the world on network servers and improving the connection. Five or ten years from now the quality of Netflix on all of your devices will be just incredible and we don't know exactly what that is. But we know that the Internet is allowing new experiences to get created.

Meantime, Netflix is offering more content with a distinctly international flavor.


For example, a Spanish show called Cable Girls is a 1920s period-piece about the original women hired to work telephone switchboards in Madrid. The sci-fi series 3% is produced out of Brazil.

We want to give global producers a global audience to entertain, Hastings said.

Netflix spent the first couple of years delivering mostly Hollywood content. Now the company is developing relationships with producers in Turkey, Korea, Japan and elsewhere.

The most surprising thing has been the tastes of people around the world, Hastings says. The story of the Internet is connecting people everywhere, and the role that we play in it is around stories of all types.

And he believes binge viewing appeals everywhere. The original binge view was the novel that you stayed up late to read or read on the beach at your leisure.

The Internet has brought back binge viewing to human beings and its just a much better way than watching a show every week. And youre going to see most linear networks convert to binge viewing, and thats very exciting.

What's unknown right now is whether the audience binge watching many decades from now will be mostly robot or human.

Email: ebaig@usatoday.com; Follow USA TODAY Personal Tech Columnist @edbaig on Twitter

Read or Share this story: http://usat.ly/2lNCPuP

Read more here:

Netflix CEO: Our future audience may be AI lifeforms - USA TODAY

Top 4 AI Companies With Over US$100m in Funding During 2016 – The Merkle

The artificial intelligence industry has attracted a lot of investment over the past few years. 2016 has been a rather spectacular year in this regard, with multiple mega-deals coming to fruition between January and December. It is evident there is a growing interest in this sector from venture capitalists and other investors, as it is one area where a lot of progress can be made with the right amount of funding. Below is a list of the AI companies that secured the most funding as of 2016.

Although the company name may not necessarily suggest a presence in artificial intelligence, VCs look at Flatiron Health as one of the unicorns in this industry right now. The company focuses on bringing artificial intelligence solutions to cancer care providers and life science companies. Through a web-based platform, these AI solutions can be put to good use in the healthcare sector.

The year 2016 has proven to be quite successful for Flatiron Health, as the company has raised a total of US$313m as of January 6, 2016. That is a lot more than some of its closest competitors, indicating Flatiron Health has something investors seem to have taken a liking to. It will be interesting to see what this company can bring to the AI sector over the next few years, as that large amount of funding must be put to good use.

It does not happen often that an AI company can successfully complete a large round of private funding. StackPath is one of the few exceptions in this regard, as they raised US$180m during 2016. Their suite of secure web services gathers information through a machine learning engine that becomes more threat-aware over time. Moreover, the software solution is capable of communicating real-time threats. Considering how the number of cyber security threats continues to grow, it makes a lot of sense to combat these issues with artificial intelligence.

We have briefly touched upon Zoox in an earlier article, as it is one of the hottest AI startups to keep an eye on right now. The objective of bringing artificial intelligence to the auto industry has attracted a lot of attention. Venture capitalists also see a lot of merit in this company, which explains how they raised US$200m in Series A funding during 2016. The year 2017 could be a defining year for Zoox, as they will have to keep innovating and impressing investors moving forward.

It is impossible to discuss AI startup funding in 2016 and not mention Gett. This Israeli ride-hailing application will compete with existing solutions, including Uber and Lyft. The company does things differently, though, by using AI algorithms to assist in on-demand car deployment. Moreover, Gett's software solution can also be used to power autonomous vehicles in the future.

One has to admit Gett has been doing some impressive work throughout 2016. To date, the company has raised US$613m from investors all over the world, which is quite a high number. In comparison, Lyft has raised US$2bn to date, although that company has been around for a few more years. Gett is available in 60 cities all over the world right now, and powers over half of the black cabs in London, England.

If you liked this article, follow us on Twitter @themerklenews and make sure to subscribe to our newsletter to receive the latest bitcoin, cryptocurrency, and technology news.

Original post:

Top 4 AI Companies With Over US$100m in Funding During 2016 - The Merkle

How The Business World Effects AI – AI Daily

As society grapples with how technology is changing the way we work and live, AI has been dominating the headlines for years.

The 2018 AI Index Report has found increased interest in the topic worldwide, including in occupations that need deep learning skills. But is the hype about AI's immediate potential exaggerated, or is AI becoming more important to the economy?

The global impact of AI is going to be profound, and to some extent already is, but there's far more development ahead. The McKinsey Global Institute recently analysed data from the US Bureau of the Census and the World Economic Forum and reported that AI has the potential to add 16% (about $13 trillion) to the worldwide economy by 2030. According to a recent report by the International Monetary Fund (IMF), it could also boost global gross domestic product (GDP) by 26%.

McKinsey has previously reported that in the same year, a minimum of 70% of companies were on the verge of adopting AI technologies such as machine learning. AI enables large amounts of data to be processed and integrated into business processes. Big data in AI helps companies gain access to more information about their customers, their customers' requirements, and their own business models, so as to gain an advantage over their competitors.

Today, the digital age has succeeded in generating huge amounts of data, with the quantity of data worldwide increasing to 44 zettabytes by 2020. Even before the introduction of big data, business intelligence had become a legitimate career option.

But big data and AI are already having a big impact on the company front, and not just in data science.

According to a survey, 97% of executives confirm that their companies are now investing not only in AI but also in big data. As algorithm-driven AI continues to spread, adoption keeps climbing: around 51% of companies used AI in 2017, rising to 70% in 2018.

Read more from the original source:

How The Business World Effects AI - AI Daily

Microsoft will counter cyberattacks on Windows 10 with AI from Hexadite – TechRepublic

Image: iStockphoto.com/Bannosuke

There is really no way to sugarcoat the situation. A war is going on between enterprises to whom you have entrusted your data and a determined criminal element intent on stealing it. The weapons used in this battle for data often act indiscriminately, leaving civilian casualties in their wake. It is ugly and messy and not likely to end in the foreseeable future.

This is why large software producers like Microsoft have been spending extraordinary resources on counter-weaponry, particularly when it comes to operating systems. The only way to combat the clever and ever-evolving cyberattacks is to match them with similar innovation. Microsoft's acquisition of Hexadite is the company's latest attempt to instill innovation and get in front of the next enterprise-level malware attack.

One of the most effective ways for criminals to breach enterprise security is with automation. An automated attack made with bots running on devices illegally controlled by cybercriminals can overwhelm even the most efficient and best staffed security response teams. It seems only practical that the best way to fight automation is with automation.

Hexadite has developed an agentless intelligent security orchestration and automation platform that enables enterprises to go from detecting a security breach or threat to remediating that threat in minutes. The platform uses artificial intelligence and automation to recognize a problem and then fix it without having to wait for the IT response team.

The Hexadite platform helps alleviate what has become a major concern for enterprises in their war against malicious cybercriminals: time. The number and frequency of attacks on major organizations often overwhelm the people assigned to combat them. An automated AI platform can read and counter common attacks, freeing IT security personnel to deal with other security threats and patch vulnerabilities.
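The detect-then-remediate loop described here can be sketched as alert triage against a playbook table: known attack patterns get an automated response, everything else escalates to a human. This is a generic illustration of the idea only, not Hexadite's actual product or API; all alert types and actions are hypothetical:

```python
# Toy automated-remediation triage (hypothetical alert types and actions).

PLAYBOOKS = {
    "known_malware_hash": "quarantine_file",
    "phishing_url_click": "reset_credentials",
    "bot_login_burst": "block_source_ip",
}

def triage(alerts):
    """Auto-remediate alerts with a known playbook; escalate the rest."""
    remediated, escalated = [], []
    for alert in alerts:
        action = PLAYBOOKS.get(alert["type"])
        if action:
            remediated.append((alert["id"], action))  # handled automatically
        else:
            escalated.append(alert["id"])             # needs a human analyst
    return remediated, escalated

alerts = [
    {"id": 1, "type": "known_malware_hash"},
    {"id": 2, "type": "novel_lateral_movement"},
]
done, for_humans = triage(alerts)
print(done)        # [(1, 'quarantine_file')]
print(for_humans)  # [2]
```

The point of the split is the time saved: common, well-understood attacks never reach the response queue, so analysts only see the novel cases.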

Microsoft plans to incorporate Hexadite's platform into the Windows Defender Advanced Threat Protection (WDATP) system. Enterprises use the WDATP protocol to actively detect and remediate cyber threats to their networks running Windows 10 devices. Adding an automated AI platform should make WDATP that much more effective and, most important, timely.

In the battle over access to data, the enterprise, in general, is losing to the criminal element. In many cases, enterprise IT personnel are outgunned and overwhelmed by the weapons used by the opposing side. The only way to get the upper hand is by winning what has become, for all intents and purposes, an arms race.

By acquiring Hexadite and incorporating its innovative AI platform into existing Windows 10 security systems, Microsoft is looking to get a leg up on the cybercriminals and thwart their next attack. Automating an effective response to an automated attack is the only way enterprises can keep up with, and potentially get ahead of, the criminal element hellbent on getting access to their data. This is just the world we do business in, sad as that may be.

Have you and your team been overwhelmed by a cyberattack? Would automation and AI help? Share your thoughts and opinions with your peers at TechRepublic in the discussion thread below.

Read this article:

Microsoft will counter cyberattacks on Windows 10 with AI from Hexadite - TechRepublic

Ai Weiwei – Wikipedia

Ai Weiwei (Chinese: 艾未未; pinyin: Ài Wèiwèi; born 28 August 1957 in Beijing) is a Chinese contemporary artist and activist. His father's (Ai Qing) original surname was Jiang.[1][2][3] Ai collaborated with Swiss architects Herzog & de Meuron as the artistic consultant on the Beijing National Stadium for the 2008 Olympics.[4] As a political activist, he has been highly and openly critical of the Chinese government's stance on democracy and human rights. He has investigated government corruption and cover-ups, in particular the Sichuan schools corruption scandal following the collapse of so-called "tofu-dreg schools" in the 2008 Sichuan earthquake.[5] In 2011, following his arrest at Beijing Capital International Airport on 3 April, he was held for 81 days without any official charges being filed; officials alluded to their allegations of "economic crimes".[6]

Ai's father was the Chinese poet Ai Qing,[7] who was denounced during the Anti-Rightist Movement. In 1958, the family was sent to a labour camp in Beidahuang, Heilongjiang, when Ai was one year old. They were subsequently exiled to Shihezi, Xinjiang in 1961, where they lived for 16 years. Upon Mao Zedong's death and the end of the Cultural Revolution, the family returned to Beijing in 1976.[8]

In 1978, Ai enrolled in the Beijing Film Academy and studied animation.[9] In 1978, he was one of the founders of the early avant garde art group the "Stars", together with Ma Desheng, Wang Keping, Huang Rui, Li Shuang, Ah Cheng and Qu Leilei. The group disbanded in 1983,[10] yet Ai participated in regular Stars group shows, The Stars: Ten Years, 1989 (Hanart Gallery, Hong Kong and Taipei), and a retrospective exhibition in Beijing in 2007: Origin Point (Today Art Museum, Beijing).[citation needed]

From 1981 to 1993, he lived in the United States. For the first few years, Ai lived in Philadelphia and San Francisco. He studied English at the University of Pennsylvania and the University of California, Berkeley.[12] Later, he moved to New York City.[10] He studied briefly at Parsons School of Design.[13] Ai attended the Art Students League of New York from 1983 to 1986, where he studied with Bruce Dorfman, Knox Martin and Richard Pousette-Dart.[14] He later dropped out of school, and made a living out of drawing street portraits and working odd jobs. During this period, he gained exposure to the works of Marcel Duchamp, Andy Warhol, and Jasper Johns, and began creating conceptual art by altering readymade objects.

Ai befriended beat poet Allen Ginsberg while living in New York, following a chance meeting at a poetry reading where Ginsberg read out several poems about China. Ginsberg had travelled to China and met with Ai's father, the noted poet Ai Qing, and consequently Ginsberg and Ai became friends.[15]

When he was living in the East Village (from 1983 to 1993), Ai carried a camera with him all the time and would take pictures of his surroundings wherever he was. The resulting collection of photos was later selected and is now known as the New York Photographs.[16]

At the same time, Ai became fascinated by blackjack card games and frequented Atlantic City casinos. He is still regarded in gambling circles as a top tier professional blackjack player according to an article published on blackjackchamp.com.[17][18][19]

In 1993, Ai returned to China after his father became ill.[20] He helped establish the experimental artists' Beijing East Village and co-published a series of three books about this new generation of artists with Chinese curator Feng Boyi: Black Cover Book (1994), White Cover Book (1995), and Gray Cover Book (1997).[21]

In 1999, Ai moved to Caochangdi, in the northeast of Beijing, and built a studio house, his first architectural project. Due to his interest in architecture, he founded the architecture studio FAKE Design in 2003.[22] In 2000, he co-curated the art exhibition Fuck Off with curator Feng Boyi in Shanghai, China.[23]

Ai is married to artist Lu Qing,[24] and has a son from an extramarital relationship.[25]

In 2005, Ai was invited to start blogging by Sina Weibo, the biggest internet platform in China. He posted his first blog on 19 November. For four years, he "turned out a steady stream of scathing social commentary, criticism of government policy, thoughts on art and architecture, and autobiographical writings."[26] The blog was shut down by Sina on 28 May 2009. Ai then turned to Twitter and wrote prolifically on the platform, claiming to spend at least eight hours online every day. He wrote almost exclusively in Chinese using the account @aiww.[citation needed] As of 31 December 2013, Ai has declared that he would stop tweeting, but the account remains active in the form of retweets and Instagram posts.[27]

Ai supported the Amnesty International petition for Iranian filmmaker Hossein Rajabian and his brother, musician Mehdi Rajabian, and released the news on his Twitter pages.[28]

Ten days after the 8.0-magnitude earthquake in Sichuan province on 12 May 2008, Ai led a team to survey and film the post-quake conditions in various disaster zones. In response to the government's lack of transparency in revealing names of students who perished in the earthquake due to substandard school campus constructions, Ai recruited volunteers online and launched a "Citizens' Investigation" to compile names and information of the student victims. On 20 March 2009, he posted a blog titled "Citizens' Investigation" and wrote: "To remember the departed, to show concern for life, to take responsibility, and for the potential happiness of the survivors, we are initiating a "Citizens' Investigation." We will seek out the names of each departed child, and we will remember them."[29]

As of 14 April 2009, the list had accumulated 5,385 names.[30] Ai published the collected names as well as numerous articles documenting the investigation on his blog which was shut down by Chinese authorities in May 2009.[31] He also posted his list of names of schoolchildren who died on the wall of his office at FAKE Design in Beijing.[32]

Ai suffered headaches and claimed he had difficulty concentrating on his work since returning from Chengdu in August 2009, where he was beaten by the police for trying to testify for Tan Zuoren, a fellow investigator of the shoddy construction and student casualties in the earthquake. On 14 September 2009, Ai was diagnosed to be suffering internal bleeding in a hospital in Munich, Germany, and the doctor arranged for emergency brain surgery.[33] The cerebral hemorrhage is believed to be linked to the police attack.[34][35]

According to the Financial Times, in an attempt to force Ai to leave the country, two accounts used by him had been hacked in a sophisticated attack on Google in China dubbed Operation Aurora, their contents read and copied; his bank accounts were investigated by state security agents who claimed he was under investigation for "unspecified suspected crimes".[36]

In November 2010, Ai was placed under house arrest by the Chinese police. He said this was done to prevent a planned party marking the demolition of his newly built Shanghai studio.[37]

The building had been designed and built by Ai at the encouragement and persuasion of a "high official [from Shanghai]", as part of a new cultural area designated by the Shanghai municipal authorities; Ai intended to use it as a studio and to teach architecture courses there. Ai was later accused of erecting the structure without the necessary planning permission, and a demolition notice was issued, even though, Ai said, officials had been extremely enthusiastic and the entire application and planning process had been "under government supervision". According to Ai, a number of artists were invited to build new studios in the area because officials wanted to create a cultural district.[38]

On 3 November 2010, Ai said the government had informed him two months earlier that the newly completed studio would be knocked down because it was illegal. Ai complained that this was unfair, as he was "the only one singled out to have my studio destroyed". The Guardian reported Ai as saying that Shanghai municipal authorities were "frustrated" by his documentaries on subjects they considered sensitive:[38] two of the better-known ones featured Shanghai resident Feng Zhenghu, who lived in forced exile for three months in Narita Airport, Tokyo; another well-known documentary focused on Yang Jia, who murdered six Shanghai police officers.[39]

In the end, the party took place without Ai's presence; his supporters feasted on river crab, a Chinese homophone for "harmony" used to mock official censorship. Ai was released from house arrest the next day.[40]

Like other activists and intellectuals, Ai was prevented from leaving China in late 2010. Ai suggested that the authorities wanted to prevent him from attending the ceremony in December 2010 to award the 2010 Nobel Peace Prize to fellow dissident Liu Xiaobo.[41] Ai said that he had not been invited to the ceremony, and was attempting to travel to South Korea for a meeting when he was told that he could not leave for reasons of national security.[42]

On the evening of 11 January 2011, Ai's studio was demolished in a surprise move by the local government.[43][44]

On 3 April 2011, Ai was arrested at Beijing Capital International Airport just before catching a flight to Hong Kong, and his studio facilities were searched.[45] A police contingent of approximately 50 officers came to his studio, threw a cordon around it and searched the premises. They took away laptops and the hard drive from the main computer; along with Ai, police also detained eight staff members and Ai's wife, Lu Qing. Police also visited the mother of Ai's two-year-old son.[46] While state media originally reported on 6 April that Ai was arrested at the airport because "his departure procedures were incomplete,"[47] the Chinese Ministry of Foreign Affairs said on 7 April that Ai was under investigation for alleged economic crimes.[48] Then, on 8 April, police returned to Ai's workshop to examine his financial affairs.[49] On 9 April, Ai's accountant, as well as studio partner Liu Zhenggang and driver Zhang Jingsong, disappeared,[50] while Ai's assistant Wen Tao had been missing since Ai's arrest on 3 April.[51] Ai's wife said that she was summoned by the Beijing Chaoyang district tax bureau, where she was interrogated about his studio's taxes on 12 April.[52] The South China Morning Post reported that Ai received at least two visits from the police, the last on 31 March, three days before his detention, apparently with an offer of membership in the Chinese People's Political Consultative Conference. A staff member recalled that Ai had mentioned receiving the offer earlier, "[but Ai] didn't say if it was a membership of the CPPCC at the municipal or national level, how he responded or whether he accepted it or not."[52]

On 24 February, amid an online campaign by overseas dissidents for Middle East-style protests in major Chinese cities, Ai posted on his Twitter account: "I didn't care about jasmine at first, but people who are scared by jasmine sent out information about how harmful jasmine is often, which makes me realize that jasmine is what scares them the most. What a jasmine!"[53][54]

Analysts and other activists said Ai had been widely thought to be untouchable, but Nicholas Bequelin from Human Rights Watch suggested that his arrest, calculated to send the message that no one would be immune, must have had the approval of someone in the top leadership.[55] International governments, human rights groups and art institutions, among others, called for Ai's release, while Chinese officials did not notify Ai's family of his whereabouts.[56]

State media began describing Ai as a "deviant and a plagiarist" in early 2011.[57] An editorial in the Global Times, a China Daily subsidiary, attacked Ai on 6 April 2011, saying: "Ai Weiwei likes to do something 'others dare not do.' He has been close to the red line of Chinese law. Objectively speaking, Chinese society does not have much experience in dealing with such persons. However, as long as Ai Weiwei continuously marches forward, he will inevitably touch the red line one day."[58] Two days later, the paper scorned Western media for characterizing Ai's charge as a "catch-all crime" and denounced the use of his political activism as a "legal shield" against everyday crimes. It said, "Ai's detention is one of the many judicial cases handled in China every day. It is pure fantasy to conclude that Ai's case will be handled specially and unfairly."[59] Frank Ching wrote in the South China Morning Post that the Global Times's radical shift in position from one day to the next was reminiscent of Alice in Wonderland.[60]

Michael Sheridan of The Times suggested that Ai had offered himself to the authorities on a platter with some of his provocative art, particularly photographs of himself nude with only a toy alpaca hiding his modesty with a caption ("grass mud horse covering the middle"). The term possesses a double meaning in Chinese: one possible interpretation was given by Sheridan as: "Fuck your mother, the party central committee".[61]

Ming Pao in Hong Kong reacted strongly to the state media's attack on Ai's character, saying that authorities had employed "a chain of actions outside the law, doing further damage to an already weak system of laws, and to the overall image of the country."[57] Wen Wei Po, a pro-Beijing newspaper in Hong Kong, announced that Ai was under arrest for tax evasion, bigamy and spreading indecent images on the internet, and vilified him with multiple instances of strong rhetoric.[62][63] Supporters said "the article should be seen as a mainland media commentary attacking Ai, rather than as an accurate account of the investigation."[64]

The United States and European Union protested Ai's detention.[65] The international arts community also mobilised petitions calling for his release: "1001 Chairs for Ai Weiwei", organized by Creative Time of New York, called for artists to bring chairs to Chinese embassies and consulates around the world on 17 April 2011, at 1 pm local time, "to sit peacefully in support of the artist's immediate release."[66][67] Artists in Hong Kong,[68] Germany[68] and Taiwan demonstrated and called for Ai to be released.[69]

One of the major protests by U.S. museums took place on 19 and 20 May when the Museum of Contemporary Art San Diego organized a 24-hour silent protest in which volunteer participants, including community members, media, and museum staff, occupied two traditionally styled Chinese chairs for one-hour periods.[70] The 24-hour sit-in referenced Ai's sculpture series, Marble Chair, two of which were on view and were subsequently acquired for the Museum's permanent collection.

The Solomon R. Guggenheim Foundation and the International Council of Museums, which organised petitions, said they had collected more than 90,000 signatures calling for the release of Ai.[71] On 13 April 2011, a group of European intellectuals led by Václav Havel issued an open letter to Wen Jiabao, condemning the arrest and demanding the immediate release of Ai. The signatories included Ivan Klíma, Jiří Gruša, Jáchym Topol, Elfriede Jelinek, Adam Michnik, Adam Zagajewski and Helmuth Frauendorfer; Bei Ling, a Chinese poet in exile, drafted and also signed the open letter.[72]

On 16 May 2011, the Chinese authorities allowed Ai's wife to visit him briefly. Liu Xiaoyuan, his attorney and personal friend, reported that Ai was in good physical condition and receiving treatment for his chronic diabetes and hypertension; he was being held not in a prison or hospital but under some form of house arrest.[73]

He is the subject of the 2012 documentary film Ai Weiwei: Never Sorry, directed by American filmmaker Alison Klayman, which received a special jury prize at the 2012 Sundance Film Festival and opened the Hot Docs Canadian International Documentary Festival, North America's largest documentary festival, in Toronto on 26 April 2012.[74]

On 22 June 2011, the Chinese authorities released Ai from jail after almost three months' detention on charges of tax evasion.[75] Beijing Fa Ke Cultural Development Ltd., a company Ai controlled, had allegedly evaded taxes and intentionally destroyed accounting documents. State media also reported that Ai was granted bail on account of his "good attitude in confessing his crimes", his willingness to pay back taxes, and his chronic illnesses.[76] According to the Chinese Foreign Ministry, he was prohibited from leaving Beijing without permission for one year.[77][78] Ai's supporters widely viewed his detention as retaliation for his vocal criticism of the government.[79] On 23 June 2011, professor Wang Yujin of China University of Political Science and Law stated that Ai's release on bail showed that the Chinese government could not find any solid evidence of his alleged "economic crime".[80] On 24 June 2011, Ai told a Radio Free Asia reporter that he was thankful for the support of the Hong Kong public, and praised Hong Kong's conscientious society. Ai also mentioned that his detention by the Chinese regime had been hellish, and stressed that he was forbidden to say too much to reporters.[81]

After his release, his sister gave the press some details about the conditions of his detention, explaining that he had been subjected to a form of psychological torture: he was held in a tiny room under constant light, with two guards posted very close to him at all times, watching him constantly.[82] In November, Chinese authorities were again investigating Ai and his associates, this time on the charge of spreading pornography.[83][84] Lu was subsequently questioned by police and released after several hours, though the exact charges remain unclear.[85][86] In January 2012, Art in America magazine featured an interview with Ai Weiwei at his home in China in its International Review issue. J.J. Camille (the pen name of a Chinese-born writer living in New York), "neither a journalist nor an activist but simply an art lover who wanted to talk to him", had travelled to Beijing the previous September to conduct the interview and to write about his visit to "China's most famous dissident artist" for the magazine.[87]

On 21 June 2012, Ai's bail was lifted. Although he was allowed to leave Beijing, the police informed him that he was still prohibited from traveling to other countries because he was "suspected of other crimes," including pornography, bigamy and illicit exchange of foreign currency.[88][89] Until 2015, he remained under heavy surveillance and restrictions on his movement, but continued to criticize the government through his work.[90][91] In July 2015, he was given a passport and allowed to travel abroad.[92]

In June 2011, the Beijing Local Taxation Bureau demanded a total of over 12 million yuan (US$1.85 million) from Beijing Fa Ke Cultural Development Ltd. in unpaid taxes and fines,[93][94] and gave the company three days to appeal the demand in writing. According to Ai's wife, Beijing Fa Ke Cultural Development Ltd. hired two Beijing lawyers as defense attorneys. Ai's family stated that Ai was "neither the chief executive nor the legal representative of the design company, which is registered in his wife's name."

Offers of donations poured in from Ai's fans across the world when the fine was announced. An online loan campaign was eventually initiated on 4 November 2011, and close to 9 million RMB was collected within ten days, from 30,000 contributions. Notes were folded into paper planes and thrown over the studio walls, and donations were made in symbolic amounts such as 8,964 (for 4 June 1989, the Tiananmen massacre) or 512 (for 12 May 2008, the Sichuan earthquake). To thank his creditors and acknowledge the contributions as loans, Ai designed and issued loan receipts to all who participated in the campaign.[95] Funds raised from the campaign were used as collateral, required by law for an appeal of the tax case. Lawyers acting for Ai submitted an appeal against the fine in January 2012; the Chinese government subsequently agreed to conduct a review.[96]

In June 2012, the court heard the tax appeal case. Ai's wife, Lu Qing, the legal representative of the design company, attended the hearing, accompanied by several lawyers and an accountant, but the witnesses they had requested to testify, including Ai, were prevented from attending.[97] Ai asserted that the entire matter, including the 81 days he spent in jail in 2011, was intended to suppress his provocations, and said he had no illusions as to how the case would turn out, as he believed the court would protect the government's own interests. On 20 June, hundreds of Ai's supporters gathered outside the Chaoyang District Court in Beijing despite a small army of police officers, some of whom videotaped the crowd and led several people away.[98] On 20 July, Ai's tax appeal was rejected in court.[99][100] The same day, Ai's studio released "The Fake Case", which tracks the status and history of the case, including a timeline and the release of official documents.[101] On 27 September, the court upheld the 2.4 million tax evasion fine.[102] Ai had previously deposited 1.33 million in a government-controlled account in order to appeal. Ai said he would not pay the remainder because he did not recognize the charge.[103]

In October 2012, authorities revoked the license of Beijing Fa Ke Cultural Development Ltd. for failing to re-register, an annual administrative requirement. The company was unable to complete this procedure because its materials and stamps had been confiscated by the government.[104]

On 26 April 2014, Ai's name was removed from a group show taking place at the Shanghai Power Station of Art. The exhibition was held to celebrate the fifteenth anniversary of the art prize created by Uli Sigg in 1998 with the purpose of promoting and developing Chinese contemporary art. Ai had won the Lifetime Contribution Award in 2008 and had served on the jury during the first three editions of the prize.[105] He was invited to take part in the group show together with the other selected Chinese artists. Shortly before the exhibition's opening, museum workers removed his name from the list of winners and jury members painted on a wall, and Ai's works Sunflower Seeds and Stools were removed from the show and kept in a museum office (see photo on Ai Weiwei's Instagram).[106] Sigg declared that it was not his decision but that of the Power Station of Art and the Shanghai Municipal Bureau of Culture.[105]

In May 2014, the Ullens Center for Contemporary Art, a non-profit art center situated in the 798 art district of Beijing, held a retrospective exhibition in honor of the late curator and scholar Hans van Dijk. Ai, a good friend of van Dijk and a fellow co-founder of the China Art Archives and Warehouse (CAAW), participated in the exhibition with three artworks.[107] On the day of the opening, Ai realized his name had been omitted from both the Chinese and English versions of the exhibition's press release, and his assistants went to the art center and removed his works.[108] It is Ai's belief that, in omitting his name, the museum altered the historical record of van Dijk's work with him. Ai began his own research into what had actually happened, and between 23 and 25 May he interviewed the UCCA's director, Philip Tinari; the guest curator of the exhibition, Marianne Brouwer; and the UCCA chief, Xue Mei.[107] He published the transcripts of the interviews on Instagram.[109][110][111][112][113][114][115][116][117] In one of the interviews, Xue Mei admitted that, due to the sensitive timing of the exhibition, Ai's name had been taken out of the press releases on the day of the opening and was supposed to be restored afterwards. This was done to avoid problems with the Chinese authorities, who had threatened to arrest her.[107]

Beijing video works

From 2003 to 2005, Ai Weiwei recorded the results of Beijing's developing urban infrastructure and its social conditions.

2003, Video, 150 hours

Beginning under the Dabeiyao highway interchange, the vehicle from which Beijing 2003 was shot traveled every road within the Fourth Ring Road of Beijing and documented the road conditions. Approximately 2,400 kilometers and 150 hours of footage later, it ended where it began, under the Dabeiyao highway interchange. The documentation of these winding alleyways of the city center, now largely torn down for redevelopment, preserves a visual record of the city that is free of aesthetic judgment.

2004, Video, 10h 13m

Moving from east to west, Chang'an Boulevard traverses Beijing's most iconic avenue. Along the boulevard's 45-kilometer length, the film records the changing densities of its far-flung suburbs, central business districts, and political core. At each 50-meter increment, the artist recorded a single frame for one minute. The work reveals the rhythm of Beijing as a capital city: its social structure, cityscape, socialist-planned economy, capitalist market, political power center, commercial buildings, and industrial units appear as pieces of a multi-layered urban collage.

2005, Video, 1h 6m

2005 Video, 1h 50m

Beijing: The Second Ring and Beijing: The Third Ring capture two opposing views of traffic flow on every bridge of each ring road, the innermost arterial highways of Beijing. The artist recorded a single frame for one minute for each view on each bridge. Beijing: The Second Ring was shot entirely on cloudy days, while the segments for Beijing: The Third Ring were shot entirely on sunny days. The films document the historic aspects and modern development of a city with a population of nearly 11 million people.

2007, video, 2h 32m[118]

This video is about Ai Weiwei's project Fairytale for Documenta 12, the large-scale art exhibition held every five years in Kassel, Germany: in 2007, Ai Weiwei invited 1001 Chinese citizens of different ages and from various backgrounds to Germany to experience their own fairytale for 28 days.[119] The 152-minute film documents the whole process, from project preparations, through the challenges the participants had to face, to the actual travel to Germany, as well as the artist's ideas behind the work. "This is a work I emotionally relate to. It grows and it surprised me," says Ai Weiwei in Fairytale.

2008, video, 1h 18m[120]

On 15 December 2008, a citizens' investigation began with the goal of seeking an explanation for the casualties of the Sichuan earthquake of 12 May 2008. The investigation covered 14 counties and 74 townships within the disaster zone, and studied the conditions of 153 schools affected by the earthquake. By gathering and confirming comprehensive details about the students, such as their age, region, school, and grade, the group was able to affirm that 5,192 students perished in the disaster. Of the hundred or so volunteers, 38 participated in fieldwork; 25 of them were held by the Sichuan police a total of 45 times. This documentary is a structural element of the citizens' investigation.

2009, looped video, 1h 27m[121]

At 14:28 on 12 May 2008, an 8.0-magnitude earthquake struck Sichuan, China. Over 5,000 students in primary and secondary schools perished in the earthquake, yet their names went unannounced. In reaction to the government's lack of transparency, a citizens' investigation was initiated to find out their names and details about their schools and families. As of 2 September 2009, 4,851 names had been confirmed. This video is a tribute to the perished students and a memorial for innocent lives lost.

2009, video, 48m[122]

This video documents the story of Chinese citizen Feng Zhenghu and his struggle to return home. In 2009, the Shanghai authorities refused Feng Zhenghu, a native of Wenzhou, Zhejiang, re-entry to the country a total of eight times. On 4 November 2009, Feng attempted to return home for the ninth time, but Shanghai police forcibly put him on a flight to Japan. Feng refused to enter Japan and decided to live in the immigration hall at Terminal 1 of Tokyo's Narita Airport as an act of protest. He relied on gifts of food from tourists for sustenance and lived in a passageway of the airport for 92 days. The updates he posted on Twitter attracted much concern and led to wide media coverage among Chinese netizens and international communities. On 31 January, Feng announced an end to his protest at the airport, and on 12 February he was allowed entry to China, where he reunited with his family at home in Shanghai. Ai Weiwei and his assistant Gao Yuan traveled from Beijing to interview Feng three times at Narita Airport, on 16 November and 20 November 2009 and on 31 January 2010, documenting his life in the airport passageway and the entire process of his return to China. No country should refuse entry to its own citizens.

2009, video, 1h 19m[123]

The Ai Weiwei studio production "Laoma Tihua" is a documentary of an incident during Tan Zuoren's trial on 12 August 2009. Tan Zuoren was charged with "inciting subversion of state power" as a result of his research and questioning regarding the student casualties of the 12 May Wenchuan earthquake and the corruption behind the poor building construction. During the trial of the civil rights advocate, Chengdu police detained witnesses, an act of obstruction of justice and violence. Tan Zuoren was sentenced to five years in prison.

2010, video, 3h[124]

In June 2008, Yang Jia carried a knife, a hammer, a gas mask, pepper spray, gloves and Molotov cocktails to the Zhabei Public Security Branch Bureau and killed six police officers, injuring another police officer and a guard. He was arrested on the scene and subsequently charged with intentional homicide. In the following six months, while Yang Jia was detained and tried, his mother mysteriously disappeared. This documentary traces the reasons and motivations behind the tragedy and investigates a trial process filled with shady cover-ups and questionable decisions. The film provides a glimpse into the realities of a government-controlled judicial system and its impact on citizens' lives.

2010, video, 2h 6m[125]

"The future dictionary definition of 'crackdown' will be: first cover one's head up firmly, and then beat him or her up violently." @aiww In the summer of 2010, the Chinese government began a crackdown on dissent, and Hua Hao Yue Yuan documents the stories of Liu Dejun and Liu Shasha, whose activism and outspokenness led to violent abuse at the hands of the authorities. On separate occasions, they were kidnapped, beaten and dumped in remote locations. The incidents attracted much concern on the Internet, as well as wide speculation and theories about what exactly had happened. This documentary presents interviews with the two victims, witnesses and concerned netizens, gathering various perspectives on the two beatings and bringing us closer to the brutal reality of China's crackdown.

2010, voice recording, 3h 41m[126]

On 24 April 2010 at 00:51, Ai Weiwei (@aiww) started a Twitter campaign to commemorate the students who perished in the Sichuan earthquake of 12 May 2008. 3,444 friends from the Internet delivered voice recordings in which the names of the 5,205 perished were recited 12,140 times. "Remembrance" is an audio work dedicated to the young people who lost their lives in the Sichuan earthquake. It expresses grief for the passing of innocent lives and indignation at the cover-ups of the truth about the sub-standard architecture that caused so many schools to collapse during the earthquake.

2010, video, 1h 8m[127]

The shooting and editing of this video took nearly seven months at the Ai Weiwei studio. It began near the end of 2007 with an interception organized by cat-saving volunteers in Tianjin, and the film locations included Tianjin, Shanghai, Rugao in Jiangsu, Chaoshan in Guangdong, and Hebei Province. The documentary depicts a complete picture of a chain in the cat-trading industry. Since the end of 2009, when the government began soliciting expert opinion on the Animal Protection Act, the focus of public debate has always been on whether one should eat cats or not, or whether cat-eating is a Chinese tradition or not. There are even people who would go as far as to say that the call to stop eating cat meat is "imposing the will of the minority on the majority". Yet the "majority" does not understand the complete truth of the cat-meat trading chain: cat theft, cat trafficking, killing cats, selling cats, and eating cats, and how the various stages of the trade are distributed across the country, in cities such as Beijing, Tianjin, Shanghai, Nanjing, Suzhou, Wuxi, Rugao, Wuhan, and Guangzhou, and in Hebei Province. This well-organized, smooth-running industry chain of cat abuse, killing and skinning has existed among ordinary Chinese for 20 years, or perhaps even longer. The degree of civilization of a country can be seen in its attitude towards animals.

2011, video, 1h 1m[128]

This documentary is about the construction project curated by Herzog & de Meuron and Ai Weiwei. One hundred architects from 27 countries were chosen to participate and design a 1000 square meter villa to be built in a new community in Inner Mongolia. The 100 villas would be designed to fit a master plan designed by Ai Weiwei. On 25 January 2008, the 100 architects gathered in Ordos for a first site visit. The film Ordos 100 documents the total of three site visits to Ordos, during which time the master plan and design of each villa was completed. As of 2016, the Ordos 100 project remains unrealized.

2011, video, 54m[129]

As a sequel to Ai Weiwei's film Lao Ma Ti Hua, the film So Sorry (named after the artist's 2009 exhibition in Munich, Germany) shows the beginnings of the tension between Ai Weiwei and the Chinese government. In Lao Ma Ti Hua, Ai Weiwei travels to Chengdu, Sichuan, to attend the trial of the civil rights advocate Tan Zuoren as a witness. In So Sorry, the investigation led by Ai Weiwei's studio to identify the students who died during the Sichuan earthquake as a result of corruption and poor building construction leads to a confrontation between Ai Weiwei and the Chengdu police. After being beaten by the police, Ai Weiwei traveled to Munich, Germany, to prepare his exhibition at the Haus der Kunst museum. The beating caused a brain hemorrhage, which led to intense headaches and was treated with emergency surgery. These events mark the beginning of Ai Weiwei's struggle and surveillance at the hands of the state police.

2011, video, 2h 22m[130]

This documentary investigates the death of popular Zhaiqiao village leader Qian Yunhui in the fishing village of Yueqing, Zhejiang province. When the local government confiscated marshlands in order to convert them into construction land, the villagers were deprived of the opportunity to cultivate those lands and be fully self-subsistent. Qian Yunhui, unafraid of speaking up for his villagers, travelled to Beijing several times to report this injustice to the central government; in order to silence him, the local government detained him repeatedly. On 25 December 2010, Qian Yunhui was hit by a truck and died at the scene. News of the incident and photos of the scene quickly spread over the internet. The local government claimed that Qian Yunhui was the victim of an ordinary traffic accident. This film is an investigation conducted by the Ai Weiwei studio into the circumstances of the incident and its connection to the land dispute, based mainly on interviews with family members, villagers and officials. It is an attempt by Ai Weiwei to establish the facts and find out what really happened on 25 December 2010. During shooting and production, the studio experienced significant obstruction and resistance from the local government: the film crew was followed, sometimes physically stopped from shooting certain scenes, and there were even attempts to buy off the footage. All villagers interviewed for this documentary have been interrogated or illegally detained by the local government to some extent.

2011, video, 1h 1m[131]

Early in 2008, the district government of Jiading, Shanghai, invited Ai Weiwei to build a studio in Malu Township as part of the local government's efforts to develop its cultural assets. By August 2010, the Ai Weiwei Shanghai Studio had completed all of its construction work. In October 2010, the Shanghai government declared the studio an illegal construction and ordered its demolition. On 7 November 2010, while Ai Weiwei was held under house arrest by public security in Beijing, over 1,000 netizens attended the "River Crab Feast" at the Shanghai studio. On 11 January 2011, the Shanghai city government forcibly demolished the studio within a day, without any prior notice.

2013, video, 1h 17m[132]

This video tells the story of Liu Ximei, who at her birth in 1985 was given to relatives to be raised because she was born in violation of China's strict one-child policy. When she was ten years old, Liu was severely injured while working in the fields and lost large amounts of blood. While undergoing treatment at a local hospital, she was given a blood transfusion that was later revealed to be contaminated with HIV. Following this exposure to the virus, Liu contracted AIDS. According to official statistics, in 2001 there were 850,000 AIDS sufferers in China, many of whom contracted the illness in the 1980s and 1990s as the result of a widespread plasma market operating in rural, impoverished areas and using unsafe collection methods.

2014, video, 2h 8m[133]

Ai Weiwei's Appeal 15,220,910.50 opens with Ai Weiwei's mother at the Venice Biennale in the summer of 2013, examining Ai's large S.A.C.R.E.D. installation portraying his 81-day imprisonment. The documentary goes on to chronologically reconstruct the events from his arrest at the Beijing airport in April 2011 to his final court appeal in September 2012. The film portrays the day-to-day activity surrounding Ai Weiwei, his family and his associates, ranging from constant visits by the authorities and interviews with reporters to support and donations from fans, and court dates. The film premiered at the International Film Festival Rotterdam on 23 January 2014.

2015, video, 30m[134]

This documentary on the Fukushima Art Project is about artist Ai Weiwei's investigation of the site as well as the project's installation process. In August 2014, Ai Weiwei was invited as one of the participating artists for the Fukushima Nuclear Zone by the Japanese art coalition ChimPom, as part of the project Don't Follow the Wind . Ai accepted the invitation and sent his assistant Ma Yan to the exclusion zone in Japan to investigate the site. The Fukushima Nuclear Exclusion Zone is thus far located within the 20-kilometer radius of land area of the Fukushima Daiichi Nuclear Power Plant. 25,000 people have already been evacuated from the Exclusion Zone. Both water and electric circuits were cut off. Entrance restriction is expected to be relieved in the next thirty years, or even longer. The art project will also be open to public at that time. The three spots usable as exhibition spaces by the artists are all former residential houses, among which exhibition site one and two were used for working and lodging; and exhibition site three was used as a community entertainment facility with an ostrich farm. Ai brought about two projects, "A Ray of Hope" and "Family Album" after analyzing materials and information generated from the site. In "A Ray of Hope", a solar photovoltaic system is built on exhibition site one, on the second level of the old warehouse. Integral LED lighting devices are used in the two rooms. The lights would turn on automatically from 7 to 10pm, and from 6 to 8am daily. This lighting system is the only light source in the Exclusion Zone after this project was installed. Photos of Ai and his studio staff at Caochangdi that make up project "Family Album" are displayed on exhibition site two and three, in the seven rooms where locals used to live. The twenty-two selected photos are divided in five categories according to types of event spanning eight years. 
Among these photos, six were taken during the site investigation after the 2008 Sichuan earthquake; two during his illegal detention after he traveled to Chengdu, China in August 2009 to testify in the Tan Zuoren case; three during the surgical treatment of the head injury he sustained when attacked by police officers in Chengdu; five while he was being followed by the police and his Beijing studio, Fake Design, was under surveillance during the studio tax case from 2011 to 2012; four are of Ai Weiwei and his family from 2011 to 2013; and the remaining two were taken earlier in his studio in Caochangdi (one in 2005 and the other in 2006).

A feature documentary directed and co-produced by Ai Weiwei about the global refugee crisis.

Ai's visual art includes sculptural installations, woodworking, video and photography. "Ai Weiwei: According to What," adapted and expanded by the Hirshhorn Museum and Sculpture Garden from a 2009 exhibition at Tokyo's Mori Art Museum, was Ai's first North American museum retrospective.[135] It opened at the Hirshhorn in Washington, D.C. in 2013, and subsequently traveled to the Brooklyn Museum, New York,[136] and two other venues. His works address his investigation into the aftermath of the Sichuan earthquake and responses to the Chinese government's detention and surveillance of him.[137] His recent public pieces have called attention to the Syrian refugee crisis.[138]

(1995) Performance in which Ai lets an ancient ceramic urn fall from his hands and smash to pieces on the ground. The performance was memorialized in a series of three photographic still frames.[139]

(2008) Sculpture resembling a park bench or tree trunk, but its cross-section is a map of China. It is four metres long and weighs 635 kilograms. It is made from wood salvaged from Qing Dynasty temples.[140]

(2008) Ming dynasty table cut in half and rejoined at a right angle so that two legs rest on the wall and two on the floor. The reconstruction was completed using period-specific Chinese joinery techniques.[141]

(2008–2012) 150 tons of twisted steel reinforcements recovered from the 2008 Sichuan earthquake building collapse sites were straightened out and displayed as an installation.[142]

(2010) Opening in October 2010 at Tate Modern in London, Ai displayed 100 million handmade, hand-painted porcelain sunflower seeds. The seeds weighed about 150 tons and were made over two and a half years by 1,600 artisans in Jingdezhen, a city that made porcelain for the government for over one thousand years. The artisans produced the sunflower seeds in the traditional thirty-step method the city is known for. The sculpture alludes to Chairman Mao's rule and the Chinese Communist Party: taken together, the seeds suggest that the people of China, collectively, can stand up and overthrow the party. The seeds also evoke China's growing mass production, driven by consumerist culture in the West, and the labor-intensive, traditional method of creating the work directly challenges the "Made in China" mantra the country is known for.[143]

(2010) Sculptures in marble to resemble the cameras placed in front of Ai's studio.[144]

(2011) Sculptures of zodiac animals inspired by the water clock-fountain at the Old Summer Palace.[145]

(2014) Han dynasty vase with the Coca-Cola logo brushed on in red acrylic paint.[146]

(2014) 32 Qing dynasty stools joined together in a cluster with legs pointing out.[147]

(2014) Individual porcelain ornaments, each painted with characters for "free speech", which when set together form a map of China.[148]

(2014) Consisting of 176 2D portraits in Lego set onto a large floor space, Trace was commissioned by the FOR-SITE Foundation, the United States National Park Service and the Golden Gate Park Conservancy. The original installation was at Alcatraz Prison in San Francisco Bay, with the 176 portraits depicting various political prisoners and prisoners of conscience. After seeing one million visitors during its one-year display at Alcatraz, the installation was moved to the Hirshhorn Museum in Washington, D.C. (in a modified form; the pieces had to be arranged to fit the circular floor space), where it ran from 28 June 2017 to 1 January 2018. The display also included two versions of his wallpaper work The Animal That Looks Like a Llama but Is Really an Alpaca and a video running on a loop.[149]

(2017) As the culmination of Ai's experiences visiting 40 refugee camps in 2016, Law of the Journey featured an all-black, 230-foot-long inflatable boat carrying 258 faceless refugee figures. The art piece is currently on display at the National Gallery in Prague until 7 January 2018.[150]

(2017) Permanent exhibit: two Iron Trees now frame the Shrine of the Book in Jerusalem, Israel, where the Dead Sea Scrolls are preserved.[151][152]

(2017) On view at the Israel Museum until the end of October 2017, Journey of Laziz is a video installation showing the mental breakdown and suffering of a tiger living in the "world's worst zoo" in Gaza.[151][152]

(2017) On view at the Park Avenue Armory through 6 August 2017, Hansel and Gretel is an installation exploring the theme of surveillance. The project, a collaboration of Ai Weiwei and architects Jacques Herzog and Pierre de Meuron, features surveillance cameras equipped with facial recognition software, near-infrared floor projections, tethered, autonomous drones and sonar beacons. A companion website includes a curatorial statement, artist biographies, a livestream of the installation and a timeline of surveillance technology from ancient to modern times.[153]

View original post here:

Ai Weiwei - Wikipedia

China got a wake-up call on AI when a Western machine beat the best Asian players at Go – Quartz

China intends to become the world's artificial intelligence leader by 2030, according to the manifesto it just released describing plans to create a $150 billion industry and an environment that has AI everywhere. According to The New York Times (paywall), these ambitions are propelled in no small part by a machine's dominance over Asian champions in the ancient strategy game Go.

Since last year, AlphaGo, developed by the Google-owned artificial-intelligence firm DeepMind, has consistently beaten the world's top players, from China and South Korea, in a game said to be trickier than chess. The machine's superiority over humans, a development that as recently as 2014 was not expected to arrive for another decade, marked a major milestone in AI development. At the closely watched man-against-machine tournament in China in May, Ke Jie, the world's best human player at the time, wrote a letter saying the future belongs to AI before he sat down to play. Then he lost in back-to-back games to AlphaGo.

The Times spoke to two professors, not named, who had advised China on AI and who said these Go defeats played a role in the ambitions China's State Council, the country's cabinet, displayed in the document (link in Chinese) it released on Thursday (July 20). The plan covers almost every field, from using the technology for voice recognition to dispatching robots for deep-sea and Arctic exploration, as well as using AI in military security. The Council said the country must firmly grasp this new stage of AI development.

China said it plans to build special-force AI robots for ocean and Arctic exploration, use the technology for gathering evidence and reading court documents, and also use machines for emotional interaction functions.

China has announced other efforts to accelerate AI development in recent months. In February, China's National Development and Reform Commission, the country's top economic policy planner, said it would fund search giant Baidu's development of an AI-backed national deep-learning research lab. That's clearly modeled on DeepMind. The country's dominant player in social media, Tencent, has also been pouring resources into the technology, and set up a research lab on AI in Seattle, in the US, in May.

There are ways in which China is well placed for innovations in AI. Advancements in machine learning depend on access to lots of data, and China, thanks to its large digitally connected population, has that. Tencent's social messaging app WeChat, for example, counts nearly 900 million users. There have been a few setbacks in recent months, though, as Chinese tech firms have seen the departure of tech stars, including Coursera co-founder and AI expert Andrew Ng, who left Baidu in March.

A recent test-taking robot also offered some clues to how far China has to go. The robot sat with millions of Chinese students to take the math component of the infamous gaokao test, which determines which colleges students can get into. With its score, the robot wouldn't have managed to get into a top university, unlike the best of the human test takers.


Read more:

China got a wake-up call on AI when a Western machine beat the best Asian players at Go - Quartz

Banks aren't as stupid as enterprise AI and fintech entrepreneurs think – TechCrunch

Announcements like Selina Finance's $53 million raise and another $64.7 million raise the next day for a different banking startup spark enterprise artificial intelligence and fintech evangelists to rejoin the debate over how banks are stupid and need help or competition.

The complaint is banks are seemingly too slow to adopt fintech's bright ideas. They don't seem to grasp where the industry is headed. Some technologists, tired of marketing their wares to banks, have instead decided to go ahead and launch their own challenger banks.

But old-school financiers aren't dumb. Most know the buy-versus-build choice in fintech is a false choice. The right question is almost never whether to buy software or build it internally. Instead, banks have often worked to walk the difficult but smarter path right down the middle, and that's accelerating.

That's not to say banks haven't made horrendous mistakes. Critics complain about banks spending billions trying to be software companies, creating huge IT businesses with huge redundancies in cost and longevity challenges, and investing in ineffectual innovation and intrapreneurial endeavors. But overall, banks know their business way better than the entrepreneurial markets that seek to influence them.

First, banks have something most technologists don't have enough of: domain expertise. Technologists tend to discount the exchange value of domain knowledge, and that's a mistake. Without critical discussion, deep product-management alignment, and crisp, clear business usefulness, technology becomes abstracted from the material value it seeks to create.

Second, banks are not reluctant to buy because they don't value enterprise artificial intelligence and other fintech. They're reluctant because they value it too much. They know enterprise AI gives a competitive edge, so why should they get it from the same platform everyone else is attached to, drawing from the same data lake?

Competitiveness, differentiation, alpha, risk transparency and operational productivity will be defined by how highly productive, high-performance cognitive tools are deployed at scale in the incredibly near future. The combination of NLP, ML, AI and cloud will accelerate competitive ideation by an order of magnitude. The question is, how do you own the key elements of competitiveness? It's a tough question for many enterprises to answer.

If they get it right, banks can obtain the true value of their domain expertise and develop a differentiated edge where they don't just float along with every other bank on someone's platform. They can define the future of their industry and keep the value. AI is a force multiplier for business knowledge and creativity. If you don't know your business well, you're wasting your money. Same goes for the entrepreneur. If you can't make your portfolio absolutely business relevant, you end up being a consulting business pretending to be a product innovator.

So are banks at best cautious, and at worst afraid? They don't want to invest in the next big thing only to have it flop. They can't distinguish what's real from hype in the fintech space. And that's understandable. After all, they have spent a fortune on AI. Or have they?

It seems they have spent a fortune on stuff called AI: internal projects without a snowball's chance in hell of scaling to the volume and concurrency demands of the firm. Or they have become enmeshed in huge consulting projects staggering toward some lofty objective that everyone knows deep down is not possible.

This perceived trepidation may or may not be good for banking, but it certainly has helped foster the new industry of the challenger bank.

Challenger banks are widely accepted to have come around because traditional banks are too stuck in the past to adopt their new ideas. Investors too easily agree. In recent weeks, American challenger bank Chime unveiled a credit card, U.S.-based Point launched and German challenger bank Vivid launched with the help of Solarisbank, a fintech company.

Traditional banks are spending resources on hiring data scientists too, sometimes in numbers that dwarf the challenger banks'. Legacy bankers want to listen to their data scientists on questions and challenges rather than pay more for an external fintech vendor to answer or solve them.

This arguably is the smart play. Traditional bankers are asking themselves why they should pay for fintech services they can't 100% own, and how they can buy the right bits and retain the parts that amount to a competitive edge. They don't want that competitive edge floating around in a data lake somewhere.

From banks' perspective, it's better to do fintech internally, or else there's no competitive advantage; the business case is always compelling. The problem is that a bank is not designed to stimulate creativity in design. JPMC's COIN project is a rare and fantastically successful example, though it reflects a super alignment between creative fintech and a bank able to articulate a clear, crisp business problem, a Product Requirements Document for want of a better term. Most internal development is playing games with open source, with the shine of the alchemy wearing off as budgets are looked at hard in respect to return on investment.

A lot of people are going to talk about setting new standards in the coming years as banks onboard these services and buy new companies. Ultimately, fintech firms and banks are going to join together and make the new standard as new options in banking proliferate.

So, there's a danger to spending too much time learning how to do it yourself and missing the boat as everyone else moves ahead.

Engineers will tell you that untutored management can fail to steer a consistent course. The result is an accumulation of technical debt as development-level requirements keep zigzagging. Putting too much pressure on your data scientists and engineers can also lead to technical debt piling up faster: a bug or an inefficiency is left in place, and new features are built as workarounds.

This is one reason why in-house-built software has a reputation for not scaling. The same problem shows up in consultant-developed software. Old problems in the system hide underneath new ones and the cracks begin to show in the new applications built on top of low-quality code.

So how to fix this? Whats the right model?

It's a bit of a dull answer, but success comes from humility. It requires an understanding that big problems are solved by creative teams, each member understanding what they bring, each being respected as an equal, and all managed with a completely clear articulation of what needs to be solved and what success looks like.

Throw in some Stalinist project management and your probability of success goes up an order of magnitude. So, the successes of the future will see banks having fewer but way more trusted fintech partners that jointly value the intellectual property they are creating. They'll have to respect that neither can succeed without the other. It's a tough code to crack. But without it, banks are in trouble, and so are the entrepreneurs that seek to work with them.

Read the original:

Banks aren't as stupid as enterprise AI and fintech entrepreneurs think - TechCrunch

Why Clearview AI is a threat to us all – Engadget

Corporate backlash against Clearview clearly hasn't dissuaded law enforcement agencies from using the surveillance system either. According to the company, more than 600 police departments across the US reportedly use the Clearview service -- including the FBI and DHS.

The Chicago Police Department paid $50,000 for a two-year license for the system, CBS News reports, though a spokesperson for the CPD noted that only 30 officers have access to it and the system is not used for live surveillance as it is in London.

"The CPD uses a facial matching tool to sort through its mugshot database and public source information in the course of an investigation triggered by an incident or crime," it said in a statement to CBS.

Despite the CPD's assurances that it would not take advantage of the system, Clearview's own marketing team appears to be pushing police departments to do exactly that. In a November email to the Green Bay PD, acquired by BuzzFeed, the company actively encouraged officers to search the database for themselves, acquaintances, even celebrities.

"Have you tried taking a selfie with Clearview yet?" the email read. "It's the best way to quickly see the power of Clearview in real time. Try your friends or family. Or a celebrity like Joe Montana or George Clooney."

"Your Clearview account has unlimited searches. So feel free to run wild with your searches," the email continued.

That's not to say that the system is completely without merit. Participating law enforcement agencies are already using it to quickly track down shoplifting, identity theft and credit card fraud suspects. Clearview also claims that its app helped the NYPD track down a terrorism suspect last August, but the agency disputes the company's involvement in the case. Clearview is also reportedly being used to help locate child sex victims; however, its use in those classes of cases remains anecdotal at best and runs the risk of hurting the same kids it's aiming to help.

Using Clearview to track minors, even if done with the best of lawful intentions, is a veritable minefield of privacy and data security concerns. Because the police are expected to upload investigation images to Clearview's servers, the company could potentially collect a massive amount of highly sensitive data on any number of underage sex abuse survivors. And given that the company's security measures are untested, unregulated and unverified, the public has no assurances that data will be safe if and when Clearview's systems are attacked.

What's more, Clearview's system suffers the same shortcomings as other facial recognition systems: it's not as good at interpreting black and brown faces as it is white ones. The company claims that its search is accurate across "all demographic groups," but the ACLU vehemently disagrees. When Clearview pitched its services to the North Miami Police Department back in October 2019, the company included a report from a three-member panel reading, "The Independent Review Panel determined that Clearview rated 100 percent accurate, producing instant and accurate matches for every photo image in the test. Accuracy was consistent across all racial and demographic groups." This study was reportedly conducted using the same methodology as the ACLU's 2018 test of Amazon's Rekognition system, a claim that the ACLU rejects. The civil liberties union notes that none of the three people sitting on the review panel had any prior experience in evaluating facial recognition systems.

"Clearview's technology gives government the unprecedented power to spy on us wherever we go -- tracking our faces at protests, [Alcoholics Anonymous] meetings, church, and more," ACLU Northern California attorney Jacob Snow told BuzzFeed News. "Accurate or not, Clearview's technology in law enforcement hands will end privacy as we know it."

And it's not like police abusing their surveillance powers for personal gain is anything new. In 2016, an Associated Press investigation discovered that police around the country routinely accessed secure databases to look up information on citizens that had nothing to do with their police work, including stalking ex-girlfriends. In 2013, a Florida cop looked up the personal information of a bank teller he was interested in. In 2009, a pair of FBI agents were caught surveilling a women's dressing room where teenage girls were trying on prom dresses. These are not isolated incidents. In the same year that Clearview was founded, DC cops attempted to intimidate Facebook into giving them access to the personal profiles of more than 230 presidential inauguration protesters. With Clearview available, the police wouldn't even need to contact Facebook, as Clearview has likely already scraped and made accessible the dirt the cops are looking for.

"The weaponization possibilities of this are endless," Eric Goldman, co-director of the High Tech Law Institute at Santa Clara University, told The New York Times in January. "Imagine a rogue law enforcement officer who wants to stalk potential romantic partners, or a foreign government using this to dig up secrets about people to blackmail them or throw them in jail."

Unsurprisingly, Clearview's financial backers remain unconcerned about the system's potential for abuse. "I've come to the conclusion that because information constantly increases, there's never going to be privacy," David Scalzo, founder of Kirenaga Partners and early Clearview investor, told The New York Times. "Laws have to determine what's legal, but you can't ban technology. Sure, that might lead to a dystopian future or something, but you can't ban it."

Luckily, our elected representatives are starting to take notice of the dangers that unregulated facial recognition technologies like Clearview pose to the public. A handful of California cities including San Francisco, Oakland and Alameda have all passed moratoriums on their local governments' use of the technology. California, New Hampshire and Oregon have passed restrictions at the state level and a number of other municipalities are considering taking similar steps in the near future.

Senator Edward J. Markey (D-MA) has also taken recent note of Clearview's behavior. In January, the Senator sent a strongly worded letter to CEO Ton-That stating, "Clearview's product appears to pose particularly chilling privacy risks, and I am deeply concerned that it is capable of fundamentally dismantling Americans' expectation that they can move, assemble or simply appear in public without being identified." The senator also included a list of 14 questions for Ton-That to address by Wednesday, February 12th.

Whether Clearview bows to legal and legislative pressure here in the US remains to be seen, but don't get your hopes up. The company is already looking to expand its services to 22 countries around the world, including a number of nations which have been accused of committing human rights abuses. That includes the UAE, Qatar and Singapore, as well as Brazil and Colombia, both of which have endured years of political and social strife. There are even a few EU nations Clearview is looking to target, including Italy, Greece and the Netherlands.

Pretty soon, we won't be able to set foot in public without our presence being noticed, cataloged and tabulated. And when the government has the ability to know where anyone is at any given time, our civil liberties will irreparably erode. All so that a handful of developers and investors can make a quick buck selling our faces to the police in the name of public safety.

Continued here:

Why Clearview AI is a threat to us all - Engadget

Guidehouse Insights Report Shows AI-Based Solutions for T&D Network Management Are Expected to Experience a 14% Compound Annual Growth Rate from…

BOULDER, Colo.--(BUSINESS WIRE)--A new report from Guidehouse Insights examines the current landscape for artificial intelligence (AI) technology solutions for electric transmission and distribution (T&D) network management, providing global market forecasts by region and application through 2029.

AI technologies for T&D management can help utilities minimize outages, make mobile workers more effective, improve load planning, manage real-time power quality, perform predictive asset maintenance, and more. In many applications, the superior insights derived from machine and deep learning solutions can reduce costs, improve reliability and service quality, and enhance efficiency throughout the grid. According to the report, the market for AI-based solutions for T&D network management is estimated at $1.4 billion in 2020 and is projected to grow at a compound annual growth rate (CAGR) of 14.1% to more than $4.4 billion in 2029.
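The growth figures above can be sanity-checked with the standard CAGR formula, CAGR = (end/start)^(1/years) − 1. The short sketch below uses the press release's rounded endpoints ($1.4 billion in 2020, $4.4 billion in 2029), so the recomputed rate comes out slightly below the stated 14.1%; the gap is rounding, not an inconsistency:

```python
# CAGR: the constant annual growth rate that carries a starting value
# to an ending value over a given number of years.
def cagr(start: float, end: float, years: int) -> float:
    return (end / start) ** (1 / years) - 1

# Guidehouse's rounded figures: $1.4B in 2020 -> more than $4.4B in 2029 (9 years).
rate = cagr(1.4, 4.4, 2029 - 2020)
print(f"implied CAGR: {rate:.1%}")  # ~13.6%, consistent with the stated 14.1% given rounding

# Conversely, compounding $1.4B at the stated 14.1% for 9 years
# lands above the $4.4B floor quoted in the release.
projected = 1.4 * (1 + 0.141) ** 9
print(f"projected 2029 market: ${projected:.2f}B")
```
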

"AI technology has profound implications for the operation of T&D networks," says William Hughes, principal research analyst with Guidehouse Insights. "Utilities will likely become increasingly dependent on AI-based solutions to incorporate distributed energy resources (DER) at an accelerating pace while maintaining acceptable performance metrics and keeping costs low."

While AI-based applications can improve grid operations, key barriers to widespread adoption include an uncertainty that available data will yield results and challenges in integrating the data with operational technology (OT) systems. According to the report, these challenges can cause utilities to hesitate before adopting AI-based applications. However, once adopted, the datasets used for one application become easier to use for others, and benefits tend to accrue more rapidly.

The report, AI for Predictive T&D Network Management, describes the current landscape for AI technology solutions for T&D network management and presents drivers and barriers to implementation. It details AI-supported applications and explores the global market forecasts for these solutions by region and application. An executive summary of the report is available for free download on the Guidehouse Insights website.

About Guidehouse Insights

Guidehouse Insights, the dedicated market intelligence arm of Guidehouse, provides research, data, and benchmarking services for today's rapidly changing and highly regulated industries. Our insights are built on in-depth analysis of global clean technology markets. The team's research methodology combines supply-side industry analysis, end-user primary research, and demand assessment, paired with a deep examination of technology trends, to provide a comprehensive view of emerging resilient infrastructure systems. Additional information about Guidehouse Insights can be found at http://www.guidehouseinsights.com.

About Guidehouse

Guidehouse is a leading global provider of consulting services to the public and commercial markets with broad capabilities in management, technology, and risk consulting. We help clients address their toughest challenges with a focus on markets and clients facing transformational change, technology-driven innovation and significant regulatory pressure. Across a range of advisory, consulting, outsourcing, and technology/analytics services, we help clients create scalable, innovative solutions that prepare them for future growth and success. Headquartered in Washington DC, the company has more than 7,000 professionals in more than 50 locations. Guidehouse is led by seasoned professionals with proven and diverse expertise in traditional and emerging technologies, markets and agenda-setting issues driving national and global economies. For more information, please visit: http://www.guidehouse.com.

* The information contained in this press release concerning the report, AI for Predictive T&D Network Management, is a summary and reflects the current expectations of Guidehouse Insights based on market data and trend analysis. Market predictions and expectations are inherently uncertain and actual results may differ materially from those contained in this press release or the report. Please refer to the full report for a complete understanding of the assumptions underlying the report's conclusions and the methodologies used to create the report. Neither Guidehouse Insights nor Guidehouse undertakes any obligation to update any of the information contained in this press release or the report.

The rest is here:

Guidehouse Insights Report Shows AI-Based Solutions for T&D Network Management Are Expected to Experience a 14% Compound Annual Growth Rate from...