As AI startups focus on time-to-market, ethical considerations should be the priority

A girl making friends with a robot at Kuromon Market in Osaka. Source: Andy Kelly/Unsplash.

Artificial intelligence (AI) has clearly emerged as one of the most transformational technologies of our age, and it is already prevalent in our everyday lives. Among many fascinating uses, AI has helped explore the universe, tackle complex and chronic diseases, formulate new medicines, and alleviate poverty.

Like many, I believe we will see even more innovative and creative uses as AI becomes more widespread over the next decade.

Indeed, 93% of respondents in ISACA's Next Decade of Tech: Envisioning the 2020s study believe the augmented workforce (or people, robots and AI working closely together) will reshape how some or most jobs are performed in the 2020s.

Social robots that assist patients with physical disabilities, manage elderly care and even educate our children are just some of the many uses being explored.

As AI continues to redefine humanity in various ways, ethical consideration is of paramount importance, and it is something Australians should be addressing in both government and business. ISACA's research highlights the double-edged nature of this budding technology.

Only 39% of respondents in Australia believe that enterprises will give ethical considerations around AI and machine learning sufficient attention in the next decade to prevent potentially serious unintended consequences in their deployments. Respondents specifically pinpointed malicious AI attacks involving critical infrastructure, social engineering and autonomous weapons as their primary fears.

These concerns are disturbing, though hardly surprising, given how long the early warnings about these risks have been sounding.

For instance, in February 2018, prominent researchers and academics published a report warning of the growing possibility that rogue states, criminals, terrorists and other malefactors could soon exploit AI capabilities to cause widespread harm.

And in 2017, the late physicist Stephen Hawking cautioned that the emergence of AI could be the worst event in the history of our civilization unless society finds a way to control its development.

To date, no industry standards exist to guide the secure development and maintenance of AI systems.

Further exacerbating this lack of standards is the fact that startup firms still dominate the AI market. An MIT report revealed that, other than a few large players such as IBM and Palantir Technologies, AI remains a market of 2,600 startups. The majority of these startups are focused primarily on rapid time-to-market, product functionality and a high return on investment; embedding cyber resilience into their products is not a priority.

Malicious AI programs have surfaced much more quickly than many pundits had anticipated. A case in point is the proliferation of deep fakes: seemingly realistic audio or video files generated by deep learning algorithms or neural networks to perpetrate a range of malevolent acts, such as fake celebrity pornographic videos, revenge porn, fake news, financial fraud and a wide range of other disinformation tactics.

Several factors underpinned the rise of deep fakes, but a few stand out.

First is the exponential increase of computing power combined with the availability of large image databases. Second, and probably the most vexing, is the absence of coherent efforts to institute global laws to curtail the development of malicious AI programs. Third, social media platforms, which are being exploited to disseminate deep fakes at scale, are struggling to keep up with the rapidly maturing and evasive threat.

Unsurprisingly, the number of deep fake videos published online has doubled in the past nine months to almost 15,000, according to DeepTrace, a Netherlands-based cyber security group.

It's clear that addressing this growing threat will prove complex and expensive, but the task is pressing.

The ACCC Digital Platforms Inquiry report highlighted the risk of consumers being exposed to serious incidents of disinformation. Emphasising the gravity of the risk is certainly a step in the right direction, but more remains to be done.

Currently, there is no consensus globally on whether the development of AI requires its own dedicated regulator or specific statutory regime.

Ironically, the role of the auditor and IT auditor is one of the very functions AI is touted as being able to eliminate. This premise would make for a good Hollywood script: the very thing requiring ethical consideration and regulation becomes the regulator.

Government, enterprises and startups need to be mindful of the key risks inherent in AI adoption, conduct appropriate oversight, and develop principles and regulation that articulate which roles can be partially or fully automated today, in order to secure the future of humanity and business.

Until then, AI companies need to embed security protocols and cyber resilience into their inventions to prevent malicious use.

