A conversation on the future of AI. – Axios

The big picture: On Thursday morning, Axios' Cities Correspondent Kim Hart and Emerging Technology Reporter Kaveh Waddell hosted a roundtable conversation to discuss the future of AI, with a focus on policy and innovation.

The conversation touched on how to balance innovation with necessary regulation, create and maintain trust with users, and prepare for the future of work.

As AI continues to become more sophisticated and more widely used, how to provide regulatory guardrails while still encouraging innovation was a focal point of the discussion.

Attendees discussed balancing regulation and innovation in the context of global competition, particularly with China.

The conversation also highlighted who is most impacted by technological development in AI, and the importance of future-proofing employment across all industries. Because AI touches all industries, the importance of centering the human experience in creating solutions was stressed at multiple points in the conversation.

With the accelerating development of AI, creating and maintaining trust with users, consumers, and constituents alike was central to the discussion.

Thank you SoftBank Group for sponsoring this event.

View original post here:

A conversation on the future of AI. - Axios

Codota raises $12 million for AI that suggests and autocompletes code – VentureBeat

Codota, a startup developing a platform that suggests and autocompletes Python, C, HTML, Java, Scala, Kotlin, and JavaScript code, today announced that it raised $12 million. The bulk of the capital will be spent on product R&D and sales growth, according to CEO and cofounder Dror Weiss.

Companies like Codota seem to be getting a lot of investor attention lately, and there's a reason. According to a study published by the University of Cambridge's Judge Business School, programmers spend 50.1% of their work time not programming; the other half is debugging. And the total estimated cost of debugging is $312 billion per year. AI-powered code suggestion and review tools, then, promise to cut development costs substantially while enabling coders to focus on more creative, less repetitive tasks.

Codota's cloud-based and on-premises solutions, which it claims are used by developers at Google, Alibaba, Amazon, Airbnb, Atlassian, and Netflix, complete lines of code based on millions of Java programs and individual context locally, without sending any sensitive data to remote servers. They surface relevant examples of Java APIs within integrated development environments (IDEs) including Android Studio, VSCode, IntelliJ, WebStorm, and Eclipse, and Codota's engineers vet the recommendations to ensure they've been debugged and tested.

Codota says the program analysis, natural language processing, and machine learning algorithms powering its platform learn individual best practices and warn of deviations, largely by extracting an anonymized summary of the current IDE scope (but not keystrokes or string contents) and sending it via an encrypted connection to Codota. The algorithms are trained to understand the semantic models of code, not just the source code itself, and trigger automatically whenever they identify useful suggestions. (Alternatively, suggestions can be manually triggered with a keyboard shortcut.)
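Codota has not published its client protocol, so the following is only a rough sketch of the workflow described above: summarize the local IDE scope without raw keystrokes or string contents, send it over an encrypted connection, and receive ranked completions. The endpoint, payload fields, and ScopeSummary type are hypothetical illustrations, not Codota's actual API.

```python
import json
import urllib.request
from dataclasses import dataclass, asdict, field

# Hypothetical endpoint; Codota's real service and payload format are not public.
SUGGESTION_ENDPOINT = "https://completion-service.example/v1/suggest"

@dataclass
class ScopeSummary:
    """Anonymized summary of the current IDE scope (no keystrokes, no string contents)."""
    language: str                                           # e.g. "java"
    imports: list = field(default_factory=list)             # packages visible in the file
    enclosing_symbols: list = field(default_factory=list)   # class/method names around the cursor
    token_prefix: str = ""                                  # the partial token being typed

def request_completions(summary: ScopeSummary, top_k: int = 5) -> list:
    """Send the scope summary over HTTPS (the encrypted channel) and return ranked completions."""
    payload = json.dumps({"scope": asdict(summary), "top_k": top_k}).encode("utf-8")
    req = urllib.request.Request(
        SUGGESTION_ENDPOINT,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["completions"]

# Example call a plugin might make when the user types "Http" inside a Java method.
summary = ScopeSummary(language="java", imports=["java.net.http"], token_prefix="Http")
# completions = request_completions(summary)  # would hit the (hypothetical) service
```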

Codota is free for individual users; the company makes money from Codota Enterprise, which learns the patterns and rules in a company's proprietary code. The free tier's algorithms are trained only on vetted open source code from GitHub, Stack Overflow, and other sources.

Codota acquired competitor TabNine in December last year, and since then, its user base has grown by more than 1,000% to more than a million developers monthly. That positions it well against potential rivals like Kite, which raised $17 million last January for its free developer tool that leverages AI to autocomplete code, and DeepCode, whose product learns from GitHub project data to give developers AI-powered code reviews.

This latest funding round, which was led by e.ventures with participation from existing investor Khosla Ventures and new investors TPY Capital and Hetz Ventures, came after seed rounds totaling just over $2.5 million. It brings Codota's total raised to over $16 million. As part of the round, e.ventures general partner Tom Gieselmann will join Codota's board of directors.

Codota is headquartered in Tel Aviv. It was founded in 2015 by Weiss and CTO Eran Yahav, a Technion professor and former IBM Watson Fellow.

Originally posted here:

Codota raises $12 million for AI that suggests and autocompletes code - VentureBeat

AI Weekly: Transform 2020 showcased the practical side of AI and ML – VentureBeat

Watch all the Transform 2020 sessions on-demand right here.

Today marked the conclusion of VentureBeat's Transform 2020 summit, which took place online for the first time in our history. Luminaries including Google Brain ethicist Timnit Gebru and IBM AI ethics leader Francesca Rossi spoke about how women are advancing AI and leading the trend of AI fairness, ethics, and human-centered AI. Twitter CTO Parag Agrawal detailed the social network's efforts to apply AI to detect fake or hateful tweets. Pinterest SVP of technology Jeremy King walked through learnings from Pinterest's explorations of computer vision to create inspirational experiences. And Unity principal machine learning engineer Cesar Romero brought clarity to the link between synthetic data sets and real-world AI model training.

That's just a sampling of the panels, interviews, and discussions to which Transform 2020 attendees had front-row seats this week. But the sessions that caught my eye were those touching on practical, tangible AI applications as opposed to theoretical ones. Research remains crucial to the field's advancement, and there's no sign it's slowing; the over 1,000 papers accepted to ICML 2020 suggest the contrary. However, production environments are perhaps the best opportunity to battle-test proposed tools and algorithms for robustness. Outcome predictions are just that: predictions. It takes real-world experimentation to know whether hypotheses will truly pan out.

Barak Turovsky, Google AI director of product for the natural language understanding team, elucidated steps Google took to mitigate gender bias in the language models powering Google Translate. Leveraging three AI models to detect gender-neutral queries and generate gender-specific translations before checking for accuracy, Google's system can provide multiple responses to translations of words like "nurse" and let users choose the best one (e.g., the masculine "enfermero" or the feminine "enfermera"). "Google is a leader in artificial intelligence, and with leadership comes the responsibility to address a machine learning bias that has multiple examples of results about race and sex and gender across many areas, including conversational AI," Turovsky said.
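Google has not released this pipeline, but the three-stage flow Turovsky describes (detect whether a query is gender-neutral, generate both gendered translations, keep only the candidates that pass an accuracy check) can be sketched roughly as below. The component models are stand-in callables here, not Google's implementation.

```python
from typing import Callable, List, Optional

# Placeholder signatures for the three (unpublished) models in the described pipeline.
NeutralityClassifier = Callable[[str], bool]        # is the source phrase gender-neutral?
Translator = Callable[[str, Optional[str]], str]    # (text, target_gender) -> translation
FaithfulnessCheck = Callable[[str, str], bool]      # (source, candidate) -> passes accuracy check?

def gendered_translations(
    source_text: str,
    is_neutral: NeutralityClassifier,
    translate: Translator,
    is_faithful: FaithfulnessCheck,
) -> List[str]:
    """Stage 1: detect neutrality; stage 2: generate both gendered forms; stage 3: verify accuracy."""
    if not is_neutral(source_text):
        return [translate(source_text, None)]
    candidates = [translate(source_text, gender) for gender in ("feminine", "masculine")]
    return [c for c in candidates if is_faithful(source_text, c)]

# Toy usage: hard-coded stand-ins show how "nurse" would yield both Spanish forms.
print(gendered_translations(
    "nurse",
    is_neutral=lambda text: text == "nurse",
    translate=lambda text, g: {"feminine": "enfermera", "masculine": "enfermero"}.get(g, "enfermero"),
    is_faithful=lambda src, cand: True,
))
```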

Like Google, software company Cloudera doubled down on productization of its AI and ML technologies. Senior director of engineering Adam Warrington said it deployed a chatbot to improve customer question-and-answer experiences in under a month, leveraging proprietary data sets of client interactions, community posts, subject-matter expert guidance, and more. The underpinning models can understand relevant words and sentences within a support case and extract the right solution from the best source, whether a knowledge base article, product documentation, or community post.

For Yelp, deployment is a core part of the experimentation process, enabled by the company's Bunsen platform. Using Bunsen through a frontend user interface called Beaker, data scientists, engineers, execs, and even public relations reps can determine whether products and models have any negative impact on the growth of business metrics or whether they're meeting goals. Yelp employees get the scale of being able to deploy a model to a cohort of users depending on how they want to reach them, as well as the flexibility to determine if the functionality is perhaps not optimal or, worst-case scenario, is harmful. "We have a rapid way of turning those experiences off and doing what we need to do to fix them on the backend," Yelp head of data science Justin Norman told VentureBeat. "One of the best things about what Bunsen allows us to do is to scale at speed."

When it comes to practical uses of AI and machine learning in the financial sector, Visa is at the forefront, with projects that demonstrate the potential of these technologies. As a rule, the company looks for use cases where AI and ML could deliver at least a 20% to 30% efficiency increase. Its Visa Advanced Authorization platform is a case in point: It uses recurrent neural networks along with gradient-boosted trees to determine the likelihood that transactions are fraudulent. Melissa McSherry, a senior vice president and global head of data for Visa, said the company prevents $25 billion in annual fraud thanks to the AI it developed. "We have definitely taken a use case approach to AI," she said. "We don't deploy AI for the sake of AI. We deploy it because it's the most effective way to solve a problem."
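Visa has not published the Advanced Authorization architecture, but one common way to pair a recurrent network with gradient-boosted trees, as described above, is to encode a cardholder's recent transaction history with the RNN and feed that embedding, alongside ordinary tabular features, into the tree model. The sketch below (PyTorch plus scikit-learn, on random toy data) illustrates that pattern only; it is not Visa's system.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.ensemble import GradientBoostingClassifier

class HistoryEncoder(nn.Module):
    """Encode a cardholder's recent transaction sequence into a fixed-size embedding."""
    def __init__(self, n_features: int, hidden: int = 32):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, batch_first=True)

    def forward(self, sequences: torch.Tensor) -> torch.Tensor:
        _, last_hidden = self.gru(sequences)     # last_hidden: (1, batch, hidden)
        return last_hidden.squeeze(0)            # (batch, hidden)

# Toy data: 1,000 cardholders, 20 past transactions with 8 features each, plus 5 tabular features.
rng = np.random.default_rng(0)
sequences = torch.tensor(rng.normal(size=(1000, 20, 8)), dtype=torch.float32)
tabular = rng.normal(size=(1000, 5))
labels = rng.integers(0, 2, size=1000)           # 1 = fraudulent

encoder = HistoryEncoder(n_features=8)
with torch.no_grad():                            # sketch only: the encoder would normally be trained first
    embeddings = encoder(sequences).numpy()

# Gradient-boosted trees consume the sequence embedding plus ordinary tabular features.
features = np.hstack([embeddings, tabular])
gbm = GradientBoostingClassifier().fit(features, labels)
fraud_probability = gbm.predict_proba(features[:1])[0, 1]
print(f"Estimated fraud probability for first transaction: {fraud_probability:.3f}")
```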

AI has a role to play in health care, as well. CommonSpirit Health, the largest not-for-profit health care provider in the country, is applying models to optimize the rounds its doctors and nurses make every day. "Based upon our analysis of thousands of patients, [if] we don't address the patient in room seven first, they're going to have to stay longer than they would need to otherwise," chief strategic innovation officer Rich Roth explained. "Using AI that way, really to accelerate our workflow, and to clearly show to our caregivers the clinical benefit of why that data is important, is a great example of how technology can help enhance care."

For AI coverage, send news tips to Khari Johnson and Kyle Wiggers, and be sure to subscribe to the AI Weekly newsletter and bookmark our AI Channel.

Thanks for reading,

Kyle Wiggers

AI Staff Writer

Read more:

AI Weekly: Transform 2020 showcased the practical side of AI and ML - VentureBeat

Insight into AI highlights its harmful infection potential – Poultry World

Commercial poultry should be protected from the risk of contracting harmful bird flu from migrating flocks, according to new research.

Insights from a study of the devastating 2016/17 bird flu outbreak show how highly pathogenic bird flu viruses can be transmitted from wild migrating bird populations to domestic flocks and back again.

Research shows that avian influenza introduced by migratory birds has a devastating effect on commercial poultry flocks. Photo: Mark Pasveer

These viruses can readily exchange genetic material during migration with other, less harmful, low pathogenic viruses, raising the likelihood of serious outbreaks in domestic poultry and wild birds.

The study, led by a team including the Roslin Institute and representing the Global Consortium for H5N8 and Related Influenza Viruses, examined the genetic makeup of the 2016/17 bird flu virus in various birds at key stages during the flu season. The outbreak began in domestic birds in Asia before being spread via wild migratory flocks to create the largest bird flu epidemic in Europe to date. The team interpreted genetic sequence data from virus samples collected during the outbreak, together with details of where, when, and in which bird species they originated. Using a computational technique known as phylogenetic inference, researchers estimated where and when the virus exchanged genetic material with other viruses in wild or domestic birds.
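The study's actual analysis is far more sophisticated, but as a minimal illustration of what phylogenetic inference over labelled virus samples looks like in practice, the Biopython sketch below builds a simple distance-based tree from toy aligned sequences whose labels carry host and sampling date; it is onto trees like this that the timing and location of genetic exchange can then be mapped. The sequences and labels are invented.

```python
from Bio.Align import MultipleSeqAlignment
from Bio.Seq import Seq
from Bio.SeqRecord import SeqRecord
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor
from Bio import Phylo

# Toy aligned gene-segment sequences, labelled with host species and sampling date.
alignment = MultipleSeqAlignment([
    SeqRecord(Seq("ATGCATGCATGC"), id="wild_duck_2016-10"),
    SeqRecord(Seq("ATGCATGCTTGC"), id="farmed_duck_2016-12"),
    SeqRecord(Seq("ATGAATGCTTGC"), id="chicken_2017-01"),
    SeqRecord(Seq("ATGAATGCTTGA"), id="wild_swan_2017-02"),
])

# Pairwise identity distances, then a neighbour-joining tree over the samples.
distances = DistanceCalculator("identity").get_distance(alignment)
tree = DistanceTreeConstructor().nj(distances)
Phylo.draw_ascii(tree)
```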

The virus could easily exchange genetic material with other, less harmful viruses, at times and locations corresponding to bird migratory cycles. These included viruses carried by wild birds on intersecting migratory routes and by farmed ducks in China and central Europe. Migratory birds harbouring weaker viruses are more likely to survive their journey and potentially pass disease to domestic birds, the study found.

Commenting on the results, Dr Sam Lycett of the Roslin Institute said: "Bird flu viruses can readily exchange genetic material with other influenza viruses and this, in combination with repeated transmission of viruses between domestic and wild birds, means that a viral strain can emerge and persist in wild bird populations, which carries a high risk of disease for poultry. This aids our understanding of how a pathogenic avian flu virus could become established in wild bird populations."

The research, published in Proceedings of the National Academy of Sciences, was carried out in collaboration with the Friedrich Loeffler Institute, Germany, the Erasmus University Medical Centre, Holland, and the University of Edinburgh's Usher Institute and Roslin Institute. It was supported by funding from EU Horizon 2020 and others.

Here is the original post:

Insight into AI highlights its harmful infection potential - Poultry World

China’s Plan for World Domination in AI Isn’t So Crazy After All – Bloomberg

Xu Li's software scans more faces than maybe any other on earth. He has the Chinese police to thank.

Xu runs SenseTime Group Ltd., which makes artificial intelligence software that recognizes objects and faces, and counts China's biggest smartphone brands as customers. In July, SenseTime raised $410 million, a sum it said was the largest single round for an AI company to date. That feat may soon be topped, probably by another startup in China.

The nation is betting heavily on AI. Money is pouring in from China's investors, big internet companies, and its government, driven by a belief that the technology can remake entire sectors of the economy, as well as national security. A similar effort is underway in the U.S., but in this new global arms race, China has three advantages: a vast pool of engineers to write the software, a massive base of 751 million internet users to test it on, and, most importantly, staunch government support that includes handing over gobs of citizens' data, something that makes Western officials squirm.

Data is key because that's how AI engineers train and test algorithms to adapt and learn new skills without human programmers intervening. SenseTime built its video analysis software using footage from the police force in Guangzhou, a southern city of 14 million. Most Chinese mega-cities have set up institutes for AI that include some data-sharing arrangements, according to Xu. "In China, the population is huge, so it's much easier to collect the data for whatever use-scenarios you need," he said. "When we talk about data resources, really the largest data source is the government."

This flood of data will only rise. China just enshrined the pursuit of AI into a kind of national technology constitution. A state plan, issued in July, calls for the nation to become the leader in the industry by 2030. Five years from then, the government claims the AI industry will create 400 billion yuan ($59 billion) in economic activity. China's tech titans, particularly Tencent Holdings Ltd. and Baidu Inc., are getting on board. And the science is showing up in unexpected places: Shanghai's courts are testing an AI system that scours criminal cases to judge the validity of evidence used by all sides, ostensibly to prevent wrongful prosecutions.

"Data access has always been easier in China, but now people in government, organizations, and companies have recognized the value of data," said Jiebo Luo, a computer science professor at the University of Rochester who has researched China. "As long as they can find someone they trust, they are willing to share it."

The AI-MATHS machine took the math portion of Chinas annual university entrance exam in Chengdu.

Photographer: AFP via Getty Images

Every major U.S. tech company is investing deeply as well. Machine learning -- a type of AI that lets driverless cars see, chatbots speak, and machines parse scores of financial information -- demands that computers learn from raw data instead of hand-cranked programming. Getting access to that data is a permanent slog. China's command-and-control economy, and its thinner privacy concerns, mean that country can dispense video footage, medical records, banking information, and other wells of data almost whenever it pleases.

Xu argued this is a global phenomenon. "There's a trend toward making data more public. For example, NHS and Google recently shared some medical image data," he said. But that example does more to illustrate China's edge.

DeepMind, the AI lab of Google's Alphabet Inc., has labored for nearly two years to access medical records from the U.K.'s National Health Service for a diagnostics app. The agency began a trial with the company using 1.6 million patient records. Last month, the top U.K. privacy watchdog declared the trial violates British data-protection laws, throwing its future into question.

Go player Lee Se-Dol, right, in a match against Google's AlphaGo, during the DeepMind Challenge Match in March 2016.

Photographer: Google via Getty Images

Contrast that with how officials handled a project in Fuzhou. Government leaders from that southeastern Chinese city of more than seven million people held an event on June 26. Venture capital firm Sequoia Capital helped organize the event, which included representatives from Dell Inc., International Business Machines Corp., and Lenovo Group Ltd. A spokeswoman for Dell characterized the event as the nation's first "Healthcare and Medical Big Data Ecology Summit."

The summit involved a vast handover of data. At the press conference, city officials shared 80 exabytes worth of heart ultrasound videos, according to one company that participated. With the massive data set, some of the companies were tasked with building an AI tool that could identify heart disease, ideally at rates above medical experts. They were asked to turn it around by the fall.

"The Chinese AI market is moving fast because people are willing to take risks and adopt new technology more quickly in a fast-growing economy," said Chris Nicholson, co-founder of Skymind Inc., one of the companies involved in the event. "AI needs big data, and Chinese regulators are now on the side of making data accessible to accelerate AI."

Representatives from IBM and Lenovo declined to comment. Last month, Lenovo Chief Executive Officer Yang Yuanqing said he will invest $1 billion into AI research over the next three to four years.

Along with health, finance can be a lucrative business in China. In part, that's because the country has far less stringent privacy regulations and concerns than the West. For decades the government has kept a secret file on nearly everyone in China, called a dangan. The records run the gamut from health reports and school marks to personality assessments and club records. This dossier can often decide a citizen's future -- whether they can score a promotion or be allowed to reside in the city where they work.

U.S. companies that partner in China stress that AI efforts, like those in Fuzhou, are for non-military purposes. Luo, the computer science professor, said most national security research efforts are relegated to select university partners. However, one stated goal of the government's national plan is greater integration of civilian, academic, and military development of AI.

The government also revealed in 2015 that it was building a nationwide database that would score citizens on their trustworthiness, which in turn would feed into their credit ratings. Last year, China Premier Li Keqiang said 80 percent of the nation's data was in public hands and would be opened to the public, with an unspecified pledge to protect privacy. The raging popularity of live video feeds -- where Chinese internet users spend hours watching daily footage caught by surveillance video -- shows the gulf in privacy concerns between the country and the West. Embraced in China, the security cameras also reel in mountains of valuable data.

Some machine-learning researchers dispel the idea that data can be a panacea. Advanced AI operations, like DeepMind, often rely on "simulated" data, co-founder Demis Hassabis explained during a trip to China in May. DeepMind has used Atari video games to train its systems. Engineers building self-driving car software frequently test it this way, simulating stretches of highway or crashes virtually.

"Sure, there might be data sets you could get access to in China that you couldnt in the U.S.," said Oren Etzioni, director of the Allen Institute for Artificial Intelligence. "But that does not put them in a terrific position vis-a-vis AI. Its still a question of the algorithm, the insights and the research."

Historically, the country has been a lightweight in those regards. It's suffered through a "brain drain," a flight of academics and specialists out of the country. "China currently has a talent shortage when it comes to top-tier AI experts," said Connie Chan, a partner at venture capital firm Andreessen Horowitz. "While there have been more deep learning papers published in China than the U.S. since 2016, those papers have not been as influential as those from the U.S. and U.K."

But China is gaining ground. The country is producing more top engineers, who craft AI algorithms for U.S. companies and, increasingly, Chinese ones. Chinese universities and private firms are actively wooing AI researchers from across the globe. Luo, the University of Rochester professor, said top researchers can get offers of $500,000 or more in annual compensation from U.S. tech companies, while Chinese companies will often double that.

Meanwhile, China's homegrown talent is starting to shine. A popular benchmark in AI research is the ImageNet competition, an annual challenge to devise a visual recognition system with the lowest error rate. Like last year, this year's top winners were dominated by researchers from China, including a team from the Ministry of Public Security's Third Research Institute.

Relentless pollution in metropolises like Beijing and Shanghai has hurt Chinese companies' ability to nab top tech talent. In response, some are opening shop in Silicon Valley. Tencent recently set up an AI research lab in Seattle.

Photographer: David Paul Morris/Bloomberg

Baidu managed to pull a marquee name from that city. The firm recruited Qi Lu, one of Microsoft's top executives, to return to China to lead the search giant's push into AI. He touted the technology's potential for enhancing China's "national strength" and cited a figure that nearly half of the bountiful academic research on the subject globally has ethnically Chinese authors, using the Mandarin term "huaren" -- a term for ethnic Chinese that echoes government rhetoric.

"China has structural advantages, because China can acquire more and better data to power AI development," Lu told the cheering crowd of Chinese developers. "We must have the chance to lead the world!"

Continue reading here:

China's Plan for World Domination in AI Isn't So Crazy After All - Bloomberg

iFLYTEK and Hancom Group Launch Accufly.AI to Help Combat the Coronavirus Pandemic – Business Wire

HEFEI, China--(BUSINESS WIRE)--Asia's leading artificial intelligence (AI) and speech technology company, iFLYTEK, has partnered with the South Korean technology company Hancom Group to launch the joint venture Accufly.AI in South Korea. Accufly.AI launched its AI Outbound Calling System to assist the South Korean government at no cost and provide information to individuals who have been in close contact with, or have had, a confirmed coronavirus case.

The AI Outbound Calling System is a smart, integrated system based on iFLYTEK solutions and Hancom Group's Korean-language speech recognition. The technology saves manpower, assists in the automatic distribution of important information to potential carriers of the virus, and provides a mechanism for following up with recovered patients. iFLYTEK is looking to make this technology available in markets around the world, including North America and Europe.

"The battle against the Covid-19 epidemic requires collective wisdom and sharing of best practices from the international community," said iFLYTEK Chief Financial Officer Mr. Dawei Duan. "Given the challenges we all face, iFLYTEK is continuously looking at ways to provide technologies and support to partners around the world, including in the United States, Canada, the United Kingdom, New Zealand, and Australia."

In February, the Hancom Group donated 20,000 protective masks and five thermal temperature-screening devices to Anhui to help fight the epidemic.

iFLYTEK's AI technology helped stem the spread of the virus in China and will help the South Korean government conduct follow-up, identify patients with symptoms, manage self-isolated residents, and reduce the risk of cross-infection. The system also will help the government distribute important health updates, increase public awareness, and bring communities together.

"iFLYTEK is working to create a better world through artificial intelligence and seeks to do so on a global scale. iFLYTEK will maximize its technical advantages in smart services to support the international community in defeating the coronavirus," said Mr. Duan.

More:

iFLYTEK and Hancom Group Launch Accufly.AI to Help Combat the Coronavirus Pandemic - Business Wire

Heres why AI didnt save us from COVID-19 – The Next Web

When the COVID-19 pandemic began we were all so full of hope. We assumed our technology would save us from a disease that could be stymied by such modest steps as washing our hands and wearing face masks. We were so sure that artificial intelligence would become our champion in a trial by combat with the coronavirus that we abandoned any pretense of fear the moment the curve appeared to flatten in April and May. We let our guard down.

Pundits and experts back in January and February very carefully explained how AI solutions such as contact tracing, predictive modeling, and chemical discovery would lead to a truncated pandemic. Didn't most of us figure we'd be back to business as usual by mid to late June?

But June turned to July, and now we're seeing record case numbers on a daily basis. August looks to be brutal. Despite playing home to nearly all of the world's largest technology companies, the US has become the epicenter of the outbreak. Other nations with advanced AI programs aren't necessarily faring much better.

Among the countries experts would consider competitive with the US in the field of AI, nearly all have lost their handle on the outbreak: China, Russia, the UK, South Korea, etc. It's bad news all the way down.

Figuring out why requires a combination of retrospect and patience. We're not far enough through the pandemic to understand exactly what's gone wrong; this thing's far too alive and kicking for a post-mortem. But we can certainly see where AI hype is currently leading us astray.

Among the many early promises made by the tech community and the governments depending on it was the idea that contact tracing would make targeted reopenings possible. The big idea was that AI could sort out who else a person who contracted COVID-19 may have infected. More magical AI would then figure out how to keep the healthies away from the sicks, and we'd be able to both quarantine and open businesses at the same time.

This is an example of the disconnect between AI devs and general reality. A system wherein people allow the government to track their every movement can only work with complete participation from a population with absolute faith in their government. Worse, the more infections you have, the less reliable contact tracing becomes.

That's why only a handful of small countries even went so far as to try it, and as far as we know there isn't any current data supporting the idea that this approach actually mitigates the spread of COVID-19.

The next big area where AI was supposed to help was in modeling. For a time, the entire technology news cycle was dominated by headlines declaring that AI had first discovered the COVID-19 threat and machine learning would determine exactly how the virus would spread.

Unfortunately, modeling a pandemic isn't an exact science. You can't train a neural network on data from past COVID-19 pandemics because there aren't any; this coronavirus is novel. That means our models started with guesses and were subsequently trained on up-to-date data from the unfolding pandemic.

To put this in perspective: using on-the-fly data to model a novel pandemic is the equivalent of knowing you have at least a million dollars in pennies, but only being able to talk about the amount you've physically counted in any given period of time.

In other words: our AI models haven't proven much better than our best guesses. And they can only show us a tiny part of the overall picture, because we're only working with the data we can actually see. Up to 80 percent of COVID-19 carriers are asymptomatic, and a mere fraction of all possible carriers have been tested.

What about testing? Didn't AI make testing easier? Kind of, but not really. AI has made a lot of things easier for the medical community, but perhaps not in the way you think. There isn't a test bot that you can pour a vial of blood into to get an instant green or red "infected" indicator. The best we've got, for the most part, is background AI that generally helps the medical world run.

Sure, there are some targeted solutions from the ML community helping frontline professionals deal with the pandemic. We're not taking anything away from the thousands of developers working hard to solve problems. But, realistically, AI isn't providing game-changer solutions that face up against major pandemic problems.

It's making sure truck drivers know which supplies to deliver first. It's helping nurses autocorrect their emails. It's working traffic lights in some cities, which helps with getting ambulances and emergency responders around.

And it's even making pandemic life easier for regular folks too. The fact that you're still getting packages (even if they're delayed) is a testament to the power of AI. Without algorithms, Amazon and its delivery pipeline would not be able to maintain the infrastructure necessary to ship you a set of fuzzy bunny slippers in the middle of a pandemic.

AI is useful during the pandemic, but it's not out there finding the vaccine. We've spent the last few years here at TNW talking about how AI will one day make chemical compound discovery a trivial matter. Surely finding the proper sequence of proteins or figuring out exactly how to mutate a COVID-killer virus is all in a day's work for today's AI systems, right? Not so much.

Despite the fact that Google and NASA told us we'd reached quantum supremacy last year, we haven't seen useful quantum algorithms running on cloud-accessible quantum computers like we've been told we would. Scientists and researchers almost always tout chemical discovery as one of the hard problems that quantum computers can solve. But nobody knows when. What we do know is that today, in 2020, humans are still painstakingly building a vaccine. When it's finished, it'll be squishy meatbags who get the credit, not quantum robots.

In times of peace, every new weapon looks like the end-all-be-all solution until you test it. We haven't had many giant global emergencies to test our modern AI on. It's done well with relatively small-scale catastrophes like hurricanes and wildfires, but it's been relegated to the rear echelon of the pandemic fight because AI simply isn't mature enough to think outside of the boxes we build it in yet.

At the end of the day, most of our pandemic problems are human problems. The science is extremely clear: wear a mask, stay more than six feet away from each other, and wash your hands. This isn't something AI can directly help us with.

But that doesn't mean AI isn't important. The lessons learned by the field this year will go a long way toward building more effective solutions in the years to come. Here's hoping this pandemic doesn't last long enough for these yet-undeveloped systems to become important in the fight against COVID-19.

Published July 24, 2020 19:21 UTC

Read the original here:

Heres why AI didnt save us from COVID-19 - The Next Web

How Hospitals Are Using AI to Battle Covid-19 – Harvard Business Review

Executive Summary

The spread of Covid-19 is stretching operational systems in health care and beyond. The reason is both simple and terrifying: Our economy and health care systems are geared to handle linear, incremental demand, while the virus grows at an exponential rate. Our national health system cannot keep up with this kind of explosive demand without the rapid and large-scale adoption of digital operating models. While we race to dampen the virus's spread, we can optimize our response mechanisms, digitizing as many steps as possible. Here's how some hospitals are employing artificial intelligence to handle the surge of patients.

On Monday March 9, in an effort to address soaring patient demand in Boston, Partners HealthCare went live with a hotline for patients, clinicians, and anyone else with questions and concerns about Covid-19. The goals are to identify and reassure the people who do not need additional care (the vast majority of callers), to direct people with less serious symptoms to relevant information and virtual care options, and to direct the smaller number of high-risk and higher-acuity patients to the most appropriate resources, including testing sites, newly created respiratory illness clinics, or in certain cases, emergency departments. As the hotline became overwhelmed, the average wait time peaked at 30 minutes. Many callers gave up before they could speak with the expert team of nurses staffing the hotline. We were missing opportunities to facilitate pre-hospital triage to get the patient to the right care setting at the right time.

The Partners team, led by Lee Schwamm, Haipeng (Mark) Zhang, and Adam Landman, began considering technology options to address the growing need for patient self-triage, including interactive voice response systems and chatbots. We connected with Providence St. Joseph Health system in Seattle, which served some of the country's first Covid-19 patients in early March. In collaboration with Microsoft, Providence built an online screening and triage tool that could rapidly differentiate between those who might really be sick with Covid-19 and those who appear to be suffering from less threatening ailments. In its first week, Providence's tool served more than 40,000 patients, delivering care at an unprecedented scale.

Our team saw potential for this type of AI-based solution and worked to make a similar tool available to our patient population. The Partners Covid-19 Screener provides a simple, straightforward chat interface, presenting patients with a series of questions based on content from the U.S. Centers for Disease Control and Prevention (CDC) and Partners HealthCare experts. In this way, it too can screen enormous numbers of people and rapidly differentiate between those who might really be sick with Covid-19 and those who are likely to be suffering from less threatening ailments. We anticipate this AI bot will alleviate high volumes of patient traffic to the hotline, and extend and stratify the system's care in ways that would have been unimaginable until recently. Development is now under way to facilitate triage of patients with symptoms to the most appropriate care setting, including virtual urgent care, primary care providers, respiratory illness clinics, or the emergency department. Most importantly, the chatbot can also serve as a near-instantaneous dissemination method for supporting our widely distributed providers, as we have seen the need for frequent clinical triage algorithm updates based on a rapidly changing landscape.
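Partners has not published the screener's decision logic, but a questionnaire-driven triage chat of the kind described can be sketched as a simple decision flow. The questions, thresholds, and routing categories below are illustrative placeholders, not the actual Partners or CDC protocol.

```python
from dataclasses import dataclass

@dataclass
class Responses:
    """Answers collected by the chat interface (illustrative fields only)."""
    severe_symptoms: bool      # e.g. trouble breathing, persistent chest pain
    fever_or_cough: bool
    known_exposure: bool
    high_risk_condition: bool  # e.g. immunocompromised, chronic lung disease

def triage(r: Responses) -> str:
    """Route the caller to a care setting based on screening answers."""
    if r.severe_symptoms:
        return "emergency department"
    if r.fever_or_cough and (r.high_risk_condition or r.known_exposure):
        return "respiratory illness clinic or testing site"
    if r.fever_or_cough:
        return "virtual urgent care"
    if r.known_exposure:
        return "self-monitoring guidance"
    return "reassurance: no additional care needed"

# Example: a caller with cough and a known exposure but no severe symptoms.
print(triage(Responses(severe_symptoms=False, fever_or_cough=True,
                       known_exposure=True, high_risk_condition=False)))
```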

Similarly, at both Brigham and Women's Hospital and at Massachusetts General Hospital, physician researchers are exploring the potential use of intelligent robots developed at Boston Dynamics and MIT to deploy in Covid surge clinics and inpatient wards to perform tasks (obtaining vital signs or delivering medication) that would otherwise require human contact, in an effort to mitigate disease transmission.

Several governments and hospital systems around the world have leveraged AI-powered sensors to support triage in sophisticated ways. Chinese technology company Baidu developed a no-contact infrared sensor system to quickly single out individuals with a fever, even in crowds. Beijing's Qinghe railway station is equipped with this system to identify potentially contagious individuals, replacing a cumbersome manual screening process. Similarly, Florida's Tampa General Hospital deployed an AI system in collaboration with Care.ai at its entrances to intercept individuals with potential Covid-19 symptoms from visiting patients. Through cameras positioned at entrances, the technology conducts a facial thermal scan and picks up on other symptoms, including sweat and discoloration, to ward off visitors with fever.

Beyond screening, AI is being used to monitor Covid-19 symptoms, provide decision support for CT scans, and automate hospital operations. Meanwhile, Zhongnan Hospital in China uses an AI-driven CT scan interpreter that identifies Covid-19 when radiologists aren't available. China's Wuhan Wuchang Hospital established a smart field hospital staffed largely by robots. Patient vital signs were monitored using connected thermometers and bracelet-like devices. Intelligent robots delivered medicine and food to patients, alleviating physician exposure to the virus and easing the workload of health care workers experiencing exhaustion. And in South Korea, the government released an app allowing users to self-report symptoms, alerting them if they leave a quarantine zone in order to curb the impact of super-spreaders who would otherwise go on to infect large populations.

The spread of Covid-19 is stretching operational systems in health care and beyond. We have seen shortages of everything, from masks and gloves to ventilators, and from emergency room capacity to ICU beds to the speed and reliability of internet connectivity. The reason is both simple and terrifying: Our economy and health care systems are geared to handle linear, incremental demand, while the virus grows at an exponential rate. Our national health system cannot keep up with this kind of explosive demand without the rapid and large-scale adoption of digital operating models.

While we race to dampen the virus's spread, we can optimize our response mechanisms, digitizing as many steps as possible. This is because traditional processes, those that rely on people to function in the critical path of signal processing, are constrained by the rate at which we can train, organize, and deploy human labor. Moreover, traditional processes deliver decreasing returns as they scale. On the other hand, digital systems can be scaled up without such constraints, at virtually infinite rates. The only theoretical bottlenecks are computing power and storage capacity, and we have plenty of both. Digital systems can keep pace with exponential growth.

Importantly, AI for health care must be balanced by the appropriate level of human clinical expertise for final decision-making to ensure we are delivering high-quality, safe care. In many cases, human clinical reasoning and decision making cannot be easily replaced by AI; rather, AI is a decision aid that helps humans improve effectiveness and efficiency.

Digital transformation in health care has been lagging other industries. Our response to Covid today has accelerated the adoption and scaling of virtual and AI tools. From the AI bots deployed by Providence and Partners HealthCare to the Smart Field Hospital in Wuhan, rapid digital transformation is being employed to tackle the exponentially growing Covid threat. We hope and anticipate that after Covid-19 settles, we will have transformed the way we deliver health care in the future.

Read the original post:

How Hospitals Are Using AI to Battle Covid-19 - Harvard Business Review

Artificial Intelligence’s Power, and Risks, Explored in New Report – Education Week

Picture this: a small group of middle school students are learning about ancient Egypt, so they strap on virtual reality headsets and, with the assistance of an artificial intelligence tour guide, begin to explore the Pyramids of Giza.

The teacher, also journeying to one of the oldest known civilizations via a VR headset, has assigned students to gather information to write short essays. During the tour, the AI guide fields questions from students, points them to specific artifacts, and discusses what they see.

In preparing the AI-powered lesson on Egypt, the teacher would have worked beforehand with the AI program to craft a lesson plan that not only dives deep into the subject, but also figures out how to keep the group moving through the virtual field trip and how to create more equal participation during the discussion. In that scenario, the AI listens, observes, and interacts naturally to enhance a group learning experience, and to make a teacher's job easier.

That classroom scenario doesn't quite exist yet, but it's one example of AI's potential to transform students' academic experiences, as described in a new report that also warns of the risks to privacy and of students being treated unfairly by the technology's algorithms. Experts in the field of K-12 and AI say the day is coming when teachers will engage with AI in a way that goes beyond simply reading metrics off a dashboard, forming actual partnerships that achieve end goals for students together.

The recently released report is from the Center for Integrative Research in Computing and Learning Sciences, a hub for National Science Foundation-funded projects that focus on cyberlearning. It looks at how AI could shape K-12 education in the future, along with pressing questions centered on privacy, bias, transparency, and fairness.

The report summarizes a two-day online panel that featured 22 experts in AI and learning and provides a set of recommendations for school leaders, policymakers, academics and education vendors to consider as general AI research progresses in leaps and bounds and technology is integrated into classrooms at an accelerated pace due to COVID-19.

It also provides some concrete visions for new and expanded uses of AI in K-12, from reimagined automated essay scoring and next-level assessments to AI used in combination with virtual reality and voice- or gesture-based systems.

Researchers expect it to be about five to 10 years before AI can work in lockstep with teachers as classroom partners, in a process they have dubbed "orchestration."

That term describes an educator offloading time-consuming classroom tasks to AI, such as forming groups, creating lesson plans, and helping students work together to revise essays, and eventually monitoring progress toward bigger goals.

The report cautions, however, that experts are concerned about the tendency to overpromise what AI can do and to overgeneralize beyond today's limited capabilities.

The researchers also touched on longstanding risks related to AI and education, such as privacy, security, bias, transparency, and fairness and went further to discuss design risks and how poor design practices of AI systems could unintentionally harm users.

While the fusion of AI and K-12 is far from new, the technology's impact in the classroom so far can be measured only as small scale, according to the report.

That's set to potentially change, and researchers who participated in the CIRCLS online panel made clear that decision-makers need to make sure they don't delay when it comes to planning and ensuring AI in K-12 is used in a manner that is equitable, ethical, and effective, and to mitigating weaknesses, risks, and potential harm.

"We do not yet know all of the uses and applications of AI that will emerge; new innovations are appearing regularly and the most consequential applications of AI to education are likely not even invented yet," the report says. "In a future where technology is ubiquitous in education, AI will also become pervasive in learning, teaching, and assessment. Now is the time to begin responding to the novel capabilities and challenges this will bring."

Continued here:

Artificial Intelligence's Power, and Risks, Explored in New Report - Education Week

AI Could Revolutionize War as Much as Nukes – WIRED

In 1899, the world's most powerful nations signed a treaty at The Hague that banned military use of aircraft, fearing the emerging technology's destructive power. Five years later the moratorium was allowed to expire, and before long aircraft were helping to enable the slaughter of World War I. "Some technologies are so powerful as to be irresistible," says Greg Allen, a fellow at the Center for a New American Security, a non-partisan Washington, DC think tank. "Militaries around the world have essentially come to the same conclusion with respect to artificial intelligence."

Allen is coauthor of a new 132-page report on the effect of artificial intelligence on national security. One of its conclusions is that the impact of technologies such as autonomous robots on war and international relations could rival that of nuclear weapons. The report was produced by Harvard's Belfer Center for Science and International Affairs at the request of IARPA, the research agency of the Office of the Director of National Intelligence. It lays out why technologies like drones with bird-like agility, robot hackers, and software that generates photo-real fake video are on track to make the American military and its rivals much more powerful.

New technologies like those can be expected to bring with them a series of excruciating moral, political, and diplomatic choices for America and other nations. Building up a new breed of military equipment using artificial intelligence is one thing; deciding what uses of this new power are acceptable is another. The report recommends that the US start considering what uses of AI in war should be restricted using international treaties.

The US military has been funding, testing, and deploying various shades of machine intelligence for a long time. In 2001, Congress even mandated that one-third of ground combat vehicles should be uncrewed by 2015, a target that has been missed. But the Harvard report argues that recent, rapid progress in artificial intelligence, which has invigorated companies such as Google and Amazon, is poised to bring an unprecedented surge in military innovation. "Even if all progress in basic AI research and development were to stop, we would still have five or 10 years of applied research," Allen says.

In the near term, America's strong public and private investment in AI should give it new ways to cement its position as the world's leading military power, the Harvard report says. For example, nimbler, more intelligent ground and aerial robots that can support or work alongside troops would build on the edge in drones and uncrewed ground vehicles that has been crucial to the US in Iraq and Afghanistan. That should mean any given mission requires fewer human soldiers, if any at all.

The report also says that the US should soon be able to significantly expand its powers of attack and defense in cyberwar by automating work like probing and targeting enemy networks or crafting fake information. Last summer, to test automation in cyberwar, Darpa staged a contest in which seven bots attacked each other while also patching their own flaws.

As time goes on, improvements in AI and related technology may also shake up the balance of international power by making it easier for smaller nations and organizations to threaten big powers like the US. Nuclear weapons may be easier than ever to build, but they still require resources, technologies, and expertise in relatively short supply. Code and digital data tend to get cheap, or end up spreading around for free, fast. Machine learning has become widely used, and image and facial recognition now crop up in science fair projects.

The Harvard report warns that commoditization of technologies such as drone delivery and autonomous passenger vehicles could become powerful tools of asymmetric warfare. ISIS has already started using consumer quadcopters to drop grenades on opposing forces. Similarly, techniques developed to automate cyberwar can probably be expected to find their way into the vibrant black market in hacking tools and services.

You could be forgiven for starting to sweat at the thought of nation states fielding armies of robots that decide for themselves whether to kill. Some people who have helped build up machine learning and artificial intelligence already are. More than 3,000 researchers, scientists, and executives from companies including Microsoft and Google signed a 2015 letter to the Obama administration asking for a ban on autonomous weapons. "I think most people would be very uncomfortable with the idea that you would launch a fully autonomous system that would decide when and if to kill someone," says Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence and a signatory to the 2015 letter. He concedes, though, that it might just take one country deciding to field killer robots to set others changing their minds about autonomous weapons. "Perhaps a more realistic scenario is that countries do have them, and abide by a strict treaty on their use," he says. In 2012, the Department of Defense set a temporary policy requiring a human to be involved in decisions to use lethal force; it was updated to be permanent in May this year.

The Harvard report recommends that the National Security Council, DoD, and State Department start studying now what internationally agreed-on limits ought to be imposed on AI. Miles Brundage, who researches the impacts of AI on society at the University of Oxford, says there's reason to think that AI diplomacy can be effective, if countries can avoid getting trapped in the idea that the technology is a race in which there will be one winner. "One concern is that if we put such a high premium on being first, then things like safety and ethics will go by the wayside," he says. "We saw in the various historical arms races that collaboration and dialog can pay dividends."

Indeed, the fact that there are only a handful of nuclear states in the world is proof that very powerful military technologies are not always irresistible. "Nuclear weapons have proven that states have the ability to say 'I don't even want to have this technology,'" Allen says. Still, the many potential uses of AI in national security suggest that the self-restraint of the US, its allies, and its adversaries is set to get quite a workout.

UPDATE 12:50 pm ET 07/19/17: An earlier version of this story incorrectly said the Department of Defense's directive on autonomous weapons was due to expire this year.

Follow this link:

AI Could Revolutionize War as Much as Nukes - WIRED

An IBM AI Invented Perfumes That’ll Soon Sell in 4,000 Stores

Eau de AI

We already knew that IBM’s artificial intelligence (AI) Watson was a “Jeopardy” champion that dabbled in cancer diagnosis and life insurance.

Now, according to a new story in Vox, a cousin of Watson has accomplished an even finer task: creating perfumes. Soon, these AI-invented perfumes will go on sale in about 4,000 locations in Brazil.

Smell Curve

Fragrances have traditionally been created by sought-after perfumers. But Symrise, a German fragrance company with clients including Avon and Coty, recently struck a deal with IBM to see how its AI tools could be used to modernize the process.

IBM created an algorithm — its name is Philyra, according to Datanami — that studies fragrance formulas and customer data, then spits out new perfumes inspired by those training materials.

Nose Warmer

The first results of the collaboration are two scents, and the AI-invented perfumes will soon go on sale at O Boticário, a prominent Brazilian beauty chain. Spokespeople declined to confirm to Vox precisely which fragrances were invented by the AI.

It’s a sure sign, though, that AI creations are starting to leak out into the world of consumer products — and next time you sniff a sampler, it’s possible that it wasn’t formulated by a human at all.

READ MORE: Is AI the Future of Perfume? IBM Is Betting on It [Vox]

More on IBM AI: IBM’s New AI Can Predict Psychosis in Your Speech

Visit link:

An IBM AI Invented Perfumes That’ll Soon Sell in 4,000 Stores

What Salesforce Einstein teaches us about enterprise AI – VentureBeat

Every business has customers. Every customer needs care. That's why CRM is so critical to enterprises, but between incomplete data and clunky workflows, sales and marketing operations at most companies are less than optimal.

At the same time, companies that aren't Google or Facebook don't have the billion-dollar R&D budgets to build out AI teams to take away our human inefficiencies. Even companies with the right technical talent don't have the petabytes of data that the tech titans use to train cutting-edge neural network models.

Salesforce hopes to plug this AI knowledge gap with Einstein. According to Chief Scientist Richard Socher, Einstein is an AI layer, not a standalone product, that infuses AI features and capabilities across all the Salesforce Clouds.

The 150,000+ companies who already use Salesforce should be able to simply flip a switch and deploy AI capabilities to their organizations. Organizations with data science and machine learning teams of their own can extend the base functionality through predictive APIs such as Predictive Vision and Predictive Sentiment Services, which allow companies to understand how their products feature in images and video and how consumers feel about them.

The improvements are already palpable. According to Socher, Salesforce Marketing Cloud's predictive audiences feature helps marketers hone in on high-value outreach as well as re-engage users who might be in danger of unsubscribing. The technology has led to an average 25% lift on clicks and opens. Customers of Salesforce's Sales Cloud have seen a 300% increase in conversions from leads to opportunities with predictive lead scoring, while customers of Commerce Cloud have seen a 7-15% increase in revenue per site visitor.
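Salesforce hasn't published how Einstein's lead scoring works internally. Purely to illustrate the general idea behind predictive lead scoring (rank leads by their predicted probability of converting to an opportunity, learned from historical CRM records), here is a minimal scikit-learn sketch; the field names, toy data, and model choice are assumptions, not Einstein's implementation.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Toy historical lead records; a real CRM export would have far richer fields.
leads = pd.DataFrame({
    "email_opens":  [0, 3, 7, 1, 9, 4, 0, 6],
    "site_visits":  [1, 2, 8, 0, 6, 5, 1, 7],
    "company_size": [10, 200, 5000, 15, 800, 300, 8, 1200],
    "converted":    [0, 0, 1, 0, 1, 1, 0, 1],   # did the lead become an opportunity?
})

X, y = leads.drop(columns="converted"), leads["converted"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y
)

# Fit a simple conversion-probability model on historical leads.
model = LogisticRegression().fit(X_train, y_train)

# Score held-out leads from 0-100 so reps can prioritize outreach.
scores = (model.predict_proba(X_test)[:, 1] * 100).round(1)
print(list(zip(X_test.index.tolist(), scores.tolist())))
```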

Achieving these results has not been cheap. Salesforce's machine learning and AI buying spree includes RelateIQ ($390 million), BeyondCore ($110 million), and PredictionIO ($58 million), as well as deep learning specialist MetaMind, of which Socher was previously founder and CEO/CTO. Marc Benioff has spent over $4 billion to acquire the right talent and tech in 2016.

Even with all the right money and the right people, rolling out AI for enterprises is fraught with peril due to competition and high expectations. Gartner analyst Todd Berkowitz pointed out that Einstein's capabilities were not nearly as sophisticated as standalone solutions on the market. Other critics say the technology is at least a year and a half from being fully baked.

Infer is one of those aforementioned standalone solutions offering predictive analytics for sales and marketing, putting it in direct competition with Salesforce. In a detailed article about the current AI hype, CEO Vik Singh argues that big companies like Salesforce are making machine learning feel like AWS infrastructure, which won't result in sticky adoption. Singh adds that "machine learning is not like AWS, which you can just spin up and magically connect to some system."

Chief Scientist Socher acknowledges that challenges exist, but believes they are surmountable.

Communication is at the core of CRM, but while computers have surpassed humans in many key computer vision tasks, natural language processing (NLP) and natural language understanding (NLU) approaches fall short of being performant in high-stakes enterprise environments.

The problem with most neural network approaches is that they train models on a single task and a single data type to solve a narrow problem. Conversation, on the other hand, requires different types of functionality. "You have to be able to understand social cues and the visual world, reason logically, and retrieve facts. Even the motor cortex appears to be relevant for language understanding," explains Socher. "You cannot get to intelligent NLP without tackling multi-task approaches."

That's why the Salesforce AI Research team is innovating on a joint many-task learning approach that leverages transfer learning, where a neural network applies knowledge of one domain to other domains. In theory, understanding linguistic morphology should also accelerate understanding of semantics and syntax.

In practice, Socher and his deep learning research team have achieved state-of-the-art results on academic benchmarks for named entity recognition (can you identify key objects, locations, and persons?) and semantic similarity (can you identify words and phrases that are synonyms?). Their approach can solve five NLP tasks (chunking, dependency parsing, semantic relatedness, textual entailment, and part-of-speech tagging) all at once, and it also builds in a character model to handle incomplete, misspelled, or unknown words.
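To make the joint many-task idea concrete, here is a minimal sketch, not Salesforce's actual architecture: a single shared encoder feeds several task-specific heads, and the per-task losses are summed so gradients from every task update the shared representation. The dimensions, vocabulary size, and the two example tasks (part-of-speech tagging and chunking) are illustrative assumptions.

```python
# Minimal sketch of joint multi-task learning for NLP: a shared encoder
# feeds task-specific heads, and the losses are summed so one model learns
# all tasks at once. Illustrative only; all sizes and tasks are made up.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ManyTaskTagger(nn.Module):
    def __init__(self, vocab_size=5000, emb_dim=64, hidden=128,
                 n_pos_tags=17, n_chunk_tags=9):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.LSTM(emb_dim, hidden, batch_first=True,
                               bidirectional=True)
        # One head per task, all reading the same shared representation.
        self.pos_head = nn.Linear(2 * hidden, n_pos_tags)      # part-of-speech
        self.chunk_head = nn.Linear(2 * hidden, n_chunk_tags)  # chunking

    def forward(self, token_ids):
        states, _ = self.encoder(self.embed(token_ids))
        return self.pos_head(states), self.chunk_head(states)

model = ManyTaskTagger()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Fake batch: 8 sentences of 20 tokens, with labels for both tasks.
tokens = torch.randint(0, 5000, (8, 20))
pos_labels = torch.randint(0, 17, (8, 20))
chunk_labels = torch.randint(0, 9, (8, 20))

optimizer.zero_grad()
pos_logits, chunk_logits = model(tokens)
loss = (F.cross_entropy(pos_logits.reshape(-1, 17), pos_labels.reshape(-1)) +
        F.cross_entropy(chunk_logits.reshape(-1, 9), chunk_labels.reshape(-1)))
loss.backward()   # gradients from both tasks flow into the shared encoder
optimizer.step()
```

The same pattern extends to more tasks by adding further heads and datasets; the research question is how to weight and schedule them so the tasks help rather than hurt each other.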

Socher believes that AI researchers will achieve transfer learning capabilities in more comprehensive ways in 2017, and that speech recognition will be embedded in many more aspects of our lives. "Right now consumers are used to asking Siri about the weather tomorrow, but we want to enable people to ask natural questions about their own unique data."

For Salesforce Einstein, Socher is building a comprehensive Q&A system on top of multi-task learning models. To learn more about Salesforces vision for AI, you can hear Socher speak at the upcoming AI By The Bay conference in San Francisco (VentureBeat discount code VB20 for 20% off).

Solving difficult research problems is only step one. "What's surprising is that you may have solved a critical research problem, but operationalizing your work for customers requires so much more engineering work and talented coordination across the company," Socher reveals.

"Salesforce has hundreds of thousands of customers, each with their own analyses and data," he explains. "You have to solve the problem at a meta level and abstract away all the complexity of how you do it for each customer. At the same time, people want to modify and customize the functionality to predict anything they want."

Socher identifies three key phases of enterprise AI rollout: data, algorithms, and workflows. Data happens to be the first and biggest hurdle for many companies to clear. In theory, companies have the right data, but in practice the data is often distributed across too many places, doesn't have the right legal structure, is unlabeled, or is simply not accessible.

Hiring top talent is also non-trivial, as computer scientists like to say. Different types of AI problems carry different levels of complexity. While some AI applications are simpler, challenges with unstructured data such as text and vision mean the experts who can handle them are rare and in demand.

The most challenging piece is the last part: workflows. What's the point of fancy AI research if nobody uses your work? Socher emphasizes that you have to think carefully about how to empower users and customers with your AI features. This work is very complex but also very specific: workflow integration for sales processes is very different from workflow integration for self-driving cars.

Until we invent AI that invents AI, iterating on our data, research, and operations is a never-ending job for us humans. Einstein will never be fully complete. "You can always improve workflows and make them more efficient," Socher concludes.

This article appeared originally at Topbots.

Mariya Yao is the Head of R&D at Topbots, a site devoted to chatbots and AI.

Read this article:

What Salesforce Einstein teaches us about enterprise AI - VentureBeat

AI will boost global GDP by nearly $16 trillion by 2030 - with much of the gains in China – Quartz

Much has already been made about how artificial intelligence is going to transform our lives, ranging from visions of the future in which robots make humans obsolete to utopias in which technology solves intractable problems and frees people up to pursue their passions. Consultancy firm PwC ran the numbers and came up with a relatively rosy scenario for the impact AI will have on the global economy. By 2030, global GDP could increase by 14%, or $15.7 trillion, because of AI, the firm said in a report today (pdf).

Almost half of these economic gains will accrue to China, where AI is projected to give the economy a 26% boost over the next 13 years, the equivalent of an extra $7 trillion in GDP. North America can expect a 14.5% increase in GDP, worth $3.7 trillion.

According to PwC, North America will get the biggest economic boost in the next few years as consumers and industries are more ready to incorporate AI. By the mid-2020s, however, China will rise to the top. Since China's economy is heavy on manufacturing, there is a lot of potential for technology to boost productivity, but it will take time for new technology and the necessary expertise to come up to speed. When this happens, AI-enabled technologies will be exported from China to North America. What's more, Chinese firms tend to re-invest more capital than North American and European ones, further boosting business growth in AI.

PwC's study defines four types of AI: automated intelligence, which performs tasks on its own; assisted intelligence, which helps people perform tasks faster and better; augmented intelligence, which helps people make better decisions; and autonomous intelligence, which automates decision-making entirely. Its forecasts isolate the potential growth from AI, keeping all other factors in the economy equal.

A large part of the forecast GDP gains, $6.6 trillion, is expected to come from increased labor productivity, with businesses automating processes or using AI to assist their existing workforce. This suggests PwC believes AI will generate a productivity boost bigger than previous technological breakthroughs; despite recent advancements, global productivity growth is very low, and economists are puzzled about how to get out of this trap.

The rest of the projected economic growth would come from increased consumer demand for personalized and AI-enhanced products and services. The sectors that have the most to gain on this front are health care, financial services, and the auto industry.
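As a rough sanity check of how the report's headline figures fit together, the snippet below derives the consumer-demand share and the implied baseline GDP levels from the numbers quoted above; the baselines are inferred for illustration and are not stated by PwC.

```python
# Back-of-the-envelope check of the figures cited above. The totals and
# percentages come from the article; the "implied baseline" GDP numbers
# are derived here, not stated in the PwC report.
total_gain = 15.7          # $ trillion added to global GDP by 2030
productivity_gain = 6.6    # $ trillion from labor-productivity improvements
demand_gain = total_gain - productivity_gain
print(f"Implied consumer-demand share: ${demand_gain:.1f}T")   # ~ $9.1T

for region, gain, pct in [("China", 7.0, 0.26), ("North America", 3.7, 0.145)]:
    baseline = gain / pct  # GDP level the percentage boost is measured against
    print(f"{region}: +${gain}T is a {pct:.1%} boost on ~${baseline:.1f}T")
```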

Notably, there is no panic in the report about AI leading to excessive job losses; in previous reports, PwC has already tried to debunk some of the scarier forecasts on that front. Instead, the researchers recommend that businesses prepare for a hybrid workforce in which humans and AI work side by side. In harmony? TBD.

Read this next: An economist explains why education won't save us from being replaced by robots

Read more here:

AI will boost global GDP by nearly $16 trillion by 2030 - with much of the gains in China - Quartz

Nepal should gamble on AI – The Phnom Penh Post

A statue of Beethoven by German artist Ottmar Hoerl stands in front of a piano during a presentation of part of the completion of Beethoven's 10th symphony, made using artificial intelligence, at the Telekom headquarters in Bonn, western Germany. AFP

Artificial intelligence (AI) is an essential part of the fourth industrial revolution, and Nepal can still catch up to global developments.

The last decade has seen two significant events: Nepal promulgated a new constitution after decades of instability and is now aiming for prosperity, while AI saw a resurgence through deep learning, impacting a wide variety of fields. Though unrelated, one can help the other: AI can help Nepal in its quest for development and prosperity.

AI was conceptualised during the 1950s and has seen various phases since. The concept caught the public's imagination, and hundreds, if not thousands, of movies and novels were created around the idea of a machine's intelligence being on par with a human's.

But human intelligence is a very complex phenomenon, diverse in abilities ranging from reasoning to recognising a person's face. Even the seemingly simple task of recognising faces captured at different camera angles was considered a difficult challenge for AI as late as the first decade of this century.

However, thanks to better algorithms, greater computation capabilities, and loads of data from the internet and social media giants, current AI systems are capable of performing such facial recognition tasks better than humans. Other exciting landmarks include surpassing trained medical doctors in diagnosing diseases from X-ray images and a self-taught AI beating professionals at the strategy board game Go. Although AI may still be far from general human intelligence, these examples should be more than enough reason to start paying attention to the technology.

The current leap in AI is now considered an essential ingredient of the fourth industrial revolution. The first industrial revolution, driven by the steam engine, started in Great Britain during the late 1700s, quickly expanded to other European countries and America, and led to rapid economic prosperity. This opened the floodgates of innovation and wealth creation, leading to the second and third industrial revolutions. A case study of this dynamic is the relationship between Nokia and Finland.

Both were faring badly in economic terms in the late 1980s. But the company and the country both gambled on GSM technology, which went on to become the world's dominant network standard. In the single decade that followed, Finland achieved unprecedented economic growth, with Nokia accounting for more than 70 per cent of the Helsinki stock exchange's market capitalisation. That decade transformed Finland into one of the most specialised countries in information and communication technology, despite it having suffered its most severe economic crisis since World War II.

The gamble involved not just the motivation to support new technology, but a substantial investment through the Finnish Funding Agency for Technology and Innovation into Nokia's research and development projects. This funding was later returned in the form of colossal tax revenue, employment opportunities, and further demand for skilled human resources. All of this resulted in an ecosystem with a better educational system and more entrepreneurial opportunities.

Owing to the years of political turmoil and instability, Nepal missed out on these past industrial revolutions. But overlooking the current one might leave us far behind.

Global AI phenomenon

A recent study of the global AI phenomenon has shown that developed countries have invested heavily in talent and the associated market and have already started to see a direct contribution from AI to their economies. Some African countries are making sure that they are not left behind, with proper investment in AI research and development. AI growth in Africa has seen applications in agriculture and healthcare. Google, positioning itself as an AI-first company, has caught on to this trend and opened its first African AI lab in Accra, Ghana.

So is Nepal too late to this party? Perhaps. But it still has a chance of catching up. Instead of scattering its focus and available resources, the country needs to concentrate its investments on AI and technology, starting with the central government drawing up a concrete plan for AI development for the upcoming decade.

Similar policies have already been released by many other countries, including Nepal's neighbours India and China. It is unfortunate that the AI strategy presented at the 19th Party Congress by Chinese President Xi Jinping in 2017 received close to no attention in Nepal, in comparison to the Belt and Road Initiative (BRI) announced in 2013.

An essential component of such a strategic plan should be enhancing Nepal's academic institutions. Fortunately, any such programme from the government could be facilitated by recent initiatives like Naami Nepal, an AI research organisation, or NIC Nepal, an innovation centre started by Mahabir Pun.

Moreover, thanks to the private sector, Nepal has also begun to see AI-based companies like Fusemachines and Paaila Technologies that are attempting to close the gap. It has now become necessary to leverage AI for inclusive economic growth to fulfil our dreams of prosperity.

THE KATHMANDU POST/ASIA NEWS NETWORK

View post:

Nepal should gamble on AI - The Phnom Penh Post

All Organizations Developing AI Must Be Regulated, Warns Elon Musk – Analytics Insight

As artificial intelligence (AI) has developed over the past few years, Tesla's Elon Musk has repeatedly expressed serious concerns and warnings about its negative effects. The Tesla and SpaceX CEO is once again sounding a warning note regarding the development of AI. He recently tweeted that "all organizations developing advanced AI should be regulated, including Tesla."

Musk was responding to a new MIT Technology Review profile of OpenAI, an organization founded in 2015 by Musk along with Sam Altman, Ilya Sutskever, Greg Brockman, Wojciech Zaremba, and John Schulman. At first, OpenAI was formed as a non-profit backed by US$1 billion in funding from its initial investors, with the aim of pursuing open research into advanced AI and ensuring it was developed to benefit society, rather than leaving its development in the hands of a small and narrowly interested few (i.e., for-profit technology companies).

He also responded to a tweet posted back in July about how OpenAI originally billed itself as a nonprofit but is now seeking to license its closed technology. Musk, who was one of the company's founders but is no longer part of it, said there were reasonable concerns.

Musk exited the company, reportedly due to disagreements about its direction.

Back in April, Musk said during an interview at the World Artificial Intelligence Conference in Shanghai that computers would eventually surpass us in every single way.

"The first thing we should assume is we are very dumb," Musk said. "We can definitely make things smarter than ourselves."

Musk pointed to computer programs that allow computers to beat chess champions as well as technology from Neuralink, his own brain interface company that may eventually be able to help people boost their cognitive abilities in some spheres, as examples.

AI is being criticized by others besides Musk, however. Digital rights groups and the American Civil Liberties Union (ACLU) have called for either a complete ban on or more transparency in AI technology such as facial recognition software. Even Google's CEO, Sundar Pichai, has warned of the dangers of AI, calling for more regulation of the technology.

The Tesla and SpaceX CEO has been outspoken about the potential dangers of AI before. During a talk sponsored by South by Southwest in Austin in 2018, Musk talked about the dangers of artificial intelligence.

Moreover, he tweeted in 2014 that AI could be "more dangerous than nukes," and told an audience at an MIT Aeronautics and Astronautics symposium that year that AI was "our biggest existential threat" and that humanity needs to be extremely careful. "With artificial intelligence, we are summoning the demon," he said. "In all those stories where there's the guy with the pentagram and the holy water, it's like yeah, he's sure he can control the demon. Didn't work out."

However, not all his Big Tech contemporaries agree. Facebook's chief AI scientist Yann LeCun described his call for prompt AI regulation as "nuts," while Mark Zuckerberg said his comments on the risks of the tech were "pretty irresponsible." Musk responded by saying the Facebook founder's understanding of the subject is limited.

Smriti is a Content Analyst at Analytics Insight, where she writes tech and business articles. Her work can be found at analyticsinsight.net. She adores books, crafts, creative works and people, movies, and music.

Link:

All Organizations Developing AI Must Be Regulated, Warns Elon Musk - Analytics Insight

What To Know About Cryptocurrency and Scams | Consumer Advice

Confused about cryptocurrencies, like bitcoin or Ether (associated with Ethereum)? You're not alone. Before you use or invest in cryptocurrency, know what makes it different from cash and other payment methods, and how to spot cryptocurrency scams or detect cryptocurrency accounts that may be compromised.

Cryptocurrency is a type of digital currency that generally exists only electronically. You usually use your phone, computer, or a cryptocurrency ATM to buy cryptocurrency. Bitcoin and Ether are well-known cryptocurrencies, but there are many different cryptocurrencies, and new ones keep being created.

People use cryptocurrency for many reasons: quick payments, avoiding transaction fees that traditional banks charge, or a degree of anonymity. Others hold cryptocurrency as an investment, hoping the value goes up.

You can buy cryptocurrency through an exchange, an app, a website, or a cryptocurrency ATM. Some people earn cryptocurrency through a complex process called mining, which requires advanced computer equipment to solve highly complicated math puzzles.
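For the curious, the "highly complicated math puzzles" behind proof-of-work mining amount to a brute-force search for a number whose hash meets a target. The toy sketch below, with a tiny difficulty setting, only illustrates the shape of the problem; real mining networks use vastly harder targets and specialized hardware.

```python
# Toy illustration of a proof-of-work "math puzzle": find a nonce such that
# the hash of (block data + nonce) starts with a set number of zeros.
# This is only meant to show the shape of the problem, not real mining.
import hashlib

def mine(block_data: str, difficulty: int = 4):
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest
        nonce += 1

nonce, digest = mine("alice pays bob 0.1 BTC")
print(nonce, digest)  # anyone can re-hash once to verify the work was done
```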

Cryptocurrency is stored in a digital wallet, which can be online, on your computer, or on an external hard drive. A digital wallet has a wallet address, which is usually a long string of numbers and letters. If something happens to your wallet or your cryptocurrency funds (say your online exchange platform goes out of business, you send cryptocurrency to the wrong person, you lose the password to your digital wallet, or your digital wallet is stolen or compromised), you're likely to find that no one can step in to help you recover your funds.

Because cryptocurrency exists only online, there are important differences between cryptocurrency and traditional currency, like U.S. dollars.

There are many ways that paying with cryptocurrency is different from paying with a credit card or other traditional payment methods.

Scammers are always finding new ways to steal your money using cryptocurrency. To steer clear of a crypto con, here are some things to know.

Spot crypto-related scams

Scammers are using some tried and true scam tactics, only now they're demanding payment in cryptocurrency. Investment scams are one of the top ways scammers trick you into buying cryptocurrency and sending it on to scammers. But scammers are also impersonating businesses, government agencies, and love interests, among other tactics.

Investment scams

Investment scams often promise you can "make lots of money" with "zero risk," and often start on social media or online dating apps or sites. These scams can, of course, start with an unexpected text, email, or call, too. And, with investment scams, crypto is central in two ways: it can be both the investment and the payment.

Here are some common investment scams, and how to spot them.

Before you invest in crypto, search online for the name of the company or person and the cryptocurrency name, plus words like "review," "scam," or "complaint." See what others are saying. And read more about other common investment scams.

Business, government, and job impersonators

In a business, government, or job impersonator scam, the scammer pretends to be someone you trust to convince you to send them money by buying and sending cryptocurrency.

To avoid business, government, and job impersonators, know that

Blackmail scams

Scammers might send emails or U.S. mail to your home saying they have embarrassing or compromising photos, videos, or personal information about you. Then, they threaten to make it public unless you pay them in cryptocurrency. Don't do it. This is blackmail and a criminal extortion attempt. Report it to the FBI immediately.

Report fraud and other suspicious activity involving cryptocurrency to

Continue reading here:

What To Know About Cryptocurrency and Scams | Consumer Advice

Types of Cryptocurrency – Overview, Examples – Corporate Finance Institute

Presently, there are thousands of cryptocurrencies out there, with many more being started daily. While they all rely on the same premise of a consensus-based, decentralized, and immutable ledger in order to transfer value digitally between trustless parties, there are subtle and not-so-subtle differences between them.

This article will make sense of the landscape and look to help categorize cryptocurrencies into four broad types:

The first major type of cryptocurrency is the payment cryptocurrency. Bitcoin, perhaps the most famous cryptocurrency, was the first successful example of a digital payment cryptocurrency. The purpose of a payment cryptocurrency, as the name implies, is to serve not only as a medium of exchange but also as purely peer-to-peer electronic cash for facilitating transactions.

Broadly speaking, since this type of cryptocurrency is meant to be a general-purpose currency, it has a dedicated blockchain that only supports that purpose. It means that smart contracts and decentralized applications (Dapps) cannot be run on these blockchains.

These payment cryptocurrencies also tend to have a limited number of digital coins that can ever be created, which makes them naturally deflationary. As fewer and fewer new coins can be mined, the value of the digital currency is expected to rise.

Examples of payment cryptocurrencies include Bitcoin, Litecoin, Monero, Dogecoin, and Bitcoin Cash.
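Bitcoin is the clearest example of such a cap: its block reward started at 50 BTC and halves every 210,000 blocks, so total issuance converges just under 21 million coins. The short sketch below works through that arithmetic; it is specific to Bitcoin's published schedule, and other payment cryptocurrencies use different rules.

```python
# Why Bitcoin's supply is capped: the block reward started at 50 BTC and
# halves every 210,000 blocks, so total issuance is a (rounded) geometric
# series that converges just under 21 million BTC.
SATOSHI = 100_000_000          # 1 BTC = 100,000,000 satoshis
BLOCKS_PER_HALVING = 210_000

subsidy = 50 * SATOSHI
total = 0
while subsidy > 0:
    total += BLOCKS_PER_HALVING * subsidy
    subsidy //= 2              # integer division mirrors the protocol's rounding

print(total / SATOSHI)         # ~20,999,999.98 BTC
```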

The second major type of cryptocurrency is the Utility Token. Tokens are any cryptographic asset that runs on top of another blockchain. The Ethereum network was the first to incorporate the concept of allowing other crypto assets to piggyback on its blockchain.

As a matter of fact, Vitalik Buterin, the founder of Ethereum, envisioned his cryptocurrency as open-source programmable money that could allow smart contracts and decentralized apps to disintermediate legacy financial and legal entities.

Another key difference between tokens and payment cryptocurrencies is that tokens, like Ether on the Ethereum network, are not capped. These cryptocurrencies are, therefore, inflationary: as more and more of these tokens are created, the value of the digital asset should be expected to fall, like a fiat currency in a country that constantly runs its cash printing press.

A Utility Token serves a specific purpose or function on the blockchain, called a use case.

Ether's use case, as an example, is paying transaction fees to write something to the Ethereum blockchain or building and purchasing Dapps on the platform. In fact, the Ethereum network was changed in 2021 to expend, or burn off, some of the Ether used in each transaction, reinforcing this use case. You will hear these sorts of tokens referred to as Infrastructure Tokens.
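As a rough illustration of how that fee-and-burn split works for a simple transfer under the 2021 change, here is a back-of-the-envelope sketch; the base fee and tip values are hypothetical, and only the split between burned Ether and the portion paid to the block producer is the point.

```python
# Rough sketch of an Ethereum transaction fee after the 2021 change:
# the fee splits into a "base fee" that is burned and a tip that goes to
# the block producer. The fee levels below are hypothetical.
gas_used = 21_000                  # a simple ETH transfer
base_fee_gwei = 30                 # hypothetical base fee per unit of gas
tip_gwei = 2                       # hypothetical priority tip per unit of gas

GWEI = 1e-9                        # 1 gwei = 1e-9 ETH
burned = gas_used * base_fee_gwei * GWEI
to_producer = gas_used * tip_gwei * GWEI
print(f"burned: {burned:.6f} ETH, tip: {to_producer:.6f} ETH")
```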

Some cryptocurrency projects issue Service Tokens that grant the holder access to a network or allow them to perform something on it. One such service token belongs to Storj, an alternative to Google Drive, Dropbox, or Microsoft OneDrive. The platform rents unused hard drive space to those looking to store data in the cloud.

These users pay for the service in Storj's native utility token. To earn these tokens, those who store the data must pass a random cryptographic file verification every hour to prove that the data is still in their possession.
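The general idea behind such hourly checks is a challenge-response audit: the auditor sends a random value, and only a host that actually holds the data can produce the matching hash. The sketch below is a simplified illustration of that idea, not Storj's actual protocol, and the helper names are invented for the example.

```python
# Simplified sketch of challenge-response storage auditing (not Storj's
# actual protocol): the auditor sends a random nonce, the host hashes it
# together with the data it claims to store, and the auditor compares the
# answer with one computed from its own reference copy.
import hashlib
import os

def respond(nonce: bytes, stored_data: bytes) -> str:
    return hashlib.sha256(nonce + stored_data).hexdigest()

original = b"customer file contents ..."   # what the host agreed to store
nonce = os.urandom(16)                     # fresh, unpredictable challenge

host_answer = respond(nonce, original)     # a host without the data cannot compute this
expected = respond(nonce, original)        # auditor's check (here, from its own copy)
print("audit passed" if host_answer == expected else "audit failed")

lost = b"something else entirely"
print(respond(nonce, lost) == expected)    # False: a host that lost the data fails
```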

Another example of a token is Binance's Binance Coin (BNB), which was created to give the holder discounted trading fees. As this type of token grants access to a cryptocurrency exchange, you will sometimes hear it referred to as an Exchange Token.

Tokens are most commonly sold through Initial Coin Offerings (ICOs), which connect early-stage cryptocurrency projects to investors. Tokens that represent ownership or other rights to another security or asset are called Security Tokens, a type of fractional ownership. More broadly, exchange and security tokens belong to a larger class of Financial Tokens related to financial transactions such as borrowing, lending, trading, crowdfunding, and betting.

Another interesting use of tokens is for governance purposes. These tokens give their holders the right to vote on certain things within a cryptocurrency network. Generally, these votes concern bigger and more significant changes or decisions and are necessary to maintain the decentralized nature of the network. This allows the community, through its votes, to decide on proposals rather than concentrating decision-making power in a small group.

An example would be a DAO (Decentralized Autonomous Organization), a type of virtual cooperative. The most famous of these is the Genesis DAO. More recently, MakerDAO introduced a separate governance token, called MKR. Holders of MKR get to vote on decisions pertaining to MakerDAO's stablecoin, called Dai.

Lastly, there are also Media and Entertainment Tokens, which are used for content, games, and online gambling. An example is Basic Attention Token (BAT), which awards tokens to users who opt in to view advertisements; the tokens can then be used to tip content creators.

You might wonder why another commonly heard token hasn't been mentioned. Non-Fungible Tokens (NFTs) are certainly one of the hottest topics in the Decentralized Finance (DeFi) space. However, NFTs are not a cryptocurrency, because cryptocurrencies are fungible: one unit of a particular cryptocurrency is identical to the next.

A holder of one BTC should be completely indifferent if another person offers them a different unit of BTC, and the same goes for any cryptocurrency. Each NFT, however, is unique and non-fungible, so we don't count NFTs as cryptocurrencies.

Given the volatility experienced in many digital assets, stablecoins are designed to provide a store of value. They maintain their value because, while they are built on a blockchain, this type of cryptocurrency can be exchanged for one or more fiat currencies. Stablecoins are thus pegged to a physical currency, most commonly the U.S. dollar or the euro.

The company that manages the peg is expected to maintain reserves in order to guarantee the cryptocurrency's value. This stability, in turn, is attractive to investors who might use stablecoins as a savings vehicle or as a medium of exchange that allows for regular transfers of value free from price swings.

The highest-profile stablecoin is Tether's USDT, which is the third-largest cryptocurrency by market capitalization behind Bitcoin and Ether. USDT is pegged to the US dollar, meaning its value is supposed to remain stable at 1 USD each. It achieves this by backing every USDT with one US dollar worth of reserve assets in cash or cash equivalents.

Holders can deposit their fiat currency for USDT or redeem their USDT directly with Tether Limited at the redemption price of $1, less fees that Tether charges. Tether also lends out cash to companies to make money.
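That redemption mechanism is what is supposed to hold the peg: if the market price slips below $1 minus the fee, buying on the market and redeeming with the issuer is profitable, and traders chase that profit until the gap closes. A hypothetical sketch, with made-up fee and price figures:

```python
# Sketch of the arbitrage meant to hold a redeemable stablecoin at $1:
# if the market price drops below (1 - redemption fee), buying on the
# market and redeeming with the issuer is profitable, which pushes the
# price back up. The 0.1% fee and $0.985 market price are hypothetical.
redemption_price = 1.00
fee_rate = 0.001           # hypothetical redemption fee
market_price = 0.985       # hypothetical discounted market price

coins = 100_000
cost = coins * market_price
proceeds = coins * redemption_price * (1 - fee_rate)
profit = proceeds - cost
print(f"buy at ${market_price}, redeem at $1 less fees -> profit ${profit:,.0f}")
# Traders chase this profit until the market price climbs back toward
# $1 * (1 - fee), provided the issuer really can honor redemptions.
```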

However, stablecoins aren't subject to any government regulation or oversight. In May 2022, another high-profile stablecoin, TerraUSD, and its sibling coin, Luna, collapsed. TerraUSD went from $1 to just 11 cents.

The problem with TerraUSD was that instead of investing reserves into cash or other safe assets, it was backed by its own currency, Luna. During its crash in May, Luna went from over $80 to a fraction of a cent. As holders of TerraUSD clamored to redeem their stablecoins, TerraUSD lost its peg to the dollar.

The lesson here again is to do your due diligence before even buying stablecoins by looking at the whitepaper and understanding how the stablecoin maintains its reserves.

A Central Bank Digital Currency (CBDC) is a form of cryptocurrency issued by a country's central bank. CBDCs are issued in token form or as an electronic record associated with the currency and are pegged to the domestic currency of the issuing country or region.

Since this digital currency is issued by central banks, those banks maintain full authority over and regulation of the CBDC. The implementation of CBDCs into the financial system and monetary policy is still in the early stages for many countries; however, over time they may become more widely adopted.

Like cryptocurrencies, CBDCs are built upon blockchain technology that should increase payment efficiency and potentially lower transaction costs. While the use of CBDCs is still in the early stages of development for many central banks across the world, several CBDCs are based upon the same principles and technology as cryptocurrencies, such as Bitcoin.

The characteristic of being issued in token form, or with electronic records to prove ownership, makes a CBDC similar to other established cryptocurrencies. However, as CBDCs are effectively monitored and controlled by the issuing government, holders of this type of cryptocurrency give up the advantages of decentralization, pseudonymity, and censorship resistance.

CBDCs maintain a paper trail of transactions for the government, which can allow taxation and other economic rents to be levied by governments. On the plus side, in a stable political and inflationary environment, CBDCs can be reasonably expected to maintain their value over time, or at least track the pegged physical currency.

In addition to having the full faith and credit of the issuing country, buyers of CBDCs would also not have to worry about the fraud and abuse that have plagued many other cryptocurrencies.

Thank you for reading CFI's guide to the Different Types of Cryptocurrency. To keep learning and developing your knowledge base, please explore the additional relevant resources below:

See the rest here:

Types of Cryptocurrency - Overview, Examples - Corporate Finance Institute

Global Digital Asset and Cryptocurrency Association Appoints Maggie Sklar As Incoming Chairwoman of the Public Policy and Regulation Committee -…

Global Digital Asset and Cryptocurrency Association Appoints Maggie Sklar As Incoming Chairwoman of the Public Policy and Regulation Committee  Marketscreener.com

Continue reading here:

Global Digital Asset and Cryptocurrency Association Appoints Maggie Sklar As Incoming Chairwoman of the Public Policy and Regulation Committee -...