Google Brain's AI achieves state-of-the-art text summarization performance – VentureBeat

Summarizing text is a task at which machine learning algorithms are improving, as evidenced by a recent paper published by Microsoft. That's good news: automatic summarization systems promise to cut down on the amount of message-reading enterprise workers do, which one survey estimates amounts to 2.6 hours each day.

Not to be outdone, a Google Brain and Imperial College London team built a system (Pre-training with Extracted Gap-sentences for Abstractive Summarization Sequence-to-sequence, or Pegasus) that leverages Google's Transformer architecture combined with pretraining objectives tailored for abstractive text generation. They say it achieves state-of-the-art results on 12 summarization tasks spanning news, science, stories, instructions, emails, patents, and legislative bills, and that it shows surprising performance on low-resource summarization, surpassing previous top results on six data sets with only 1,000 examples.

As the researchers point out, abstractive text summarization aims to generate accurate and concise summaries from input documents, in contrast to extractive techniques. Rather than merely copying fragments from the input, abstractive summarization may produce novel words and cover the principal information such that the output remains linguistically fluent.

Transformers are a type of neural architecture introduced in a paper by researchers at Google Brain, Google's AI research division. Like all deep neural networks, they contain functions (neurons) arranged in interconnected layers that transmit signals from input data and slowly adjust the synaptic strength (weights) of each connection; that's how all AI models extract features and learn to make predictions. But Transformers uniquely have attention: every output element is connected to every input element, and the weightings between them are calculated dynamically.
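
To make the idea concrete, here is a minimal sketch of that attention computation in Python with NumPy. It illustrates the mechanism described above, not Google's production code; the toy dimensions and random inputs are invented for the example.

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: every output element is a mix of
    every input element, with the mixing weights computed dynamically."""
    scores = Q @ K.T / np.sqrt(Q.shape[-1])        # pairwise query-key similarity
    scores -= scores.max(axis=-1, keepdims=True)   # for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True) # softmax over the inputs
    return weights @ V                             # weighted sum of the values

# Toy self-attention over 4 tokens with 8-dimensional embeddings.
tokens = np.random.default_rng(0).normal(size=(4, 8))
output = attention(tokens, tokens, tokens)
print(output.shape)  # (4, 8): one contextualized vector per token
```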

The team devised a training task in which whole, and putatively important, sentences within documents were masked. The AI had to fill in the gaps by drawing on web and news articles, including those contained within a new corpus (HugeNews) the researchers compiled.
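
A rough sketch of how such a gap-sentence objective could be set up follows. The importance score here is a crude word-overlap proxy invented for illustration; the actual Pegasus selection strategy is more sophisticated (the paper scores sentences by ROUGE overlap with the rest of the document).

```python
MASK = "<mask>"

def gap_sentence_mask(document, ratio=0.3):
    """Mask the putatively most 'important' sentences; the model is then
    trained to regenerate them from the gapped document."""
    sentences = [s.strip() for s in document.split(".") if s.strip()]

    def importance(i):  # crude proxy: word overlap with the rest of the document
        words = set(sentences[i].lower().split())
        rest = " ".join(s for j, s in enumerate(sentences) if j != i)
        return len(words & set(rest.lower().split())) / max(len(words), 1)

    k = max(1, int(len(sentences) * ratio))
    masked = set(sorted(range(len(sentences)), key=importance, reverse=True)[:k])
    inputs = ". ".join(MASK if i in masked else s for i, s in enumerate(sentences))
    targets = ". ".join(sentences[i] for i in sorted(masked))
    return inputs, targets

doc = ("Pegasus masks whole sentences. The model must regenerate the masked sentences. "
       "This objective resembles summarization. It is learned from web and news text.")
inputs, targets = gap_sentence_mask(doc)
print(inputs)   # gapped document fed to the encoder
print(targets)  # masked sentences the decoder must produce
```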

In experiments, the team selected their best-performing Pegasus model (one with 568 million parameters, or variables learned from historical data) trained on either 750GB of text extracted from 350 million web pages (Common Crawl) or on HugeNews, which spans 1.5 billion articles totaling 3.8TB collected from news and news-like websites. (The researchers say that in the case of HugeNews, a whitelist of domains ranging from high-quality news publishers to lower-quality sites was used to seed a web-crawling tool.)

Pegasus achieved high linguistic quality in terms of fluency and coherence, according to the researchers, and it didn't require countermeasures to mitigate disfluencies. Moreover, in a low-resource setting with just 100 example articles, it generated summaries of a quality comparable to a model trained on a full data set of 20,000 to 200,000 articles.

Read more here:

Google Brain's AI achieves state-of-the-art text summarization performance - VentureBeat

The Impact Of AI On Call Centres – Forbes

Friendly robot in call centre

The pandemic is a severe stress test for the business continuity plans of global corporations. The operators of call centres are playing an important role in meeting that challenge, and it has not been easy. In normal times, if an earthquake hits Bangalore, you can switch capacity to your call centre in Manila. But what do you do when all the call centres around the world that serve your customers are hit all at the same time?

The big outsourcing call centre companies which serve corporate giants have hundreds of thousands of employees, and many of these people are working from home now. Their employers can make sure they have adequate computer equipment, but staff in developing countries are often handicapped by a lack of good internet access and of a calm environment free from interruptions.

The pandemic will prompt another round of discussions about re-shoring call centre jobs to places which are less vulnerable in that way, but cost will remain a huge barrier. The salary of one person in a corporations home country will often pay for three people in an offshore location. Or you could employ a graduate in India, China, or the Philippines for the cost of a school leaver in the US or the UK, and keep some change.

The pandemic is also reviving talk of automation slashing the number of humans working in call centres. In 2014, the CEO of Telstra, Australia's largest mobile phone company, made headlines with a forecast that within five years there would be no people in its call centres. It didn't happen, of course. Peter Monk, GM for Australia of Concentrix, one of the two big global customer engagement companies offering contact centre services, says that employment in call centres has grown modestly in recent years, but that the really significant change has been the shift from voice to digital.

When customer interactions are simple, they can often be automated. A chatbot is perfectly adequate to handle a password change, or the provision of some basic information. And the digitally native younger generations prefer to interact digitally, ideally with short videos. But when there is significant value to be generated and exchanged, a call will often still be better.

Concentrix's rival for the top spot in contact centre management is the French multinational Teleperformance, whose CEO said in a recent interview: "When chatbots started to arrive about five years ago, I was depressed. But since then our company has never grown so fast. The chatbot does the rational part of the job and the customer expert manages the emotional part."

Peter Monk is sceptical about vendors' claims to offer services with artificial intelligence. Most vendors offer clever natural language processing technology, but it is not yet real AI. Some exceptions are coming through, but the software is still mostly pre-programmed, using lookup tables and knowledge banks.

One of the more interesting early applications of NLP is systems which can detect the emotional state of a customer on the other end of a phone line, or tapping away on their keyboard. These systems are deployed alongside call centre staff, alerting them if a customer seems to be running out of patience, and suggesting variations on the script. The more sophisticated ones can discern the context of a word or sentence, referring to words and phrases from earlier in an exchange, or even from a previous conversation.
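
As a toy illustration of the idea, and emphatically not any vendor's actual product, the sketch below scores a customer's recent messages for signs of impatience and raises an alert past a threshold. The cue words and weights are invented; real systems use trained models over audio as well as text.

```python
# Hypothetical lexicon: the cue words and weights are invented for this example.
IMPATIENCE_CUES = {"again": 1, "still": 1, "waiting": 2, "ridiculous": 2, "cancel": 3}

def impatience_score(messages):
    """Sum cue-word weights across a customer's recent messages."""
    score = 0
    for msg in messages:
        for word in msg.lower().split():
            score += IMPATIENCE_CUES.get(word.strip(".,!?"), 0)
    return score

history = ["I am still waiting for a refund.", "This is ridiculous, I will cancel!"]
if impatience_score(history) >= 5:
    print("Alert agent: customer may be losing patience; suggest a variation on the script")
```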

Josh Feast is co-founder and CEO of Cogito, whose AI coach helps customer service representatives be more emotionally intelligent on the phone. He thinks that most of us under-estimate the challenging job of handling numerous calls each day, working with customers' varied circumstances and communication styles, and dealing with countless policies and procedures. This can cause cognitive overload. AI can help them recognize behavioural signals by providing contextual guidance. "The focus has been on automation, so we have yet to realize AI's power to coach people, helping them reach their full potential."

Thanks to the increasing sophistication of NLP systems, IVR, or Interactive Voice Response, is making a comeback. When these systems were first introduced a few years ago, they were clunky and awkward, and the voice recognition software was not quite good enough, so they were quickly abandoned.

As with most industry sectors, the other big application of early AI systems is analytics. Phone conversations can be converted to text and analysed, so that companies can track how often each customer has been interacting with them, and what they are saying, with greater and greater richness and depth of understanding. This is a big area of investment for the large call centre operators.

The call centre industry is a big one, and what happens to jobs in it will be important. It employs many millions of people around the world, in countries both rich and poor. It began in the West when large telephone systems were developed, and gradually became a major global employer. In the UK, the Birmingham Press and Mail claims to have opened the first centre, but the call centre boom really got going in 1985, when Direct Line became the first company to sell insurance entirely by phone. Today the industry employs around 1.3 million people in the UK, and more than 6 million in the USA.

Entrepreneurs in the developing world soon realised that they could bring a massive cost advantage to the industry. India was the biggest player in this market for many years, but in 2011 the Philippines stole the crown. With no connection to the south-east Asian mainland, the Philippines had failed to attract the foreign investment in manufacturing that was improving living standards in Thailand and Vietnam, but its people speak excellent English, and 1.2 million of them now work in call centres.

Artificial intelligence and related technologies are driving two other significant trends within the call centre industry. One is real-time translation, which should accelerate global trade so long as it isn't derailed by the pandemic and populist nationalism. Google's focus on B2C (business-to-consumer) applications leaves some space for other companies to play in the B2B space, and one of the leaders here is Unbabel, a Portuguese company.

The other is the application of gig economy business models to the call centre industry. Companies like Concentrix, through their Solv solution, enable individuals anywhere in the world to get themselves accredited to work on particular types of business, and then log on and log off to work whenever they like. As the support tools improve, there is less and less need for contact centre workers to know much about the products and services of the companies they are representing. This information can be accessed instantaneously from databases in the cloud. They are evaluated more on their client handling skills, their empathy, and their ability to work with continuously evolving technologies.

More fundamentally, younger companies are designing their business processes so that customers never or almost never need to contact a human to obtain their goods and services. The websites and logistics operations of digital disruptors aim to be so intuitive and user-friendly that customers never need to search for the Contact link. When this works it generates a tremendous cost advantage. When it fails, it generates huge frustration. The worst problem is when legacy companies, which lack the slick ergonomics of the disruptors websites, try to pull off the same trick, and hide their contact links. We consumers are not so easily fooled, and this behaviour will be the downfall of many once-great companies.

The Telstra CEO's remark about call centres going dark within five years was a classic case of Amara's Law, which observes that we over-estimate the impact of any given technology in the short term, and under-estimate it in the long term. Pre-virus, employment in call centres was growing in single-digit percentages a year. Post-pandemic, assuming the economy recovers, call volumes will probably remain stable, but their share of customer contacts will decline, and the call centre will become more and more a contact centre, handling many more exchanges digitally than by voice.

In the long run, it is a fairly good bet that humans will become as scarce in contact centres as they are becoming in warehouses. The question is: how long will this take? As Peter Monk, the GM of Concentrix Australia, says: "Of course, the endgame - in the not too distant future - is that many aspects of even my job can be done pretty much by a machine."

More:

The Impact Of AI On Call Centres - Forbes

‘Smarter AI can help fight bias in healthcare’ – Healthcare IT News

Leading researchers discussed which requirements AI algorithms must meet to fight bias in healthcare during the 'Artificial Intelligence and Implications for Health Equity: Will AI Improve Equity or Increase Disparities?' session, which was held on 1 December.

The speakers were: Ziad Obermeyer, associate professor of health policy and management at the Berkeley School of Public Health, CA; Luke Oakden-Rayner, director of medical imaging research at the Royal Adelaide Hospital, Australia; Constance Lehman, professor of radiology at Harvard Medical School, director of breast imaging, and co-director of the Avon Comprehensive Breast Evaluation Center at Massachusetts General Hospital; and Regina Barzilay, professor in the department of electrical engineering and computer science and member of the Computer Science and AI Lab at the Massachusetts Institute of Technology.

The discussion was moderated by Judy Wawira Gichoya, assistant professor in the Department of Radiology at Emory University School of Medicine, Atlanta.

WHY IT MATTERS

Artificial intelligence (AI) may unintentionally intensify inequities that already exist in modern healthcare and understanding those biases may help defeat them.

Social determinants partly cause poor healthcare outcomes and it is crucial to raise awareness about inequity in access to healthcare, as Prof Sam Shah, founder and director of the Faculty of Future Health in London, explained in a keynote during the HIMSS & Health 2.0 European Digital event.

Taking in the patient experience, conducting exploratory error analysis and building smarter, more robust algorithms could help reduce bias in many clinical settings, such as pain management and access to screening mammography.

ON THE RECORD

Judy Wawira Gichoya, Emory University School of Medicine, said: "The data we use is collected in a social system that already has cultural and institutional biases. (...) If we just use this data without understanding the inequities, then algorithms will end up habituating, if not magnifying, our existing disparities."

Ziad Obermeyer, Berkeley School of Public Health, talked about the pain gap phenomenon, where the pain of white patients is treated or investigated until a cause is found, while in patients of other races it may be ignored or overlooked.

"Society's most disadvantaged, non-white, low income, lower educated patients () are reporting severe pain much more often. An obvious explanation is that maybe they have a higher prevalence of painful conditions, but that doesn't seem to be the whole story," he said.

Obermeyer explained that listening to the patient, not just the radiologist, could help develop solutions to predict the experience of pain. He referenced an NIH-sponsored dataset that helped him experiment with a new type of algorithm, with which he found more than double the number of black patients with severe pain in their knees who would be eligible for surgery.

Luke Oakden-Rayner, Royal Adelaide Hospital, suggested conducting exploratory error analysis to look at every error case and find common threads, instead of just looking at the AI model and seeing that it is biased.

"Look at the cases it got right and those it got wrong. All the cases AI got right will have something in common and so will the ones it got wrong, then you can find out what the system is biased toward," he said.

Constance Lehman, Harvard Medical School, said: "About two million women will be diagnosed with breast cancer and over 600,000 will die worldwide this year. But there's a marked discrepancy in the impact of breast cancer on women of colour vs. Caucasian women."

In the EU, one in eight women will develop breast cancer before the age of 85, and an average of 20% of breast cancer cases occur in women younger than 50 years old, according to Europa Donna, a Europe-wide coalition of affiliated groups of women that facilitates the exchange of information concerning breast cancer.

Lehman presented an algorithm, which she developed with Regina Barzilay, to help identify women's risk for breast cancer based on their mammogram alone. The solution uses deep learning and an imaging coder that takes the four views of a standard digital mammogram, without requiring access to family history, prior biopsies or reproductive history.

"This imaging-only model performs better than other models and supports equity across the races," she said.

Regina Barzilay, of the MIT Institute of Medical Engineering & Science, explained how to build robust AI to support equity in health. An image-based model that is trained on a diverse population can very accurately predict risk across different populations in a very consistent way, she said.

The AI community is working hard on tools that can work robustly against bias, making sure that models are trained to be robust in the presence of bias, which may come from nuisance variation between the devices used to capture the imaging.

Humans, who are ultimately responsible for making a clinical decision, should understand what the machine is doing and think through all the possible biases the machine can introduce. Models that can make their reasoning understandable to humans could help, she concluded.

Read more:

'Smarter AI can help fight bias in healthcare' - Healthcare IT News

UK govt’s £17.3m AI-boffinry cash injection is just ‘a token amount’ – The Register

AI is at the forefront of the UK government's digital strategy, and is believed to be crucial to the nation's future post-Brexit.

A recent study by Accenture estimated artificial intelligence systems could add up to a whopping, and borderline unbelievable, £654bn (US$802bn) to the British economy by 2035.

Well, you've gotta spend money to make money. So, Blighty's government has announced it is looking into fostering a thriving AI industry in the UK and pledged £17.3m ($21.2m) to bankroll machine learning and robotics research at universities. But is that enough?

"The funding is better than nothing and shows the government is at least thinking about how important these technologies are," said Nick Taylor, professor of computer science and deputy director of the Edinburgh Centre for Robotics at Heriot-Watt University in Scotland. "However, it's not a great amount," he added.

"This funding indicates that the government recognises how important AI and robotics are to our future," he said. "AI and robotics are advancing so rapidly at the current time that we could easily exhaust any amount of research funding that was directed towards them."

That 17 million quid will go to the Engineering and Physical Sciences Research Council (EPSRC) and filter down to several UK universities for a range of projects, including developing robots for surgery and nuclear environments.

Of those funds, £6.5m ($8.0m) will be pumped into the UK Robotics and Autonomous Systems (UK-RAS) network. It's a small amount, considering robots are particularly expensive: not only is expertise required, but the costs of hardware need to be factored in as well.

Zoubin Ghahramani, professor of information engineering at the University of Cambridge, told The Register that AI is thriving in the UK, with academic institutions, startups and big companies from DeepMind and Amazon to Apple and Microsoft being major investors. Ghahramani's own upstart, Geometric Intelligence, which he cofounded along with Gary Marcus, Doug Bemis and Ken Stanley, was acquired by Uber for an undisclosed amount.

While he applauds the UK government's investment, he told us: "It's a relatively small step in the right direction, compared to the hundreds of millions invested by the Canadian and US governments."

DARPA, the US defense research arm, often readily dishes out tens of millions of dollars for individual robot projects. In 2009, it awarded $32m (£26.1m) to develop the LS3 robot, and a further $10m (£8.1m) to test it.

The LS3 robot looks like a giant robo bull, complete with four sturdy legs and a barrel-like body. It's designed to help US soldiers carry 400 lbs (181.4 kg) of gear on their missions, but was shelved in 2015 for being too noisy and not stealthy enough to use in reality.

A cash injection of just under £20m into the UK is dwarfed by the $1.1bn (£900m) spent by the US government on AI in 2015.

"The UK funding is a token amount that couldn't hope to put us on the same level as Google or Microsoft," Kate Devlin, senior lecturer in the department of computing and sex robots expert at Goldsmiths, University of London, told The Register.

"The only bright side I can see is that the government is recognising the importance of AI research in academia, rightly so, as that's often how big corporations acquire their technology and their expertise," she added.

Leslie Smith, professor of computer science at the University of Stirling, agrees. "It's not much in comparison with the spend of large companies and the US [Defense department] on these areas," he said.

To me the question is more like: how can we get the best leverage for this sort of sum of money? How can we get companies to work with UK academics to make the most of this investment? Not unrelated to this is the issue of ensuring that the money generated from the application of these technologies sticks partly to the Universities, and partly to the UK itself rather than being exploited [elsewhere].

View post:

UK govt's £17.3m AI-boffinry cash injection is just 'a token amount' - The Register

Is FaceApp Just a Gender Swapping Gimmick or an AI and Privacy Nightmare in the Making? – Wccftech

When FaceApp initially launched back in 2017, it took the world by storm because of its capabilities. Granted, we have seen several apps in the past that could make you look old or young, but the accuracy and precision were not there. FaceApp, however, used artificial intelligence to do that, and the results were mind-boggling. Even when it launched, a lot of security researchers raised concerns over the consequences of using the app; after all, you are uploading your pictures onto an app for it to use and run through its AI algorithm. But people still continued using it.

After almost 3 years, the app has exploded all over Twitter, Instagram, and Facebook again. However, this time, people are using the gender swap feature that uses AI to change a person's gender and present them with a picture that is very convincing, and at the same time, quite scary.


Now, there have been several apps like that in the past. Even Snapchat introduced a feature that would swap your gender. But the implementation here is different, since it uses artificial intelligence, the very thing many people fear in the first place. But it is not just the artificial intelligence we should be afraid of; it is the privacy policy of the app. If you head over to the recently updated privacy policy of the app, this is the highlighted text.

Now, the clever thing here is that when you do visit the page, only the aforementioned lines are highlighted, which is more than enough to convince any user that this app is indeed safe. However, if you take a minute and read beyond those two lines, you start becoming wary of just what is being stored and used. True, some would say that they are not worried about the app or what it does with the photos, but keep in mind that this app has over 100 million downloads on the Google Play Store alone. One can only imagine how many underage individuals are using this app to swap their genders, potentially putting their pictures at risk.

Now, if you are one of those people who believe that just deleting the app will get rid of the photos FaceApp has taken or used, that is not the case, and getting them removed is not as easy as it may sound. Those who want their pictures removed will actually have to put in a request for it to happen. In order to do that, you have to go to Settings > Support > Report a bug with the word "privacy" in the subject line, and then write a formal request, a process convoluted enough that most people will not go through with it.

To confirm just how convincing or borderline creepy this app can become, I asked a few of my friends to provide their pictures. Now, it was easy for me to tell the difference because I know them, but to an unsuspecting eye, it might not be the same case.

And privacy is just one concern that people have raised. On the other hand, we have a shockingly powerful AI in place which could very well be learning patterns for much stronger facial recognition.


In all honesty, the results are shocking. What is even more shocking is the amount of information we are knowingly handing away to an app just for the sake of shock value. Again, whether or not this app is going to have severe consequences is still yet to be seen. But as a word of warning, keep in mind that the FBI did issue a warning pertaining to the safety of the app back in December 2019.

Calling FaceApp an imminent threat to privacy or an AI nightmare would be stretching it a bit too far. However, at the same time, we have to keep in mind that in a world where our privacy is among our most important assets, some questionable practices and activities can easily take place if things go rogue. We can only say that the more we protect our privacy, the better off we are in the long run. Currently, you can download FaceApp on both iOS and Android for your amusement.

Continued here:

Is FaceApp Just a Gender Swapping Gimmick or an AI and Privacy Nightmare in the Making? - Wccftech

Could a new academy solve the AI talent problem? – FCW.com

Defense

Eric Schmidt speaks at a March 2020 meeting of the Defense Innovation Board in Austin, Texas. (DOD photo by EJ Hersom).

Defense technology experts think adding a military academy could be the solution to the U.S. government's tech talent gap.

"The canonical view is that the government cannot hire these people because they will get paid more in private industry," said Eric Schmidt, former Google chief and current chair of the Defense Department's Innovation Advisory Board, during a July 29 Brookings Institution virtual event.

"My experience is that people are patriotic and that you have a large number of people -- and this I think is missed in the dialogue -- a very large number of people who want to serve the country that they love. And the reason that they're not doing it is there's no program that makes sense to them."

Schmidt's comments come as the National Security Commission on Artificial Intelligence, which he chairs, issued its second quarterly report with recommendations to Congress on how the U.S. government can invest in and implement AI technology.

One key recommendation: A national digital service academy, to act like the civilian equivalent of a military service academy to train technical talent. That institution would be paired with an effort to establish a national reserve digital corps to serve on a rotational basis.

Robert Work, former deputy secretary of defense who is now NSCAI's vice chair, said the academy would bring in people who want to serve in government and would graduate students to serve as full-time federal employees at the GS-7 to GS-11 pay grades. Members of the digital corps would serve five years at 38 days a year, helping government agencies figure out how best to implement AI.

For the military, the commission wants to focus on creating a clear way to test existing service members' skills and better gauge the abilities of incoming recruits and personnel.

"We think we have a lot of talent inside the military that we just aren't aware of," Work said.

To remedy that, Work said the commission recommends grading, via a programming proficiency test, to identify government and military workers who have software development experience. The recommendations also include adding a computational thinking component to the armed services' vocational aptitude battery to better identify incoming talent.

"I suspect that if we can convince the Congress to make this real and the president signs off hopefully then not only will we be successful but we'll discover that we need 10 times more. The people are there and the talent is available," Schmidt said.

About the Author

Lauren C. Williams is a staff writer at FCW covering defense and cybersecurity.

Prior to joining FCW, Williams was the tech reporter for ThinkProgress, where she covered everything from internet culture to national security issues. In past positions, Williams covered health care, politics and crime for various publications, including The Seattle Times.

Williams graduated with a master's in journalism from the University of Maryland, College Park and a bachelor's in dietetics from the University of Delaware. She can be contacted at [emailprotected], or follow her on Twitter @lalaurenista.

Click here for previous articles by Williams.

View original post here:

Could a new academy solve the AI talent problem? - FCW.com

AI Is Already One of the Largest Industries on Earth and It’s Going to End Us All – Geek

Major tech companies are investing in AI and machine learning at an alarming rate. According to a new report, companies spent between $26 billion and $39 billion on AI research (with giants like Google, Facebook, and Baidu contributing more than two-thirds of that) in 2016 alone. While that's not nearly on the scale of, say, the global oil trade (which cracks a trillion most years), it's still enough to make it one of the largest sub-industries on Earth. That puts it well above many very large numbers, like Hollywood's box office takings from last year ($11 billion) or the GDP of Iceland ($16 billion).

On its face, this isn't too surprising. Silicon Valley's wealth is among the greatest in the world, easily dwarfing whole regions (and it's kinda messed up how that isn't even an exaggeration), so of course they'd invest a sliver of that money into hyper-advanced autonomous software. What this shows, though, is that the race for AI has officially kicked off. Contrast these figures, for example, with those from 2013, and we find that investment in AI has more than tripled. It's also shifted almost entirely to research, development, and, most importantly, deployment.

Those sectors adopting AI fastest are, of course, the automobile, tech, and telecom industries. The McKinsey Global Institute concludes that these are the industries with the most to gain. Each of these areas (as well as finance and health care) benefits tremendously from AI adoption, with the earliest adopters of machine learning tech in those fields yielding profits that can be 10% or more higher than the industry average year-on-year.

Now, I know all that sounds super-boring, but, in short: smart businesses are making billions thanks to AI. And that's going to get faster and faster as things go on. As those businesses employing AI out-compete their rivals over the next few years, we're going to start seeing the beginning of the end of us. Higher profits are awesome and all, but a good chunk of that is coming from human workers who are becoming obsolete. Obviously, we should keep using AI to make our lives awesome, but even the Patron Saint of wacky Silicon Valley entrepreneurs, Elon Musk, is certain that the day will soon come when we need to change up how we conceive of our entire economy if we want to keep the world intact.

Amazon might be the best example of a modern business profiting by cutting out humans. The tech company bought Kiva, a robotics company that specializes in automated packaging. The investment in and subsequent deployment of Kiva's tech has decreased the time from the moment the customer clicks to the moment the package ships from about an hour to just 15 minutes. And that's with a boost to inventory capacity and a massive drop in operating costs. With that kind of advantage, it's not hard to see why rumblings of an Amazon-dominated retail future are so prevalent.

It's not quite all doom and gloom, though. Netflix has dramatically improved its algorithms for helping users find movies they might like. The algorithm was already good enough that I recall wanting to punch the sun any time my roommates would "like" random shows or movies on my account. But that and many other minor issues have been smoothed out, and Netflix projects that this has helped it save $1 billion in subscription cancellations annually.

I guess, though, if you think about it, Netflix is really just the replacement for brick-and-mortar movie rental shops and the clerks who would give you recommendations there. So I guess there is no silver lining. We're all going to be funemployed in a decade or two. Better hope we figure out our collective shit.

Visit link:

AI Is Already One of the Largest Industries on Earth and It's Going to End Us All - Geek

Researchers open-source state-of-the-art object tracking AI – VentureBeat

A team of Microsoft and Huazhong University researchers this week open-sourced an AI object detector, Fair Multi-Object Tracking (FairMOT), that they claim outperforms state-of-the-art models on public data sets at 30 frames per second. If productized, it could benefit industries ranging from elder care to security, and perhaps be used to track the spread of illnesses like COVID-19.

As the team explains, most existing methods employ multiple models to track objects: (1) a detection model that localizes objects of interest and (2) an association model that extracts features used to reidentify briefly obscured objects. By contrast, FairMOT adopts an anchor-free approach to estimate object centers on a high-resolution feature map, which allows the reidentification features to better align with the centers. A parallel branch estimates the features used to predict the objects' identities, while a backbone module fuses together the features to deal with objects of different scales.
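
The anchor-free idea can be sketched briefly: instead of scoring predefined anchor boxes, the network outputs a per-pixel heatmap of object-center confidences, and detections are read off its local maxima. The snippet below illustrates only that decoding step, with invented sizes and thresholds; it is not FairMOT's actual code, which adds box-size, offset and re-identification branches.

```python
import numpy as np

def extract_centers(heatmap, threshold=0.5):
    """Treat each local maximum of an object-center heatmap as one detection."""
    H, W = heatmap.shape
    centers = []
    for y in range(1, H - 1):
        for x in range(1, W - 1):
            v = heatmap[y, x]
            # keep a pixel if it is confident and the peak of its 3x3 neighborhood
            if v >= threshold and v == heatmap[y-1:y+2, x-1:x+2].max():
                centers.append((y, x, float(v)))
    return centers

hm = np.zeros((8, 8))
hm[2, 3], hm[5, 6] = 0.9, 0.7   # two synthetic "objects"
print(extract_centers(hm))      # [(2, 3, 0.9), (5, 6, 0.7)]
```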

The researchers tested FairMOT on a training data set compiled from six public corpora for human detection and search: ETH, CityPerson, CalTech, MOT17, CUHK-SYSU, and PRW. (Training took 30 hours on two Nvidia RTX 2080 graphics cards.) After removing duplicate clips, they tested the trained model against benchmarks that included 2DMOT15, MOT16, and MOT17. All came from the MOT Challenge, a framework for validating people-tracking algorithms that ships with data sets, an evaluation tool providing several metrics, and tests for tasks like surveillance and sports analysis.

Compared with the only two published works that jointly perform object detection and identity feature embedding (TrackRCNN and JDE), the team reports that FairMOT outperformed both on the MOT16 data set with an inference speed near video rate.

"There has been remarkable progress on object detection and re-identification in recent years, which are the core components for multi-object tracking. However, little attention has been focused on accomplishing the two tasks in a single network to improve the inference speed. The initial attempts along this path ended up with degraded results mainly because the re-identification branch is not appropriately learned," concluded the researchers in a paper describing FairMOT. "We find that the use of anchors in object detection and identity embedding is the main reason for the degraded results. In particular, multiple nearby anchors, which correspond to different parts of an object, may be responsible for estimating the same identity, which causes ambiguities for network training."

In addition to FairMOT's source code, the research team made available several pretrained models that can be run on live or recorded video.

See more here:

Researchers open-source state-of-the-art object tracking AI - VentureBeat

Facebook Initiative Aims To Demystify AI By Crowdsourcing Ideas – Women Love Tech

Facebook recently announced the award recipients of its Ethics in AI Research Initiative for the Asia Pacific region. Among them are proposals from two Australian universities, which will each receive funds to further their research in AI.

Their success follows a request for proposals issued by Facebook's research division last year, which was made open to academic institutions, think tanks and research groups across the Asia Pacific region.

This is part of a wider initiative by Facebook in partnership with the Centre for Civil Society and Governance of The University of Hong Kong and the Privacy Commissioner for Personal Data, Hong Kong.

Through this regional outreach, Facebook aims to simultaneously crowdsource the best local ideas and accountable practices.

As Raina Yeung, Facebook's Head of Privacy and Data Policy, Engagement, in the Asia Pacific region, said: "The latest advancements in AI bring transformational changes to society, and at the same time bring an array of complex ethical questions that must be closely examined."

Monash academic Professor Robert Sparrow's approved proposal, 'The uses and abuses of black box AI in emergency medicine', highlights issues of concern surrounding AI. The issue with black box AI, for instance, is that it has internal rules and parameters which are opaque to its users. In the field of medicine, particularly emergency medicine, this lack of clarity is dangerous and must be properly addressed. When decisions are made concerning human lives, it is paramount for all involved that transparency exists as to how those choices are being made. For those in intensive care, the prospect of receiving lesser attention due to the economic or genetic determinations made by a circuit board is understandably concerning, as is the risk of technical malfunctions affecting one's diagnosis.

However one perceives the intrusion of AI into intellectual disciplines requiring tact and discretion, such as law or medicine, the process is ongoing and exponential. While such technologies may not currently match human performance, the constant rate of advancement in AI makes it essentially inevitable that they eventually will. With this in mind, the process of automation can be seen as something of a passing of the torch from humans to our AI counterparts, in both physical and intellectual fields.

The approved proposal of Dr Sarah Bankins, of Macquarie University, 'AI decisions with dignity: promoting interactional justice perceptions', further highlights this shift. In this transitional stage, particular care is necessary to ensure AI tools are applied in ways that are equitable and socially conscientious, as the knock-on effects of poor implementation will compound over time.

AI systems that can think and act for themselves, often referred to as General Intelligences and regarded as the holy grail for AI developers, are still a distant prospect. In the meantime, AI researchers have vaulted smaller hurdles. Advances in machine learning, the ability of computer programs to improve autonomously without human input, have paved the way for bleeding-edge technologies such as natural language processing and driverless vehicles. These new tools boast impressive gains in productivity and, as they improve, have the potential to save human lives.

However, despite these advancing capacities, such tools cannot yet think or act independently, and it remains the role of conscientious human participants to dictate how and where they're applied. By acting as custodians of our future selves and taking early steps to safeguard the infrastructure of AI against systematic inequity, we can work to ensure a brighter future for all, as is Facebook's stated aim in foregrounding diverse, regional voices in the conversations of ethical practice around AI.

AI decisions with dignity: Promoting interactional justice perceptions – Dr. Sarah Bankins, Prof. Deborah Richards, A/Prof. Paul Formosa (Macquarie University), Dr. Yannick Griep (Radboud University)

The challenges of implementing AI ethics frameworks in the Asia Pacific – Manju Lasantha Fernando, Ramathi Bandaranayake, Viren Dias, Helani Galpaya, Rohan Samarajiva (LIRNEasia)

Culturally informed pro-social AI regulation and persuasion framework – Dr. Junaid Qadir (Information Technology University of Lahore, Punjab, Pakistan), Dr. Amana Raquib (Institute of Business Administration Karachi, Pakistan)

Ethical challenges on application of AI for the aged care – Dr. Bo Yan, Dr. Priscilla Song, Dr. Chia-Chin Lin (University of Hong Kong)

Ethical technology assessment on AI and internet of things – Dr. Melvin Jabar, Dr. Ma. Elena Chiong Javier (De La Salle University), Mr. Jun Motomura (Meio University), Dr. Penchan Sherer (Mahidol University)

Operationalizing information fiduciaries for AI governance – Yap Jia Qing, Ong Yuan Zheng Lenon, Elizaveta Shesterneva, Riyanka Roy Choudhury, Rocco Hu (eTPL.Asia)

Respect for rights in the era of automation, using AI and robotics – Emilie Pradichit, Ananya Ramani, Evie van Uden (Manushya Foundation), Henning Glasser, Dr. Duc Quang Ly, Venus Phuangkom (German-Southeast Asian Center of Excellence for Public Policy and Good Governance)

The uses and abuses of black box AI in emergency medicine – Prof. Robert Sparrow, Joshua Hatherley, Mark Howard (Monash University)

Women Love Tech would like to thank Nick Ouzas for his story.

Read the original post:

Facebook Initiative Aims To Demystify AI By Crowdsourcing Ideas - Women Love Tech

Has There Been A Second AI Big Bang? – Forbes

Aleksa Gordic, an AI researcher with DeepMind

The Big Bang in artificial intelligence (AI) refers to the breakthrough in 2012, when a team of researchers led by Geoff Hinton managed to train an artificial neural network (known as a deep learning system) to win an image classification competition by a surprising margin. Prior to that, AI had performed some remarkable feats, but it had never made much money. Since 2012, AI has helped the big technology companies to generate enormous wealth, not least from advertising.

Has there been a new Big Bang in AI since the arrival of Transformers in 2017? In episodes 5 and 6 of the London Futurist podcast, Aleksa Gordic explored this question and explained how today's cutting-edge AI systems work. Aleksa is an AI researcher at DeepMind, and previously worked in Microsoft's HoloLens team. Remarkably, his AI expertise is self-taught, so there is hope for all of us yet!

Transformers are deep learning models which process inputs expressed in natural language and produce outputs like translations or summaries of texts. Their arrival was announced in 2017 with the publication by Google researchers of a paper titled "Attention is All You Need". The title referred to the fact that Transformers can pay attention simultaneously to a large corpus of text, whereas their predecessors, Recurrent Neural Networks, could only pay attention to the symbols either side of the segment of text being processed.

Transformers work by splitting text into small units, called tokens, and mapping them onto high-dimensional spaces - often with thousands of dimensions. We humans cannot envisage this: the space we inhabit is defined by three numbers (or four, if you include time), and we simply cannot imagine a space with thousands of dimensions. Researchers suggest that we shouldn't even try.

For Transformer models, words and tokens have dimensions. We might think of them as properties, or relationships. For instance, man is to king as woman is to queen. These concepts can be expressed as vectors, like arrows in three-dimensional space. The model will attribute a probability to a particular token being associated with a particular vector. For instance, a princess is more likely to be associated with the vector which denotes wearing a slipper than with the vector that denotes wearing a dog.
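
The classic way to see this is with vector arithmetic over a toy embedding space. In the sketch below, the four vectors are hand-crafted so that gender and royalty fall along roughly separate directions; real models learn such structure on their own, in thousands of dimensions.

```python
import numpy as np

# Hand-crafted toy embeddings: the axes roughly mean (man-ness, woman-ness, royalty).
emb = {
    "man":   np.array([1.0, 0.0, 0.1]),
    "woman": np.array([0.0, 1.0, 0.1]),
    "king":  np.array([1.0, 0.0, 0.9]),
    "queen": np.array([0.0, 1.0, 0.9]),
}

def nearest(vec, exclude):
    """Return the vocabulary word whose vector points most nearly the same way."""
    cos = lambda a, b: a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return max((w for w in emb if w not in exclude), key=lambda w: cos(emb[w], vec))

# "man is to king as woman is to ...": king - man + woman lands near queen.
v = emb["king"] - emb["man"] + emb["woman"]
print(nearest(v, exclude={"king", "man", "woman"}))  # queen
```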

There are various ways in which machines can discover the relationships, or vectors, between tokens. In supervised learning, they are given enough labelled data to indicate all the relevant vectors. In self-supervised learning, they are not given labelled data, and they have to find the relationships on their own. This means the relationships they discover are not necessarily discoverable by humans: they are black boxes. Researchers are investigating how machines handle these dimensions, but it is not certain that the most powerful systems will ever be truly transparent.

The size of a Transformer model is normally measured by the number of parameters it has. A parameter is analogous to a synapse in a human brain, which is the point where the tendrils (axons and dendrites) of our neurons meet. The first Transformer models had a hundred million or so parameters, and now the largest ones have trillions. This is still smaller than the number of synapses in the human brain, and human neurons are far more complex and powerful creatures than artificial ones.

A surprising discovery made a couple of years after the arrival of Transformers was that they are able to tokenise not just text, but images too. Google released the first vision Transformer in late 2020, and since then people around the world have marvelled at the output of Dall-E, MidJourney, and others.

The first of these image-generation models were Generative Adversarial Networks, or GANs. These were pairs of models, with one (the generator) creating imagery designed to fool the other into accepting it as original, and the second system (the discriminator) rejecting attempts which were not good enough. GANs have now been surpassed by Diffusion models, whose approach is to peel noise away from the desired signal. The first Diffusion model was actually described as long ago as 2015, but the paper was almost completely ignored. They were re-discovered in 2020.
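
The noise-peeling idea rests on a simple forward process that gradually buries a signal in Gaussian noise; a denoiser is then trained to predict that noise so it can be removed step by step. Below is a minimal sketch of the forward (noising) side only; the schedule and array shapes are illustrative, not any particular paper's settings.

```python
import numpy as np

T = 1000
betas = np.linspace(1e-4, 0.02, T)        # noise schedule: how much noise per step
alphas_bar = np.cumprod(1.0 - betas)      # cumulative signal retention

def noisy_sample(x0, t, rng):
    """Jump straight to step t of the forward process (closed form)."""
    eps = rng.normal(size=x0.shape)
    xt = np.sqrt(alphas_bar[t]) * x0 + np.sqrt(1 - alphas_bar[t]) * eps
    return xt, eps  # a denoiser would be trained to predict eps from (xt, t)

rng = np.random.default_rng(0)
x0 = rng.normal(size=(8, 8))              # stand-in for an image
xt, eps = noisy_sample(x0, t=500, rng=rng)
```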

Transformers are gluttons for compute power and for energy, and this has led to concerns that they might represent a dead end for AI research. It is already hard for academic institutions to fund research into the latest models, and it was feared that even the tech giants might soon find them unaffordable. The human brain points to a way forward. It is not only larger than the latest Transformer models (at around 80 billion neurons, each with around 10,000 synapses, it is 1,000 times larger). It is also a far more efficient consumer of energy - mainly because we only need to activate a small portion of our synapses to make a given calculation, whereas AI systems activate all of their artificial neurons all of the time. Neuromorphic chips, which mimic the brain more closely than classic chips, may help.

Aleksa is frequently surprised by what the latest models are able to do, but this is not itself surprising: "If I wasn't surprised, it would mean I could predict the future, which I can't." He derives pleasure from the fact that the research community is like a hive mind: you never know where the next idea will come from. The next big thing could come from a couple of students at a university, and a researcher called Ian Goodfellow famously created the first GAN by playing around at home after a brainstorming session over a couple of beers.

See the rest here:

Has There Been A Second AI Big Bang? - Forbes

The Pentagon Wants AI-Driven Drone Swarms for Search and Rescue Ops – Nextgov

The Defense Department's central artificial intelligence development effort wants to build an AI-powered drone swarm capable of independently identifying and tracking targets, and maybe even saving lives.

The Pentagon's Joint Artificial Intelligence Center, or JAIC, issued a request for information to find out whether AI developers and drone swarm builders can come together to support search and rescue missions.

Search and rescue operations are covered under one of the four core JAIC research areas: humanitarian aid and disaster relief. The program also works on AI solutions for predictive maintenance, cyberspace operations and robotic process automation.

The goal of the RFI is to discover whether industry can deliver a full-stack search and rescue drone swarm that can self-pilot, detect humans and other targets, and stream data and video back to a central location. The potential solicitation would also look for companies or teams that can provide algorithms, machine training processes and data to supplement those provided by the government.

The ideal result would be a contract with several vendors that together could provide the capability "to fly to a predetermined location/area, find people and manmade objects, through onboard edge processing, and cue analysts to look at detections sent via a datalink to a control station," according to the RFI. "Sensors shall be able to stream full motion video to an analyst station during the day or night; though, the system will not normally be streaming as the AI will be monitoring the imagery instead of a person."

The system has to have enough edge processing power to enable the AI to fly, detect and monitor without any human intervention, while also being able to stream live video to an operator and allow that human to take control of the drones, if needed.

The RFI contains a number of must-have requirements.

The RFI also notes all training data will be government-owned and classified. All development work will be done using government-owned data and on secure government systems.

Responses to the RFI are due by 11 a.m. Jan. 20.

See the rest here:

The Pentagon Wants AI-Driven Drone Swarms for Search and Rescue Ops - Nextgov

HoloLens 2 will have a custom AI chip designed by Microsoft – The Verge

Today, Microsoft announced that the next generation of its mixed reality HoloLens headset will incorporate an AI chip. This custom silicon, a coprocessor designed but not manufactured by Microsoft, will be used to analyze visual data directly on the device, saving time by not uploading it to the cloud. The result, says Microsoft, will be quicker performance on the HoloLens 2, while keeping the device as mobile as possible.

The announcement follows a trend among Silicon Valley's biggest tech companies, which are now scrambling to meet the computational demands of contemporary AI. Today's mobile devices, where AI is going to be used more frequently, simply aren't built to handle these sorts of programs, and when they're asked to, the result is usually slower performance or a burned-out battery (or both).

But getting AI to run directly on devices like phones or AR headsets has a number of advantages. As Microsoft says, quicker performance is one of them, as devices don't have to upload data to remote servers. This also makes the devices more user-friendly, as they don't have to maintain a continuous internet connection. And this sort of processing is more secure, as users' data never leaves the device.

There are two main ways to facilitate this sort of on-device AI. The first is by building special lightweight neural networks that don't require as much processing power. (Both Facebook and Google are working on this.) The second is by creating custom AI processors, architectures, and software, which is what companies like ARM and Qualcomm are doing. It's rumored that Apple is also building its own AI processor for the iPhone, a so-called Apple Neural Engine, and now Microsoft is doing the same for the HoloLens.
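
One concrete technique in the lightweight-network family is weight quantization: shrinking 32-bit float weights to 8-bit integers so a model fits in less memory and runs faster on mobile silicon. The sketch below shows symmetric linear quantization in its simplest form; it is an illustration, not how Facebook, Google or Apple specifically do it, and real mobile toolchains do considerably more.

```python
import numpy as np

def quantize_int8(w):
    """Symmetric linear quantization: one shared scale per tensor."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale                      # 1 byte per weight, plus one float scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale  # approximate reconstruction at run time

w = np.random.default_rng(0).normal(size=1000).astype(np.float32)
q, scale = quantize_int8(w)
print(np.abs(w - dequantize(q, scale)).max())  # small per-weight error
```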

This race to build AI processors for mobile devices is running alongside work to create specialized AI chips for servers. Intel, Nvidia, Google, and Microsoft are all working on their own projects in this department. This sort of AI cloud power will serve different needs than new mobile processors (it'll primarily be sold directly to businesses), but from the viewpoint of designing silicon, the two goals are likely to be complementary.

Speaking to Bloomberg, Microsoft Research engineer Doug Burger said the company was taking the challenge of creating AI processors for servers very seriously, adding: "Our aspiration is to be the number one AI cloud." Building out the HoloLens' on-device AI capabilities could help with this goal, if only by focusing the company's expertise on the chip architectures needed to handle neural networks.

For the second-generation HoloLens, the AI coprocessor will be built into its Holographic Processing Unit, or HPU, Microsoft's name for its central vision-processing chip. This handles data from all the device's on-board sensors, including the head-tracking unit and infrared cameras. The AI coprocessor will be used to analyze this data using deep neural networks, one of the principal tools of contemporary AI. There's still no release date for the HoloLens 2, but it's reportedly arriving in 2019. When it lands, AI will be even more central to everyday computing, and that specialized silicon will likely be in high demand.

Here is the original post:

HoloLens 2 will have a custom AI chip designed by Microsoft - The Verge

Is There a Clear Path to General AI? – CMSWire

PHOTO: John Lockwood

People frequently mix up two pairs of terms when talking about artificial intelligence: Strong vs. Weak AI, and General vs. Narrow AI. The key to understanding the difference lies in which perspective we want to take: are we aiming for a holy grail that, once found, will mean solving one of mankind's biggest questions, or are we merely aiming to build a tool to make us more efficient at a task?

The Strong vs. Weak AI dichotomy is largely a philosophical one, made prominent in 1980 by American philosopher John Searle. Philosophers like Searle are looking to answer the question of whether we can theoretically and practically build machines that truly think and experience cognitive states, such as understanding, believing, wanting, hoping. As part of that endeavor, some of them examine the relationship between these states and any possibly corresponding physical states in the observable world of the human body: when we are in the state of believing something, how does that physically manifest itself in the brain or elsewhere?

Searle concedes that computers, the most prominent form of such machines in our current times, are powerful tools that can help us study certain aspects of human thought processes. However, he calls that Weak AI, as it's not the real thing. He contrasts that with "Strong AI" as follows: "But according to strong AI, the computer is not merely a tool in the study of the mind; rather, the appropriately programmed computer really is a mind, in the sense that computers given the right programs can be literally said to understand and have other cognitive states."

While this philosophical perspective is fascinating in and of itself, it remains largely elusive to modern day practical efforts in the field of AI. Philosophers are thinkers, meant to raise the right questions at the right time to help us think through the implications of our doings. They are rarely builders. The builders among us, the engineers, seek to solve practical problems in the physical world. Note that this is not a question of whose aims are more noble, but merely a question of perspective.

Engineers seeking to build systems that are of practical use today are more interested in the distinction of General vs. Narrow AI. That distinction is one of the applicability of a system at hand. We call something Narrow AI if it is built to perform one function, or a set of functions in a particular domain, and that alone. In reality, that is the only form of AI we have at our disposal today. All of the currently available systems are built for one task alone.

The biggest revelation for any non-expert here is that an AI system's performance in one task does not generalize. If you've built a system that has learned to play chess, your system cannot play the ancient Chinese game of Go, not even with some additional modifications. And if you have a system that plays Go better than any human, no matter how hard that task seemed before such a program finally got built in 2017, that system will NOT generalize to any other task. Just because a system performs one task well does not mean it will "soon" (a term used often by people writing and talking about technology in general) perform seemingly related tasks well, too. Each new task that is different in nature (and there are many of those different natures) is a tedious and laborious job for the engineers and designers who build these systems.

So if the opposite of Narrow AI is General AI, you're essentially talking about a system that can perform any task you throw at it. The original idea behind General AI was to build a system that could learn any kind of task through self-training, without requiring examples pre-labeled by humans. (Note that this is still different from Searle's notion of Strong AI, in that you could theoretically build General AI without building true thinking; it could still just be a simulation of the real thing.)

Let's do a thought experiment (a common tool of any philosopher who wants to think through an idea or theory). What if we interconnected each and every narrow AI solution ever built on planet Earth? What if we essentially built an IoA, an Internet of AIs? There are companies out there that have built:

If we standardized the interfaces for all of these solutions, and those for the hundreds and thousands of other tasks we face in our lives, wouldn't we then essentially have built General AI? One AI system of systems that can solve whatever you throw at it?

Certainly not. A hodgepodge of backend systems that each accomplish one task in a proprietary way is not the same as one system that is equipped with general learning capabilities and can thus teach itself any skill needed. It is also far from the sort of Strong AI that philosophers have in mind, as humans are definitely not a conglomerate of differently built subcomponents for each and every task we can perform.
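
To see why, it helps to make the thought experiment concrete. The following is a minimal, hypothetical sketch (all service names are invented for illustration) of what such an "Internet of AIs" would amount to at its core: a lookup table that routes each task to a separately built narrow system, with no shared learning between them.

```python
from typing import Callable, Dict

# Hypothetical stand-ins for separately built narrow AI services.
# Each one handles exactly one task and knows nothing of the others.
def translate(text: str) -> str:
    return f"[translation of: {text}]"

def summarize(text: str) -> str:
    return f"[summary of: {text}]"

def play_chess(position: str) -> str:
    return f"[best move for: {position}]"

# The "IoA" reduces to a registry mapping task names to services.
IOA_REGISTRY: Dict[str, Callable[[str], str]] = {
    "translate": translate,
    "summarize": summarize,
    "chess": play_chess,
}

def dispatch(task: str, payload: str) -> str:
    """Route a request to the matching narrow system.

    Nothing here learns: a task with no registered service simply
    cannot be handled, which is why this is routing, not General AI.
    """
    service = IOA_REGISTRY.get(task)
    if service is None:
        raise ValueError(f"no narrow AI registered for task {task!r}")
    return service(payload)

print(dispatch("translate", "Hello, world"))
# dispatch("go", "...") would raise: no amount of routing fills the gap.
```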

But then again, does it matter? Wouldn't such a readily available system of systems essentially give us an omnipotent tool to help us with any imaginable task we face? It certainly would! And to someone oblivious to its inner structure, it would even appear to be that long-sought magical AI we've been shown in books and movies for decades.

The problem is this: such an Internet of AIs will never become reality. Our world's capitalist nature essentially prohibits the sharing of intellectual property at the scale needed for such an endeavor. For any of the systems mentioned above, there are probably dozens of firms out there that make money having re-solved the same problem over and over again. Google's translation engine does a fine job, but so too do Facebook's, Microsoft's, IBM's, DeepL's, SysTran's, Yandex's, Babylon's, Apertium's ... some of them use a common foundation that academic circles have produced over the years, but many don't. Humans are not wired to combine their forces for a common greater good of such majestic proportions; we are observing that fateful trait of ours in matters both short-term (coronavirus) and long-term (global warming).

So until our very DNA changes, which would in turn change our societal systems, we are stuck with Narrow AI. It will continue to bring meaningful innovation and make us more efficient over time in each of the domains it tackles, but the holy grails of Strong or General AI will remain a dream.

Tobias Goebel is a conversational technologist and evangelist with over 15 years of experience in the customer service and contact center technology space. He has held roles spanning engineering, consulting, pre-sales, product management, and product marketing, and is a frequent blogger and speaker on Customer Experience topics.

Continued here:

Is There a Clear Path to General AI? - CMSWire

Google Unveils an AI Investment Fund. It's Betting on … – Wired – WIRED

See the original post here:

Google Unveils an AI Investment Fund. It's Betting on ... - Wired - WIRED

UNWTO And Telefónica Partner To Help Destinations Use Data And AI To Drive Tourism's Sustainable Recovery – Hospitality Net

The World Tourism Organization (UNWTO) has strengthened its partnership with Telefónica, the Spanish multinational telecommunications company. As tourism restarts around the world, Telefónica deepens its collaboration with the United Nations specialized agency to advance market intelligence in order to accelerate the sector's recovery from the impact of COVID-19.

As it guides the sector through the challenge posed by the pandemic, UNWTO has prioritized innovation as a key means of growing tourism back stronger and better. Additionally, with the global community now left with less than 10 years to achieve the Sustainable Development Goals ("The Decade of Action"), UNWTO is also driving tourism's movement towards sustainability. This collaboration with Telefónica, which builds on an existing partnership, is designed to use digital transformation to support sustainable recovery and future growth.

UNWTO and Telefónica will work together to promote the effective use of Big Data and Artificial Intelligence across the tourism sector. This will help destinations better understand tourist behaviour, allowing them to market their products more effectively. Better data management will also help destinations manage tourist flows within the context of the new health and safety protocols being rolled out in response to COVID-19.

UNWTO Secretary-General Zurab Pololikashvili said: "The digital transformation of tourism will allow the sector to grow back stronger from the standstill caused by COVID-19. As UNWTO leads tourism's restart, our partnership with Telefónica will allow us to provide Member States and the sector as a whole the tools they need to accelerate recovery, build trust by guaranteeing safety and promote sustainability."

Miguel Llopis, Industry Lead of Public Sector in IoT and Big Data at Telefónica, added: "Tourism will return with force but the sector will have to face a structural transformation where new digital technologies, such as IoT and Big Data, will be a differential factor of competitive advantage."

Telefónica and UNWTO have worked together to launch a series of visualization tools within the UNWTO Global Data Dashboard that allow for a better understanding of key performance indicators in tourism.

Also, to mark the start of this new phase of collaboration, UNWTO joined Telefónica, Turismo de Portugal, the Tourism Authority of Buenos Aires and the Secretary of Tourism of Chile (SERNATUR) for a special virtual training session for destinations in the Americas. This focused on exploring how the use of Big Data can add value to the tourism sector and lead recovery.

The World Tourism Organization (UNWTO), a United Nations specialized agency, is the leading international organization with the decisive and central role in promoting the development of responsible, sustainable and universally accessible tourism. It serves as a global forum for tourism policy issues and a practical source of tourism know-how. Its membership includes 159 countries, 6 territories, 2 permanent observers and over 500 Affiliate Members.

Visit link:

UNWTO And Telefónica Partner To Help Destinations Use Data And AI To Drive Tourism's Sustainable Recovery - Hospitality Net

COPAN's PhenoMATRIX Fuses the Power of Artificial Intelligence and Culture for Highly Sensitive GBS Detection Using Breakthrough Reading Algorithm -...

"COPAN's PhenoMATRIX not only was able to detect more true positive cultures than manual review of digital culture images, but it shows that chromogenic cultures together with artificial intelligence algorithms can detect GBS colonization with the same high sensitivity as molecular detection systems," said COPAN Diagnostics' Scientific Director Dr. Susan Sharp.

The study, which was published on October 21, 2020, evaluated the PhenoMatrix Chromogenic Detection Module digital imaging software's ability to detect GBS from LIM broth plated on bioMérieux's CHROMID Strepto B media at two clinical laboratories.

After 48 hours of incubation, the sensitivity of COPAN's PhenoMATRIX was similar to that of the BD MAX GBS molecular test (95.5% and 96.8%, respectively) and significantly higher than manual reading at 90.3%.2

Another noteworthy discovery was that COPAN's software never called a culture that was ultimately determined to be positive a negative, and it identified an additional eight true-positive specimens that were missed by manual reading. This finding establishes that the innovative PhenoMATRIX AI, combined with classic culture, is a powerful combination at a fraction of the cost of molecular testing.
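
For readers unfamiliar with the metric, sensitivity is the fraction of truly positive specimens that a method flags as positive; a miss is a false negative. A short sketch makes the arithmetic behind the percentages above explicit (the counts are hypothetical, chosen only to reproduce one of the reported figures; the article does not give the raw numbers):

```python
def sensitivity(true_positives: int, false_negatives: int) -> float:
    """Sensitivity (recall) = TP / (TP + FN): the share of genuinely
    positive specimens that the screening method actually catches."""
    return true_positives / (true_positives + false_negatives)

# Hypothetical counts for illustration only.
print(f"{sensitivity(true_positives=191, false_negatives=9):.1%}")  # 95.5%
```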

"COPAN's AI software, along with the use of chromogenic agars has made our decades-old agar culture for the detection of pathogens 'new' again," Sharp added.

PhenoMATRIX is an advanced suite of AI algorithms that gives WASPLab users the power to automatically pre-assess and pre-sort culture plates, and to read, interpret and segregate bacterial cultures. By grouping negative cultures, which make up the majority of the cultures screened, staff can quickly review up to 40 plates per computer screen and batch-release negative results, eliminating the need to review each plate manually, saving time and freeing up technicians to focus on more complex tasks.
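
The batch-review workflow described above can be pictured as a simple triage step: plates are sorted by the AI's pre-assessment so that negatives can be reviewed and released in bulk. A minimal, hypothetical sketch (the data model is invented for illustration and is not COPAN's API):

```python
from dataclasses import dataclass

@dataclass
class Plate:
    plate_id: str
    ai_label: str  # "negative" or "positive", as pre-assessed by the AI

def triage(plates: list[Plate], per_screen: int = 40):
    """Group AI-flagged negatives into screens of up to `per_screen`
    plates for batch review; positives go to a technologist individually."""
    negatives = [p for p in plates if p.ai_label == "negative"]
    positives = [p for p in plates if p.ai_label == "positive"]
    screens = [negatives[i:i + per_screen]
               for i in range(0, len(negatives), per_screen)]
    return screens, positives

plates = [Plate(f"P{i:03d}", "positive" if i % 10 == 0 else "negative")
          for i in range(100)]
screens, positives = triage(plates)
print(f"{len(screens)} screens of negatives, {len(positives)} for manual review")
```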

Contact us for more information about COPAN's state-of-the-art PhenoMATRIX software and how you can add these intelligent algorithms to your WASPLab system.

References: 1. CDC. Group B Strep (GBS) Fast Facts. https://www.cdc.gov/groupbstrep/about/fast-facts.html. Last reviewed June 11, 2020. Accessed October 27, 2020. 2. Baker J, et al. Digital image analysis for the detection of Group B Streptococcus from ChromID StreptoB Media using a PhenoMatrix Artificial Intelligence Software Algorithm. J Clin Microbiol. 2020; doi:10.1128/JCM.01902-19

About COPAN

With a reputation for innovation, COPAN is the leading manufacturer of collection and transport systems in the world. COPAN's collaborative approach to pre-analytics has resulted in Flocked Swabs, ESwab, UTM Universal Transport Medium, and laboratory automation, WASP and WASPLab. COPAN carries a range of microbial sampling products, inoculation loops, and pipettes. For more information, visit www.copanusa.com.

SOURCE COPAN Diagnostics, Inc.

Visit link:

COPAN's PhenoMATRIX Fuses the Power of Artificial Intelligence and Culture for Highly Sensitive GBS Detection Using Breakthrough Reading Algorithm -...

AI In The Enterprise: Reality Or Myth? – Forbes

Artificial intelligence (AI) is one of the most talked-about new technologies in the business world today.

It's estimated that enterprise AI usage has increased 270% since 2015. This has coincided with a massive spike in investment, with the enterprise AI industry expected to grow to $6.1 billion by 2022.

Along with the technology's very real ability to transform the job market, exaggerated claims have also become common. The hype surrounding this branch of technology has given rise to a number of myths:

Myth No. 1: More Data Is The Key To AI's Success

While it's true that AI needs data in order to learn and operate efficiently, the idea that more data equals better outcomes is misleading. Not all data is created equal.

If the information fed to an AI program is labeled incorrectly or isn't relevant, it poisons the data pool. More information can make a model's predictions more precise, but if the underlying data is of poor quality, the output will be precise without necessarily reflecting business reality. This can result in poor decision-making.

The truth is that the data fed to an AI solution needs to be curated and analyzed beforehand. Prioritize quality over quantity.
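
What "curated and analyzed beforehand" can mean in practice is a screening pass that rejects mislabeled or implausible records before they ever reach training. A minimal, hypothetical sketch (the schema and validity rules are invented for illustration):

```python
ALLOWED_LABELS = {"churn", "no_churn"}                       # hypothetical label set
REQUIRED_FIELDS = {"customer_id", "label", "tenure_months"}  # hypothetical schema

def is_clean(record: dict) -> bool:
    """Reject records that would poison the training pool: missing
    fields, unknown labels, or values outside a sanity range."""
    if not REQUIRED_FIELDS <= record.keys():
        return False
    if record["label"] not in ALLOWED_LABELS:
        return False
    if not 0 <= record["tenure_months"] <= 600:
        return False
    return True

raw = [
    {"customer_id": "a1", "label": "churn", "tenure_months": 24},
    {"customer_id": "a2", "label": "maybe?", "tenure_months": 12},    # bad label
    {"customer_id": "a3", "label": "no_churn", "tenure_months": -5},  # bad value
]
curated = [r for r in raw if is_clean(r)]
print(f"kept {len(curated)} of {len(raw)} records")  # kept 1 of 3
```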

Myth No. 2: Companies See Immediate Value From AI Investments

The integration of AI into standard operating procedures doesn't happen overnight. As seen in Myth No. 1, the data the AI uses needs to be curated and checked for relevance beforehand. This may significantly reduce the amount of information the AI has access to.

To obtain truly valuable returns, it's essential to continuously provide relevant data. Like humans, AI solutions need to be given time to learn. There may be a significant lag between when an AI-based initiative begins and when you see a return on investment.

Myth No. 3: AI Will Render Humans Obsolete

The purpose of AI is not to replace all human workers. AI is a tool businesses can use to achieve their goals. It can automate mundane processes and pull interesting insights from large data sets. When used correctly, it augments and aids human decision-making. AI provides recommendations based on trends gleaned from mountains of information. It may even pose new questions that have never been considered. A human still needs to weigh the information provided and make a final decision based on risk analysis.

Pointing out these myths in no way indicates that AI won't deliver on its transformational promise. It's easy to forget that enterprise AI adoption is still in its infancy. Even so, a 2018 Deloitte survey reported that 82% of executives said their AI projects had already led to a positive ROI. Those now implementing AI projects will be the case studies of the near future.

While there are sure to be growing pains, being on the cutting edge of this exciting technology should be beneficial. There's little doubt about how important it will be for the businesses of tomorrow. Getting a head start now, ironing out the wrinkles and locking down efficient processes will pay dividends.

Visit link:

AI In The Enterprise: Reality Or Myth? - Forbes

AI Is Shaking The Oil And Gas Sector To Its Core | Articles | Chief Data Officer – Innovation Enterprise

Artificial intelligence is one of the most exciting technological advancements to reshape our society in living memory, yet few people have a robust understanding of AI and the myriad of ways that it's changing our world. Nowhere is AI more important and disruptive than in the energy sector, where professionals from a wide range of backgrounds are finding it immensely helpful. Nevertheless, the role of AI in the oil and gas sector is still largely misunderstood, and many potential entrants to the industry have no idea where to begin brushing up on this complex topic.

Here's a breakdown of how AI is disrupting oil and gas, and why intelligent machines will be imperative to the future of the energy sector.

AI isn't coming, it's already here

If there's an easy way to describe the role of AI in the oil and gas sector, it's that this technology has already become an ingrained part of how energy companies and professionals achieve their objectives. Oil and gas companies have historically been massive collectors of data; if well workers couldn't access huge treasure troves of data about the region they're operating in, for instance, they would never be able to succeed at their jobs while ensuring workplace safety and cost-effectiveness. This means that the oil and gas sector was ripe for disruption by AI, which more than anything else desperately needs massive volumes of information to work effectively.

AI began to take over the oil and gas sector in no small part because it was already replete with a tremendous amount of data surrounding ongoing drilling operations and planned future initiatives. Predictive algorithms were capable of digesting huge volumes of previously collected data before generating new insights that contemporary oil and gas professionals simply wouldn't have been capable of producing without the assistance of intelligent machines. It will thus become imperative for future oil and gas workers to be familiar and comfortable with computers if they want to remain successful in their field for very long.
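
As a deliberately simple illustration of what "predictive algorithms digesting previously collected data" can look like, here is a hypothetical sketch that fits a linear trend to historical pump-vibration readings and flags an asset whose projected value crosses a safety threshold. Every field name, number and threshold is invented; production systems are vastly more sophisticated.

```python
def linear_trend(readings: list[float]) -> tuple[float, float]:
    """Ordinary least squares fit of y = slope * t + intercept
    over equally spaced historical readings."""
    n = len(readings)
    t_mean = (n - 1) / 2
    y_mean = sum(readings) / n
    slope = (sum((t - t_mean) * (y - y_mean) for t, y in enumerate(readings))
             / sum((t - t_mean) ** 2 for t in range(n)))
    return slope, y_mean - slope * t_mean

def needs_maintenance(readings: list[float], horizon: int = 24,
                      threshold: float = 8.0) -> bool:
    """Flag the asset if vibration is projected to exceed the
    (hypothetical) threshold within `horizon` future readings."""
    slope, intercept = linear_trend(readings)
    projected = slope * (len(readings) + horizon) + intercept
    return projected > threshold

vibration = [3.1, 3.3, 3.2, 3.6, 3.9, 4.2, 4.6, 5.0]  # invented sample data
print(needs_maintenance(vibration))  # True: the upward trend crosses 8.0
```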

More than anything else, those workers who rely upon complex software to manage their responsibilities are finding themselves disrupted by AI. This mostly isn't a negative process, however; while AI-led disruption may temporarily perplex workers, it's not rendering them obsolete. Many claims that AI and similar innovations would result in widespread joblessness haven't come true. This is mostly because many of the people making such predictions were critics of AI and related technologies, and argued that job losses would follow its adoption in an effort to prevent that adoption.

The cat is out of the bag, however, and there's no stopping AI now that it's become a regular facet of the oil and gas sector. As a report from EY makes clear, areas of the industry that are under siege by changing market conditions can benefit from AI by relying on it to cut down on operational costs while simultaneously catching errors that the human eye would never notice.

Reservoirs aren't so intimidating

One of the most impressive ways that AI has disrupted the oil and gas sector is by rendering reservoirs more accessible than ever before. Previously, companies shied away from drilling in certain areas because they were unsure of the probability of success. Now, however, simulated programs that are managed by AI can create impressive knowledge graphs that incorporate the region's geophysics and other reservoir project information. Companies that were once worried about paying for expensive oil and gas training courses can thus avoid wasting their money on frivolous training procedures by using programs to determine whether a reservoir is worth pursuing in the first place.
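
The "knowledge graph" idea is easier to grasp with a toy example: reservoir entities (wells, formations, surveys) are linked by typed relationships, and questions are answered by walking those links. A minimal, hypothetical sketch (every entity and relation is invented for illustration):

```python
from collections import defaultdict

# A knowledge graph stores (subject, relation, object) triples.
TRIPLES = [
    ("Well-7", "drilled_into", "Formation-X"),
    ("Formation-X", "surveyed_by", "SeismicSurvey-2019"),
    ("Formation-X", "has_porosity", "0.22"),
    ("SeismicSurvey-2019", "indicates", "gas_cap"),
]

graph: dict[str, list[tuple[str, str]]] = defaultdict(list)
for subject, relation, obj in TRIPLES:
    graph[subject].append((relation, obj))

def describe(entity: str, depth: int = 2, indent: str = "") -> None:
    """Walk outgoing edges to assemble what the graph 'knows' about
    an entity, following links up to `depth` hops away."""
    for relation, obj in graph[entity]:
        print(f"{indent}{entity} --{relation}--> {obj}")
        if depth > 1:
            describe(obj, depth - 1, indent + "  ")

describe("Well-7", depth=3)
# Traversal surfaces that Well-7 reaches a formation whose seismic
# survey indicates a gas cap: linked context, not isolated records.
```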

Precision drilling is also ensuring that reservoirs that were previously accessible can now be exploited to a fuller and more profitable extent. This means that many projects that oil and gas executives thought were winding down can instead be reinvigorated with the help of AI-led drilling, which is far more precise and productive than that exclusively managed by humans. Field surveillance, too, will be made much easier and far cheaper when it's simultaneously managed by man and machine working together rather than one of them operating by their lonesome.

Finally, AI is also making the oil and gas sector safer than ever before. Smart helmets and other wearable technology that workers carry with them will ensure that those who are stuck in tricky situations will enjoy closer monitoring from their peers. This means that workers who find themselves imperiled will have outsiders aware of that trouble coming to rescue them sooner than ever before.

From project maintenance to worker safety, AI is disrupting and benefiting the oil and gas industry so much that it's almost difficult to keep track of all the innovations it's introducing to the field. Before long, we can expect AI to become a normal and almost mundane aspect of the oil and gas world.

Read the original:

AI Is Shaking The Oil And Gas Sector To Its Core | Articles | Chief Data Officer - Innovation Enterprise