Nuclear Fusion and Artificial Intelligence: the Dream of Limitless Energy – AI Daily

Ever since the 1930s, when scientists such as Hans Bethe showed that nuclear fusion was possible, researchers have strived to initiate and control fusion reactions to produce useful energy on Earth. The clearest example of a fusion reaction is in the cores of stars like the Sun, where hydrogen atoms fuse to form helium, releasing the enormous energy that powers the star's heat and light. On Earth, scientists must heat and confine plasma, an ionised state of matter similar to a gas, to force particles to fuse and release their energy. Unfortunately, starting fusion reactions on Earth is very difficult, since they require conditions similar to those in the Sun, namely very high temperature and pressure, and scientists have been searching for a solution for decades.

In May 2019, a workshop on how machine learning could advance fusion research was held, jointly supported by the Department of Energy's Offices of Fusion Energy Sciences (FES) and Advanced Scientific Computing Research (ASCR). The resulting report discusses seven 'priority research opportunities':

'Science Discovery with Machine Learning' involves bridging gaps in theoretical understanding via the identification of missing effects in large datasets, the acceleration of hypothesis generation and testing, and the optimisation of experimental planning. Essentially, machine learning is used to support and accelerate the scientific process itself.

'Machine Learning Boosted Diagnostics' is where machine learning methods are used to maximise the information extracted from measurements, systematically fuse multiple data sources and infer quantities that are not directly measured. Classification techniques, such as supervised learning, could be applied to the data extracted from diagnostic measurements (a toy sketch of such a classifier appears after this list).

'Model Extraction and Reduction' includes the construction of reduced models of fusion systems and the acceleration of computational algorithms. Effective model reduction can shorten computation times and allow simulations (of a tokamak fusion reactor, for example) to run faster than real time (see the reduced-order-model sketch after this list).

'Control Augmentation with Machine Learning' covers three broad areas of plasma control research that would benefit significantly from machine learning: control-level models; real-time data analysis algorithms; and optimisation of plasma discharge trajectories for control scenarios. Using AI to improve the control mathematics could manage the uncertainty in calculations and ensure better operational performance.

'Extreme Data Algorithms' involves finding methods to manage the volume and velocity of data that fusion experiments will generate.

'Data-Enhanced Prediction' will help monitor the health of the plant system and predict faults, such as plasma disruptions, which must be mitigated.

'Fusion Data Machine Learning Platform' is a system that can manage, format, curate and enable access to experimental and simulation data from fusion research, for optimal usability by machine learning algorithms.
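To make the diagnostics item concrete, here is a minimal supervised-classification sketch. Everything in it is illustrative: the features, labels and thresholds are synthetic stand-ins for real diagnostic measurements, not anything specified in the workshop report.

```python
# Illustrative only: a generic supervised classifier on synthetic
# "diagnostic" data. Feature meanings and labels are invented.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
# Pretend each row is one plasma shot: [core temperature, density, field strength]
X = rng.normal(size=(1000, 3))
# Pretend label: 1 if the shot ended in a disruption, 0 otherwise
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 1).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```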
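For the model-reduction item, a classic starting point is a reduced-order model built from simulation snapshots via proper orthogonal decomposition (POD). The numpy sketch below uses synthetic low-rank data purely to illustrate the idea of trading a huge state vector for a few modes:

```python
# Minimal POD sketch: compress simulation snapshots to a handful of modes.
# The snapshot matrix here is synthetic low-rank data, not a real fusion code.
import numpy as np

rng = np.random.default_rng(1)
modes_true = rng.normal(size=(10000, 5))            # hidden low-rank structure
coeffs = rng.normal(size=(5, 200))
snapshots = modes_true @ coeffs + 0.01 * rng.normal(size=(10000, 200))

U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
k = 10
basis = U[:, :k]                                    # reduced basis (10000 x 10)

state = snapshots[:, 0]
reduced = basis.T @ state                           # 10 numbers instead of 10000
reconstructed = basis @ reduced
err = np.linalg.norm(state - reconstructed) / np.linalg.norm(state)
print(f"relative reconstruction error with {k} modes: {err:.4f}")
```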

Read more:

Nuclear Fusion and Artificial Intelligence: the Dream of Limitless Energy - AI Daily

Why The Future of Cybersecurity Needs Both Humans and AI Working Together – Security Boulevard

As we look to the future of cybersecurity, we must consider the recent past and understand what the pandemic has taught us about our security needs.

Many cybersecurity platforms proved inadequate when a large percentage of the world's workforce abruptly shifted to remote work in the spring of 2020. Companies found themselves fighting against the limitations of their own cybersecurity platforms.

Modern systems enhanced with self-learning AI capabilities have fared best in the face of the pandemic's impact on networking.

For others, immediate, manual interventions were the only thing standing between enterprise security and the bad actors who had been standing by waiting for a global event of this scale.

They swooped in almost immediately, targeting governments, hospital systems, and a wide swath of commercial enterprises. Everything from ransomware to DDoS to phishing schemes ramped up right alongside the upheaval so many companies were experiencing in the early days of the pandemic.

Many of the inadequate systems were enhanced with some form of AI, but they relied on what employees had taught them. No one could have predicted such a dramatic shift in behavior, and systems trained to alert on unexpected activity, like a sudden rush of remote connections, floundered.

Security analysts were unable to keep up with the constant stream of false positives. Threat hunting is time-consuming for teams under typical network conditions. The pandemic exacerbated this challenge.

Bad actors had been standing by, waiting for an event that would impact thousands of global networks all at once.

As companies examine their security systems, the question they'll need to answer isn't "Should we bring AI on board?" but rather "What kind, and how much, AI do we need?"

A recent WhiteHat Security survey revealed that more than 70 percent of respondents cited AI-based tools as contributing to more efficiency. More than 55 percent of mundane tasks have been replaced by AI, freeing up analysts for other departmental tasks.

Still, not all enterprises or employees are excited by the prospect of bringing more AI on board, especially AI that requires less intervention. This is an understandable response: employees worry that AI will replace their jobs.

Multitalented human employees are not only part of the self-learning AI solution, they are integral. Respondents to the WhiteHat survey cited the importance of creativity and experience as critical for adequate security.

A combined approach appears to be the most reliable path forward for cybersecurity. Security teams that incorporate AI to handle mundane tasks and reduce overarching issues like false positives, while keeping their focus on the human element, will fare better.

Third-wave self-supervised AI platforms handle unusual network activity with more nuance. When the shift to remote work hit these networks, self-learning AI quickly reestablished a new normal. Instead of triggering hundreds or thousands of false positives, these systems rapidly adjusted and started looking for behavior that didn't fit the new frame of reference.
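The mechanics of "reestablishing a new normal" can be pictured with a toy rolling baseline that scores each observation against recent history instead of a frozen training set. This is a simplified sketch of the general idea, not MixMode's actual algorithm:

```python
# Toy rolling-baseline anomaly scorer (illustrative; not MixMode's algorithm).
# A frozen baseline would flag the new remote-work traffic level forever;
# a rolling baseline absorbs it once the shift becomes the new normal.
from collections import deque
import statistics

def rolling_alerts(values, window=50, threshold=4.0):
    history = deque(maxlen=window)
    alerts = []
    for t, v in enumerate(values):
        if len(history) >= 10:
            mu = statistics.fmean(history)
            sigma = statistics.pstdev(history) or 1.0
            if abs(v - mu) / sigma > threshold:
                alerts.append(t)
        history.append(v)  # the anomaly itself joins the baseline
    return alerts

# 200 days of ~100 remote connections, then a permanent jump to ~1000
traffic = [100] * 200 + [1000] * 200
print(rolling_alerts(traffic))  # flags the jump briefly, then goes quiet
```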

In the meantime, security analysts could focus on shoring up vulnerabilities created by the pandemic in other ways.

Creative problem solving has never been more crucial than for teams facing today's unprecedented challenges. Qualities like intuition and experience-based decision-making are invaluable, and even the most advanced AI cannot replace them.

What machines can do is augment the important, nuanced work that human security professionals do. Talented security analysts waste time sifting through false positives and handling many other mundane tasks while keeping a constant eye on the network.

Tools that reduce manual interventions also reduce errors and improve employee satisfaction.

Machines will never be able to entirely replicate or take over the work security professionals do, so it's essential for companies to look for security platforms that underscore the talents of human security analysts. Security teams that view AI as one part of a complete, multi-faceted approach will benefit the most from these improvements.

Future-facing companies must evaluate their ability to weather the cybersecurity emergencies of tomorrow. Typical AI-enhanced platforms can help but are fundamentally limited. Without a complete understanding of your network's baseline and how it can change in response to unexpected events, no security platform can detect every threat.

MixMode's third-wave AI solution develops an accurate, evolving baseline of network behavior and then responds intelligently to aberrations and unexpected network behavior.

Reach out to our client service team today to set up a demo.

Our Q2 Top Cybersecurity Insights

NTA and NDR: The Missing Piece

The Problem with Relying on Log Data for Cybersecurity

The (Recent) History of Self-Supervised Learning

Guide: The Next Generation SOC Tool Stack – The Convergence of SIEM, NDR and NTA

Redefining the Definition of Baseline in Cybersecurity

MixMode CTO Responds to Self-Supervised AI Hopes

Why Training Matters And How Adversarial AI Takes Advantage of It

Read more from the original source:

Why The Future of Cybersecurity Needs Both Humans and AI Working Together - Security Boulevard

DeepMind compares the way children and AI explore – VentureBeat

In a preprint paper, researchers at Alphabet's DeepMind and the University of California, Berkeley propose a framework for comparing the ways children and AI learn about the world. The work, which was motivated by research suggesting children's learning supports behaviors later in life, could help close the gap between AI and humans when it comes to acquiring new abilities. For instance, it might lead to robots that can pick and pack millions of different kinds of products while avoiding various obstacles.

Exploration is a key feature of human behavior, and recent evidence suggests children explore their surroundings more often than adults. This is thought to translate to more learning that enables powerful, abstract task generalization, a type of generalization AI agents could tangibly benefit from. For instance, in one study, preschoolers who played with a toy developed a theory about how the toy functioned, such as determining whether its blocks worked based on their color, and they used this theory to make inferences about a new toy or block they hadn't seen before. AI can approximate this kind of domain and task adaptation, but it struggles without a degree of human oversight and intervention.

The DeepMind approach incorporates an experimental setup built atop DeepMind Lab, DeepMind's Quake-based learning environment comprising navigation and puzzle-solving tasks for learning agents. The tasks require physical or spatial navigation skills and are modeled after games children play. In the setup, children interact with DeepMind Lab through a custom Arduino-based controller, which exposes the same four actions agents use: move forward, move backward, turn left, and turn right.

During experiments approved by UC Berkeleys institutional review board, the researchers attempted to determine two things:

In one test, children were told to complete two mazes, one after another, each with the same layout. They explored freely in the first maze, but in the second they were told to look for a gummy.

The researchers say that in the no-goal condition (the first maze), the children's strategies closely resembled those of a depth-first search (DFS) AI agent, which pursues an unexplored path until it reaches a dead end and then turns around to explore the last branching path it saw. The children made choices consistent with DFS 89.61% of the time, compared with the goal condition (the second maze), in which they made choices consistent with DFS 96.04% of the time. Moreover, children who explored less than their peers took the longest to reach the goal (95 steps on average), while those who explored more found the gummy in the least amount of time (66 steps).
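For reference, the depth-first strategy the children's choices were compared against is easy to state in code. This is a generic grid-maze sketch, not DeepMind's implementation:

```python
# Generic depth-first search on a grid maze (not DeepMind's code).
# The search runs down one corridor until it dead-ends, then backtracks
# to the most recently seen untried branch, the pattern described above.
def dfs(maze, start, goal):
    stack, seen = [(start, [start])], {start}
    while stack:
        (r, c), path = stack.pop()  # most recently discovered branch first
        if (r, c) == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < len(maze) and 0 <= nc < len(maze[0])
                    and maze[nr][nc] != '#' and (nr, nc) not in seen):
                seen.add((nr, nc))
                stack.append(((nr, nc), path + [(nr, nc)]))
    return None

maze = ["S..#",
        ".#.#",
        ".#.G",
        "...."]
print(dfs(maze, (0, 0), (2, 3)))  # path of cells from S to G
```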

The team notes that these behaviors contrast with the techniques used to train AI agents, which often depend on having the agent stumble upon an interesting area by chance and then encouraging it to revisit that area until it is no longer interesting. Unlike humans, who are prospective explorers, AI agents are retrospective.
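The "revisit until no longer interesting" mechanic is often implemented as a count-based novelty bonus, one common retrospective technique (not necessarily the exact scheme used by the agents in the paper): a state pays out less intrinsic reward each time it is revisited.

```python
# Toy count-based novelty bonus, a retrospective exploration signal:
# a state pays 1/sqrt(visits), so a rarely seen state attracts the agent
# until repeat visits drive its bonus toward zero.
from collections import Counter
from math import sqrt

visits = Counter()

def exploration_bonus(state):
    visits[state] += 1
    return 1.0 / sqrt(visits[state])

for _ in range(5):
    print(round(exploration_bonus("room_A"), 3))  # 1.0, 0.707, 0.577, 0.5, 0.447
```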

In another test, children aged four to six were told to complete two mazes in three phases. In the first phase, they explored the maze in a no-goal condition, a sparse condition with a goal and no immediate rewards, and a dense condition with both a goal and rewards leading up to it. In the second phase, the children were tasked with once again finding the goal item, which was in the same location as during exploration. In the final phase, they were asked to find the goal item but with the optimal route to it blocked.

Initial data suggests that children are less likely to explore an area in the dense rewards condition, according to the researchers. However, the lack of exploration doesn't hurt children's performance in the final phase. This isn't true of AI agents: typically, dense rewards make agents less incentivized to explore and lead to poor generalization.

"Our proposed paradigm [allows] us to identify the areas where agents and children already act similarly and those in which they do not," concluded the coauthors. "This work only begins to touch on a number of deep questions regarding how children and agents explore... In asking [new] questions, we will be able to acquire a deeper understanding of the way that children and agents explore novel environments, and how to close the gap between them."

More here:

DeepMind compares the way children and AI explore - VentureBeat

IBM, Salesforce Strike Global Partnership on Cloud, AI – Fortune

Are two clouds better than one?

How about two nerdily-named artificial intelligence platforms?

According to IBM and Salesforce, the answer to both of those questions is yes.

The two Fortune 500 companies on Monday afternoon revealed a sweeping global strategic partnership that aligns one iconic company's multiyear turnaround effort with another's staggering growth ambitions. According to the terms of the deal, IBM and Salesforce will integrate their artificial intelligence platforms (Watson and Einstein, respectively) and some of their software and services (e.g. a Salesforce component to ingest The Weather Company's meteorological data). IBM will also deploy Salesforce Service Cloud internally in a sign of goodwill.

Why not go it alone? Fortune spoke on the phone with IBM CEO Ginni Rometty and Salesforce CEO Marc Benioff to get a better understanding of the motives behind the deal. What follows is a transcript of that conversation, edited and condensed.

Fortune: Hi, guys. So what's this all about?

Benioff: It's great to connect with you again. Artificial intelligence is really accelerating our customers' success and they're finding tremendous value in this new technology. The spring release of Salesforce Einstein has opened our eyes to what's possible. We now have thousands of customers who have deployed this next-generation artificial intelligence capability. I'll tell you, specifically with our Sales Cloud customers, it creates this incredible level of productivity. Sales executives are way more productive than ever before; the ability to do everything from lead scoring to opportunity insights really opened my eyes to what is possible. So the more value in artificial intelligence we can provide our customers, the more successful they'll be, which is why we're doing this relationship with IBM.

We're able to give our customers the incredible capabilities of not only Einstein but Watson. When you look at the industries we cater to (retail, financial services, healthcare), the data and insights that Watson can provide our customers are really incredible. And we're also thrilled that IBM has agreed to use Salesforce products internally as well. This is really taking our relationship to a whole new level.

Rometty: Andrew, thank you for taking the time. This announcement is both strategic and significant. I do think it's really going to take AI further into the enterprise. I think about 2017 as the year when we're going to see AI hit the world at scale. It's the beginning of an era that's going to run a decade or decades in front of us. Marc's got thousands of clients; by the end of this year we'll have a billion people touched by Watson. We both share that vision. An important part of it is the idea that every professional can be aided by this kind of technology. It takes all the noise and turns it into something on which they can take action. It isn't just a sales process; we're going to link other processes across a company. We're talking about being able to augment what everyone does: augment human intelligence. Together, this will give us the most complete understanding of a customer anywhere.

For our joint customers, to me, this is a really big deal. Take an insurance company; Marc's got plenty of them as clients. You link to insights around weather, hook that into a particular region, tell people to move their cars inside because of hail. You might even change a policy. These two things together really do allow clients to be differentiated.

This is the beginning of a journey together.

I thought this was the brainiest deal I've ever heard of, with Watson and Einstein together.

Rometty: It's good comedy.

Like any two large tech companies, you compete in some areas and collaborate in others: frenemies. Why did you engage in this partnership? Any executive asks themselves: build, buy, or partner. Why partner this time?

Benioff: I'll give you my honest answer here, which is that I've always been a huge fan of IBM; Ginni knows that. When I look at pioneering values in business, companies that have done it right and really stuck to their principles over generations, I really look to IBM as a company that has deeply inspired me personally as I built Salesforce over the last 18 years. We're going to be 18 years old on March 8th. When I look at what we've gone through in the last two decades, I really think that it's our values that have guided us and how those values have been inspired by many of the things at IBM.

Number two is, Ginni made a strategic decision to acquire Bluewolf, which is a company that we had worked very hard to nurture and incubate over a very long period of time. It really demonstrated to me that the opportunity to form a strategic relationship with IBM was possible. We both have this incredible vision for artificial intelligence but we're coming at it from very different areas. [Salesforce is] coming at it from a declarative standpoint, expressed through our platform, for our customer relationship management system. IBM's approach, which is pioneering, especially when it comes to key verticals like retail or finance or healthcare: these are complements. These are the best of both worlds for artificial intelligence. These are the two best players coming together. We have almost no overlap in our businesses. We really have a desire to make our customers more successful.

Rometty: Beautifully said. And I'll only add a couple of points. Not only sharing values as companies but in terms of how we look at our customers. We share over 5,000 joint clients. But more importantly, think about this era of AI. There are different approaches you can take. What Marc's done with Einstein: think of it as CRM as a process. What we've done with Watson: think of it as an industry in depth. We do have very little overlap. Why we talk about Watson as a platform is to be integrated with things like what Marc's doing.

Let me ask you about AI. It's been in development for decades, but the current wave is nascent. How do you each see AI as part of the success of your companies? It's a capability; no one goes to the store to buy AI. Hopefully it solves their problems. But AI can be anything.

Rometty: I view AI as fundamental to IBM. Watson on the IBM cloud: that's a fundamental and enduring platform. We've built platforms for ourselves before: mainframe, middleware, managed services. This is now the era of AI. It will be a silver thread through all of what IBM does.

Is it fair to say that you guys aren't trying to compete on AI? I don't mean between you; I mean within the greater industry.

Rometty: We're absolutely complementary. Clients will make some architectural decisions here. Everyone's gonna pick some platforms to use. They will pick them around AI. By the way, there are stages: the most basic is machine learning, then AI, then cognitive [computing]. What we're doing with Marc goes all the way into cognitive. Just to be clear.

Benioff: I could not agree more. We brought our customers into the cloud, then into the social world, then into the mobile world. Now we're bringing them into the AI world.

This is really beyond my wildest dreams in terms of what's possible today. And by the way, that we're able to replace Microsoft's products [at IBM] is a bonus for us. (laughs)

Read more from the original source:

IBM, Salesforce Strike Global Partnership on Cloud, AI - Fortune

5G, AI & IoT: IBM and Verizon Business Close to Edge of "Virtually Mobile" – AiThority

IBM and Verizon Business are collaborating on a global scale to bring AI capabilities to the enterprise edge over 5G networking. There has been a tectonic shift in the enterprise edge industry, which is leveraging AI and Internet of Things capabilities to push the bar higher for 5G networking. With Verizon Business, IBM intends to bring AI and IoT capabilities to the center of every cloud-driven enterprise digital transformation.


5G technology is the mother of all emerging technologies.

We are already witnessing a rise in the number of edge devices used in fintech, healthcare and manufacturing services, and smart city architecture built around 5G networking platforms. These are all part of a larger global picture: the future of Industry 4.0. AI alone couldn't have helped companies achieve all the promises that were made in the first two decades of the century. It's a different industry altogether, with exploding Big Data pushing hard on AI, machine learning, telecom and internet connectivity.

IBM's partnership with Verizon will bring together AI, IoT and 5G technologies to co-invent Multi-access Edge Compute (MEC) capabilities for a wide range of applications. These are tested and analyzed for efficiency, accuracy, quality, and availability at an industrial level.

Industrial customers would benefit from Verizon's 5G MEC expertise and IBM's cloud and AI capabilities. Next-gen high-speed internet connectivity with low latency underlines the future of Industrial Revolution 4.0.

Many industrial enterprises are today seeking ways to use edge computing to accelerate access to near real-time, actionable insights into operations to improve productivity and reduce costs.

Edge computing ensures low latency with trustworthy computing and storage capabilities. Edge virtualization is one of the fastest-growing data center infrastructure management trends and has the power to eliminate traditional IT challenges. With AI and sensors, these data centers can be managed better, with real-time analytics and predictive intelligence becoming the new gold standards of future IT businesses.

Edge computing's decentralized architecture brings technology resources closer to where data is generated, i.e., where devices are located in an industrial site.

Edge computing provides these benefits:

We see the role of 5G networking in edge computing very clearly. 5G is the most advanced standard in global wireless mobile networking. 5G holds the key to connecting every device, from mobiles and cars to everyday objects (some say even digital pacemakers and nano-brains), over a secured mobile network. To understand each of these interactions with the edge device on 5G in real time, we need AI and the power of automation.

With 5G's low latency, high download speeds and capacity, the number of devices that can be supported within the same geographic area increases. IoT devices on the 5G network could amplify the pace at which organizations interact with these devices in near real-time, with computing power in the proximity of the device. This proximity of data storage and computing is what makes the edge and 5G combination so powerful.

This could mean that innovative new applications such as remote-control robotics, near real-time cognitive video analysis and plant automation may now be possible.

IBM and Verizon Business are already eyeing their market with advanced mobile asset tracking and management solutions. These solutions would help enterprises improve operations, optimize production quality, and enhance worker safety from any location, an advantage that could play out very well in any situation arising from a COVID-19-like pandemic or other catastrophes.

At the time of this announcement, Tami Erwin, CEO of Verizon Business, said:

"This collaboration (with IBM) is all about enabling the future of industry in the Fourth Industrial Revolution. Combining the high speed and low latency of Verizon's 5G UWB Network and MEC capabilities with IBM's expertise in enterprise-grade AI and production automation can provide industrial innovation on a massive scale and can help companies increase automation, minimize waste, lower costs, and offer their own clients a better response time and customer experience."

Virtually Mobile All the Way: The AI + IoT Roadmap for a 5G Future

IBM would leverage Verizon's wireless networks, including Verizon's 5G Ultra Wideband (UWB) network and Multi-access Edge Computing (MEC) capabilities. Verizon would also provide its ThingSpace IoT Platform and Critical Asset Sensor (CAS) solution to jointly develop new 5G-based IoT edge networks with IBM.

These will be jointly offered with IBM's market-leading Maximo Monitor with IBM Watson and advanced analytics. The combined solutions could help clients detect, locate, diagnose and respond to system anomalies, monitor asset health, and predict failures in near real-time.

IBM and Verizon are also working on potential combined solutions for 5G and MEC-enabled use cases, such as near real-time cognitive automation for industrial environments.

Bob Lord, Senior Vice President, Cognitive Applications, Blockchain and Ecosystems, IBM, says:

The industrial sector is undergoing an unprecedented transformation as companies begin to return to full-scale operations, aided by new technology to help reduce costs and increase productivity. Through this collaboration, we plan to build upon our longstanding relationship with Verizon to help industrial enterprises capitalize on joint solutions that are designed to be multi-Cloud ready, secured and scalable, from the data center all the way out to the enterprise Edge.

Verizon and IBM also plan to collaborate on potential joint solutions to address worker safety, predictive maintenance, product quality and production automation.

(To share your views on the role of AI, IoT and 5G on the Future of Enterprise IT, please write to us at sghosh@martechseries.com)


Visit link:

5G, AI & IoT: IBM and Verizon Business Close to Edge of "Virtually Mobile" - AiThority

Google Brain's AI achieves state-of-the-art text summarization performance – VentureBeat

Summarizing text is a task at which machine learning algorithms are improving, as evidenced by a recent paper published by Microsoft. That's good news: automatic summarization systems promise to cut down on the amount of message-reading enterprise workers do, which one survey estimates amounts to 2.6 hours each day.

Not to be outdone, a Google Brain and Imperial College London team built a system, Pre-training with Extracted Gap-sentences for Abstractive SUmmarization Sequence-to-sequence (Pegasus), that leverages Google's Transformer architecture combined with pretraining objectives tailored for abstractive text generation. They say it achieves state-of-the-art results in 12 summarization tasks spanning news, science, stories, instructions, emails, patents, and legislative bills, and that it shows surprising performance on low-resource summarization, surpassing previous top results on six data sets with only 1,000 examples.

As the researchers point out, abstractive text summarization aims to generate accurate and concise summaries from input documents, in contrast to extractive techniques. Rather than merely copying fragments from the input, abstractive summarization may produce novel words or cover principal information such that the output remains linguistically fluent.

Transformers are a type of neural architecture introduced in a paper by researchers at Google Brain, Google's AI research division. As do all deep neural networks, they contain functions (neurons) arranged in interconnected layers that transmit signals from input data and slowly adjust the synaptic strength (weights) of each connection; that's how all AI models extract features and learn to make predictions. But Transformers uniquely have attention: every output element is connected to every input element, and the weightings between them are calculated dynamically.
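That "every output connected to every input, with dynamically calculated weightings" is scaled dot-product attention, which can be written in a few lines of bare numpy:

```python
# Scaled dot-product attention: every output position mixes all input
# positions, with mixing weights computed on the fly from the data itself.
import numpy as np

def attention(Q, K, V):
    scores = Q @ K.T / np.sqrt(K.shape[-1])         # query-key similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over each row
    return weights @ V                              # weighted mix of values

rng = np.random.default_rng(0)
x = rng.normal(size=(6, 8))       # 6 tokens, 8-dimensional embeddings
print(attention(x, x, x).shape)   # self-attention output: (6, 8)
```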

The team devised a training task in which whole, and putatively important, sentences within documents were masked. The AI had to fill in the gaps by drawing on web and news articles, including those contained within a new corpus (HugeNews) the researchers compiled.
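The masking step can be pictured as follows: pick the putatively important sentences, replace each with a mask token, and train the model to generate the removed text. The sketch below uses a crude length heuristic in place of the paper's importance scoring, so it only illustrates the shape of the objective:

```python
# Toy gap-sentence masking in the spirit of Pegasus' pretraining objective.
# "Importance" here is just sentence length; the paper scores sentences
# far more carefully.
def mask_gap_sentences(sentences, ratio=0.3, mask_token="<MASK>"):
    n = max(1, int(len(sentences) * ratio))
    picked = set(sorted(range(len(sentences)),
                        key=lambda i: len(sentences[i]), reverse=True)[:n])
    inputs = [mask_token if i in picked else s for i, s in enumerate(sentences)]
    targets = [sentences[i] for i in sorted(picked)]
    return " ".join(inputs), " ".join(targets)

doc = ["Pegasus was built by Google Brain and Imperial College London.",
       "It masks whole sentences.",
       "The model must regenerate the masked sentences from the rest."]
src, tgt = mask_gap_sentences(doc)
print(src)   # document with <MASK> standing in for the chosen sentence
print(tgt)   # what the model is trained to generate
```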

In experiments, the team selected their best-performing Pegasus model, one with 568 million parameters (variables learned from historical data), trained on either 750GB of text extracted from 350 million web pages (Common Crawl) or on HugeNews, which spans 1.5 billion articles totaling 3.8TB collected from news and news-like websites. (The researchers say that in the case of HugeNews, a whitelist of domains ranging from high-quality news publishers to lower-quality sites was used to seed a web-crawling tool.)

Pegasus achieved high linguistic quality in terms of fluency and coherence, according to the researchers, and it didn't require countermeasures to mitigate disfluencies. Moreover, in a low-resource setting with just 100 example articles, it generated summaries at a quality comparable to a model that had been trained on a full data set of 20,000 to 200,000 articles.

Read more here:

Google Brain's AI achieves state-of-the-art text summarization performance - VentureBeat

The Impact Of AI On Call Centres – Forbes


The pandemic is a severe stress test for the business continuity plans of global corporations. The operators of call centres are playing an important role in meeting that challenge, and it has not been easy. In normal times, if an earthquake hits Bangalore, you can switch capacity to your call centre in Manila. But what do you do when all the call centres around the world that serve your customers are hit all at the same time?

The big outsourcing call centre companies which serve corporate giants have hundreds of thousands of employees, and many of these people are working from home now. Their employers can make sure they have adequate computer equipment, but staff in developing countries are often handicapped by lack of good internet access, and the lack of a calm environment without interruptions.

The pandemic will prompt another round of discussions about re-shoring call centre jobs to places which are less vulnerable in that way, but cost will remain a huge barrier. The salary of one person in a corporations home country will often pay for three people in an offshore location. Or you could employ a graduate in India, China, or the Philippines for the cost of a school leaver in the US or the UK, and keep some change.

The pandemic is also reviving talk of automation slashing the number of humans working in call centres. In 2014, the CEO of Telstra, Australia's largest mobile phone company, made headlines with a forecast that within five years there would be no people in its call centres. It didn't happen, of course. Peter Monk, GM for Australia of Concentrix, one of the two big global customer engagement companies offering contact centre services, says that employment in call centres has grown modestly in recent years, but that the really significant change has been the shift from voice to digital.

When customer interactions are simple, they can often be automated. A chatbot is perfectly adequate to handle a password change, or the provision of some basic information. And the digitally native younger generations prefer to interact digitally, ideally with short videos. But when there is significant value to be generated and exchanged, a call will often still be better.

Concentrix's rival for the top spot in contact centre management is the French multinational Teleperformance, whose CEO said in a recent interview: "When chatbots started to arrive about five years ago, I was depressed. But since then our company has never grown so fast. The chatbot does the rational part of the job and the customer expert manages the emotional part."

Peter Monk is sceptical about vendors' claims to offer services with artificial intelligence. Most vendors offer clever natural language processing technology, but it is not yet real AI. Some exceptions are coming through, but the software is still mostly pre-programmed, using lookup tables and knowledge banks.

One of the more interesting early applications of NLP is systems which can detect the emotional state of a customer on the other end of a phone line, or tapping away on their keyboard. These systems are deployed alongside call centre staff, alerting them if a customer seems to be running out of patience, and suggesting variations on the script. The more sophisticated ones can discern the context of a word or sentence, referring to words and phrases from earlier in an exchange, or even from a previous conversation.
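Off-the-shelf sentiment models give a feel for the raw signal such systems start from, though commercial products layer context tracking and coaching on top. A minimal sketch using Hugging Face's pipeline API (a generic stand-in, not any vendor's actual stack):

```python
# Minimal sentiment probe via Hugging Face's pipeline API. Real call-centre
# systems add per-customer context and conversation history on top of this.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")  # downloads a default model

turns = [
    "Hi, I just need to update my mailing address.",
    "I've already explained this to three different people.",
    "This is the last time I am calling before I cancel.",
]
for turn in turns:
    result = classifier(turn)[0]  # e.g. {'label': 'NEGATIVE', 'score': 0.99}
    if result["label"] == "NEGATIVE" and result["score"] > 0.9:
        print(f"ALERT: customer may be losing patience: {turn!r}")
```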

Josh Feast is co-founder and CEO of Cogito, whose AI coach helps customer service representatives be more emotionally intelligent on the phone. He thinks that most of us under-estimate the challenging job of handling numerous calls each day, working with customers' varied circumstances and communication styles, and dealing with countless policies and procedures. This can cause cognitive overload. AI can help representatives recognize behavioural signals by providing contextual guidance. The focus has been on automation, so we have yet to realize AI's power to coach people, helping them reach their full potential.

Thanks to the increasing sophistication of NLP systems, IVR, or Interactive Voice Response, is making a comeback. When these systems were first introduced a few years ago, they were clunky and awkward, and the voice recognition software was not quite good enough, so they were quickly abandoned.

As with most industry sectors, the other big application of early AI systems is analytics. Phone conversations can be converted to text and analysed, so that companies can track how often each customer has been interacting with them, and what they are saying, with greater and greater richness and depth of understanding. This is a big area of investment for the large call centre operators.

The call centre industry is a big one, and what happens to jobs in it will be important. It employs many millions of people around the world, in countries both rich and poor. It began in the West when large telephone systems were developed, and gradually became a major global employer. In the UK, the Birmingham Press and Mail claims to have opened the first centre, but the call centre boom really got going in 1985, when Direct Line became the first company to sell insurance entirely by phone. Today the industry employs around 1.3 million people in the UK, and more than 6 million in the USA.

Entrepreneurs in the developing world soon realised that they could bring a massive cost advantage to the industry. India was the biggest player in this market for many years, but in 2011 the Philippines stole the crown. With no connection to the south-east Asian mainland, the Philippines had failed to attract the foreign investment in manufacturing that was improving living standards in Thailand and Vietnam, but its people speak excellent English, and 1.2 million of them now work in call centres.

Artificial intelligence and related technologies are driving two other significant trends within the call centre industry. One is real-time translation, which should accelerate global trade so long as it isn't derailed by the pandemic and populist nationalism. Google's focus on B2C (business-to-consumer) applications leaves some space for other companies to play in the B2B space, and one of the leaders here is Unbabel, a Portuguese company.

The other is the application of gig economy business models to the call centre industry. Companies like Concentrix, through their Solv solution, enable individuals anywhere in the world to get themselves accredited to work on particular types of business, and then log on and log off to work whenever they like. As the support tools improve, there is less and less need for contact centre workers to know much about the products and services of the companies they are representing. This information can be accessed instantaneously from databases in the cloud. They are evaluated more on their client handling skills, their empathy, and their ability to work with continuously evolving technologies.

More fundamentally, younger companies are designing their business processes so that customers never, or almost never, need to contact a human to obtain their goods and services. The websites and logistics operations of digital disruptors aim to be so intuitive and user-friendly that customers never need to search for the Contact link. When this works, it generates a tremendous cost advantage. When it fails, it generates huge frustration. The worst problem is when legacy companies, which lack the slick ergonomics of the disruptors' websites, try to pull off the same trick and hide their contact links. We consumers are not so easily fooled, and this behaviour will be the downfall of many once-great companies.

The Telstra CEO's remark about call centres going dark within five years was a classic case of Amara's Law, which observes that we over-estimate the impact of any given technology in the short term and under-estimate it in the long term. Pre-virus, employment in call centres was growing in single-digit percentages a year. Post-pandemic, assuming the economy recovers, call volumes will probably remain stable, but their share of customer contacts will decline, and the call centre will become more and more a contact centre, handling many more exchanges digitally than by voice.

In the long run, it is a fairly good bet that humans will become as scarce in contact centres as they are becoming in warehouses. The question is how long this will take. As Peter Monk, the GM of Concentrix Australia, says: "Of course, the endgame - in the not too distant future - is that many aspects of even my job can be done pretty much by a machine."

More:

The Impact Of AI On Call Centres - Forbes

‘Smarter AI can help fight bias in healthcare’ – Healthcare IT News

Leading researchers discussed the requirements AI algorithms must meet to fight bias in healthcare during the 'Artificial Intelligence and Implications for Health Equity: Will AI Improve Equity or Increase Disparities?' session, which was held on 1 December.

The speakers were: Ziad Obermeyer, associate professor of health policy and management at the Berkeley School of Public Health, CA; Luke Oakden-Rayner, director of medical imaging research at the Royal Adelaide Hospital, Australia; Constance Lehman, professor of radiology at Harvard Medical School, director of breast imaging, and co-director of the Avon Comprehensive Breast Evaluation Center at Massachusetts General Hospital; and Regina Barzilay, professor in the department of electrical engineering and computer science and member of the Computer Science and AI Lab at the Massachusetts Institute of Technology.

The discussion was moderated by Judy Wawira Gichoya, assistant professor in the Department of Radiology at Emory University School of Medicine, Atlanta.

WHY IT MATTERS

Artificial intelligence (AI) may unintentionally intensify inequities that already exist in modern healthcare, and understanding those biases may help defeat them.

Social determinants partly cause poor healthcare outcomes, and it is crucial to raise awareness about inequity in access to healthcare, as Prof Sam Shah, founder and director of the Faculty of Future Health in London, explained in a keynote during the HIMSS & Health 2.0 European Digital event.

Taking in the patient experience, conducting exploratory error analysis and building smarter, more robust algorithms could help reduce bias in many clinical settings, such as pain management and access to screening mammography.

ON THE RECORD

Judy Wawira Gichoya, Emory University School of Medicine, said: "The data we use is collected in a social system that already has cultural and institutional biases. (...) If we just use this data without understanding the inequities, then algorithms will end up habituating, if not magnifying, our existing disparities."

Ziad Obermeyer, Berkeley School of Public Health, talked about the pain gap phenomenon, where the pain of white patients is treated or investigated until a cause is found, while in other races it may be ignored or overlooked.

"Society's most disadvantaged, non-white, low income, lower educated patients () are reporting severe pain much more often. An obvious explanation is that maybe they have a higher prevalence of painful conditions, but that doesn't seem to be the whole story," he said.

Obermeyer explained that listening to the patient, not just the radiologist, could help develop solutions to predict the experience of pain. He referenced an NIH-sponsored dataset that helped him experiment with a new type of algorithm, with which he found more than double the number of black patients with severe pain in their knees who would be eligible for surgery than before.

Luke Oakden-Rayner, Royal Adelaide Hospital, suggested conducting exploratory error analysis to look at every error case and find common threads, instead of just looking at the AI model and seeing that it is biased.

"Look at the cases it got right and those it got wrong. All the cases AI got right will have something in common and so will the ones it got wrong, then you can find out what the system is biased toward," he said.

Constance Lehman, Harvard Medical School, said: "About two million women will be diagnosed with breast cancer and over 600,000 will die in the US this year. But there's a marked discrepancy in the impact of breast cancer on women of colour vs. caucasian women."

In the EU, one in eight women will develop breast cancer before the age of 85, and an average of 20% of breast cancer cases occur in women younger than 50 years old, according to Europa Donna, a Europe-wide coalition of affiliated groups of women that facilitates the exchange of information concerning breast cancer.

Lehman presented an algorithm, which she developed with Regina Barzilay, to help identify women's risk for breast cancer based on their mammogram alone. The solution uses deep learning and an imaging coder that takes the four views of a standard digital mammogram, without requiring access to family history, prior biopsies or reproductive history.

This imaging-only model performs better than other models and supports equity across races, she said.
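Architecturally, an imaging-only, four-view risk model of this general kind can be pictured as one shared encoder applied to each view, with the pooled features feeding a small risk head. The PyTorch sketch below is a schematic stand-in under that assumption, not the authors' actual model:

```python
# Schematic four-view mammogram risk model (not Lehman and Barzilay's model):
# one shared CNN encodes each view; concatenated features feed a risk head.
import torch
import torch.nn as nn

class FourViewRiskModel(nn.Module):
    def __init__(self, feat_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(          # shared across all four views
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.risk_head = nn.Linear(feat_dim * 4, 1)

    def forward(self, views):                  # views: (batch, 4, 1, H, W)
        feats = [self.encoder(views[:, i]) for i in range(4)]
        return torch.sigmoid(self.risk_head(torch.cat(feats, dim=1)))

model = FourViewRiskModel()
batch = torch.randn(2, 4, 1, 128, 128)         # 2 patients, 4 views each
print(model(batch).shape)                      # (2, 1) risk scores
```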

Regina Barzilay, of the MIT Institute of Medical Engineering & Science, explained how to build robust AI to support equity in health. An image-based model that is trained on a diverse population can very accurately predict risk across different populations in a very consistent way, she said.

The AI community is working hard on tools that can work robustly against bias, by making sure that models are trained to be robust in the presence of bias, which may come from nuisance variation between the devices used to capture the images.

Humans, who are ultimately responsible for making clinical decisions, should understand what the machine is doing and think through all the possible biases the machine can introduce. Models that can make their reasoning understandable to humans could help, she concluded.

Read more:

'Smarter AI can help fight bias in healthcare' - Healthcare IT News

UK govt’s 17.3m AI-boffinry cash injection is just ‘a token amount’ – The Register

AI is at the forefront of the UK government's digital strategy, and is believed to be crucial to the nation's future post-Brexit.

A recent study by Accenture estimated artificially intelligent systems could add up to a whopping, and borderline unbelievable, £654bn (US$802bn) to the British economy by 2035.

Well, you've gotta spend money to make money. So, Blighty's government has announced it is looking into fostering a thriving AI industry in the UK and has pledged £17.3m ($21.2m) to bankroll machine learning and robotics research at universities. But is that enough?

"The funding is better than nothing and shows the government is at least thinking about how important these technologies are," said Nick Taylor, professor of computer science and deputy director of the Edinburgh Centre for Robotics at Heriot-Watt University in Scotland. However, "it's not a great amount," he added.

This funding indicates that the government recognises how important AI and robotics are to our future," he said. "AI and robotics are advancing so rapidly at the current time that we could easily exhaust any amount of research funding that was directed towards them."

That 17 million quid will go to the Engineering and Physical Sciences Research Council (EPSRC) and filter down to several UK universities for a range of projects including developing robots for surgery and nuclear environments.

Of those funds, £6.5m ($8.0m) will be pumped into the UK Robotics and Autonomous Systems (UK-RAS) network. It's a small amount, considering robots are particularly expensive: robotics research requires not only expertise, but hardware costs need to be factored in as well.

Zoubin Ghahramani, professor of information engineering at the University of Cambridge, told The Register that AI is thriving in the UK, with academic institutions, startups and big companies from DeepMind and Amazon to Apple and Microsoft being major investors. Ghahramani's own upstart, Geometric Intelligence, which he cofounded along with Gary Marcus, Doug Bemis and Ken Stanley, was acquired by Uber for an undisclosed amount.

While he applauds the UK government's investment, he told us: It's a relatively small step in the right direction, compared to the hundreds of millions invested by Canadian and US governments.

DARPA, the US defense research arm, readily dishes out tens of millions of dollars for individual robot projects. In 2009, it awarded $32m (£26.1m) to develop the LS3 robot, and a further $10m (£8.1m) to test it.

The LS3 robot looks like a giant robo bull, complete with four sturdy legs and a barrel-like body. It's designed to help US soldiers carry 400 lbs (181.4 kg) of gear on their missions, but was shelved in 2015 for being too noisy and not stealthy enough for real-world use.

A cash injection of just under £20m into the UK is dwarfed by the $1.1bn (£900m) spent by the US government on AI in 2015.

"The UK funding is a token amount that couldn't hope to put us on the same level as Google or Microsoft," Kate Devlin, senior lecturer in the department of computing, and sex robots expert, at Goldsmiths University, told The Register.

The only bright side I can see is that the government is recognising the importance of AI research in academia rightly so as that's often how big corporations acquire their technology and their expertise, she added.

Leslie Smith, professor of computer science at the University of Stirling, agrees. "It's not much in comparison with the spend of large companies and the US [Defense department] on these areas," he said.

"To me the question is more like: how can we get the best leverage for this sort of sum of money? How can we get companies to work with UK academics to make the most of this investment? Not unrelated to this is the issue of ensuring that the money generated from the application of these technologies sticks partly to the Universities, and partly to the UK itself rather than being exploited [elsewhere]."

View post:

UK govt's £17.3m AI-boffinry cash injection is just 'a token amount' - The Register

Is FaceApp Just a Gender Swapping Gimmick or an AI and Privacy Nightmare in the Making? – Wccftech

When FaceApp initially launched back in 2017, it took the world by storm because of its capabilities. Granted, we have seen several apps in the past that could make you look old or young, but the accuracy and precision were not there. FaceApp, however, used artificial intelligence to do that, and the results were mind-boggling, to say the least. Even when it launched, a lot of security researchers raised concerns over the consequences of using this app. After all, you are uploading your pictures to an app for it to run through its AI algorithm. But people continued using it.

After almost 3 years, the app has exploded all over Twitter, Instagram, and Facebook again. However, this time, people are using the gender swap feature that uses AI to change a person's gender and present them with a picture that is very convincing, and at the same time, quite scary.


Now, there have been several apps like this in the past. Even Snapchat introduced a feature that would swap your gender. But the implementation here is different, since it uses artificial intelligence, the very thing many people fear in the first place. But it is not just the artificial intelligence that we should be afraid of; it is the privacy policy of the app. If you head over to the recently updated privacy policy of the app, this is the highlighted text.

Now, the clever thing here is that when you do visit the page, only the aforementioned lines are highlighted, which is more than enough to convince any user that this app is indeed safe. However, if you take a minute and read beyond those two lines, you start becoming wary of just what is being stored and used. True, some would say that they are not worried about the app or what it does with the photos, but keep in mind that this app has over 100 million downloads on the Google Play Store alone. One can only imagine how many underage individuals are using this app to swap their gender, potentially putting their pictures at risk.

Now, if you are one of those people who believe that simply deleting the app will get rid of the photos taken or used by FaceApp, that is not the case; getting your pictures removed is not as easy as it may sound. To have your pictures removed, you actually have to put in a request. To do that, you go to Settings > Support > Report a bug with the word "privacy" in the subject line, and then write a formal request, a process convoluted enough that most people will not go through with it.

To confirm just how convincing or borderline creepy this app can become, I asked a few of my friends to provide their pictures. Now, it was easy for me to tell the difference because I know them, but to an unsuspecting eye, it might not be the same case.

And privacy is just one concern that people have raised. On the other hand, we have a shockingly powerful AI in place, which could very well be learning patterns for much stronger facial recognition.


In all honesty, the results are shocking. What is even more shocking is the amount of information we are knowingly handing over to an app just for the sake of shock value. Again, whether this app will have severe consequences is still yet to be seen. But as a word of warning, keep in mind that the FBI did issue a warning pertaining to the safety of the app back in December 2019.

Calling FaceApp an imminent threat to privacy or an AI nightmare would be stretching it a bit too far. However, at the same time, we have to keep in mind that in a world where our privacy is among our most important assets, some questionable practices and activities can easily take place if things go rogue. We can only say that the more we protect our privacy, the better off we are in the long run. Currently, you can download FaceApp on both iOS and Android for your amusement.

Continued here:

Is FaceApp Just a Gender Swapping Gimmick or an AI and Privacy Nightmare in the Making? - Wccftech

Could a new academy solve the AI talent problem? – FCW.com


Eric Schmidt speaks at a March 2020 meeting of the Defense Innovation Board in Austin, Texas. (DOD photo by EJ Hersom).

Defense technology experts think adding a military academy could be the solution to the U.S. government's tech talent gap.

"The canonical view is that the government cannot hire these people because they will get paid more in private industry," said Eric Schmidt, former Google chief and current chair of the Defense Department's Innovation Advisory Board, during a July 29 Brookings Institution virtual event.

"My experience is that people are patriotic and that you have a large number of people -- and this I think is missed in the dialogue -- a very large number of people who want to serve the country that they love. And the reason that they're not doing it is there's no program that makes sense to them."

Schmidt's comments come as the National Security Commission on Artificial Intelligence, which he chairs, issued its second quarterly report with recommendations to Congress on how the U.S. government can invest in and implement AI technology.

One key recommendation: a national digital service academy, to act as the civilian equivalent of a military service academy and train technical talent. That institution would be paired with an effort to establish a national reserve digital corps whose members serve on a rotational basis.

Robert Work, former deputy secretary of defense who is now NSCAI's vice chair, said the academy would bring in people who want to serve in government and would graduate students to serve as full-time federal employees at the GS-7 to GS-11 pay grades. Members of the digital corps would serve five years at 38 days a year, helping government agencies figure out how best to implement AI.

For the military, the commission wants to focus on creating a clear way to test existing service members' skills and better gauge the abilities of incoming recruits and personnel.

"We think we have a lot of talent inside the military that we just aren't aware of," Work said.

To remedy that, Work said the commission recommends grading, via a programming proficiency test, to identify government and military workers who have software development experience. The recommendations also include adding a computational thinking component to the Armed Services Vocational Aptitude Battery to better identify incoming talent.

"I suspect that if we can convince the Congress to make this real and the president signs off hopefully then not only will we be successful but we'll discover that we need 10 times more. The people are there and the talent is available," Schmidt said.

About the Author

Lauren C. Williams is a staff writer at FCW covering defense and cybersecurity.

Prior to joining FCW, Williams was the tech reporter for ThinkProgress, where she covered everything from internet culture to national security issues. In past positions, Williams covered health care, politics and crime for various publications, including The Seattle Times.

Williams graduated with a master's in journalism from the University of Maryland, College Park and a bachelor's in dietetics from the University of Delaware. She can be contacted at [emailprotected], or follow her on Twitter @lalaurenista.

Click here for previous articles by Williams.

View original post here:

Could a new academy solve the AI talent problem? - FCW.com

AI Is Already One of the Largest Industries on Earth and It’s Going to End Us All – Geek

Major tech companies are investing in AI and machine learning at an alarming rate. According to a new report, companies spent between $26 and $39 billion on AI research (with giants like Google, Facebook, and Baidu contributing more than two-thirds of that) in 2016 alone. While that's not nearly on the scale of, say, the global oil trade (which cracks a trillion most years), it's still enough to make AI one of the largest sub-industries on Earth. That puts it well above many very large numbers, like Hollywood's box office takings from last year ($11 billion) or the GDP of Iceland ($16 billion).

On its face, this isn't too surprising. Silicon Valley's wealth is among the greatest in the world, easily dwarfing whole regions (and it's kinda messed up how that isn't even an exaggeration); of course they'd invest a sliver of that money into hyper-advanced autonomous software. What this shows, though, is that the race for AI has officially kicked off. Contrast these figures, for example, with those from 2013, and we find that investment in AI has more than tripled. It's also shifted almost entirely to research, development, and, most importantly, deployment.

The sectors adopting AI fastest are, of course, the automobile, tech, and telecom industries. The McKinsey Global Institute concludes that these are the industries with the most to gain. Each of these areas (as well as finance and health care) benefits tremendously from AI adoption, with the earliest adopters of machine learning tech in those fields yielding profits 10% or more above the industry average year-on-year.

Now, I know all that sounds super-boring, but, in short: smart businesses are making billions thanks to AI. And that's only going to accelerate. As the businesses employing AI out-compete their rivals over the next few years, we're going to start seeing the beginning of the end of us. Higher profits are awesome and all, but a good chunk of them comes from human workers becoming obsolete. Obviously, we should keep using AI to make our lives awesome, but even the patron saint of wacky Silicon Valley entrepreneurs, Elon Musk, is certain that the day will soon come when we need to rethink how we conceive of our entire economy if we want to keep the world intact.

Amazon might be the best example of a modern business profiting by cutting out humans. The tech company bought Kiva, a robotics company that specializes in automated warehouse fulfillment. The investment and subsequent deployment of Kiva's tech cut the time from the moment the customer clicks to the moment the package ships from about an hour down to just 15 minutes. And that's with a boost to inventory capacity and a massive drop in operating costs. With that kind of advantage, it's not hard to see why rumblings of an Amazon-dominated retail future are so prevalent.

It's not quite all doom and gloom, though. Netflix has dramatically improved its algorithms for helping users find movies they might like. The algorithm was already good enough that I recall wanting to punch the sun anytime my roommates would like random shows or movies on my account. But that and many other minor issues have been smoothed out, and Netflix projects that the improvements help it save $1 billion a year in subscription cancellations.

I guess, though, if you think about it, Netflix is really just the replacement for brick-and-mortar movie rental shops and the clerks who would give you recommendations there. So I guess there is no silver lining. We're all going to be funemployed in a decade or two. Better hope we figure out our collective shit.

Visit link:

AI Is Already One of the Largest Industries on Earth and It's Going to End Us All - Geek

Researchers open-source state-of-the-art object tracking AI – VentureBeat

A team of Microsoft and Huazhong University researchers this week open-sourced an AI object detector, Fair Multi-Object Tracking (FairMOT), that they claim outperforms state-of-the-art models on public data sets at 30 frames per second. If productized, it could benefit industries ranging from elder care to security, and perhaps be used to track the spread of illnesses like COVID-19.

As the team explains, most existing methods employ multiple models to track objects: (1) a detection model that localizes objects of interest and (2) an association model that extracts features used to reidentify briefly obscured objects. By contrast, FairMOT adopts an anchor-free approach to estimate object centers on a high-resolution feature map, which allows the reidentification features to better align with the centers. A parallel branch estimates the features used to predict the objects identities, while a backbone module fuses together the features to deal with objects of different scales.
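
For readers who want a feel for that architecture, here is a minimal, hypothetical PyTorch sketch of the two parallel heads described above. This is not the released FairMOT code; the module names and layer sizes are invented for illustration, and a real implementation adds box-size and offset heads plus the training losses.

import torch
import torch.nn as nn

class JointTrackingHeads(nn.Module):
    """Two parallel heads over one shared high-resolution feature map."""
    def __init__(self, in_channels: int = 64, embed_dim: int = 128):
        super().__init__()
        # Anchor-free detection: a per-pixel heatmap of object centers.
        self.center_head = nn.Sequential(
            nn.Conv2d(in_channels, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 1, kernel_size=1),  # one class: "person"
        )
        # Re-identification: an embedding vector at every map location,
        # read off at detected centers to match identities across frames.
        self.reid_head = nn.Sequential(
            nn.Conv2d(in_channels, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, embed_dim, kernel_size=1),
        )

    def forward(self, feature_map: torch.Tensor):
        heatmap = torch.sigmoid(self.center_head(feature_map))
        embeddings = self.reid_head(feature_map)
        return heatmap, embeddings

# Toy usage with a fake backbone feature map (batch, channels, height, width).
features = torch.randn(1, 64, 152, 272)
heatmap, embeddings = JointTrackingHeads()(features)
print(heatmap.shape, embeddings.shape)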

The researchers tested FairMOT on a training data set compiled from six public corpora for human detection and search: ETH, CityPerson, CalTech, MOT17, CUHK-SYSU, and PRW. (Training took 30 hours on two Nvidia RTX 2080 graphics cards.) After removing duplicate clips, they tested the trained model against benchmarks that included 2DMOT15, MOT16, and MOT17. All came from the MOT Challenge, a framework for validating people-tracking algorithms that ships with data sets, an evaluation tool providing several metrics, and tests for tasks like surveillance and sports analysis.

Compared with the only two published works that jointly perform object detection and identity feature embedding, TrackRCNN and JDE, the team reports that FairMOT outperformed both on the MOT16 data set with an inference speed near video rate.

"There has been remarkable progress on object detection and re-identification in recent years, which are the core components for multi-object tracking. However, little attention has been focused on accomplishing the two tasks in a single network to improve the inference speed. The initial attempts along this path ended up with degraded results mainly because the re-identification branch is not appropriately learned," concluded the researchers in a paper describing FairMOT. "We find that the use of anchors in object detection and identity embedding is the main reason for the degraded results. In particular, multiple nearby anchors, which correspond to different parts of an object, may be responsible for estimating the same identity, which causes ambiguities for network training."

In addition to FairMOT's source code, the research team made available several pretrained models that can be run on live or recorded video.

See more here:

Researchers open-source state-of-the-art object tracking AI - VentureBeat

Facebook Initiative Aims To Demystify AI By Crowdsourcing Ideas – Women Love Tech

Facebook recently announced the award recipients of its Ethics in AI Research Initiative for the Asia Pacific region. Among them are proposals from two Australian universities, which will each receive funding to further their research in AI.

Their success follows a request for proposals issued by Facebook's research division last year, which was open to academic institutions, think tanks and research groups across the Asia Pacific region.

This is part of a wider initiative by Facebook in partnership with the Centre for Civil Society and Governance of The University of Hong Kong and the Privacy Commissioner for Personal Data, Hong Kong.

Through this regional outreach, Facebook aims to crowdsource both the best local ideas and accountable practices.

As Raina Yeung, Facebook's Head of Privacy and Data Policy, Engagement for the Asia Pacific region, said: "The latest advancements in AI bring transformational changes to society, and at the same time bring an array of complex ethical questions that must be closely examined."

Monash academic Professor Robert Sparrow's approved proposal, "The uses and abuses of black box AI in emergency medicine", highlights issues of concern surrounding AI. The problem with black box AI, for instance, is that its internal rules and parameters are opaque to its users. In the field of medicine, particularly emergency medicine, this lack of clarity is dangerous and must be properly addressed. When decisions are made concerning human lives, it is paramount for all involved that there be transparency about how those choices are made. For those in intensive care, the prospect of receiving lesser attention because of economic or genetic determinations made by a circuit board is understandably concerning, as is the risk of technical malfunctions affecting one's diagnosis.

However one perceives the intrusion of AI into intellectual disciplines requiring tact and discretion, such as law or medicine, the process is ongoing and accelerating. While such technologies may not currently match human performance, the constant rate of advancement in AI makes it all but inevitable that they eventually will. With this in mind, the process of automation can be seen as something of a passing of the torch from humans to our AI counterparts, in both physical and intellectual fields.

The approved proposal of Dr. Sarah Bankins of Macquarie University, "AI decisions with dignity: promoting interactional justice perceptions", further highlights this shift. In this transitional stage, particular care is necessary to ensure AI tools are applied in ways that are equitable and socially conscientious, as the knock-on effects of poor implementation will compound over time.

AI systems that can think and act for themselves, often referred to as General Intelligences and the holy grail for AI developers, are still a distant prospect. In the meantime, AI researchers have cleared smaller hurdles. Advances in machine learning, the ability of computer programs to improve from experience without explicit human instruction, have paved the way for bleeding-edge technologies such as natural language processing and driverless vehicles. These new tools boast impressive gains in productivity and, as they improve, have the potential to save human lives.

However, despite these advancing capacities, such tools cannot yet think or act independently, and it remains the role of conscientious human participants to dictate how and where they're applied. By acting as custodians of our future selves and taking early steps to safeguard the infrastructure of AI against systematic inequity, we can work to ensure a brighter future for all, as is Facebook's stated aim in foregrounding diverse, regional voices in the conversations around ethical AI practice. The full list of funded proposals and their recipients follows:

AI decisions with dignity: Promoting interactional justice perceptions – Dr. Sarah Bankins, Prof. Deborah Richards, A/Prof. Paul Formosa (Macquarie University), Dr. Yannick Griep (Radboud University)

The challenges of implementing AI ethics frameworks in the Asia Pacific – Manju Lasantha Fernando, Ramathi Bandaranayake, Viren Dias, Helani Galpaya, Rohan Samarajiva (LIRNEasia)

Culturally informed pro-social AI regulation and persuasion framework – Dr. Junaid Qadir (Information Technology University of Lahore, Punjab, Pakistan), Dr. Amana Raquib (Institute of Business Administration Karachi, Pakistan)

Ethical challenges on application of AI for the aged care – Dr. Bo Yan, Dr. Priscilla Song, Dr. Chia-Chin Lin (University of Hong Kong)

Ethical technology assessment on AI and internet of things – Dr. Melvin Jabar, Dr. Ma. Elena Chiong Javier (De La Salle University), Mr. Jun Motomura (Meio University), Dr. Penchan Sherer (Mahidol University)

Operationalizing information fiduciaries for AI governance – Yap Jia Qing, Ong Yuan Zheng Lenon, Elizaveta Shesterneva, Riyanka Roy Choudhury, Rocco Hu (eTPL.Asia)

Respect for rights in the era of automation, using AI and robotics – Emilie Pradichit, Ananya Ramani, Evie van Uden (Manushya Foundation), Henning Glasser, Dr. Duc Quang Ly, Venus Phuangkom (German-Southeast Asian Center of Excellence for Public Policy and Good Governance)

The uses and abuses of black box AI in emergency medicine – Prof. Robert Sparrow, Joshua Hatherley, Mark Howard (Monash University)

Women Love Tech would like to thank Nick Ouzas for his story.

Read the original post:

Facebook Initiative Aims To Demystify AI By Crowdsourcing Ideas - Women Love Tech

Has There Been A Second AI Big Bang? – Forbes

Aleksa Gordic, an AI researcher with DeepMind

The Big Bang in artificial intelligence (AI) refers to the breakthrough in 2012, when a team of researchers led by Geoff Hinton managed to train an artificial neural network (known as a deep learning system) to win an image classification competition by a surprising margin. Prior to that, AI had performed some remarkable feats, but it had never made much money. Since 2012, AI has helped the big technology companies to generate enormous wealth, not least from advertising.

Has there been a new Big Bang in AI since the arrival of Transformers in 2017? In episodes 5 and 6 of the London Futurist podcast, Aleksa Gordic explored this question and explained how today's cutting-edge AI systems work. Aleksa is an AI researcher at DeepMind, and previously worked in Microsoft's HoloLens team. Remarkably, his AI expertise is self-taught, so there is hope for all of us yet!

Transformers are deep learning models which process inputs expressed in natural language and produce outputs like translations or summaries of texts. Their arrival was announced in 2017 with the publication by Google researchers of a paper titled "Attention Is All You Need". The title referred to the fact that Transformers can pay attention simultaneously to a large corpus of text, whereas their predecessors, Recurrent Neural Networks, could only pay attention to the symbols on either side of the segment of text being processed.

Transformers work by splitting text into small units, called tokens, and mapping them into high-dimensional spaces, often with thousands of dimensions. We humans cannot envisage this. The space we inhabit is defined by three numbers, or four if you include time, and we simply cannot imagine a space with thousands of dimensions. Researchers suggest that we shouldn't even try.

For Transformer models, words and tokens have dimensions, which we might think of as properties or relationships. For instance, man is to king as woman is to queen. These concepts can be expressed as vectors, like arrows in three-dimensional space. The model will attribute a probability to a particular token being associated with a particular vector. For instance, a princess is more likely to be associated with the vector which denotes wearing a slipper than with the vector that denotes walking a dog.
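
To see that the king/queen example is more than a metaphor, here is a toy demonstration in Python. The four-dimensional vectors are invented for the example (real models learn embeddings with hundreds or thousands of dimensions), but the arithmetic is the standard one: king - man + woman should land closest to queen.

import numpy as np

# Invented toy embeddings; the dimensions might loosely encode
# "is a person", "is royal", "is female", "is powerful".
embeddings = {
    "man":   np.array([0.9, 0.1, 0.2, 0.3]),
    "woman": np.array([0.9, 0.1, 0.8, 0.3]),
    "king":  np.array([0.9, 0.9, 0.2, 0.7]),
    "queen": np.array([0.9, 0.9, 0.8, 0.7]),
}

def cosine(a, b):
    # Cosine similarity: 1.0 means the two arrows point the same way.
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

target = embeddings["king"] - embeddings["man"] + embeddings["woman"]
for word, vec in embeddings.items():
    print(f"{word:>6}: {cosine(target, vec):.3f}")  # "queen" scores highest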

There are various ways in which machines can discover the relationships, or vectors, between tokens. In supervised learning, they are given enough labelled data to indicate all the relevant vectors. In self-supervised learning, they are not given labelled data, and they have to find the relationships on their own. This means the relationships they discover are not necessarily discoverable by humans. They are black boxes. Researchers are investigating how machines handle these dimensions, but it is not certain that the most powerful systems will ever be truly transparent.
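
That distinction is easy to make concrete. In the invented toy example below, the supervised pair comes with a human-written label, while the self-supervised pairs are manufactured from the raw text itself by asking the model to predict each token from the ones before it.

tokens = ["the", "princess", "wore", "a", "slipper"]

# Supervised: someone had to write the label "fairy_tale" by hand.
supervised_pairs = [("the princess wore a slipper", "fairy_tale")]

# Self-supervised: the targets are carved out of the data itself,
# so no human labelling is required.
self_supervised_pairs = [
    (tokens[:i], tokens[i]) for i in range(1, len(tokens))
]
print(self_supervised_pairs[1])  # (['the', 'princess'], 'wore')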

The size of a Transformer model is normally measured by the number of parameters it has. A parameter is analogous to a synapse in a human brain, which is the point where the tendrils (axons and dendrites) of our neurons meet. The first Transformer models had a hundred million or so parameters, and now the largest ones have trillions. This is still smaller than the number of synapses in the human brain, and human neurons are far more complex and powerful creatures than artificial ones.

A surprising discovery made a couple of years after the arrival of Transformers was that they are able to tokenise not just text, but images too. Google released the first vision Transformer in late 2020, and since then people around the world have marvelled at the output of Dall-E, MidJourney, and others.

The first of these image-generation models were Generative Adversarial Networks, or GANs. These were pairs of models, with one (the generator) creating imagery designed to fool the other into accepting it as original, and the second system (the discriminator) rejecting attempts which were not good enough. GANs have now been surpassed by Diffusion models, whose approach is to peel noise away from the desired signal. The first Diffusion model was actually described as long ago as 2015, but the paper was almost completely ignored. They were re-discovered in 2020.
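
The generator/discriminator pairing is simple enough to sketch in a few lines of PyTorch. This is a deliberately bare-bones illustration on flat vectors rather than real images, with invented layer sizes; the training loop and losses are omitted.

import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64

# The generator maps random noise to a fake sample.
generator = nn.Sequential(
    nn.Linear(latent_dim, 128), nn.ReLU(),
    nn.Linear(128, data_dim), nn.Tanh(),
)

# The discriminator maps a sample to the probability that it is real.
discriminator = nn.Sequential(
    nn.Linear(data_dim, 128), nn.ReLU(),
    nn.Linear(128, 1), nn.Sigmoid(),
)

noise = torch.randn(8, latent_dim)
fake = generator(noise)
verdict = discriminator(fake)
# During training the generator is rewarded for pushing these values
# toward 1 ("fooled you") and the discriminator for pushing them toward 0.
print(verdict.squeeze())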

Transformers are gluttons for compute power and for energy, and this has led to concerns that they might represent a dead end for AI research. It is already hard for academic institutions to fund research into the latest models, and it was feared that even the tech giants might soon find them unaffordable. The human brain points to a way forward. It is not only larger than the latest Transformer models (at around 80 billion neurons, each with around 10,000 synapses, it is 1,000 times larger). It is also a far more efficient consumer of energy - mainly because we only need to activate a small portion of our synapses to make a given calculation, whereas AI systems activate all of their artificial neurons all of the time. Neuromorphic chips, which mimic the brain more closely than classic chips, may help.

Aleksa is frequently surprised by what the latest models are able to do, but this is not itself surprising. "If I wasn't surprised, it would mean I could predict the future, which I can't." He derives pleasure from the fact that the research community is like a hive mind: you never know where the next idea will come from. The next big thing could come from a couple of students at a university, and a researcher called Ian Goodfellow famously created the first GAN by playing around at home after a brainstorming session over a couple of beers.

See the rest here:

Has There Been A Second AI Big Bang? - Forbes

The Pentagon Wants AI-Driven Drone Swarms for Search and Rescue Ops – Nextgov

The Defense Department's central artificial intelligence development effort wants to build an artificial intelligence-powered drone swarm capable of independently identifying and tracking targets, and maybe even saving lives.

The Pentagon's Joint Artificial Intelligence Center, or JAIC, issued a request for information to find out if AI developers and drone swarm builders can come together to support search and rescue missions.

Search and rescue operations are covered under one of the four core JAIC research areas: humanitarian aid and disaster relief. The program also works on AI solutions for predictive maintenance, cyberspace operations and robotic process automation.

The goal of the RFI is to discover whether industry can deliver a full-stack search and rescue drone swarm that can self-pilot, detect humans and other targets, and stream data and video back to a central location. The potential solicitation would also look for companies or teams that can provide algorithms, machine training processes and data to supplement those provided by the government.

The ideal result would be a contract with several vendors that together could provide the capability to "fly to a predetermined location/area, find people and manmade objects, through onboard edge processing, and cue analysts to look at detections sent via a datalink to a control station," according to the RFI. "Sensors shall be able to stream full motion video to an analyst station during the day or night; though, the system will not normally be streaming as the AI will be monitoring the imagery instead of a person."

The system has to have enough edge processing power to enable the AI to fly, detect and monitor without any human intervention, while also being able to stream live video to an operator and allow that human to take control of the drones, if needed.

The RFI contains a number of must-have requirements for the swarm.

The RFI also notes all training data will be government-owned and classified. All development work will be done using government-owned data and on secure government systems.

Responses to the RFI are due by 11 a.m. Jan. 20.

See the rest here:

The Pentagon Wants AI-Driven Drone Swarms for Search and Rescue Ops - Nextgov

HoloLens 2 will have a custom AI chip designed by Microsoft – The Verge

Today, Microsoft announced that the next generation of its mixed reality HoloLens headset will incorporate an AI chip. This custom silicon, a coprocessor designed but not manufactured by Microsoft, will be used to analyze visual data directly on the device, saving time by not uploading it to the cloud. The result, says Microsoft, will be quicker performance on the HoloLens 2, while keeping the device as mobile as possible.

The announcement follows a trend among Silicon Valley's biggest tech companies, which are now scrambling to meet the computational demands of contemporary AI. Today's mobile devices, where AI is going to be used more frequently, simply aren't built to handle these sorts of programs, and when they're asked to, the result is usually slower performance or a burned-out battery (or both).

But getting AI to run directly on devices like phones or AR headsets has a number of advantages. As Microsoft says, quicker performance is one of them, as devices don't have to upload data to remote servers. This also makes the devices more user-friendly, as they don't have to maintain a continuous internet connection. And this sort of processing is more secure, as users' data never leaves the device.

There are two main ways to facilitate this sort of on-device AI. The first is by building special lightweight neural networks that don't require as much processing power. (Both Facebook and Google are working on this.) The second is by creating custom AI processors, architectures, and software, which is what companies like ARM and Qualcomm are doing. It's rumored that Apple is also building its own AI processor for the iPhone, a so-called Apple Neural Engine, and now Microsoft is doing the same for the HoloLens.

This race to build AI processors for mobile devices is running alongside work to create specialized AI chips for servers. Intel, Nvidia, Google, and Microsoft are all working on their own projects in this department. This sort of AI cloud power will serve different needs than the new mobile processors (it'll primarily be sold directly to businesses), but from the viewpoint of designing silicon, the two goals are likely to be complementary.

Speaking to Bloomberg, Microsoft Research engineer Doug Burger said the company was taking the challenge of creating AI processors for servers very seriously, adding: "Our aspiration is to be the number one AI cloud." Building out the HoloLens' on-device AI capabilities could help with this goal, if only by focusing the company's expertise on chip architectures needed to handle neural networks.

For the second-generation HoloLens, the AI coprocessor will be built into its Holographic Processing Unit, or HPU, Microsoft's name for its central vision-processing chip. The HPU handles data from all the device's on-board sensors, including the head-tracking unit and infrared cameras. The AI coprocessor will be used to analyze this data using deep neural networks, one of the principal tools of contemporary AI. There's still no release date for the HoloLens 2, but it's reportedly arriving in 2019. When it lands, AI will be even more central to everyday computing, and that specialized silicon will likely be in high demand.

Here is the original post:

HoloLens 2 will have a custom AI chip designed by Microsoft - The Verge

Is There a Clear Path to General AI? – CMSWire

Photo: John Lockwood

People frequently mix up two pairs of terms when talking about artificial intelligence: Strong vs. Weak AI, and General vs. Narrow AI. The key to understanding the difference lies in which perspective we want to take: are we aiming for a holy grail that, once found, will mean solving one of mankind's biggest questions, or are we merely aiming to build a tool to make us more efficient at a task?

The Strong vs. Weak AI dichotomy is largely a philosophical one, made prominent in 1980 by American philosopher John Searle. Philosophers like Searle are looking to answer the question of whether we can theoretically and practically build machines that truly think and experience cognitive states, such as understanding, believing, wanting, hoping. As part of that endeavor, some of them examine the relationship between these states and any possibly corresponding physical states in the observable world of the human body: when we are in the state of believing something, how does that physically manifest itself in the brain or elsewhere?

Searle concedes that computers, the most prominent form of such machines in our current times, are powerful tools that can help us study certain aspects of human thought processes. However, he calls that "Weak AI", as it's not the real thing. He contrasts that with "Strong AI" as follows: "But according to strong AI, the computer is not merely a tool in the study of the mind; rather, the appropriately programmed computer really is a mind, in the sense that computers given the right programs can be literally said to understand and have other cognitive states."

While this philosophical perspective is fascinating in and of itself, it remains largely elusive to modern day practical efforts in the field of AI. Philosophers are thinkers, meant to raise the right questions at the right time to help us think through the implications of our doings. They are rarely builders. The builders among us, the engineers, seek to solve practical problems in the physical world. Note that this is not a question of whose aims are more noble, but merely a question of perspective.

Engineers seeking to build systems that are of practical use today are more interested in the distinction of General vs. Narrow AI. That distinction is one of the applicability of a system at hand. We call something Narrow AI if it is built to perform one function, or a set of functions in a particular domain, and that alone. In reality, that is the only form of AI we have at our disposal today. All of the currently available systems are built for one task alone.

The biggest revelation for any non-expert here is that an AI system's performance in one task does not generalize. If you've built a system that has learned to play chess, your system cannot play the ancient Chinese game of Go, not even with some additional modifications. And if you have a system that plays Go better than any human, no matter how hard that task seemed before such a program finally got built in 2017, that system will NOT generalize to any other task. Just because a system performs one task well does not mean it will soon (a term used often by people writing and talking about technology in general) perform seemingly related tasks well, too. Each new task that is different in nature (and there are many of those different natures) is a tedious and laborious job for the engineers and designers who build these systems.

So if the opposite of Narrow AI is General AI, you're essentially talking about a system that can perform any task you throw at it. The original idea behind General AI was to build a system that could learn any kind of task through self-training, without requiring examples pre-labeled by humans. (Note that this is still different from Searle's notion of Strong AI, in that you could theoretically build General AI without building true thinking; it could still just be a simulation of the real thing.)


Let's do a thought experiment (a common tool of any philosopher who wants to think through an idea or theory). What if we interconnected each and every narrow AI solution ever built on planet Earth? What if we essentially built an IoA, an Internet of AIs? There are companies out there that have built narrow AI solutions for practically every individual task imaginable, machine translation being just one example.

If we standardized the interfaces for all of these solutions, and those for the hundreds and thousands of other tasks we face in our lives, wouldn't we then essentially have built General AI? One AI system of systems that can solve whatever you throw at it? (A hypothetical sketch of such an interface follows below.)
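
As a purely illustrative sketch, the "standardized interface" of the thought experiment might look like a single Python contract that every vendor's narrow system hides behind, plus a registry that routes each task to the right one. All names here are invented.

from typing import Any, Protocol

class NarrowAIService(Protocol):
    task: str  # e.g. "translate", "transcribe", "detect-objects"

    def solve(self, request: Any) -> Any: ...

class InternetOfAIs:
    """Routes each incoming task to whichever narrow system handles it."""
    def __init__(self) -> None:
        self._services: dict[str, NarrowAIService] = {}

    def register(self, service: NarrowAIService) -> None:
        self._services[service.task] = service

    def solve(self, task: str, request: Any) -> Any:
        if task not in self._services:
            raise KeyError(f"no narrow AI registered for task: {task}")
        return self._services[task].solve(request)

Such a registry would feel omnipotent to its users while remaining, internally, exactly the kind of hodgepodge described next.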

Certainly not. A hodgepodge of backend systems that each accomplish one task in a proprietary way is certainly not the same as one system that is equipped with general learning capabilities and can thus self-teach any skill needed. It is also far from being the sort of Strong AI that philosophers have in mind, as humans are definitely not a conglomerate of differently built subcomponents for each and every task we can conduct.

But then again, does it matter? Wouldn't such a readily available system of systems essentially give us an omnipotent tool to help us with any imaginable task we face? It certainly would! And to someone oblivious to its inner structure, it would even appear to be that long-sought magical AI we've been shown in books and movies for decades.

The problem is this: such an Internet of AIs will never become reality. Our world's capitalist nature essentially prohibits the sharing of intellectual property at the scale needed for such an endeavor. For any of the systems mentioned above, there are probably dozens of firms out there that make money having re-solved the same problem over and over again. Google's translation engine does a fine job, but so too do Facebook's, Microsoft's, IBM's, DeepL's, SysTran's, Yandex's, Babylon's, Apertium's ... some of them use a common foundation that academic circles have produced over the years, but many don't. Humans are not wired to combine their forces for a common greater good of such majestic proportions; we are observing that fateful trait of ours in matters both short-term (coronavirus) and long-term (global warming).

So until our very DNA changes, which would in turn drive a change in our societal systems, we are stuck with Narrow AI, which will continue to bring meaningful innovation and make us more efficient over time in each of the domains it tackles. But the holy grails of Strong or General AI will remain a dream.

Tobias Goebel is a conversational technologist and evangelist with over 15 years of experience in the customer service and contact center technology space. He has held roles spanning engineering, consulting, pre-sales, product management, and product marketing, and is a frequent blogger and speaker on Customer Experience topics.

Continued here:

Is There a Clear Path to General AI? - CMSWire