From page building to apps to game design to AI, this programming training package covers it all – Boing Boing

Learning to code can be intimidating. It always takes time and attention to develop any new skill, but a field with as many approaches as programming can be particularly nerve-wracking, especially when you've never dipped into those murky waters before.

Even if you fall into that absolute beginner category, the package of training in The 2020 Comprehensive Programming Collection offers up some solid primers on the basic languages, tools, and methods for building websites, creating apps and basically becoming a one-person digital content machine.

Ask a hiring manager the No. 1 skill they want to see on the resume of an IT job applicant, and it's knowledge of JavaScript. So this collection of nine courses starts with The Complete Beginner's JavaScript Course, introducing you to the basics of this core programming language.

While JavaScript serves as the backbone of web page building, it's also instrumental in creating apps. It's training that will serve you well in the four bundle courses centered around creating your own working apps. iOS App Development for Beginners and Intro to Java for Android Development will get you familiar with Swift and Android Studio, the primary tools for tailoring apps specifically to iOS and Android users.

You'll expand that foundation with Discover React for Web Applications, as you use the React JavaScript library for building cool interactive user experiences; and Develop an AR App for the Retail Industry, where you actually build a working augmented reality app that will show you how virtual furniture will look in your very real-world settings.

Since game development is bound to capture nearly any new programmer's interest, Mobile Game Development for Beginners gets into coding on the Unity game engine as you start building your own mobile games for Android and iOS devices. Then Build a Micro-Strategy Game explores Unity further as you create a game about building and managing a colony on Mars.

Finally, this training concludes with instruction in a pair of avenue-expanding new fields: machine learning and artificial intelligence. In Machine Learning for Beginners with Tensorflow, you'll learn through a practical hands-on project, building a program where computers run through a series of tasks, spot patterns, then react according to your instructions.

Then, Convolutional Neural Networks (CNNs) for Image Classification will have you crafting CNNs of your own, the image recognition technology behind self-driving cars, facial recognition, fingerprint matching and more.

Normally a $1,650 package of training, all nine courses are available now for only $29.99, less than $4 per course.


Can Artificial Intelligence Help Students Work Better Together? According to Research, the Answer is Yes. – WPI News

Once the AI Partners are integrated in these classrooms, Whitehill and his team will be able to collect data on how students interact with them, and then iteratively make them more intelligent and effective. Initially, the AI Partner might be controlled by a human teacher in a backroom (Wizard of Oz-style interaction), but over time, it can learn from its human controller what to do when and thereby become more autonomous. Whitehill and his team anticipate that the particular form that the Partner takes is likely also important.

"Students might find an embodied robot creepy, but they might like interacting with an animated avatar on a touchscreen," he says.

This project represents a shift in how researchers envision AI in the classroom. While earlier work in this field sought to fully automate the teaching process, which Whitehill considers infeasible, this project is about human-AI teaming, and how humans and AI possess complementary abilities. AI Partners can help to magnify teachers' existing strengths by increasing the number of students in the classroom who receive the real-time feedback they need for optimal learning.

Whitehill also says that this research will be greatly informative even during the COVID-19 pandemic, when many school districts across the country are participating in remote learning. In fact, he says, testing agent-student interactions over platforms like Zoom has certain advantages over in-person interactions.

"With Zoom, each student and teacher in the classroom is cleanly separated from each other, and all their audiovisual inputs are channeled through a common software interface. This makes it much easier to analyze their speech, gestures, language, and interactions with each other," Whitehill says. "In contrast, in normal, in-person classrooms, the interactions are much messier, since students often sit in all kinds of different positions, might be touching their faces, and work in a noisy environment, which makes it more challenging for the Partner to observe and analyze."

By the end of this research, Whitehill says he hopes to find practical teaching and coaching strategies that AI Partners can execute and that work well with students. "It's not clear at all that the way humans teach would work well for a computer, robot, or avatar," he says.

While the computational challenges of the project (signal processing in extremely noisy and cluttered settings, real-time control in an uncertain environment, and human-computer interaction in a novel setting) are formidable, Whitehill says the potential rewards make it worth the effort.

"The exciting thing about this project is that we get to completely rethink the role of AI in the classroom," he says. "My hope is that, through next-generation educational AI, we will be able to stimulate deeper critical thinking and collaboration among students to help them learn better and achieve more."

Jessica Messier


Google’s DeepMind Creates Dataset With 300,000 YouTube Clips to … – The Daily Dot

Even the most advanced artificial intelligence algorithms in the world have trouble recognizing the actions of Homer Simpson.

DeepMind, the Google-owned artificial intelligence lab best known for defeating the world's greatest Go players, created a new dataset of YouTube clips to help AI find and learn patterns so it can better recognize human movement. The massive sample set consists of 300,000 video clips and 400 different actions.

"AI systems are now very good at recognizing objects in images, but still have trouble making sense of videos," a DeepMind spokesperson told IEEE Spectrum. "One of the main reasons for this is that the research community has so far lacked a large, high-quality video dataset."

According to IEEE Spectrum, early testing of the Kinetics Human Action Video Dataset showed mixed results. The deep learning algorithm was up to 80 percent accurate in classifying actions like playing tennis, crawling baby, cutting watermelon, and bowling. But its accuracy dropped to 20 percent or less when attempting to identify some of the activities and habits associated with Homer: drinking beer, eating doughnuts, and yawning.

"Video understanding represents a significant challenge for the research community, and we are in the very early stages with this," a DeepMind spokesperson said in a statement. "Any real-world applications are still a really long way off, but you can see potential in areas such as medicine, for example, aiding the diagnosis of heart problems in echocardiograms."

DeepMind got some help from Amazon's Mechanical Turk, a crowdsourcing service that companies can use to enlist other humans in completing a task. In this case, the task was labeling actions in thousands of 10-second YouTube clips.

After discovering the effectiveness of its dataset, the U.K.-based company ran tests to see if it had any gender imbalance. Past tests showed that the contents of certain datasets resulted in AI that was unsuccessful at recognizing certain ethnic groups. Preliminary results showed this particular set of video clips did not present those problems. In fact, DeepMind found that no single gender dominated within 340 of 400 action classes. The actions that did not pass the test included shaving a beard, cheerleading, and playing basketball.
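The kind of per-class audit described above can be approximated with a simple dominance check: for each action class, count the clips per (human-labeled) apparent gender and flag classes where one label accounts for most of the clips. The data layout and the 90 percent cutoff below are illustrative assumptions, not details taken from DeepMind's paper.

```python
from collections import Counter, defaultdict

def flag_gender_dominated(clips, threshold=0.9):
    """clips: iterable of (action_class, gender_label) pairs.
    Returns the classes where a single gender label accounts for
    more than `threshold` of that class's labeled clips."""
    by_class = defaultdict(Counter)
    for action, gender in clips:
        by_class[action][gender] += 1
    flagged = {}
    for action, counts in by_class.items():
        total = sum(counts.values())
        label, top = counts.most_common(1)[0]
        if top / total > threshold:
            flagged[action] = (label, top / total)
    return flagged

# Toy sample: "shaving beard" is heavily skewed, "bowling" is balanced.
sample = ([("shaving beard", "m")] * 19 + [("shaving beard", "f")]
          + [("bowling", "m")] * 10 + [("bowling", "f")] * 10)
print(flag_gender_dominated(sample))  # {'shaving beard': ('m', 0.95)}
```

A balanced class like "bowling" is never flagged; by this kind of count, DeepMind's 340 of 400 classes passed.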

"We found little evidence that the resulting classifiers demonstrate bias along sensitive axes, such as across gender," researchers at DeepMind wrote in a paper.

The company will now work with outside researchers to grow its dataset and continue to develop AI so it can better recognize what is going on in videos. The research could lead to uses ranging from suggesting relevant YouTube videos to users to diagnosing heart problems.

We have reached out to DeepMind to learn more about why Homer Simpson is causing such problems.

Update June 9, 5pm: A DeepMind spokesperson clarified that the dataset didn't actually include videos of The Simpsons character, just actions he's widely associated with. D'oh! We've updated our article accordingly.

H/T IEEE Spectrum


Facebook wants you to trust AI, and it’s hiring for a Research group to get you to do just that – Thinknum Media

Beginning on August 20, 2019, Facebook ($NASDAQ:FB) began listing job openings for a new "Research" job category at the company that appears to be hiring scientists to research everything from blockchain in user experience, to augmented reality, to the way that humans think about artificial intelligence. So far, the company has listed more than 100 positions for the group, with hiring peaking in September.

The openings are spread throughout the Facebook organization, with a particular focus on artificial intelligence. Of the 70 job titles, 22 are focused on AI, 17 on UX (user experience), and three on AR/VR.

A "Visiting Scientist - Trust in AI" listing mentions a "Trust-in-AI-Research" team that is looking for a Ph.D. in machine learning and AI to "contribute research that can be applied to Facebook product development". Other AI positions are listed in the table below.

Title | Category

Researcher, Artificial Intelligence (University Grad) (listed in French) | Research
Visiting Researcher, Artificial Intelligence (University Grad) (listed in French) | Research
Research Engineer, Artificial Intelligence (University Grad) (listed in French) | Research
Postdoctoral Researcher, Artificial Intelligence (PhD) | Research
Program Manager, AI Programs | Research
Research Engineer, Artificial Intelligence (University Grad) | Research
Research Scientist (Conversational AI Group) | Research
Research Scientist, AI | Research
Research Scientist, AI (EMEA) | Research
Research Scientist, AI (Montreal) | Research
Research Scientist, AI Research | Research
Visiting Researcher, Artificial Intelligence (PhD) | Research
Visiting Scientist - AI-Infra (US) | Research
Visiting Scientist - Facebook AI Applied Research (EMEA) | Research
Visiting Scientist - Facebook AI Applied Research (US) | Research
Visiting Scientist - Facebook AI Research (EMEA) | Research
Visiting Scientist - Facebook AI Research (Montreal) | Research
Visiting Scientist - Facebook AI Research (US) | Research
Visiting Scientist - Trust in AI (US) | Research
Visiting Scientist, AI | Research
Visiting Scientist, AI (EMEA) | Research
Visiting Scientist, AI (Montreal) | Research

Another curious hire is for an "Epitaxy Engineering Manager" for the new group's Augmented/Virtual Reality (AR/VR) team. Epitaxy involves the growth of crystals on a substrate, and this listing mentions advancing "LED Epitaxy Materials Systems for AR Displays," an indication that Facebook and its Oculus group are creating new display technology.

Hiring for the new Research group appears spread across several Facebook offices, including its Menlo Park headquarters, New York, Montreal, Redmond, Pittsburgh, San Francisco, Seattle, Boston, Cork, and London.

Facebook's interest in the future of AI isn't surprising, and it already has a deep foothold in virtual reality. Understanding how humans perceive and interact with these technologies appears to be the first step in approaching how the company will handle product releases that use the technology.

Thinknum tracks companies using the information they post online - jobs, social and web traffic, product sales and app ratings - and creates data sets that measure factors like hiring, revenue and foot traffic. Data sets may not be fully comprehensive (they only account for what is available on the web), but they can be used to gauge performance factors like staffing and sales.


Leveraging AI to reduce COVID-19 risk: ‘It’s not enough to rely on test and trace’ – FoodNavigator.com

The UK Government has said that businesses must update their risk assessments to factor in the dangers associated with coronavirus. This means that, in order to remain compliant and avoid any future liability issues, businesses need to take action to mitigate the impact of the virus on their workforce.

"The government has clearly warned that any food business which fails to complete a risk assessment that takes COVID-19 into account could be in breach of health and safety law. Employers therefore need to prioritise managing the risks properly. They need to consider the wider context of the workforce to ensure there are no weaknesses in procedures that may put them and their employees at risk," Will Cooper, CEO and founder at Delfin Health, explained.

Digital health tech companies Delfin Health and DocHQ have created a new tool that leverages artificial intelligence to predict, monitor and test the health and safety of the diverse workforce that operates in the food sector.

"The very nature of food production means there are many different functions and roles within food manufacturing," Cooper observed. "And, because these workers are directly involved in food processing and the handling of food production, employers are required by law to follow specific government guidelines."

Dubbed Klarity, the AI can give food businesses a real-time clinical understanding of the health of their workers across various job functions, from food inspectors, food handlers, packers, managers and cleaners, to maintenance contractors and delivery workers.

Tools like Klarity can both mitigate any potential employer liability risks and provide a long-term solution to a health crisis.

This could become particularly important for essential food businesses if there is a second spike in COVID-19 that results in further lockdowns, either locally or nationally.

It can help manufacturers stay operational during a potential second lockdown. Due to food being an essential industry, we have already seen them continue to operate during the first wave of COVID-19, albeit in a limited way which is putting their key workers at risk. If these are to remain open, they need to be able to monitor the health and safety of their staff in the most efficient way.

While food businesses have remained largely operational during the various national lockdowns, certain facilities have had to be shuttered due to localised outbreaks. Cases in the meat sector - from Germany and the US to the UK and the Netherlands - have highlighted issues that Cooper believes everyone operating in the food industry would do well to take heed of.

"It's not enough for employers to simply rely on people using the government's test and trace solution, which tests only symptomatic people. There are cases of the virus spreading rapidly throughout food manufacturing units in the US and Germany, and no doubt elsewhere, due to the conditions of the facilities, which typically involve close contact. It's also highly likely these units relied on just testing or sending home symptomatic people. It requires a systematic process of regular testing."

Cooper does not believe it is possible to simply test the entire workforce, due to cost and capacity constraints. Digital AI platform Klarity takes a different approach.

"One of the important roles in COVID-19 transmission in this pandemic, especially at this stage, is being played by asymptomatic individuals. Although theoretically speaking the easiest solution might be to apply systematic testing to the general population, doing that would be technically unfeasible due to lack of resources and skyrocketing expenditures."

"We have developed a tailored solution that guarantees a consistent testing methodology developed to filter infected asymptomatic employees among large workforce pools. Thus, our solution can meet the requirements of different sectors, reducing the number of tests, decreasing uncertainty in the workplace and potentially mitigating future outbreaks."

How does Klarity work?

Cooper elaborated: "The explainable AI that we have developed asks a series of questions about a person's health history, family health history and current lifestyle. The algorithm within Klarity has been developed using data from one of the largest patient datasets in the world, containing over 6.5 million patient years of both medical and lifestyle data."

"We use this information to predict the mortality risk of a patient if they contract COVID-19. Our technology further combines optional daily symptom checking and live virus and antibody testing methodologies to track asymptomatic patients before they transfer this disease to people around them," he told FoodNavigator.

The various testing methodologies, which include group and randomised testing, allow employers to reduce the amount of testing required and minimise the risk of an outbreak in the currently active workforce, in particular by identifying asymptomatic cases.

The testing process is guided by healthcare professionals who also interpret the results based on World Health Organisation protocols and third-party validation including Polymerase Chain Reaction (PCR), Enzyme-Linked Immunosorbent Assay (ELISA) and, where relevant, rapid antibody tests.

"Our solution allows the reduction of testing yet, through group and randomised testing, identifies the virus quickly."
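Group (pooled) testing of the kind Cooper describes is typically some variant of Dorfman's two-stage scheme: test pooled samples first, then retest individuals only from pools that come back positive. The sketch below is a generic illustration of why this cuts test counts at low prevalence; it is not Klarity's actual algorithm, and the pool size is an arbitrary assumption.

```python
def dorfman_test_count(statuses, pool_size=10):
    """statuses: list of booleans (True = infected).
    Returns the number of tests used by two-stage pooled testing:
    one test per pool, plus one per member of each positive pool."""
    tests = 0
    for i in range(0, len(statuses), pool_size):
        pool = statuses[i:i + pool_size]
        tests += 1                      # stage 1: test the pooled sample
        if any(pool):
            tests += len(pool)          # stage 2: retest each member
    return tests

# A workforce of 100 with 2 infected workers in different pools:
workforce = [False] * 100
workforce[3] = workforce[57] = True
print(dorfman_test_count(workforce))  # 30 tests instead of 100
```

With low prevalence most pools test negative and are cleared with a single test, which is the efficiency claim behind pooled workplace screening.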

Cooper believes that, as well as reducing the risk of outbreaks in a food business, the use of Klarity would also serve to reassure employees that they are safe at work.

Not everyone will feel comfortable sharing details of their medical history and lifestyle choices, particularly with a programme provided through their employer. For this reason, Cooper stressed, data protection is key.

"Data privacy is one of our utmost priorities as we are processing sensitive and private patient information. Patients have complete control of their data; they can share and revoke consent. Moreover, no data ever leaves the platform. We don't share sensitive personal health data, but only aspects necessary to help employers keep employees safe. Our platform keeps up to date with the ever-changing policies and regulations so that the companies don't have to worry about GDPR rules and employee rights," he told us.

"In terms of encryption, we use a highly secure (quantum-resistant), distributed and highly configurable storage mechanism, allowing citizens the ability to source, store and share (by record or down to individual fields) their data."


The Future of Productivity: AI and Machine Learning – Entrepreneur

The productivity and project management market is booming, and it's continuing to evolve in new and exciting ways. I wanted to know what the future of artificial intelligence in project management would look like, so I reached out to founders, productivity experts and futurists who work in this space every day to ask what their predictions are for the next five and 10 years. Their answers were enlightening.

David Allen, the inventor of Getting Things Done, believes, "Systems will get better at presenting the relevant data to optimize our experience in every situation -- at the right place, at the right time. We need to think of productivity systems as supporting systems for our decision process."

Related: How to Prepare Employees to Work With AI

So, we won't yet be using AI to eliminate our decisions and automate them, but to enhance the ability to make a decision in any situation. I also think this is the next likely step for AI. Most people wouldn't trust a computer to make decisions for them, but they do look for information to help make those decisions.

Allen goes on to say, "Neither these new presentation forms nor trends like A.I. will make a decision for you. That won't work. I see that A.I. can support your decisions, but we still will use our heads to make decisions."

When most people think of AI, they think of robots doing manual labor to get things done for us, whether it's Rosie from The Jetsons or a robot on the assembly line at Ford. The issue I have with this is that this view is too limiting. Sure, robots will become intelligent, but so will everything else. The team behind the online video series In a Nutshell released a great explainer video about this.

Right now, an app that will be able to recognize your motivation level and give you tasks that fit that level is not out of the question. Bots that can answer simple service questions and learn from the responses are already around, working with some success and some big failures.

Related: Rethinking Chatbots: They're Not Just for Customers

Mark Mader, CEO at Smartsheet, thinks that imagining AI as roving robots is missing the point, saying, "Looking further out, there's no doubt that automation -- don't think robots, think removing mundane and unproductive work steps from your day -- will increase. Machine learning will be able to predict what workers are trying to do and make their work easier. How? By automatically gathering the information they need to complete a task, populating forms and sharing them with the appropriate people."

I see a combination of both. Specifically in project management, I see a future where machines will be able to predict a change using real-time data and make changes accordingly. It's the combination of bots and machine learning that holds the key: Think of an assembly line system pushing out barbecue equipment. The system will be able to predict that demand will increase due to an upcoming holiday and automatically tell the bots on the line to increase production. I'm not sure if there will be a human decision between them, but I think as we become more comfortable with the machines' decisions, we'll give them more control of the process.

AI is far from being human, let alone superhuman. As I pointed out, we've released machine learning bots with some pretty terrible results. Machines are not yet good at understanding context or sarcasm, so when we let them learn, they usually miss the mark.

Using machines to help with a decision, however, seems like the only way forward for the moment. As a form of decision support, productivity expert Carl Pullein thinks that "machine learning and artificial intelligence [will move] towards creating productivity tools that can schedule your meetings and tasks for you, and to be able to know what needs to be done based on your context and where you are."

Related: Why Small Business Should Be Paying Attention to Artificial Intelligence

Machine learning-enabled tools like Grammarly are already on the market, but as these decision-making aids become more well-known, they are moving into more complex areas. Think of it as your Facebook timeline algorithm or your spam filter, but for your to-do list.

To get a sense of where we are now, think of systems that are on and learning all of the time. Your computer browser talks to Google, Facebook and any number of companies, where what you're doing, clicking, buying and beyond is stored. Now with the recent trend of IoT, these things are moving away from our computers and cellphones and becoming part of our everyday lives, gaining access to data along the way. Your smart fridge might be able to tell the local supermarket how much water you drink per week. If everyone in the local area had a smart fridge, that same supermarket would be able to make a better decision about how much water to keep stocked.

Productivity psychologist Melissa Gratias sees this system working for us in our workplaces too. "Most apps and programs require the user to purposefully interface with the tool in order to use it. We will see more smart homes, smart cars and voice-activated entry points that allow the tool to be always available to the user, no matter where she is. She won't have to stop what she's doing to, for example, add something to her task list."

Related: Will a Robot Take My Job?

So, not only will the supermarket know what to stock, but your fridge will know what to add to your shopping list for the next week. Automatically, through learned behavior.

All the experts I spoke with agreed that decision support is likely the only way forward in the short term. Machines are being taught the decision preferences of humans, because they can't discern context on their own. So, we need to educate machines on what it's like to think like a human. Companies like Alphabet are already working on it, with projects like DeepMind at the forefront of AI and machine learning technologies. Others, like Elon Musk's OpenAI, are working to make sure that humanity's fears of a malevolent AI will never be realized. As we learn to trust these systems, adoption will quickly follow. And since they're so universal, they will surely touch all industries.

Martin Welker is the founder and CEO of collaboration platform Zenkit. After finishing his studies in computer science at KIT in Germany, he established Axonic, where he created one of the leading AI-driven engines for document analysis.


Researchers Rank Deepfakes as the Biggest Crime Threat Posed by AI – Adweek

While science fiction is often preoccupied with the threat of artificial intelligence successfully imitating human intelligence, researchers say a bigger danger right now is people using the technology to imitate one another.

A recent survey from University College London ranked deepfakes as the most worrying application of machine learning in terms of potential for crime and terrorism. According to 31 AI experts, the video fabrication technique could fuel a variety of crimes, from discrediting a public figure with fake footage to extorting money through video call scams impersonating a victim's loved one, with the cumulative effect leading to a dangerous mistrust of audio and visual evidence on the part of society.

The experts were asked to rank a list of 20 identified threats associated with AI, ranging from driverless car attacks to AI-authored phishing messages and fake news. The criteria for the ranking included overall risk, ease of use, profit potential and how hard the threats are to detect and stop.

Deepfakes are worrying on all of these counts. They are easily made and are increasingly hard to differentiate from real video. Advertised listings are easily found on corners of the dark web, and the prominence of the targets and variety of possible crimes mean that there could be a lot of money at stake.

While the threat of deepfakes was once confined to celebrities, politicians and other prominent figures with enough visual data to train an AI, more recent systems have proven effective when trained on as little data as a couple of photos.

"People now conduct large parts of their lives online and their online activity can make and break reputations," said the report's lead author, UCL researcher Matthew Caldwell, in a statement. "Such an online environment, where data is property and information power, is ideally suited for exploitation by AI-based criminal activity."

Despite the abundance of possible criminal applications of deepfakes, a report last fall found that they are so far primarily used by bad actors to create fake pornography without the subject's consent.

Not all uses for deepfakes are nefarious, however. Agencies like Goodby, Silverstein & Partners and R/GA have used them in experimental ad campaigns, and the underlying generative technology is helping fuel different types of AI creativity and art.


Is China Winning the AI Race? by Eric Schmidt & Graham Allison – Project Syndicate

The COVID-19 pandemic has revealed the capabilities of the United States and China to deploy artificial intelligence in solving real-world problems. While America's performance hasn't exactly inspired confidence, it maintains some important competitive advantages.

CAMBRIDGE: COVID-19 has become a severe stress test for countries around the world. From supply-chain management and health-care capacity to regulatory reform and economic stimulus, the pandemic has mercilessly punished governments that did not or could not adapt quickly.

The virus has also pulled back the curtain on one of this century's most important contests: the rivalry between the United States and China for supremacy in artificial intelligence (AI). The scene that has been revealed should alarm Americans. China is not just on a trajectory to overtake the US; it is already surpassing US capabilities where it matters most.

Most Americans assume that their country's lead in advanced technologies is unassailable. And many in the US national-security community insist that China can never be more than a near-peer competitor in AI. In fact, China is already a full-spectrum peer competitor in terms of both commercial and national-security AI applications. China is not just trying to master AI; it is mastering AI.

The pandemic has offered a revealing early test of each country's ability to mobilize AI at scale in response to a national-security threat. In the US, President Donald Trump's administration claims that it deployed cutting-edge technology as part of its declared war on the coronavirus. But, for the most part, AI-related technologies have been used mainly as buzzwords.

Not so in China. To stop the spread of the virus, China locked down the entire population of Hubei province (60 million people). That is more than the number of residents in every state on the US East Coast from Florida to Maine. China maintained this massive cordon sanitaire by using AI-enhanced algorithms to track residents' movements and scale up testing capabilities while massive new health-care facilities were being built.

The COVID-19 outbreak coincided with the Chinese New Year, a high-travel period. But top Chinese tech companies responded quickly by creating apps with health status codes to track citizens' movements and determine whether individuals needed to be quarantined. AI then played a critical role in helping Chinese authorities enforce quarantines and perform extensive contact tracing. Owing to China's large-scale datasets, the authorities in Beijing succeeded where the government in Washington, DC, failed.


Over the past decade, China's advantages in size, data collection, and strategic determination have allowed it to close the gap with America's AI industry. China's edge begins with its population of 1.4 billion, which affords an unparalleled pool of talent, the largest domestic market in the world, and a massive volume of data collected by companies and government in a political system that always places security before privacy. Because a primary asset in applying AI is the quantity of high-quality data, China has emerged as the Saudi Arabia of the twenty-first century's most valuable commodity.

In the context of the pandemic, China's ability and willingness to deploy these technologies for strategic value has strengthened its hard power. Like it or not, real wars in the future will be AI-driven. As Joseph Dunford, then the Chairman of the US Joint Chiefs of Staff, put it in 2018, "Whoever has the competitive advantage in artificial intelligence and can field systems informed by artificial intelligence, could very well have an overall competitive advantage."

Is China destined to win the AI race? With a population four times the size of the US, there is no question that it will have the largest domestic market for AI applications, as well as many times more data and computer scientists. And because China's government has made AI mastery a first-order priority, it is understandable why some in the US would be pessimistic.

Nonetheless, we believe that the US can still compete and win in this critical domain but only if Americans wake up to the challenge. The first step is to recognize that the US faces a serious competitor in a contest that will help to decide the future. The US cannot hope to be the biggest, but it can be the smartest. In pursuing the most advanced technologies, it is arguably the brightest 0.0001% of individuals who make the decisive difference. While China can mobilize 1.5 billion Chinese speakers, the US can recruit and leverage talent from all 7.7 billion people on Earth, because it is an open, democratic society.

Moreover, while competing vigorously to sustain the US lead in AI, we also must acknowledge the necessity of cooperation in areas where neither the US nor China can secure its own minimum vital national interests without the other's help. COVID-19 is a case in point. The pandemic threatens all countries' national interests, and neither the US nor China can resolve it alone. In developing and widely deploying a vaccine, some degree of cooperation is essential, and it is worth considering whether a similar principle applies to the unconstrained development of AI.

The idea that countries could compete ruthlessly and cooperate intensely at the same time may sound like a contradiction. But in the world of business, this is par for the course. Apple and Samsung are intense competitors in the global smartphone market, and yet Samsung is also the largest supplier of iPhone parts. Even if AI and other cutting-edge technologies suggest a zero-sum competition between the US and China, coexistence is still possible. It may be uncomfortable, but it is better than co-destruction.

See the original post here:

Is China Winning the AI Race? by Eric Schmidt & Graham Allison - Project Syndicate

After Go, developers are now building AI to beat us at soccer – CNET

AI beat us at chess, and now it's looking to defeat us at soccer too.

Look out, Messi. After Google's AlphaGo artificial intelligence bested our best Go player, South Korea is now setting its sights on making AI that can play soccer.

Hosted by the Korea Advanced Institute of Science & Technology (KAIST), the AI World Cup will see university students across South Korea developing AI programs to compete in a series of online games, reported The Korea Times. The prelims will begin in November.

"The football matches will be conducted in a five on five tournament," a KAIST spokesperson told the publication on Tuesday. "Each of the five AI-programmed players in such positions as striker, defender and goalkeeper will compete with their counterparts."

That's not all though, as competing students will also build AI experts that can provide post-game analysis.

It's not the first time researchers are putting their tech developments to the test using soccer. The first Robot World Cup soccer games (or RoboCup), an annual international robotics competition that aims to advance robotics and AI research, put competitive soccer-playing robots in the field a decade ago. But soccer isn't the only thing that tech can do -- in the same year, IBM's computer, Deep Blue, defeated Garry Kasparov in a game of chess.

While the competition is limited to university students in South Korea this time, it will be opened to international teams "in the first half of 2018," Kim Jong-hwan, president of the AI World Cup committee, said in the statement.


Read more:

After Go, developers are now building AI to beat us at soccer - CNET

How to build any AI-driven smart service – ZDNet


The combination of machine learning, deep learning, natural language processing, and cognitive computing will change the ways that humans and machines interact. AI-driven smart services will sense one's surroundings, learn one's preferences from past behavior, and subtly guide people and machines through their daily lives in ways that will truly feel frictionless. This quest to deliver AI-driven smart services across all industries and business processes will usher in the most significant shift in computing and business this decade and beyond.

Organizations can expect AI-driven smart services to impact future-of-work flows, IoT services, customer experience journeys, and synchronous ledgers (blockchain). Success requires the establishment of AI outcomes (see Figure 1). Once the outcomes are established, organizations can craft AI-driven smart services that orchestrate, automate, and deliver mass personalization at scale.

The disruptive nature of AI comes from the speed, precision, and capacity of augmenting humanity. When AI is defined through seven outcomes, the business value of AI projects gains meaning and can easily be shown through a spectrum of outcomes.

Because AI-driven smart services require offloading decision-making responsibility to atomic smart services, the foundation of any AI-driven smart service is trust. Below is an explanation of how the five key components of an AI-driven smart service orchestrate trust.

Fears of robots taking over the world have been overblown. Successful AI-driven smart services will augment human intelligence just as machines augmented physical capabilities. AI-driven smart services play a key role in defining business models for synchronous ledger technologies (blockchain), the Internet of Things, customer experience, and the future of work by reducing errors, improving decision-making speed, identifying demand signals, predicting outcomes, and preventing "disasters".


See the original post:

How to build any AI-driven smart service - ZDNet

University of Kansas Medical Center to Deploy OMNIQ’s Artificial Intelligence Cloud Based Access Control Parking and Security System – GlobeNewswire

SALT LAKE CITY, June 10, 2020 (GLOBE NEWSWIRE) -- OMNIQ, Inc. (OTCQB: OMQS) ("OMNIQ" or the "Company") announces that it has received an order for its new PERCS™ cloud-based permitting, citation, access control and parking collection system from the University of Kansas Medical Center.

Based on OMNIQ's AI-Machine Vision technology for License Plate and Vehicle Recognition, PERCS™ improves safety, increases revenue, improves parking operations and provides a wide range of benefits, including: seamless account management for users and administrators, increased efficiency and time management for operators and enforcement officers, as well as enhanced revenue generation and elevated customer satisfaction levels. The system incorporates parking access and revenue enforcement capabilities within a single cloud-based platform, using Machine Vision Vehicle Recognition technology, so that the administrator can manage access control parking and track revenue from one web portal, using a dashboard for the monitoring of all activity and transactions for visitors and transient parkers. PERCS™ enables the virtual management of permitting, citations, occupancy and access control, enhancing efficiencies and safety for customers including municipalities, universities, medical centers and public parking operations across the U.S.

Shai Lustgarten, CEO of OMNIQ, commented: "We're pleased that KUMC has selected our PERCS™ platform to improve the efficiencies and safety of its facility access and parking operations. Our Machine Vision technology enables the virtual management of permitting and enforcement, access control and automated parking, which allows operators to actively monitor and collect revenue from their parking structures while reducing overhead expenses. Likewise, the adoption of our state-of-the-art technology allows security officers to more effectively enforce access control, check permits, and issue citations. Our Machine Vision technology has been implemented for public safety on school campuses, parking management and control as well as in sensitive areas for homeland security purposes."

Mr. Lustgarten concluded, "It is always gratifying to win a new customer and we are especially pleased to continue the momentum we've been experiencing, as evidenced by our recently announced contract for PERCS™ deployment with the city of San Mateo, California and the selection of our Quest Shield solution by the Talmudic Academy in Baltimore. There is growing recognition in the public safety/smart city and automated parking verticals around the value of AI-Machine Vision based technology, and we believe we are well positioned to capitalize on the interest and opportunities we're seeing for our technology and solutions."

About OMNIQ Corp.
OMNIQ Corp. (OMQS) provides computerized and machine vision image processing solutions that use patented and proprietary AI technology to deliver data collection, real time surveillance and monitoring for supply chain management, homeland security, public safety, traffic & parking management and access control applications. The technology and services provided by the Company help clients move people, assets and data safely and securely through airports, warehouses, schools, national borders, and many other applications and environments.

OMNIQ's customers include government agencies and leading Fortune 500 companies from several sectors, including manufacturing, retail, distribution, food and beverage, transportation and logistics, healthcare, and oil, gas, and chemicals. Since 2014, annual revenues have grown to more than $50 million from clients in the USA and abroad.

The Company currently addresses several billion-dollar markets, including the Global Safe City market, forecast to grow to $29 billion by 2022, and the Ticketless Safe Parking market, forecast to grow to $5.2 billion by 2023.

Information about Forward-Looking Statements
Safe Harbor Statement under the Private Securities Litigation Reform Act of 1995. Statements in this press release relating to plans, strategies, economic performance and trends, projections of results of specific activities or investments, and other statements that are not descriptions of historical facts may be forward-looking statements within the meaning of the Private Securities Litigation Reform Act of 1995, Section 27A of the Securities Act of 1933 and Section 21E of the Securities Exchange Act of 1934.

This release contains forward-looking statements that include information relating to future events and future financial and operating performance. The words "anticipate," "may," "would," "will," "expect," "estimate," "can," "believe," "potential" and similar expressions and variations thereof are intended to identify forward-looking statements. Forward-looking statements should not be read as a guarantee of future performance or results, and will not necessarily be accurate indications of the times at, or by, which that performance or those results will be achieved. Forward-looking statements are based on information available at the time they are made and/or management's good faith belief as of that time with respect to future events, and are subject to risks and uncertainties that could cause actual performance or results to differ materially from those expressed in or suggested by the forward-looking statements. Important factors that could cause these differences include, but are not limited to: fluctuations in demand for the Company's products, particularly during the current health crisis, the introduction of new products, the Company's ability to maintain customer and strategic business relationships, the impact of competitive products and pricing, growth in targeted markets, the adequacy of the Company's liquidity and financial strength to support its growth, the Company's ability to manage credit and debt structures from vendors, debt holders and secured lenders, the Company's ability to successfully integrate its acquisitions, and other information that may be detailed from time to time in OMNIQ Corp.'s filings with the United States Securities and Exchange Commission. Examples of such forward-looking statements in this release include, among others, statements regarding revenue growth, driving sales, operational and financial initiatives, cost reduction and profitability, and simplification of operations.
For a more detailed description of the risk factors and uncertainties affecting OMNIQ Corp., please refer to the Companys recent Securities and Exchange Commission filings, which are available at http://www.sec.gov. OMNIQ Corp. undertakes no obligation to publicly update or revise any forward-looking statements, whether as a result of new information, future events or otherwise, unless otherwise required by law.

Investor Contact: John Nesbett / Jen Belodeau, IMS Investor Relations, 203.972.9200, jnesbett@institutionalms.com

Go here to see the original:

University of Kansas Medical Center to Deploy OMNIQ's Artificial Intelligence Cloud Based Access Control Parking and Security System - GlobeNewswire

Emotion recognition technology should be banned, says an AI research institute – MIT Technology Review

There's little scientific basis to emotion recognition technology, so it should be banned from use in decisions that affect people's lives, says research institute AI Now in its annual report.

A booming market: Despite the lack of evidence that machines can work out how we're feeling, emotion recognition is estimated to be at least a $20 billion market, and it's growing rapidly. The technology is currently being used to assess job applicants and people suspected of crimes, and it's being tested for further applications, such as in VR headsets to deduce gamers' emotional states.

Further problems: There's also evidence emotion recognition can amplify race and gender disparities. Regulators should step in to heavily restrict its use, and until then, AI companies should stop deploying it, AI Now said. Specifically, it cited a recent study by the Association for Psychological Science, which spent two years reviewing more than 1,000 papers on emotion detection and concluded it's very hard to use facial expressions alone to accurately tell how someone is feeling.

Other concerns: In its report, AI Now called for governments and businesses to stop using facial recognition technology for sensitive applications until the risks have been studied properly, and attacked the AI industry for its systemic racism, misogyny, and lack of diversity. It also called for mandatory disclosure of the AI industry's environmental impact.

See the original post:

Emotion recognition technology should be banned, says an AI research institute - MIT Technology Review

How the Army Is Really Using AI – AI Daily

It has always been controversial for the army to implement artificial intelligence within its task force (particularly with the use of drones) because of the ethics involved. However, artificial intelligence is being implemented within the army in other, less controversial ways. Forget the movie Terminator: artificial intelligence is performing many tedious, manual tasks that would otherwise consume a lot of time and resources, and providing assistance ranging from recognition and conversation systems to predictive analytics, pattern matching, and autonomous systems.

The army leverages artificial intelligence, for example, by predicting when vehicle parts may need to be replaced, which saves time and money and increases operational safety. Programs that leverage data using machine learning include Project Maven, which retrieves data from drones and helps automate some of the work that analysts do, again saving time and money. As you can see, artificial intelligence is being used to improve the army's efficiency.
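To make the predictive-maintenance idea concrete, here is a minimal, purely illustrative sketch of usage-based part-replacement flagging. The part names, service-life figures, and scoring rule are all assumptions for the example; they are not drawn from any actual Army system.

```python
# Illustrative predictive-maintenance sketch (all values are hypothetical).

def failure_risk(hours_in_service, mean_life_hours):
    """Crude risk score: fraction of the part's mean service life consumed."""
    return min(hours_in_service / mean_life_hours, 1.0)

def parts_to_replace(fleet_parts, threshold=0.8):
    """Flag parts whose consumed-life fraction meets or exceeds the threshold."""
    return [p["part"] for p in fleet_parts
            if failure_risk(p["hours"], p["mean_life"]) >= threshold]

fleet = [
    {"part": "track pad",  "hours": 950, "mean_life": 1000},
    {"part": "fuel pump",  "hours": 300, "mean_life": 1200},
    {"part": "road wheel", "hours": 820, "mean_life": 1000},
]
print(parts_to_replace(fleet))  # ['track pad', 'road wheel']
```

A real system would replace this static threshold with a model trained on historical failure data, but the payoff is the same: parts get flagged before they fail in the field.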

See the article here:

How the Army Is Really Using AI - AI Daily

AI Wants to Be Your Personal Stylist – PCMag

Artificial intelligence is helping people find their style on their phones, in stores, and even in their very own closets.

A smart stylist is like a good therapist: it takes a keen observer of the human condition to do the job right, and the results can be life-changing. But stylists are expensive, which is where artificial intelligence comes in.

Fashion AI is subtle enough that shoppers are likely to bump into a dressed-up algorithm without knowing it. Sometimes it's a soft sell on an e-commerce site, other times it's trying to suss out how shoppers feel about items using in-store facial recognition. Amazon is even deploying Alexa to customers' closets via the Look camera, which will critique your outfit choices.

Technology has long been chipping away at the rarefied, exclusive fashion industry, from bloggers replacing fashion editors in front rows and social media stars getting backstage access at shows to street-style stars outshining supermodels and earning hefty incomes on Instagram.

Now the industry needs all the help it can get, as shoppers ditch department store credit cards for Amazon Prime memberships. Here's how AI might help you experience fashion online, at home, on your phone, and in stores.

Since consumers are rarely without their mobile phones, you would think business would be booming for online fashion retailers. But as The Washington Post reports, it can be difficult to compete for shoppers' eyeballs.

Despite some setbacks, subscription-box services saw a 3,000 percent increase in site visits from 2013 to 2016. Stitch Fix, for example, calls itself "your online personal stylist"; customers fill out a style questionnaire so that its stylists can build a wardrobe for shoppers. The Ask an Expert Stylist feature also delivers fast responses to style dilemmas.

The information customers send to Stitch Fix, however (including personal notes), first gets dissected by AI. A team of people then uses the data to select items, Harvard Business Review reports. The AI learns from the choices made by stylists, but it also monitors the stylists themselves, judging whether their recommendations are well-received by customers and figuring out what information, and how much, is needed for stylists to make quick and effective style choices. One measure of Stitch Fix's success will be its closely watched steps toward an IPO.
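As a rough illustration of the human-in-the-loop pipeline described above, a sketch like the following could rank inventory against a customer's questionnaire answers before a stylist makes the final call. This is not Stitch Fix's actual system; the attribute names and scoring rule are invented for the example.

```python
# Hypothetical style-matching sketch: score items by attribute overlap
# with a customer profile, then shortlist the best matches for a stylist.

def score(item, profile):
    """Count how many of the item's attributes match the customer profile."""
    return sum(1 for attr, val in item["attrs"].items()
               if profile.get(attr) == val)

def shortlist(inventory, profile, k=2):
    """Return the top-k item names for a stylist to review, best match first."""
    ranked = sorted(inventory, key=lambda i: score(i, profile), reverse=True)
    return [item["name"] for item in ranked[:k]]

profile = {"fit": "slim", "color": "navy", "pattern": "solid"}
inventory = [
    {"name": "navy blazer", "attrs": {"fit": "slim",    "color": "navy", "pattern": "solid"}},
    {"name": "plaid shirt", "attrs": {"fit": "relaxed", "color": "red",  "pattern": "plaid"}},
    {"name": "grey chinos", "attrs": {"fit": "slim",    "color": "grey", "pattern": "solid"}},
]
print(shortlist(inventory, profile))  # ['navy blazer', 'grey chinos']
```

In a production recommender the hand-counted match score would be replaced by learned weights, and, as the article notes, the model would also learn from which shortlisted items the stylists and customers actually accept.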

Similarly, Propulse works to identify the qualities shoppers are drawn to as they browse items on fashion retailer sites like Frank and Oak. The company was founded by Eric Brassard, who formerly worked in database marketing at Saks, and his platform adapts results to the cut, colors, and patterns that customers prefer.

"If you have history because you shop that shop, assuming that it's a real store and you bought a few things, we create a personalized page with products you've never seen that match the taste of what you browsed and what you bought," Brassard says.

For sales associates who are new to the field or a store, Propulse has an in-store component that lets them input customer preferences and matches those with products.

A hovering salesperson might not be the only one monitoring your in-store activity. Cloverleaf's AI system, dubbed shelfPoint, scans customers via sensors that assess the age, gender, ethnicity, and emotional response of shoppers and then communicates targeted sales messages to them through an LCD screen.

ShelfPoint is found mostly in grocery stores, but Cloverleaf CEO Gordon Davidson says the company has had discussions with retailers that sell groceries and apparel in their stores. It's also a good way to collect data without requiring shoppers to download an app, take a survey, or otherwise interact with a gadget, Davidson says.

The future of shelfPoint partly lies in turning the information it gathers into recommendations for shoppers. "Now what we're looking at is, how do we start providing more benefit to the shopper? It knows that I'm picking up blue jeans as an example and it may come up and say, 'Hey, have you considered a new brown belt?'" Davidson suggests.

Davidson isn't ready to give up on physical stores. "In reality, when you look at the research Gartner came up with earlier this year, 80 percent of sales still happen in brick-and-mortar, especially in the fashion side of things," he said. "Brick-and-mortar are still going to be around some time."

It's one thing to get advice when you're shopping online or browsing in a store. But when you wake up, get dressed, and face that mood where nothing looks right, there's nothing like a second opinion to set you straight so you can walk out the door. The Amazon Echo Look is just that. The camera-centric version of the Echo's main feature is Style Check. It uses AI and stylists to choose between two outfits based on trends and what it finds flattering on you.

Amazon will not divulge what information goes into the algorithm behind Style Check, but the artificial intelligence doesn't work solo. Style Check also uses fashion specialists on its own staff who have backgrounds in fashion, retail, editorial, and styling. An Amazon spokesperson said they focus on fit, color, styling, and current trends. Though Style Check customers can expect a response in about a minute, every verdict includes input from a human stylist. But there are some tasks that the Echo Look handles without a human co-worker.

The Echo Look goes above and beyond what an in-store stylist would do for you and goes full celebrity stylist in two ways: it creates a lookbook of what you've worn and it takes flattering full-length photos that are super shareable. This means that not only is technology coming for the job of stylists, but Instagram husbands better watch out, too.


Read more here:

AI Wants to Be Your Personal Stylist - PCMag

How AI may become a game-changer for the Indian legal industry – Business Standard

Artificial Intelligence (AI) is fast becoming the new buzzword amidst the Indian legal sector. The adoption of AI technology has already gained marked prominence in the global legal market and Indian firms, too, are looking into this potential treasure trove of innovative assistance to aid in their own activities in the coming days.

Michio Kaku, noted physicist, theorist and futurist, once said that the job market in the future will consist of those activities that robots cannot perform. Taking heed of this premonition, India's top legal service providers seem to be moving past the traditional model of technological insularity and setting their sights on riding the AI wave.

Cyril Amarchand Mangaldas, one of the leading law firms in the country, has been the first to make a public announcement, stating its intention to adopt AI technology for certain legal process activities. In an agreement with Kira Systems, a Canada-based technology company, the firm intends to use the Kira machine-learning software for greater automation of its due diligence and transactional practices, which it says will usher in a new era of efficiency and accuracy for the benefit of its clients.

"The application of AI in the legal sector is a growing global trend and allows lawyers to devote their time and skill towards more specialised tasks. The system is proven to be fast and accurate, allowing a firm to move up the value chain and attain greater international competitiveness," says Cyril Shroff, managing partner, Cyril Amarchand Mangaldas.

The use of software has been prevalent in first-tier firms in India for some time already. Organisations such as Nishith Desai Associates already engage data management, knowledge management and bandwidth management systems, in addition to a variety of public and in-house applications. The eventual adoption of dedicated AI platforms will extend this use of technology to more data-intensive tasks such as analytics, contract review, document scrutiny and regulatory compliance in the near future. These adaptive systems are also expected to provide greater efficiency and risk mitigation, along with significant time savings.

Globally, the advent of AI systems for the legal sector came as a furtherance of the Legal Process Outsourcing (LPO) model to reduce costs of high volume-low value activities and cater to the changing needs of society. With the demand for lower costs in process driven activities and a move away from the high rate, billable hours and partner tribute model of legal services, the focus shifted towards investing into the previously ignored areas of innovation and technology.

AI platforms are now being used internationally in a multitude of legal tasks such as automated challenges to car parking tickets, which apart from time and hassle, would also require a party to incur larger legal fees than the cost of the infraction itself.

DoNotPay is one such service in New York and London, which provides such a service free of cost and has managed to achieve a 64 per cent success rate.

Apart from these and the wide variety of transactional usages mentioned, AI systems like IBM Watson and Kira are also being used in other complex legal matters and for litigations involving US federal patent cases with great success. In recent times, the technology has also been implemented in judgment predictions and risk assessments.

There is also large potential for the applicability of AI in cross-border contract drafting, negotiations and decision-making exercises undertaken by law firms and their clients. AI also has the potential to impact the retail legal market dramatically, by providing greater reach and lowering costs, which in turn will significantly improve access to justice and benefit millions of individuals.

While most legal professionals agree that AI does possess certain definite advantages, some are still sceptical of its applicability to more complex tasks requiring value addition. Many have highlighted that AI platforms require a comprehensive database, which is still in its nascent stages in the Indian judicial scenario. The integration of continually developing information is another area of concern.

"The assimilation of constantly evolving data such as regulatory updates or judicial precedents into an AI platform poses a challenge. Feeding in large volumes of information into the system is also a time-intensive process. But as the software is used, it gets smarter and more efficient," notes Huzefa Tavawalla, head, International Commercial Law Practice, Nishith Desai Associates.

According to Sitesh Mukherjee, partner, Trilegal, AI is slowly permeating into the Indian legal market, particularly with regard to tasks such as due diligence, but its applicability in other avenues will still take time to gain acceptance.

Critique aside, the adoption of AI technology in the Indian legal sphere in the near future is merely a question of degree rather than probability. And this assimilation of AI will certainly enhance process management and eventually change the role of legal professionals in our swiftly evolving business environment.

"The commercial applicability of AI software in the legal industry is just 12 to 18 months old. With the increasing use of technology in law firms, we can expect to see greater adoption of these kinds of platforms in days to come," says Shroff.

Though a recent Deloitte study says that AI is expected to automate around 1,14,000 legal jobs in the UK by 2020, its impact on the requirement of Indian legal professionals should be far from alarming. Rather, it is expected that in times to come, these industry players will increasingly work alongside AI platforms to provide greater human resource utility and better legal solutions for society as a whole.

See the original post here:

How AI may become a game-changer for the Indian legal industry - Business Standard

CFPB Explains How AI Interacts with Adverse Action of FCRA – ESR NEWS

Written By ESR News Blog Editor Thomas Ahearn

On July 7, 2020, the Consumer Financial Protection Bureau (CFPB), a government agency that helps businesses comply with federal consumer financial law, published a blog addressing industry concerns about how the use of Artificial Intelligence (AI) interacts with the existing regulatory framework, specifically the adverse action notice requirements in the federal Fair Credit Reporting Act (FCRA) that regulates background checks.

"The CFPB helps ensure that consumer financial products and services operate transparently and efficiently. Financial institutions are starting to deploy AI across a range of functions, including as virtual assistants that can fulfill customer requests, in models to detect fraud or other potential illegal activity, or as compliance monitoring tools. One additional area where AI may have a profound impact is in credit underwriting," the blog stated.

One area of innovation the CFPB is monitoring in consumer financial products and services is AI, particularly a subset of AI called Machine Learning (ML). However, industry uncertainty about how AI fits into the existing regulatory framework may be slowing its adoption, especially for credit underwriting. One important issue is how complex AI models address the adverse action notice requirements in the FCRA, according to the blog.

"FCRA also includes adverse action notice requirements. For example, when adverse action is based in whole or in part on a credit score obtained from a consumer reporting agency (CRA), creditors must disclose key factors that adversely affected the score, the name and contact information of the CRA, and additional content. These notice provisions serve important anti-discrimination, educational, and accuracy purposes," the blog stated.

"There may be questions about how institutions can comply with these requirements if the reasons driving an AI decision are based on complex interrelationships. Industry continues to develop tools to accurately explain complex AI decisions. These developments hold great promise to enhance the explainability of AI and facilitate use of AI for credit underwriting compatible with adverse action notice requirements," the blog concluded.

In the wake of the Financial Crisis of 2007-2008, Congress established the CFPB, an independent regulatory agency tasked with ensuring that consumer debt products are safe and transparent, as part of the 2010 Dodd-Frank Wall Street Reform and Consumer Protection Act. Congress transferred the administration of 18 existing federal statutes to the CFPB, including the FCRA. The CFPB enforces the FCRA.

Employers using automated hiring platforms powered by AI, in the belief that they are less biased and discriminatory than processes performed by humans, will discover that using AI technology in background screening remains a work in progress even though it has potential, according to the ESR Top Ten Background Check Trends for 2020 compiled by leading global background check firm Employment Screening Resources (ESR).

"AI has potential with screening, but is unlikely to be used as quickly as predicted. Between the myriad of federal, state, and local laws regulating screening, as well as discrimination and privacy concerns, the reality is going to be much different than many people predict from a purely technological viewpoint," explained Attorney Lester Rosen, founder and chief executive officer (CEO) of Employment Screening Resources (ESR).

"Background checks impact the highly regulated area of employment that requires accuracy specific to the individual. Technology helps to some degree, but each individual is entitled to an individualized assessment, and the law and court cases weigh heavily against the mass processing and categorizing of people for employment," said Rosen, the author of The Safe Hiring Manual, a comprehensive guide to background checks.

Employment Screening Resources (ESR), a leading global background check provider, offers fast, accurate, and affordable background checks that comply with FCRA regulations enforced by the CFPB. In November 2019, ESR was named one of the top pre-employment screening services for enterprise-level businesses by HRO Today Magazine's Baker's Dozen. To learn more about ESR, visit http://www.esrcheck.com.

NOTE: Employment Screening Resources (ESR) does not provide or offer legal services or legal advice of any kind or nature. Any information on this website is for educational purposes only.

© 2020 Employment Screening Resources (ESR). Making copies of or using any part of the ESR News Blog or ESR website for any purpose other than your own personal use is prohibited unless written authorization is first obtained from ESR.


Google brings its AI-powered SmartReply feature to YouTube – TechCrunch

Google's SmartReply, the four-year-old AI-based technology that helps suggest responses to messages in Gmail, Android's Messages, the Play Developer Console, and elsewhere, is now being made available to YouTube creators. Google announced today the launch of an updated version of SmartReply built for YouTube, which will allow creators to interact with their fans in the comments more easily and quickly.

The feature is being rolled out to YouTube Studio, the online dashboard creators use to manage their YouTube presence, check their stats, grow their channels and engage fans. From YouTube Studio's comments section, creators can filter, view and respond to comments from across their channels.

For creators with a large YouTube following, responding to comments can be a time-consuming process. That's where SmartReply aims to help.


Instead of manually typing out all their responses, creators will be able to instead click one of the suggested replies to respond to comments their viewers post. For example, if a fan says something about wanting to see what's coming next, the SmartReply feature may suggest a response like "Thank you!" or "More to come!"

Unlike the SmartReply feature built for email, where the technology has to process words and short phrases, the version of SmartReply designed for YouTube also has to be able to handle a more diverse set of content like emoji, ASCII art, or language switching, the company notes. YouTube commenters also often post using abbreviated words, slang, and inconsistent punctuation. This made it more challenging to implement the system on YouTube.


Google detailed how it overcame these and other technical challenges in a post on its Google AI Blog, published today.

In addition, Google said it wanted a system where SmartReply only made suggestions when it's highly likely the creator would want to reply to the comment and when the feature is able to suggest a sensible response. This required training the system to identify which comments should trigger the feature.

At launch, SmartReply is being made available for both English and Spanish comments, and it's the first cross-lingual and character-byte-based version of the technology, Google says.
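Google hasn't published the model itself, but the appeal of a character-byte approach can be sketched in a few lines: every string, whatever its language, emoji, or punctuation, reduces to the same fixed vocabulary of 256 byte values. The function below is illustrative, not Google's API.

```python
# Illustrative sketch (not Google's implementation): UTF-8 byte encoding
# gives one fixed 256-symbol vocabulary for any language, emoji, or
# ASCII art, which is what makes a byte-based model naturally cross-lingual.

def byte_encode(text: str) -> list[int]:
    """Encode text as a sequence of UTF-8 byte values (0-255)."""
    return list(text.encode("utf-8"))

for comment in ["More to come!", "¡Gracias! 🎉"]:
    ids = byte_encode(comment)
    # Every id indexes the same 256-entry embedding table.
    assert all(0 <= i < 256 for i in ids)
    print(comment, "->", len(ids), "byte tokens")
```

An English comment and a Spanish one with an emoji both become plain byte sequences, so a single model can score candidate replies for either without language-specific tokenizers.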

Because of the approach SmartReply is now using, the company believes it will be able to make the feature available to many more languages in the future.


Narrow AI: Not as Weak as It Sounds – G2

If you use a smartphone, you probably have come across narrow AI.

It's the level of artificial intelligence we currently have access to, and probably the only one we will have for a few years or decades more.

But don't be deceived by the term narrow. Although this AI has a limited set of abilities and merely tries to mimic the human brain, it's pretty adept when it comes to performing the single task it's designed to do.

It makes self-driving cars a reality, recommends the products you're more likely to buy and movies you'll be interested in watching, and even keeps your email inbox free of spam messages.

Narrow AI is defined as a type of artificial intelligence system capable of handling single or limited tasks, like playing chess, facial recognition, analyzing big data, or acting as a virtual assistant. It is also known as artificial narrow intelligence (ANI) or weak AI and focuses on performing one narrow task at a time.

Narrow AI systems can analyze and interpret data with remarkable accuracy and speed, outpacing humans. They can help us make better data-driven decisions and, more importantly, relieve us of numerous monotonous tasks.

Although this type of machine intelligence lacks consciousness or the ability to reason, it strives to improve all aspects of human life with automation and is an indispensable, ingenious technology of the 21st century.


Simply put, ANI operates within a pre-defined range and can't think for itself. It performs just the specific tasks it's designed to do and never attempts to achieve anything beyond them. But this won't be the case when AI technologies progress and we create machines with human-level intelligence.

Artificial general intelligence, or general AI, is an AI system capable of learning, comprehending, and functioning just like human beings. Such an AI agent will have artificial consciousness (the state of being aware of and responsive to one's surroundings) and be able to solve unfamiliar problems.

General AI is also known as full AI or strong AI and will have human intelligence. However, we're years away from creating such artificial intelligence systems, and their estimated time of arrival ranges from decades to centuries, depending on the AI researcher you ask.

One of the most significant differences between narrow AI and general AI is that the former lacks consciousness or the ability to reason. Additionally, general AI has a wide range of cognitive abilities (brain-based skills which are essential for communicating, acquiring knowledge, and reasoning) similar to humans while weak AI has none.

The core components of narrow AI, such as machine learning, natural language processing, artificial neural networks, and deep learning, would still be leveraged by AGI, though probably in more advanced versions or alongside technologies yet to be discovered.

AGI systems can perform virtually any intellectual task humans could ever do. As mentioned earlier, they can think and act like humans and can most likely beat us at our own game as they don't feel tiredness or emotions like fear or sadness unless they're programmed to do so.

Narrow artificial intelligence systems like Siri, Google Assistant, or Cortana will stutter if you ask questions such as "How would the universe end?" or "What is the meaning of life?" unless there's an article on the internet that answers them. These virtual assistants could be seen as well-performing natural language processing robots, as they process our speech and feed it into a search engine.

However, a strong AI would be able to come up with plausible answers, just like humans, and put its imagination into play. It could even ask questions we've never heard before, and it would probably have the ability to lie. This also means that during tests for assessing an AI, such as the Turing Test, the machine could intentionally act dumb so as not to reveal its intelligence.

General AI could also be our key to achieving artificial superintelligence (ASI), the third and ultimate level of AI, which could surpass human intelligence and decision-making abilities by a million times. Achieving ASI, let alone AGI, is viewed as an existential threat by many, including Elon Musk and Stephen Hawking.

But there's little to no reason to be worried right now as we're years away from achieving either, and some experts claim we might never achieve them. That's because it's virtually impossible to model the human brain, which is critical for attaining features of the human mind, including consciousness and sentience.

However, it would be entirely pessimistic and naive to think that there's not even the slightest chance as the pace of technological progress is tremendous, and we're unlocking and utilizing new technologies like quantum computing.

It's easier to spot narrow AI examples as the technology has impacted almost all spheres of human life. Here are a few of the real-world applications of narrow AI that you probably have come across, especially if you're hooked on the internet.

Hours rarely pass without you interacting with your smartphone's voice assistant. As you might have guessed, voice assistants like Siri and Google Assistant are the most common examples of narrow AI.

However, they aren't the best examples as the majority of their tasks are speech recognition-related.

Driven by machine learning and AI technologies, IBM Watson is a question-answering machine used extensively in the healthcare industry. While doctors may take weeks or months to look through documents, Watson can do the same in seconds. Interestingly, Watson was initially developed to answer questions on the television show Jeopardy!.

Watson can also assist companies with risk and fraud management while significantly lowering business costs. It can also help identify areas of a business requiring process improvement by analyzing vast amounts of unstructured data.

AlphaGo is an AI-based computer program developed by DeepMind, an artificial intelligence company acquired by Google in 2014. It's the first computer program to beat a professional human Go player and was initially a research project aimed at testing the competence of a neural network at Go.

AlphaGo has numerous advanced versions, including AlphaStar and AlphaZero, the latter being succeeded by an advanced version called MuZero, which can learn without being taught the rules. Creating AI systems that can improve without much training or assistance from humans is critical for achieving AGI, so AlphaGo is a huge win for the AI community.

AlphaGo uses deep learning and artificial neural networks to identify the best moves with the highest winning percentages. In terms of competitiveness, AlphaZero is incredibly superior to the earlier versions of AlphaGo and is currently one of the world's top players in Go and chess.
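The selection principle described above, scoring each candidate move by estimated winning percentage and playing the best one, can be sketched without any of AlphaGo's actual machinery. The `estimate_win_prob` stub below is a hypothetical stand-in for its trained value networks, and the sketch ignores Monte Carlo tree search entirely.

```python
# Minimal sketch of picking the move with the highest estimated winning
# percentage. A trained value network would replace this lookup table.

def estimate_win_prob(move: str) -> float:
    # Hypothetical evaluations for three opening points on a Go board.
    table = {"D4": 0.52, "Q16": 0.55, "K10": 0.48}
    return table.get(move, 0.5)

def pick_move(legal_moves: list[str]) -> str:
    # Choose whichever legal move the evaluator rates most highly.
    return max(legal_moves, key=estimate_win_prob)

print(pick_move(["D4", "Q16", "K10"]))  # Q16, the highest-rated move
```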

Narrow artificial intelligence makes it possible for self-driving cars to navigate through traffic, sense obstacles in a lane, and make sure passengers and pedestrians are safe. The enormous volumes of data generated by the cameras, sensors, and GPS fitted on the vehicle are analyzed and processed with the help of ANI.

More precisely, weak AI enables self-driving cars to see, hear, and think. But do note that self-driving cars aren't made possible by a single ANI but rather by a collection of ANI systems. Although fully autonomous vehicles are still in their infancy, we're only a couple of years away from being driven around by drivers who don't get tired.

One of the biggest reasons you feel the urge to check out Netflix or YouTube is their recommendation systems. Such a system is powered by AI algorithms and suggests shows and videos with impressive accuracy to keep you engaged and entertained.

The system continually learns from your responses to each piece of recommended content. In a way, these algorithms understand your preferences more than anyone else, including yourself.
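That feedback loop can be sketched in a few lines. This is a deliberately tiny model, not Netflix's or YouTube's actual system: a per-genre score is nudged up when a recommendation is watched and down when it is skipped, and the next recommendation is ranked by score.

```python
# Toy sketch of learning from viewer responses: preference scores are
# updated after each piece of recommended content and drive the next
# ranking. Real recommenders use far richer models; the loop is the point.

from collections import defaultdict

prefs: dict[str, float] = defaultdict(float)

def record_feedback(genre: str, watched: bool, rate: float = 0.1) -> None:
    # Nudge the score up for watched content, down for skipped content.
    prefs[genre] += rate if watched else -rate

def recommend(genres: list[str]) -> str:
    # Rank candidates by the learned preference score.
    return max(genres, key=lambda g: prefs[g])

record_feedback("sci-fi", watched=True)
record_feedback("reality", watched=False)
print(recommend(["sci-fi", "reality", "drama"]))  # sci-fi ranks first
```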

Chatbots are software applications powered by narrow AI. They're capable of simulating conversations with humans and can give customers the impression that a company's customer support is online 24/7.

Chatbots learn more as they communicate with customers and can solve basic issues without requiring human assistance.
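A minimal sketch of that division of labor, with the bot handling basic issues and humans getting the rest, looks like the following. The intents and keywords are invented for illustration; production chatbots use trained language models rather than keyword lists, but the escalation logic is similar.

```python
# Toy chatbot: keyword-triggered intents answer basic issues, and
# anything unrecognized escalates to a human agent.

INTENTS = {
    "reset your password": ("password", "login", "locked"),
    "track your order": ("order", "shipping", "delivery"),
}

def answer(message: str) -> str:
    text = message.lower()
    for intent, keywords in INTENTS.items():
        if any(word in text for word in keywords):
            return f"Bot: here is how to {intent}."
    # No intent matched: hand off to a person.
    return "Bot: connecting you to a human agent."

print(answer("My order never arrived"))   # routed to "track your order"
print(answer("The app ate my homework"))  # escalated to a human
```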

Narrow AI robots aren't the ones you could talk with for hours on end, nor can they perform tasks beyond their scope. But they can outperform humans at the specific tasks they're designed to do and can eliminate numerous tedious tasks. Narrow AI is where we are right now, and strong AI is our next destination.

Want to know more about how artificial intelligence is being utilized to make the internet a better place? Then check out the role of AI in checking plagiarism.


Is AI the new spokesperson for your business? – Computer Business Review


If AI is rolled out too rapidly and irresponsibly, it could have far-reaching consequences for millions.

AI has been named one of the most important emerging trends in business technology for 2017 in a report by Accenture, but are we ready for it?

The report, Technology Vision, is a yearly look at the biggest trends in business for the coming year. The 2017 iteration names AI as one of the principal changes that will shape the business landscape for years to come.

AI itself has become increasingly prevalent, and as we move closer towards self-driving cars and responsive voice commands for our smart homes, its importance cannot be overstated. The more integrated artificial intelligence becomes in our daily lives, the more businesses should strive to adopt it into their operations, though they should be mindful of its potential consequences.

Accenture believes that AI has the potential to become a spokesperson for a business, even becoming more recognisable than the brand itself. As AI replaces interfaces and reshapes interactions with customers, it will come to represent a business's image as a whole.

Amazon Echo is a great example. Now used in more than 3 million homes, the smart speaker utilises an AI known as Alexa in an effort to streamline home living. The device is capable of enhancing customer experiences by giving access to weather and traffic reports as well as offering the ability to order much of Amazon's listings just by talking to it.

Paul Daugherty, Accenture chief technology & innovation officer, said: "As technology transforms the way we work and live, it raises important societal challenges and creates new opportunities. Ultimately, people are in control of creating the changes that will affect our lives, and we're optimistic that responsive and responsible leaders will ensure the positive impact of new technologies."

The Technology Vision survey of over 5,000 IT and business executives revealed that over 79% believe that AI will help accelerate technology adoption, with a further 85% stating that they would be making extensive investments in the technology over the next three years.

Current examples of AI being utilised outside of consumer applications include the Rhizabot, an AI program that can interpret complex business analysis questions, removing the need to rephrase them into simplified, easily parsed queries. In agriculture, too, farmers have been using AI crop management tools that let machines learn how best to tend crops, whether by adjusting water and fertiliser supply or removing sprouts that would hinder the growth of others.

Accenture estimates that in five years customers will be less impressed by brands and place much more stock in how well the AI interface functions. In seven years, most interfaces will have moved beyond screens and become integrated into daily tasks, and in ten years, it predicts, AI assistants will be so intertwined with day-to-day tasks that they will maintain constant productivity through outlets such as creating video summaries immediately after meetings.

However, this rapid integration and expansion of AI throughout the business world could have a serious domino effect. For instance, if certain positions can be phased out in favour of increasingly more advanced AI, it has the potential to create serious economic inequality. Because the AI requires no compensation for its time, the potential profit from its service will be distributed among a smaller number of people.

Recently, Bill Gates, founder of Microsoft, suggested that should robots supplant humans in the workplace they should be taxed as a human would.

In an interview with Quartz, Gates said: "Right now, the human worker who does, say, $50,000 worth of work in a factory, that income is taxed and you get income tax, Social Security tax, all those things. If a robot comes in to do the same thing, you'd think that we'd tax the robot at a similar level."

The billionaire philanthropist believes that the income tax made from robots could be used to support social systems and train others in more skilled labour in response to growing automation.

As more and more tasks become automated, it seems inevitable that an AI could eventually be used to replace a customer service line or a call centre. Amazon's Alexa is just one of many AIs that use continual physical interactions and cloud learning to constantly improve its own speech patterns and usability. During initial tests, Alexa's response time was three seconds, too long for a fluent conversation; since then, Amazon has reduced that time to under 1.5 seconds.

Will we see rapid growth in unemployment if artificial intelligence is rolled out irresponsibly? If self-driving technology can ever be applied to heavy goods vehicles, it could very well put millions of truck drivers out of work worldwide.

Emma McGuigan at Accenture said that lorry drivers are a long way from being replaced, so that is not an immediate concern. She believes that AI will cause disruption in the business world, but that the disruption will be a net positive. Accenture introduced AI into its India business, but rather than leaving many unemployed, it actually allowed 20,000 people to be redeployed to other tasks where a human touch is necessary.

She said: "I don't think the pace of AI will affect work; I think it will actually increase it."

For many, the concept of integrated artificial intelligence seems to be a question of when, not if, so as the business world moves forward it's important to ensure that this technology is implemented in a safe and secure way that maximises benefit for businesses, employees, and consumers alike.


What Chess Can Teach Us About the Future of AI and War – War on the Rocks

This article was submitted in response to the call for ideas issued by the co-chairs of the National Security Commission on Artificial Intelligence, Eric Schmidt and Robert Work. It addresses the first question (part a.), which asks how will artificial intelligence affect the character and/or the nature of war.

***

Will artificial intelligence (AI) change warfare? It's hard to say. AI itself is not new: the first AI neural network was designed in 1943. But AI as a critical factor in competitions is relatively novel and, as a result, there's not much data to draw from. However, the data that does exist is striking. Perhaps the most interesting examples are in the world of chess. The game has been teaching military strategists the ways of war for hundreds of years and has been a testbed for AI development for decades.

Military officials have been paying attention. Deputy Defense Secretary Robert Work famously used freestyle (or "centaur") chess to promote the third offset strategy, in which humans and computers work together, combining human strategy and computer speed to eliminate blunders while allowing humans to focus on the big picture. Since then, AI and supercomputers have continued to reshape how chess is played. Technology has helped to level the playing field: the side with the weaker starting position is no longer at such a disadvantage. Likewise, intimidation from the threat of superhuman computers has occasionally led to some unorthodox behaviors, even in human-only matches.

The experience of AI in the chess world should be instructive for defense strategists. As AI enters combat, it will first be used just in training and in identifying mistakes before they are made. Next, improvements will make it a legitimate teammate, and if it advances to superhuman ability in even narrow domains of warfighting, as it has in chess, then it could steer combat in directions that are unpredictable for both humans and machines.

What Does Chess Say About AI-Human Interaction?

Will AI replace soldiers in war? The experience of using AI and machine learning in chess suggests not. Even though the best chess today is played by computers alone, humans remain the focus of the chess world. The world computer chess championship at the International Conference on Machine Learning in Stockholm attracted a crowd of only three when I strolled by last year. In contrast, the human championship was streamed around the globe to millions. In human-only chess, though, AI features heavily in the planning process, the results of which are called "prep." Militaries are anticipating a similar planning role for AI, and even automated systems without humans rely on a planning process to provide prep for the machines. The shift toward AI for that process will affect how wars are fought.

To start, computers are likely to have an equalizing effect on combat as they have had in chess. The difference in ability among the top competitors in chess has grown smaller, and the advantage of moving first has shrunk. That was evident in last year's human-only chess championship, where the competitors had the closest ratings ever in a championship and the best-of-12 match opened with 12 straight draws for the first time. There have been more draws than wins in every championship since 2005, and though it is not exactly known why, many believe it is due to the influence of superhuman computers aiding underdogs, teaching defensive play, or simply perfecting the game.

AI is likely to level the military playing field because progress is being driven by commercial industry and academia, which will likely disseminate their developments more widely than militaries do. That does not guarantee all militaries will benefit equally. Perhaps some countries will have better computers, be able to pay for more of them, or have superior data to train with. But the open nature of computing resources makes cutting-edge technology available to all, even if that is not the only reason for equalization.

AI Favors the Underdog and Increases Uncertainty

AI seems to confer a distinct benefit to the underdog. In chess, black goes second and is at a significant disadvantage as a result. Fabiano Caruana, a well-known American chess player, claimed that computers are benefiting black. He added that computer analysis helps reveal many playable variations and moves that were once considered dubious or unplayable. In a military context, the ways to exert an advantage can be relatively obvious, but AI planning tools could be adept at searching and evaluating the large space of possible courses of action for the weaker side. This would be an unwelcome change for the United States, which has benefited from many years of military superiority.

Other theories exist for explaining the underdog's improvement in chess. It may be that computers are simply driving chess toward its optimum outcome, which some argue is a tie. In war, it could instead be that perfect play leads to victory rather than a draw. Unlike chess, the competitors are not constrained to the same pieces or set of moves. Then again, in a limited war where mass destruction is off the table, both sides aim to impose their will while restricting their own pieces and moves. If perfect play in managing escalation does lead to stalemate, then AI-enhanced planning or decision-making could drive toward that outcome.

However, superhuman computers do not always drive humans toward perfect play and can in fact drive them away from it. This happened in a bizarre turn in last year's chess world championship, held in London. The Queen's Gambit Declined, one of the most famous openings that players memorize, was used to kick off the second of the 12 games in the London match, but on the tenth move the challenger, Caruana, playing as black, didn't choose either of the standard next moves in the progression. During planning, his computers had helped him find a move that past centuries had all but ignored. When the champion Magnus Carlsen, now the highest-rated player in history, was asked how he felt upon seeing the move, he recounted being so worried that his actual response can't be reproduced here.

It is not so much that Caruana had found a new move that was stronger than the standard options. In fact, it may have even been weaker. But it rattled Carlsen because, as he said, "The difference now is that I'm facing not only the analytical team of Fabiano himself and his helpers but also his computer help. That makes the situation quite a bit different." Carlsen suddenly found himself in a theater without the aid of electrical devices, with only his analytical might against what had become essentially a superhuman computer opponent.

His response might presage things to come in warfare. The strongest moves available to Carlsen were ones that the computer would have certainly analyzed and his challenger would have prepared for. Therefore, Carlsens best options were either ones that were certainly safe or ones that were strange enough that they would not have been studied by the computer.

When asked afterward about a relatively obvious option that he didn't choose seven moves later in the game, Carlsen joked, "Yeah, I have some instincts... I figured that [Caruana] was still in prep and that was the perfect combination." Fear of the computer drove the champion, arguably history's best chess player, to forgo a move that appeared to be the perfect combination in favor of a safer defensive position, a wise move if Caruana was in fact still in prep.

In war, there will be many options for avoiding the superhuman computing abilities of an adversary. A combatant without the aid of advanced technology may choose to withdraw or retreat upon observing the adversary doing something unexpected. Alternatively, the out-computed combatant might drive the conflict toward unforeseen situations where data is limited or does not exist, so as to nullify the role of the computer. That increases uncertainty for everyone involved.

How Will the U.S. Military Fare in a Future AI World?

The advantage may not always go to the competitor with the most conventional capabilities or even the one that has made the most computing investment. Imagine the United States fighting against an adversary that can jam or otherwise interfere with communications to those supercomputers. Warfighters may find themselves, like Carlsen, in a theater without the aid of their powerful AI, up against the full analytical might of the adversary and their team of computers. Any unexpected action taken by the adversary at that point (e.g., repositioning their ground troops or launching missile strikes against unlikely locations) would be cause for panic. The natural assumption would be that adversary computers found a superior course of action that had accounted for the most likely American responses many moves into the future. The best options then, from the U.S. perspective, become those that are either extremely cautious, or those that are so unpredictable that they would not have been accounted for by either side.

AI-enabled computers might be an equalizer that helps underdogs find new playable options. However, this isn't the only lesson that chess can teach us about the impact of AI-enabled supercomputers on war. For now, while humans still dominate strategy, there will still be times when the computer provides advantages in speed or in avoiding blunders. When the computer overmatch becomes significant and apparent, though, strange behaviors should be expected from the humans.

Ideally, humans deprived of their computer assistants would retreat or switch to safe and conservative decisions only. But the rules of war are not as strict as the rules of chess. If an enemy turns out to be someone aided by feckless computers, instead of superhuman computers aided by feckless humans, it may be wise to anticipate more inventive, perhaps even reckless, human behavior.

Andrew Lohn is a senior information scientist at the nonprofit, nonpartisan RAND Corporation. His research topics have included military applications of AI and machine learning. He is also co-author of How Might Artificial Intelligence Affect the Risk of Nuclear War? (RAND, 2018).

Image: U.S. Marine Corps (Photo by Lance Cpl. Scott Jenkins)
