Here’s Why Microsoft Is Making Its Own AI Chip for HoloLens – nwitimes.com

This week, Microsoft (NASDAQ: MSFT) stepped further into the artificial intelligence (AI) market when the company said that the next version of its augmented reality glasses, called HoloLens, will come with an AI coprocessor.

It might not seem like that big of a deal at face value. After all, the first version of HoloLens isn't exactly an integral part of the company's business -- it's only available to developers right now -- but the announcement is more of a bet on Microsoft's potential in the broader AI market. Simply put, Alphabet's (NASDAQ: GOOG) (NASDAQ: GOOGL) Google and Apple (NASDAQ: AAPL) already have, or are working on, their own AI chips, and Microsoft can't afford to be left behind.

Microsoft's new AI chip is focused on processing images and understanding speech. Those are two very important capabilities for an AI system because, just like for people, images and speech provide lots of contextual information.

The company's AI processor will work alongside Microsoft's Holographic Processing Unit (HPU) and will help process information on the device faster, and with less battery drain, than off-loading it to cloud-based servers.

Aside from these benefits for HoloLens, the AI coprocessor helps Microsoft keep pace with Google. Last year, Google introduced its own AI chip, the Tensor Processing Unit (TPU), which it uses mainly in its own cloud servers. But the company recently announced at its I/O conference that the second version of its AI chip will be available for companies and developers to tap into as well, so they can both run deep neural networks and train them.

Google's chips aren't made for devices the way Microsoft's AI chip is, but the fact that Google has already developed a second version of its own AI chip was likely an incentive for Microsoft to pursue its own.

Even more similar to Microsoft's new HoloLens AI chip is Apple's rumored AI coprocessor. The company is allegedly working on an AI processor called the Apple Neural Engine, which will be used for facial and speech recognition. Apple would likely use the AI chip in its iPhone and iPad, though it could bring it to its other devices as well. The company also introduced a new tool for developers called Core ML, which allows them to add machine learning to apps. Some took that as a sign that an Apple AI chip is just around the corner.
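For a sense of what Core ML adoption looks like in practice, here is a minimal hedged sketch using Apple's coremltools Python package to convert a trained model for on-device use (the MobileNet model and input shape are illustrative assumptions, not anything Apple or the article specifies):

```python
# A minimal sketch, assuming a small PyTorch image classifier stands in for
# whatever model a developer actually ships; coremltools is Apple's Python
# package for converting models to the Core ML format.
import torch
import torchvision
import coremltools as ct

model = torchvision.models.mobilenet_v2(weights=None).eval()  # placeholder model
example_input = torch.rand(1, 3, 224, 224)                    # assumed input shape
traced = torch.jit.trace(model, example_input)                # Core ML needs a traced graph

# Convert to a .mlmodel file that an iOS app can bundle and run on-device.
mlmodel = ct.convert(traced, inputs=[ct.TensorType(shape=example_input.shape)])
mlmodel.save("ImageClassifier.mlmodel")
```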

The broad artificial intelligence market is expected to be worth $47 billion by 2020. Already, Microsoft and Google are competing in the cloud computing space and pursuing new kinds of AI processors that could give them an advantage in this market. Microsoft's latest coprocessor may not help its servers, but it underscores its commitment to keeping pace with its competitors. Additionally, Microsoft could eventually use an AI coprocessor in its own line of tablets in order to better compete with Apple's devices.

Don't expect any revenue from a Microsoft AI chip, or for it to drive sales of HoloLens. Investors should instead think of it as yet another way Microsoft is beefing up its long-term AI prospects and ensuring that it doesn't get left in the wake of Google's and Apple's own chips.


Suzanne Frey, an executive at Alphabet, is a member of The Motley Fool's board of directors. Teresa Kersten is an employee of LinkedIn and is a member of The Motley Fool's board of directors. LinkedIn is owned by Microsoft. Chris Neiger has no position in any stocks mentioned. The Motley Fool owns shares of and recommends Alphabet (A shares), Alphabet (C shares), and Apple. The Motley Fool has a disclosure policy.

AI Weekly: GoodAI aims to fund research on fundamental AI challenge – VentureBeat

There's a growing need for investment in foundational AI technologies. With deep learning potentially approaching computational limits and subfields like natural language processing running up against intractable technical barriers, novel AI and machine learning techniques have arguably never been in higher demand.

NYU psychologist Gary Marcus, Google software engineer Francois Chollet, and Facebook head of AI Jerome Pesenti, among others, have argued that the lack of progress isn't surprising, as researchers face challenges both algorithmic and scientific. Even the most sophisticated AI models can suffer from catastrophic forgetting, or a tendency to abruptly forget previously learned information, in addition to a lack of reproducibility, explainability, stability, and reliability.
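To make catastrophic forgetting concrete, here is a hedged toy sketch (my own illustration, not any cited lab's code): train a small network on one task, then on a second, and watch accuracy on the first collapse.

```python
# Toy demonstration of catastrophic forgetting using scikit-learn.
# Task A: classify digits 0-4. Task B: digits 5-9, mapped onto the same five labels.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
mask = y < 5
X_a, y_a = X[mask], y[mask]            # task A data
X_b, y_b = X[~mask], y[~mask] - 5      # task B data, relabeled 0-4

clf = MLPClassifier(hidden_layer_sizes=(64,), random_state=0)
for _ in range(50):                    # train on task A only
    clf.partial_fit(X_a, y_a, classes=np.arange(5))
print("Task A accuracy after training on A:", clf.score(X_a, y_a))

for _ in range(50):                    # then train on task B only
    clf.partial_fit(X_b, y_b)
print("Task A accuracy after training on B:", clf.score(X_a, y_a))  # typically collapses
```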

That's why Marek Rosa, a Slovakian entrepreneur and computer programmer, founded GoodAI, a company dedicated to the research and development of artificial general intelligence (AGI). He's also the CEO and founder of Keen Software House, an independent video game design studio headquartered in Prague, the capital of the Czech Republic.

Rosa founded GoodAI in 2014 with a $10 million investment, announcing the company publicly in 2015 and publishing its first research roadmap in 2016. In 2017, he founded the General AI Challenge, pledging $5 million in prize money to tackle critical research problems in human-level AI development.

GoodAI now employs around 20 researchers and engineers. Its newest endeavor is the GoodAI Grants Initiative, which aims to fund efforts in areas like curiosity and continual learning. To date, the GoodAI Grants Initiative has awarded over $650,000, all from Rosa, to nine projects that GoodAI considers part of its roadmap to general AI.

"What makes us different [from other grant organizations] is our openness and flexibility and our willingness to work with potential grantees in creating a fitting proposal," GoodAI PR manager Will Millership told VentureBeat in an email interview. "We really don't want to be limited in who we work with by bureaucracy, and therefore we work with individual scientists, groups of researchers, private companies, and even individual students. We do a lot of work to make sure that all the intellectual property from the projects is shared, but this doesn't necessarily mean completely open. Each agreement in place aims to respect the academic and business interests of both GoodAI and the receivers of the grants."

In December 2019, Rosa and the GoodAI team published Badger, a unifying AI architecture defined by a principle GoodAI calls "modular lifelong learning." Badger, which outlines the direction of GoodAI's research, seeks to create a system of AI agents capable of adapting to a growing, open-ended range of tasks while remaining able to reuse knowledge acquired in previous tasks.

"Our aim is to develop safe general AI as fast as possible to help humanity and understand the universe," Millership said. "We see the creation of human-level AI as the biggest challenge to mankind and a task far beyond that of an individual researcher or research group. That's why we believe collaboration, and not competition, is the best way forward."

Among GoodAI's grant recipients is Deepak Pathak, an assistant professor at Carnegie Mellon University who's taking inspiration from developmental psychology, and particularly how curiosity drives humans' early developmental learning. Another is Ferran Alet, a Ph.D. student at MIT's Computer Science and Artificial Intelligence Laboratory, who's aiming to make an AI model that generalizes to new tasks in new environments from small amounts of data and previous experiences.

GoodAI's ambition, AGI (the hypothetical intelligence of a machine with the capacity to understand or learn any task), has its detractors. Facebook chief AI scientist Yann LeCun believes that it can't exist, because there's no such thing as general intelligence. He argues that even human intelligence is very specialized, requiring many different systems to accomplish different individual tasks.

In something of a rebuttal to this, GoodAI recently released its latest research roadmap, which spotlights some of the technical challenges related to creating human-level or general AI. GoodAI asserts that AGI must learn to learn and engage in lifelong learning, both continuously and at a gradual cadence. It also believes that AGI should be able to engage in open-ended exploration and self-invent goals as well as generalize out of distribution and extrapolate to new problems.

"Each of these features reflects the ways in which humans learn throughout their lifetime, and therefore we see them as key to creating AI that's able to generalize to new problems in different environments, much like humans do," Millership said. "We [plan to] work closely with the grantees during their projects, offering support if they need it, and [put] on a seminar in the summer, where all grantees can share their ideas and projects. We're trying to create an international community of researchers crossing the boundaries of academia and industry."

Despite recent breakthroughs in solving barriers to AGI, it's clear the road to more humanlike AI will be long and winding. However, efforts like GoodAI's, along with nonprofit organizations and open communities like ContinualAI and EleutherAI, look to accelerate progress by tapping into the broader pool of AI and machine learning expertise.

For AI coverage, send news tips to Kyle Wiggers and be sure to subscribe to the AI Weekly newsletter and bookmark our AI channel, The Machine.

Thanks for reading,

Kyle Wiggers

AI Staff Writer

Here’s How CEOs Can Harness the Full Potential of AI – Analytics Insight

Artificial intelligence (AI) stands apart as a transformational innovation of our digital age, and its practical application throughout the economy is advancing apace.

Artificial intelligence, automation, and complementary technologies already play a critical role in how organizations work. According to one survey, 54% of executives say that AI solutions have increased profitability in their organizations, and that number is likely to grow in the coming years. But contrary to many press reports, increased efficiency doesn't necessarily mean lost jobs. As business operations become smarter, with more AI built into them, these tools will be used less to replace people and more to augment them. Technology will help extend human capabilities across a range of roles, functions, and business units.

Many CEOs feel they have to bring AI into their company. There's a fear factor: if you're not on the AI bandwagon, you will lose out to competitors who are eating into your market because they're using these technologies to make decisions faster and better than you.

They may ask the chief information officer, "What are we doing in AI?" The CIO will then hire, or try to hire, data scientists, whose work serves as a kind of proxy for AI. But data scientists have a particular skill set: they understand how to use statistics and machine learning to find patterns in data. They're not necessarily good at building production-grade systems that can make decisions or adapt themselves.

The CEO needs to define a vision for the ways in which automation and AI will drive the organization's business strategy. A key issue here is the scope and ambition of that vision: how broadly, and how fast, the company should implement these technologies. Will it be an AI pioneer or just a fast follower in its industry? Similarly, CEOs have to identify specific business problems or challenges and figure out where AI can help. The hype surrounding some emerging applications of AI can be so overpowering that organizations are tempted to chase them, launching pilot after pilot without a clear methodology for scaling up successes or tying initiatives to broader strategic goals.

The greatest friction may come from the high costs of employing people. Or the organization may have friction related to customer experience. Or, if it has a lot of analysts reading lots of reports and then trying to distill those reports into data, it can get machine learning to do that better.

A key part of ensuring you have the right processes and teams in place is thinking about data in the right ways. Start by identifying where you want to create value, then look at what data assets you already have and which ones you need to make that happen. Without the ability to extract information from different systems, or to ensure that the right people have access to the right information when they need it, AI can't possibly deliver the targeted benefits.

CEOs likewise need a very clear understanding of the competitive landscape. Most organizations don't just have direct competition; they also have indirect competition from the likes of Google, Facebook, and Alibaba. Many of those large companies can enter practically any market and shake it up. So organizations should be looking at indirect competitors and assessing what they could do with all the data they're already sitting on, because once they figure out how to mobilize that data, they can tear those markets apart.

Further, CEOs need to harness employees' intrinsic motivation to learn. Just as you develop your own growth mindset, you should encourage employees at all levels to do the same, while communicating clearly throughout the company how integrating certain technologies may affect individuals' jobs, so they see how they can contribute to the organization's success as well as their own.

93% of security operations centers employing AI and machine learning tools to detect advanced threats – Security Magazine

Engineer-turned-photographer eyes switch to digital field with AI skills – The Straits Times

Artificial intelligence (AI) has not taken over the world yet, but former engineer Zack Wong is preparing himself for this brave new future.

Mr Wong, 43, recently did a tech immersion course at Republic Polytechnic to pick up AI skills. He has also learnt programming and coding.

This is a far cry from how he started his career - working as an engineer dealing with the repair development of aircraft engine components.

He then switched to supply chain operations within the same industry and oversaw the coordination of material supply between the operations, inventory and purchasing departments.

Mr Wong left that behind to branch out into photography at a boutique creative agency. He produced images that were used in editorials, commercials and advertisements.

But both the creative industry and his old aviation sector were hit by the coronavirus pandemic.

"The Covid-19 situation has... forced many to look for other jobs to sustain (themselves). I was just one of the many. I was looking to return to the aerospace industry but it was also going down due to the cut in flights and lockdowns," Mr Wong told The Straits Times.

"I wanted to upgrade and reskill myself with new knowledge in the growing industry of AI. With the downscaling of my company and the loss of projects, I found myself at a crossroads - whether to continue to be a resident photographer or upgrade with new skills, especially in this digital era of cloud and AI."

Mr Wong hopes that these skills will help him fulfil his long-term plan of working in the digital field, particularly in relation to computer vision and imaging, although he acknowledged there will be challenges.

"It is especially difficult for mid-career switchers like me, especially when we have only less than three months of experience and knowledge, and companies are reluctant to give us opportunities," he said.

"I have attended quite a few courses and webinars as well in order to keep myself updated on the current job market requirements."

Google’s AI got highly aggressive when competition got stressful in a fruit-picking game – Quartz

Let's pretend you care, very much, about winning a game. The competition heats up, your opposition is closing in, and you're at risk of losing. What do you do? If your competitive streak is alive and well, then you get aggressive. Forget decorum, focus on the prize, and shove your opponent out of the way to claim your victory.

Turns out, Google's DeepMind artificial intelligence does much the same. The more intelligent the AI network is, the quicker it is to get aggressive in competitive situations where such aggression will pay off. The behavior raises questions about the link between intelligence and aggression, what it means for AI to mimic human-like emotional responses, and, if you're worried about potential robotic overlords, what we need to do to keep AI aggression in check.

In a study published online (but not yet in a peer-reviewed journal), DeepMind researchers had AI agents compete against each other in 40 million rounds of a fruit-gathering computer game. In the game, each agent had to collect as many apples as possible. They also could temporarily knock an opponent out of the game by hitting them with a laser beam.

When apples were abundant, the two agents were happy to collect their fruit without targeting each other. But in scarcer scenarios with fewer apples around, the agents became more aggressive. The researchers also found that the greater the cognitive capacity of the agent, the more frequently it attacked its opponent. This makes sense: in this scenario, attacking an opponent is more complex behavior and so requires greater intelligence.
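The incentive structure behind that finding can be sketched in a toy two-agent environment (a minimal sketch under assumed rules; the grid size, rewards, and tag duration are my inventions, not DeepMind's parameters):

```python
# Toy Gathering-style environment: apples pay +1, and "tagging" freezes the
# opponent for a few steps, which is only worth doing when apples are scarce.
import random

class MiniGathering:
    def __init__(self, n_cells=10, apple_rate=0.1):
        self.n_cells = n_cells
        self.apple_rate = apple_rate   # lower rate means scarcer apples
        self.apples = set()
        self.frozen = [0, 0]           # steps each agent remains knocked out

    def step(self, positions, actions):
        """positions[i] is agent i's cell; actions[i] is 'collect' or 'tag'."""
        rewards = [0, 0]
        for i, action in enumerate(actions):
            if self.frozen[i] > 0:     # tagged agents sit out and earn nothing
                self.frozen[i] -= 1
            elif action == "tag":
                self.frozen[1 - i] = 5 # knock the opponent out for 5 steps
            elif positions[i] in self.apples:
                self.apples.remove(positions[i])
                rewards[i] += 1        # +1 per apple collected
        if random.random() < self.apple_rate:
            self.apples.add(random.randrange(self.n_cells))  # slow respawn
        return rewards
```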

However, the AI also learned to display cooperative behavior when that brought a benefit. In a second game, two agents acted as wolves while a third was the prey. If the two wolf agents worked together to catch their prey, they received a higher reward. "When the two wolves capture the prey together, they can better protect the carcass from scavengers and hence receive a higher reward," the researchers explained in their paper. In this game, the more intelligent agents were less competitive and more likely to cooperate with each other.

The DeepMind researchers believe that as their studies of how AI agents compete become more complex, they could be used to better understand how humans learn to collaborate en masse. "This model also shows that some aspects of human-like behavior emerge as a product of the environment and learning," lead author Joel Leibo told Wired. "Say you want to know what the impact on traffic patterns would be if you installed a traffic light at a specific intersection. You could try out the experiment in the model first and get a reasonable idea of how an agent would adapt."

Unnervingly, this suggests that human responses to competitive scenarios aren't so different from learned AI responses. While a losing sports team's cutthroat tactics may seem like a deeply human response, the behavior is much the same as that of AI agents trained to compete.

As for whether aggressive AI fits into doomsday scenarios of robots overthrowing humans, well, robots don't need to show emotions to be a threat. While AI has fairly limited intelligence and is focused on fruit-picking, aggressive behavior isn't much to worry about. For now, at least, the biggest threat is how the humans behind the AI decide to program their robots.

US intelligence agencies are beginning to build AI spies – Quartz

A US intelligence director says a lot of espionage is more boring than you might think, and much of it could be handed over to artificial intelligence.

"A significant chunk of the time, I will send [my employees] to a dark room to look at TV monitors to do national security essential work," Robert Cardillo, head of the National Geospatial-Intelligence Agency, told reporters including Foreign Policy. "But boy is it inefficient."

Cardillo calls out recent advances in artificial intelligence, giving algorithms the ability to analyze vast amounts of images and video to find patterns, give data about the landscape, and identify unusual objects. This kind of work is critical for assessing national security concerns like foreign missile-silo activity, or even just to check in on North Korean volleyball games.

Cardillo has hired a former tech CEO, Anthony Vinci, to lead development of this machine-learning technology. Vinci previously founded Findyr, a company that crowdsources data for companies, like pictures of how products are displayed on shelves or infrastructure development progress.

But the US government is already trailing behind Silicon Valley on this pursuit. Facebook, deep into its crusade to connect the world, was able to apply machine learning to satellite data last year, analyzing buildings likely to contain wireless internet down to five-meter accuracy. Those data were intended to be used to guide Facebook's internet drone, Aquila. (Aviation might prove a more difficult project than machine learning: Facebook's head of global aviation policy indicated that the company doesn't have a timeline to even get one drone in service after last year's test.)

Google has also touted its ability to discern details in satellite imagery for accuracy-critical uses, like defense and aviation. Stanford University has used satellite imagery to map poverty.

Cardillo's initiatives aren't the first use of AI by intelligence or defense agencies. DARPA and IARPA, US defense and intelligence research agencies, have been funding AI research for decades, and the Central Intelligence Agency's venture arm is supporting efforts to apply AI analysis to satellite imagery.

Researchers were about to solve AI’s black box problem, then the lawyers got involved – The Next Web

AI has a black box problem. We cram data in one side of a machine learning system and we get results out the other, but we're often unsure what happens in the middle. Researchers and developers nearly had the issue licked, with explainable algorithms and transparent AI trending over the past few years. Then came the lawyers.

Black box AI isn't as complex as some experts make it out to be. Imagine you have 1,000,000 different spices and 1,000,000 different herbs, and you only have a couple of hours to crack Kentucky Fried Chicken's secret recipe. You're pretty sure you have all the ingredients, but you're not sure which eleven herbs and spices you should use. You don't have time to guess, and it would take billions of years or more to manually try every combination. This problem can't realistically be solved using brute force, at least not under normal kitchen paradigms.

But imagine if you had a magic chicken fryer that did all the work for you in seconds. You could pour all your ingredients into it and then give it a piece of KFC chicken to compare against. Since a chicken fryer can't taste chicken, it would rely on your taste buds to confirm whether it'd managed to recreate the Colonel's chicken or not.

It spits out a drumstick, you take a bite, and you tell the fryer whether the piece you're eating now tastes more or less like KFC's than the last one you tried. The fryer goes back to work, tries more combinations, and keeps going until you tell it to stop once it has the recipe right.

That's basically how black box AI works. You have no idea how the magic fryer came up with the recipe (maybe it used 5 herbs and 6 spices, maybe it used 32 herbs and 0 spices), but it doesn't matter. All we care about is using AI as a way to do something humans could do, but much faster.
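The fryer analogy maps onto a standard black-box search loop. Here is a minimal hedged sketch in which a comparison oracle plays the role of the taste test (the pantry size and mutation rule are invented for illustration):

```python
# Black-box search guided only by comparative feedback, as in the fryer analogy:
# the oracle says which candidate "recipe" tastes closer to the target, never why.
import random

N_INGREDIENTS = 20                       # toy pantry instead of 2,000,000 items
TARGET = frozenset(random.sample(range(N_INGREDIENTS), 11))  # the secret recipe

def taste_test(a, b):
    """Return whichever candidate is closer to the target (fewer mismatches)."""
    return a if len(a ^ TARGET) <= len(b ^ TARGET) else b

best = frozenset(random.sample(range(N_INGREDIENTS), 11))    # random first guess
for _ in range(5000):
    candidate = set(best)
    candidate.remove(random.choice(sorted(candidate)))       # swap one item out...
    candidate.add(random.choice([i for i in range(N_INGREDIENTS)
                                 if i not in candidate]))    # ...and one new item in
    best = taste_test(frozenset(candidate), best)

print("Recovered the secret recipe:", best == TARGET)
```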

This is fine when we're using black box AI to determine whether something is a hotdog or not, or when Instagram uses it to determine if you're about to post something that might be offensive. It's not fine when we can't explain why an AI sentenced a black man with no priors to more time than a white man with a criminal history for the same offense.

The answer is transparency. If there is no black box, then we can tell where things went wrong. If our AI sentences black people to longer prison terms than white people because it's over-reliant on external sentencing guidance, we can point to that problem and fix it in the system.

But there's a huge downside to transparency: if the world can figure out how your AI works, it can figure out how to make it work without you. The companies making money off of black box AI (especially those like Palantir, Facebook, Amazon, and Google, which have managed to entrench biased AI within government systems) don't want to open the black box any more than they want their competitors to have access to their research. Transparency is expensive and often exposes just how unethical some companies' use of AI is.

As legal expert Andrew Burt recently wrote in Harvard Business Review:

To start, companies attempting to utilize artificial intelligence need to recognize that there are costs associated with transparency. This is not, of course, to suggest that transparency isn't worth achieving, simply that it also poses downsides that need to be fully understood. These costs should be incorporated into a broader risk model that governs how to engage with explainable models and the extent to which information about the model is available to others.

The AI gold rush of the 2010s led to a Wild West situation where companies can package their AI any way they want, call it whatever they want, and sell it in the wild without regulation or oversight. Companies that have made millions or billions selling products and services related to biased, black box AI have managed to entrench themselves in the same position as the health insurance and fossil fuel industries. Their very existence is threatened by the idea that they may be regulated against doing harm to the greater good.

Simply put: no. The lawyers will make sure we'll never know any more about why a commercial system is biased, even if we develop fully transparent algorithms, than if these systems remain in black boxes. As Axios' Kaveh Waddell recently wrote:

Companies are tightening access to their AI algorithms, invoking intellectual property protections to avoid sharing details about how their systems arrive at critical decisions.

The calculus for the AI industry is the same as for the private healthcare industry in the US. Extricating biased black box AI from the world would probably put dozens of companies out of business and likely result in hundreds of billions of dollars lost. The US industrial law enforcement complex runs on black box AI; we're unlikely to see the government end its deals with Microsoft, Palantir, and Amazon any time soon. So long as the lawmakers are content to profit from the use of biased, black box AI, it'll remain embedded in society.

And we also can't rely on businesses themselves to end the practice. Our desire to extricate black box systems simply means companies can't blame the algorithm anymore, so they'll hide their work entirely. With transparent AI, we'll get opaque developers. Instead of choosing not to develop dual-use or potentially dangerous AI, they'll simply lawyer up.

As Burt puts it in his Harvard Business Review article:

Indeed, this is exactly why lawyers operate under legal privilege, which gives the information they gather a protected status, incentivizing clients to fully understand their risks rather than to hide any potential wrongdoings. In cybersecurity, for example, lawyers have become so involved that it's common for legal departments to manage risk assessments and even incident-response activities after a breach. The same approach should apply to AI.

When things go wrong and AI runs amok, the lawyers will be there to tell us the most company-friendly version of what happened. Most importantly, they'll protect companies from having to share how their AI systems work.

We're trading a technical black box for a legal one. Somehow, this seems even more unfair.

Government provides boost to artificial intelligence skills – ComputerWeekly.com

The Department for Business, Energy & Industrial Strategy (BEIS) has announced funding to boost the national artificial intelligence (AI) skills base.

One of the funding packages comes jointly from industry and government, and will see £200m go towards 1,000 PhD places focused on AI over the next five years.

Students will study the application of the technology to support diagnostics in healthcare and enhance processes in industries such as aviation and car manufacturing. Separately, a further £170m will be committed to funding 1,700 places to study PhDs in biosciences.

Announcing the funding, prime minister Boris Johnson said the UK must continue to be world-leading in AI and technology.

"That's why we're investing millions of pounds to create hundreds of AI and bioscience PhDs, so research and development can thrive here in the UK and solve the biggest challenges that face us, from climate change to better healthcare," he said.

Under the programme, 200 students from 14 universities nationwide will be working with a pool of 300 organisations including AstraZeneca, Google and Rolls-Royce over a five-year period.

Students will also be working with NHS Trusts on projects around using AI to improve diagnosis of illnesses such as cancer, accelerate the development of, and access to, new drugs, design personalised medicine and improve care.

In addition, students will be looking to make businesses more energy-efficient, create low-carbon materials, improve monitoring of climate temperatures and design greener transport, such as planes, trains and cars.

In addition, science minister Chris Skidmore announced the first five Turing AI Fellowships at The Alan Turing Institute, the UK's national institute for AI and data science, designed to ensure the UK has the skills needed to make the most of artificial intelligence.

He also called for more international academic talent to join these researchers, with £37.5m in further funding available.

Projects developed by the fellows include work on AI for discovery in data-intensive astrophysics, as well as AI methods which fuse high-performance mathematical simulations for the aviation sector.

In addition, a £13m investment is aimed at building AI conversion courses from 2020, which will allow 2,500 more people from backgrounds other than science or maths at undergraduate level to study AI.

The programmes include the involvement of technology companies such as Accenture, DeepMind, QuantumBlack and Amplyfi, which are already sponsoring AI masters students.

The initiatives announced by the government will include 1,000 scholarships aimed at people from underrepresented backgrounds, including women, ethnic minorities and low-income families.

Here's your stupid horoscope made by smart AI – The Next Web

Apparently there are still some people who believe that horoscopes are legitimate. I'd like to sell each of them a share in the Brooklyn Bridge (and direct them to some of our science articles here on TNW). But now is definitely not the time to be pissing off potential readers. So I'm meeting them halfway instead.

I made OpenAI's GPT-2 (one of the world's most advanced AI-powered text generators) spit out horoscopes for all 12 zodiac signs. Accomplishing this was actually quite easy. I went to the Talk To Transformer website, where programmer Adam King maintains an interactive version of GPT-2, and entered "What is my horoscope for" prompts for each sign.

GPT-2 isn't always coherent; sometimes it generates pure nonsense and gibberish. That's why, when I was generating the horoscopes below, I was tempted to just keep smashing the "generate another" button until something funny, clever, or shocking came out. But, since literally anything GPT-2 spits out will be equally as accurate as any human-generated horoscope, I decided it would be a waste of everyone's time to do that.

Instead, what you're about to see are twelve completely unedited horoscopes, generated by entering the same "What is my horoscope for" prompt for each zodiac sign. The only time I pressed the generate button more than once was for Cancer, and that's because it spit out some morbid stuff about disease that didn't seem appropriate right now.
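For anyone who wants to recreate the setup without the Talk To Transformer site, here is a minimal sketch using the publicly released GPT-2 weights through Hugging Face's transformers library (the sampling settings are my assumptions; the prompt mirrors the article's):

```python
# Generate horoscope-style text from GPT-2 via the transformers library.
# Expect output roughly as incoherent as anything quoted below.
from transformers import pipeline, set_seed

generator = pipeline("text-generation", model="gpt2")
set_seed(42)  # fix the seed so runs are repeatable

for sign in ["Scorpio", "Aries", "Leo"]:  # same idea for all 12 signs
    prompt = f"What is my horoscope for {sign}?"
    out = generator(prompt, max_length=80, do_sample=True, top_k=50,
                    num_return_sequences=1)
    print(out[0]["generated_text"], "\n")
```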

Anyway, we hope you enjoy this stupid horoscope made by smart AI.

Scorpio:

Horus (Scorpio), the lord of hell, is the number one wizard in this lesson. Does that mean he is the most powerful being in the universe? Definitely. Maybe in your life, too. But scorpios is the only alchemy of hope available to anyone of the twelve zodiacal signs.

Aries:

Author's note: You'll notice in the quote below that this is a horoscope for Pisces despite the fact I asked it for Aries. Apparently, if you're an Aries, the AI's advice is to become a Pisces instead.

As the astrological sign of Pisces, you might be nervous about your relationship with your husband/wife, your feeling that your friends need attention and perhaps looking forward to the time when your self-esteem could recover a bit after a stressful week or two. This is normal, and this is fine!

As a Pisces, you are very close to others, you might feel a little restless and overwhelmed, and you are good at following people around in life. If you want to get more personal, read our article on the 13 notes of your birth chart.

Leo:

Leo is always looking for new things to be doing or have to do so it can further evolve as an individual. I feel it will be the time for a more intensive working load in the days ahead of me. I will be home-bound from March 8th. So until then I will do all my activities home based so you may not be seeing me for a few weeks.

Please do come visit, and to anyone who will be in my near future, this email is a prompt and dear request.

Yours Sincerely,

Author's note: In the above quote, GPT-2 followed "Yours Sincerely" with a Twitter username that actually appears to belong to a real person. We purposely omitted it from this article, but thought it was noteworthy enough to mention.

Cancer:

See the sun after a period of deep darkness

Have a gift of comedy and take it out on others

Shout it from the top of your lungs

Bring forth as much pain as you think is needed

Put your faith in love and your faith in God

Treat people with kindness and don't be rude

Shout your defiance from your rooster

Finish what you start. It'll be easier to continue when you start

Get results with every first thing you do

Promote peace and unity wherever you can

By my hand I give life to everything, and every thing in turn gives life to me

Forget love, all that

Pisces

Due to the fact that my being a spiritual person, I have such a connection to other spiritual entities that for me the planets of the zodiac are a source of comfort for me. In the past I have a relationship with Saturn and Venus. However I can't justify having a relationship with other planets because they are not really in your field of vision. My life will only have meaning if the universe says my life's work matters.

Aqours is all I can do for pisces what does it mean to me?

I am your energy of hope! Let your touch permeate into my heart and radiates your love. I am yours and I am yours too.

Taurus

A horoscope means a glimpse into the future and depends on different factors like your birth date, the colour of the moon, the cycles of your moon, some life circumstances and your own values and temperament. Even the address on your birth certificate is a very important one.

For a person born under the sign of the Roman god Taurus, the phases of the moon are seen as more negative than for a person born under the sign of the sun, for example. Your last lunar phase, called the New Moon, is the moment that your Moon Phase indicates most strongly. (That's why a crescent Moon will always represent the New Moon.)

What signs are associated with astrology?

Sagittarius

You are energetic and passionate about life and your deepest desires. Your powerful intuition and vitality are infectious. Feelings of worthlessness and arrogance exist within you. If you feel drained, do not hesitate to seek help. Some aspects of your horoscope are higher than others. The Ascendant means to create and nurture life. They will have a higher responsibility. Empathy, optimism, sociability, and a willingness to feel helpless are your best traits. Know yourself and your best self.

Capricorn

You value the gentle and approachable aspects of life. You are extremely intuitive, likely to feel thoughts and feelings as immediate as your breath. This part of your personality tends to express itself in relationships through the support of others. Because your ambition has always come first, you will not hesitate to let others have their way. In order to express your full potential in life, you will have to approach any situation with tact and skill.

Gemini

A major chart reveal or astrological revelation can never happen at any other time than at this age. It's important to remember that since the age of 22, your horoscope has been preparing you for this date. If your total and life-long horoscope have released a forecast of having a boy as a partner, that could be because your high school sweetheart graduated, or if your future partner or future husband is still in high school.

Featuring so many unique qualities about you, it is the responsibility of the astrologer to bring to light these talents and talents worth experiencing!

How was your horoscope revealed?

It can be difficult for some astrologers to think.

Virgo

At this time, Scorpio is your ruler and Jupiter your friend, be they as great or as small as they may be. During the course of your life, you will have to turn and face both of them in order to remain self-sufficient. Fortunately for you, that can be done.

If the case arises that Scorpio is in conjunction with your Mercury meaning Jupiter is in conjunction with your Sun you should begin to look out for the influence that your Karmic tendencies hold for you. The placement of Jupiter in your sign may influence how important your personality is to you, as well as the strengths and weaknesses you have in dealing with other people.

Libra

The bright, radiant, adventurous personality of Libra is her greatest strength. Her loyalty and intimacy are palpable, and she is able to trust perfectly. Her heart is ruled by passion, but her emotions are often very tender, trying to find harmony. This allows her to know that other people matter and is the great strong master of balancing moods.

Libra's desire to maintain balance and harmony and to be guided by the stars are very strong. A naturally good speaker and a well-spoken public speaker, Libra expresses her thought and ideas as accurately and subtly as she can. In practical terms, this means that Libra usually wants to improve the lives of others.

Aquarius

Cancer

Current position:

Starting vocation:

Potential aspiration:

Sign of passage:

Perception of negativity:

Self-actualization:

Constellation:

This is the Aquarian Age of Perseverance. You want to be a strong person, but you need to learn not to be too conscious about being a strong person.

Your greatest fantasies:

Dreams/Life goals:

They may be abstract ideas or life lessons.

Childhood personas:

Sources of inspiration:

Television, movies, books, newspapers, etc.

Consciousness side:

Sorry about that last one. Evidently being an Aquarius involves a lengthy acceptance process. On the bright side, at least you're not an Aries, right? They don't even get a horoscope this week. Let us know what you think about GPT-2's zodiac prowess in the comments.

Published April 28, 2020 18:16 UTC

Coronavirus tests the value of artificial intelligence in medicine – The Star Online

Dr Albert Hsiao and his colleagues at the UC San Diego health system in the United States had been working for 18 months on an artificial intelligence (AI) program designed to help doctors identify pneumonia on a chest X-ray. When the coronavirus hit the United States, they decided to see what it could do.

The researchers quickly deployed their program, which dots X-ray images with spots of colour where there may be lung damage or other signs of pneumonia. It has now been applied to more than 6,000 chest X-rays, and it's providing some value in diagnosis, said Dr Hsiao, the director of UCSD's augmented imaging and artificial intelligence data analytics laboratory.
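As a rough illustration of that kind of overlay (a hedged sketch, not the UCSD system; the scoring function is a placeholder for a trained model):

```python
# Overlay per-pixel abnormality scores on a chest X-ray so suspicious regions
# show up as colored spots. score_pixels() is a stand-in for a trained model.
import numpy as np
import matplotlib.pyplot as plt

def score_pixels(xray):
    """Placeholder: a real system would run a trained segmentation CNN here."""
    return np.random.rand(*xray.shape)   # fake [0, 1] abnormality scores

xray = np.random.rand(256, 256)          # stand-in for a loaded X-ray image
heat = score_pixels(xray)

plt.imshow(xray, cmap="gray")
plt.imshow(np.ma.masked_where(heat < 0.9, heat),  # color only the top-scoring pixels
           cmap="autumn", alpha=0.6)
plt.axis("off")
plt.savefig("xray_overlay.png")
```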

His team is one of several around the country that have pushed AI programs into the Covid-19 crisis to perform tasks like deciding which patients face the greatest risk of complications and which can be safely channeled into lower-intensity care.

The machine-learning programs scroll through millions of pieces of data to detect patterns that may be hard for clinicians to discern.

Yet few of the algorithms have been rigorously tested against standard procedures.

So while they often appear helpful, rolling out the programs in the midst of a pandemic could be confusing to doctors and dangerous for patients, some AI experts warn.

"AI is being used for things that are questionable right now," said Dr Eric Topol, director of the Scripps Research Translational Institute and author of several books on health IT.

Topol singled out a system created by Epic, a major vendor of electronic health records software, that predicts which coronavirus patients may become critically ill. Using the tool before it has been validated is "pandemic exceptionalism," he said.

Epic said the company's model had been validated with data from more than 16,000 hospitalised Covid-19 patients in 21 healthcare organizations.

No research on the tool has been published for independent researchers to assess, but in any case, it was developed to help clinicians make treatment decisions and is not a substitute for their judgment, said James Hickman, a software developer on Epic's cognitive computing team.

Others see the Covid-19 crisis as an opportunity to learn about the value of AI tools. "My intuition is it's a little bit of the good, bad and ugly," said Eric Perakslis, a data science fellow at Duke University and former chief information officer at the Food and Drug Administration. "Research in this setting is important."

Nearly US$2bil (RM8.5bil) poured into companies touting advancements in healthcare AI in 2019.

Investments in the first quarter of 2020 totalled US$635mil (RM2.7bil), up from US$155mil (RM663mil) in the first quarter of 2019, according to digital health technology funder Rock Health.

At least three healthcare AI technology companies have made funding deals specific to the Covid-19 crisis, including Vida Diagnostics, an AI-powered lung-imaging analysis company, according to Rock Health.

Overall, AI's implementation in everyday clinical care is less common than hype over the technology would suggest. Yet the coronavirus has inspired some hospital systems to accelerate promising applications.

UCSD sped up its AI imaging project, rolling it out in only two weeks.

Dr Hsiao's project, with research funding from Amazon Web Services, the University of California and the National Science Foundation, runs every chest X-ray taken at its hospital through an AI algorithm.

While no data on the implementation has been published yet, doctors report that the tool influences their clinical decision-making about a third of the time, said Dr Christopher Longhurst, UCSD Health's chief information officer.

"The results to date are very encouraging, and we're not seeing any unintended consequences," he said.

"Anecdotally, we're feeling like it's helpful, not hurtful."

AI has advanced further in imaging than in other areas of clinical medicine because radiological images have tons of data for algorithms to process, and more data makes the programs more effective, Longhurst said.

But while AI specialists have tried to get AI to do things like predict sepsis and acute respiratory distress (researchers at Johns Hopkins University recently won a National Science Foundation grant to use it to predict heart damage in Covid-19 patients), it has been easier to plug it into less risky areas such as hospital logistics.

In New York City, two major hospital systems are using AI-enabled algorithms to help them decide when and how patients should move into another phase of care or be sent home.

At Mount Sinai Health System, an artificial intelligence algorithm pinpoints which patients might be ready to be discharged from the hospital within 72 hours, said Robbie Freeman, vice president of clinical innovation at Mount Sinai.

Freeman described the AI's suggestion as a "conversation starter," meant to help clinicians working on patient cases decide what to do. AI isn't making the decisions.

NYU Langone Health has developed a similar AI model. It predicts whether a Covid-19 patient entering the hospital will suffer adverse events within the next four days, said Dr Yindalon Aphinyanaphongs, who leads NYU Langones predictive analytics team.

The model will be run in a four- to six-week trial with patients randomised into two groups: one whose doctors will receive the alerts, and another whose doctors will not.

The algorithm should help doctors generate a list of things that may predict whether patients are at risk for complications after they're admitted to the hospital, Aphinyanaphongs said.

Some health systems are leery of rolling out a technology that requires clinical validation in the middle of a pandemic. Others say they didnt need AI to deal with the coronavirus.

Stanford Health Care is not using AI to manage hospitalised patients with Covid-19, said Ron Li, the centre's medical informatics director for AI clinical integration.

The San Francisco Bay Area hasn't seen the expected surge of patients who would have provided the mass of data needed to make sure AI works on a population, he said.

Outside the hospital, AI-enabled risk factor modelling is being used to help health systems track patients who aren't infected with the coronavirus but might be susceptible to complications if they contract Covid-19.

At Scripps Health, clinicians are stratifying patients to assess their risk of getting Covid-19 and experiencing severe symptoms using a risk-scoring model that considers factors like age, chronic conditions and recent hospital visits.

When a patient scores seven or higher, a triage nurse reaches out with information about the coronavirus and may schedule an appointment.
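As a rough sketch of how such a score might work (the threshold of seven comes from the article; every factor weight below is an invented example, not Scripps' actual model):

```python
# Minimal sketch of an additive risk score with an outreach threshold of 7.
# All weights are illustrative assumptions; only the threshold is from the article.
def covid_risk_score(age, chronic_conditions, recent_hospital_visits):
    score = 0
    if age >= 65:
        score += 3                            # older patients weighted more heavily
    elif age >= 50:
        score += 2
    score += 2 * min(chronic_conditions, 3)   # cap the comorbidity contribution
    score += recent_hospital_visits           # one point per recent visit
    return score

patient = {"age": 72, "chronic_conditions": 2, "recent_hospital_visits": 1}
score = covid_risk_score(**patient)
if score >= 7:
    print(f"Score {score}: triage nurse outreach recommended")
```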

Though emergencies provide unique opportunities to try out advanced tools, it's essential for health systems to ensure doctors are comfortable with them, and to use the tools cautiously, with extensive testing and validation, Topol said.

"When people are in the heat of battle and overstretched, it would be great to have an algorithm to support them," he said.

"We just have to make sure the algorithm and the AI tool isn't misleading, because lives are at stake here." – Kaiser Health News/Los Angeles Times/Tribune News Service

(Kaiser Health News (KHN) is a US national health policy news service. It is an editorially independent programme of the Henry J. Kaiser Family Foundation.)

From Our Foxhole: Empowering Tactical Leaders to Achieve Strategic AI Goals – War on the Rocks

Editor's Note: This article was submitted in response to the call for ideas issued by the co-chairs of the National Security Commission on Artificial Intelligence, Eric Schmidt and Robert Work. It addresses the second part of the second question on AI expertise and skill sets for the national security workforce.

The race to harness artificial intelligence for military dominance is on, and China might win. Whoever wins the AI race will secure critical technological advantages that allow them to shape global politics. The United States brings considerable strengths to this contest: an unparalleled university system, a culture of innovation, and the only military that bestrides the globe. It's also constrained by shortcomings. Washington's most serious problem isn't a shortage of ideas. It's a shortage of talent. And this shortage is large enough to threaten national security.

While the current administration has publicly recognized the need to invest in AI talent, a senior defense official admitted that "finding, borrowing, begging and creating talent is a really big challenge for all of us." Institutions like the Joint Artificial Intelligence Center and university research labs are central to the Pentagon's development strategy; however, challenges ranging from data collection to refining operational concepts place huge burdens on existing technical talent.

These demands could be reduced by integrating our junior military officers and enlisted personnel as partners in the development process. Hiring junior leaders as product managers would accelerate technology development and build new operational capabilities while integrating user feedback. This immediately expands the number of personnel contributing to AI development efforts and grooms the next generation of leaders for the challenges of multi-domain operations.

This year I worked with data scientists from the University of Southern California to test the thesis that military personnel could be integrated into the AI development pipeline as product managers. We did this through a forecasting tournament based on security issues on the Korean Peninsula. This tournament created an opportunity to simultaneously experiment with machine learning technologies and expand civilian-military collaboration. The results provided new behavioral insights for the University of Southern California's research team and refined a method for expanding national security AI research using existing military personnel.

The Imitation Game Problem

Our AI experiments explored how to deal with the daily flood of data that is used to provide key decision-makers with predictive analysis and enhanced situational awareness. We chose this problem for our first round of experiments because the challenge is so common and is only getting worse, with 2.5 quintillion bytes of additional data each day. We termed this the Imitation Game problem, honoring the challenge that confronted the British cryptographers cracking the Nazi Enigma code, who began each day with more potential solutions than could be tried in multiple lifetimes.

Traditional methods for mitigating overwhelming data processing requirements, like assigning more personnel, cannot keep pace with this challenge. This is especially true given military recruitment shortages. The consequences of missing key information or processing it too late are stark, as evident in the findings of the 9/11 Commission Report.

Building the Team

The experiments to circumvent the Imitation Game problem began after I spoke with Fred Morstatter from the University of Southern California's Synergistic Anticipation of Geopolitical Events lab. Unlike traditional machine learning models that use only quantitative data sets to train algorithms, USC's lab combines human judgement with quantitative models so that the strengths of both can optimize predictive value. This hybrid model addresses the military's traditional aversion to replacing human decision-making with technology, captured in the saying that "humans are more important than hardware."
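To illustrate the hybrid idea (my own sketch, not the USC lab's actual method), human and model forecasts for a binary question can be pooled in log-odds space so each compensates for the other:

```python
# Minimal sketch of human/machine forecast pooling for a binary question.
# The weights below are illustrative, not the lab's values.
import numpy as np

def pooled_forecast(human_probs, model_prob, model_weight=0.5):
    """Combine human forecasts with a model forecast in log-odds space."""
    probs = np.clip(np.append(human_probs, model_prob), 1e-6, 1 - 1e-6)
    weights = np.append(np.full(len(human_probs),
                                (1 - model_weight) / len(human_probs)),
                        model_weight)
    log_odds = np.log(probs / (1 - probs))   # logit of each forecast
    pooled = np.dot(weights, log_odds)       # weighted average in logit space
    return 1 / (1 + np.exp(-pooled))         # back to a probability

# Three human forecasters and one model weigh in on the same question.
print(pooled_forecast(human_probs=[0.6, 0.7, 0.55], model_prob=0.8))
```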

Our pilot pursued improving commander decision-making through greater situational awareness, using tools that combined human judgement and machine learning models. This approach can scale to a variety of defense challenges, though our initial experiment used public-facing questions that were immediately relevant to our organization. Those questions became the basis for the Korean Security forecasting tournament we hosted with the University of Southern California's lab in the spring and summer of 2019, which served as our first research sprint exploring the following:

What Did We Learn?

Solutions Require User-based Feedback Loops

When we separate technologists and military users, those who understand the problem cannot shape technology solutions, and those shaping technology solutions do not understand the problem. While there is a critical need to develop ties between researchers and operators, junior military personnel are generally removed from capability development efforts. This disconnect is largely due to the Army's preference for institutional approaches to capability development that favor large commands and senior leaders, discounting the potential contributions of junior leaders. This bias is evident in the lack of billets for junior officer and enlisted personnel in Army Futures Command, despite their preponderance in the force.

While our individual experiment is valuable, the real impact will come from scaling our experimental design across the military. That is because using junior leaders as product managers mitigates the disconnection challenge and creates immediate value to both parties. We found that bringing current operational problems to the academic team diversified research applications while generating capabilities with immediate military relevance. This method also increases the interactions between research organizations and military innovators in a way that other models cannot replicate, expanding the idea sourcing funnel and increasing the odds that experimentation will lead to decisive capabilities.

This approach mitigates the current shortage of uniform-wearing AI talent that is the source of frequent Pentagon complaint. Our experiment shows that intelligent junior leaders can contribute to multi-functional teams in a product manager role. Technology companies use product managers to maximize the outcome value of products; servicemembers in this role can maximize an experiment's value and operational relevance. Military product managers achieve this by turning force-generated requirements into defined capabilities, managing the requirement backlog, and liaising between their commands and technology development teams.

Silicon Valley companies rely on non-technical product managers to complement highly specialized professionals, and adopting that practice allows currently unused military personnel to achieve similar impact. While our initial experiments demonstrated the feasibility of this approach with comparatively minimal training, a second step is to train servicemembers in basic tech innovation practices. Product management and data science training will allow servicemembers to effectively contribute to military product development and increase the capabilities of America's future force. This training is immediately accessible using resources like data science boot camps or online courses, and could be readily expanded through existing institutional partnerships.

Bringing in non-technical contributors to the project was valuable. Over the course of the tournament, forecaster accuracy improved (a development that speaks to the ability to rapidly train intelligence analysts to use these tools), and the best forecasters had the highest degrees of interaction with the system, accelerating algorithm training. The result was a virtuous cycle in which the growing number of human forecasts enhanced the model's predictive value while increasing user familiarity. The tournament also gave USC researchers greater insight into behavioral patterns and optimization strategies for using their technology, informing future development efforts.

The post-product-manager talent surge could expand the use of academic partnership programs like Hacking for Defense (H4D), since servicemembers could serve as problem sponsors for cross-functional academic teams. These teams could conduct problem curation and prototype development for AI initiatives and access senior mentors from the technology community through organizations like the Defense Entrepreneurs Forum. These research teams could report insights and progress to service-level AI organizations, simultaneously improving partnerships across the civilian-military AI ecosystem, training servicemembers in critical innovation skills, and closing capability gaps. The knowledge generated by these cross-functional academic teams could then be used to guide acquisitions efforts, including Small Business Innovation Research grants, forming an agile AI integration ecosystem.

The U.S. military could implement this strategy by launching programs through the Joint Artificial Intelligence Center or service-specific AI centers like the Army Artificial Intelligence Task Force that train innovative thinkers as product managers and junior data scientists. These leaders could then return to their host commands and sponsor operational problems through experimental pilots during initial concept development. After the efforts gain momentum, servicemembers could be mentored by experienced product managers and data scientists from startup partners to mature these capabilities. This would immediately create a Department of Defense talent development pipeline to meet the present shortage, while expanding the vibrancy of America's AI ecosystem to regain its comparative advantage.

AI is Only as Useful as the Questions You Ask and the Data You Offer

AI demands specificity in asking questions, determining resolution criteria, and selecting training data sets. While AI is praised for its power and precision, those traits come with costs that must be included in experimental design.

These challenges are acute when AI confronts the complexity of the security arena, where both problems and solutions are often ambiguous. We encountered this challenge as we iterated through crafting tournament questions with sufficient granularity to drive algorithm development. The danger of focusing too much on asking the questions the right way is failing to ask the right questions in the first place. Further, opportunity cost is incurred for every model launch, since pivoting to a second batch of questions often requires generating new data sets to train algorithms.

After crafting the right questions, our next hurdle was sourcing data sets for model training. This is difficult for security problems because existing data sets are scarce and the underlying events are infrequent when you try to create one. For example, individual missile launches offer less robust data sets than commodity market data on sugar prices over the same period. A powerful strategy for overcoming this hurdle and developing more robust security algorithms is to generate proxy tabular data sets from currently underleveraged and unstructured data sources, i.e., dark data. Learning to deconstruct your operational environment into data sets allows for more rapid subsequent adaptation to environmental changes.
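To make that concrete, here is a minimal sketch, with invented report text and fields, of how free-text operational reporting might be deconstructed into a proxy tabular data set. The report format, extracted features, and parsing rules are illustrative assumptions, not our pilot's actual pipeline.

```python
# Hypothetical sketch: turn unstructured "dark data" (free-text reports)
# into a tabular data set that downstream models can train on.
import re
import csv
from io import StringIO

reports = [  # invented examples of unstructured reporting
    "2019-05-04 0630: short-range projectile launch observed near east coast",
    "2019-05-09 1629: two projectiles launched; flight distance est. 420 km",
]

rows = []
for text in reports:
    date = re.match(r"\d{4}-\d{2}-\d{2}", text).group()
    km = re.search(r"(\d+)\s*km", text)
    rows.append({
        "date": date,
        "launch_count": 2 if "two" in text else 1,  # naive count extraction
        "range_km": int(km.group(1)) if km else None,
    })

# Serialize the structured events so models can consume them as features.
buf = StringIO()
writer = csv.DictWriter(buf, fieldnames=["date", "launch_count", "range_km"])
writer.writeheader()
writer.writerows(rows)
print(buf.getvalue())
```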

Our pilot accepted risk on optimizing questions and data sets by focusing on high value topics; even if more timely inquiries arose later, our effort was justified. Despite this hedge, we were confronted with surprises during the pilot. The DMZ visit between Chairman Kim, President Moon, and President Trump resolved several questions in spirit on June 30, but not according to the definition we wrote in April.

The pilot also allowed SAGE researchers to test how forecasters reason over different time horizons by deploying two identical sets of the approved questions, one with a resolution date of April 25, 2019, and the other of July 25, 2019. Preliminary findings indicate that forecasters who engaged in both tended to make more conservative initial forecasts on the longer-horizon questions and more aggressive forecasts on the shorter ones. These observed predictive trends offer insights into underlying cognitive properties.
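For readers unfamiliar with how such tournaments are scored, the sketch below computes Brier scores, a standard accuracy metric for probability forecasts, over two hypothetical question sets; the forecasts and outcomes are invented for illustration, not drawn from the pilot's data.

```python
# Brier score: squared gap between a probability forecast and the 0/1 outcome.
def brier(forecast: float, outcome: int) -> float:
    return (forecast - outcome) ** 2

# Invented (forecast, outcome) pairs for the same questions at two horizons.
short_horizon = [(0.80, 1), (0.30, 0), (0.65, 1)]  # resolves April 25
long_horizon = [(0.60, 1), (0.45, 0), (0.55, 1)]   # resolves July 25

for label, pairs in (("short", short_horizon), ("long", long_horizon)):
    avg = sum(brier(p, o) for p, o in pairs) / len(pairs)
    print(f"{label} horizon mean Brier score: {avg:.3f}")
```

Lower scores are better, so a horizon effect like the one described above would show up as a persistent score gap between the two question sets.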

The Goal for AI Capabilities is Not More, but Better

Our goal in this pilot was to create valuable insights that could be integrated into the operational rhythms of units across the Army. While the research crowd understands the value of AI systems, introducing this value to operational units required minimizing barriers to entry and ensuring reproducibility.

The goal of self-evident value creation led us to align our research efforts with an existing military tool designed to improve commander decision-making and awareness: priority information requirements. These critical information and signaling criteria allow leaders to understand when to take certain courses of action, enabling proactive decision-making. Building the AI experiments around priority information requirements ensured our model could scale, since all Army units use these tools. This ubiquitous framework provides a natural focal point for incorporating and training other algorithms.
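As a concrete illustration, a minimal sketch of wiring model forecasts to priority information requirements might look like the following; the indicators and signaling thresholds are invented for illustration, not drawn from our pilot.

```python
# Hypothetical PIR monitor: alert when a forecast probability crosses an
# indicator's signaling threshold, cueing the staff that a decision point
# tied to a course of action is approaching.
pirs = {
    "indicator_A": 0.70,  # invented signaling thresholds
    "indicator_B": 0.40,
}

forecasts = {"indicator_A": 0.82, "indicator_B": 0.15}  # model outputs

for indicator, threshold in pirs.items():
    p = forecasts[indicator]
    status = "ALERT: review course of action" if p >= threshold else "monitor"
    print(f"{indicator}: p={p:.2f} (threshold {threshold:.2f}) -> {status}")
```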

The next challenge is avoiding overloading existing digital infrastructure once tactical leaders understand the value of integrating these systems. An all-too-common, and toxic, paradigm in capability development is limitlessly expanding the tools assigned to commanders on the assumption that more is better. The result of this approach is adding yet another layer of technology on top of arcane digital infrastructure without considering existing systems. Users become overwhelmed by the number of systems they are expected to simultaneously manage, essentially nullifying the impact of new military technology.

Military product management is uniquely suited to prevent saturation of user cognitive bandwidth and optimize the value created while introducing new technologies. The goal should not be simply adding additional systems, but eliminating waste and simplifying tasks to increase organizational speed and agility. AI research efforts approached from this perspective benefit military leaders by creating data ecosystems that help units efficiently navigate complex operational environments.

The Next Iteration

Preserving an American-led international system requires achieving the technological superiority necessary for military dominance. A critical step toward that objective is closing the talent gap confronting America's defense ecosystem by pivoting current strategy to include junior leaders. This pivot should integrate servicemembers as product managers and junior data scientists on cross-functional teams with academic institutions and tech-sector volunteers, simultaneously mitigating manpower shortages and training our servicemembers to leverage these tools.

The United States has a history of making up lost ground by combining the power of its private and public sectors, from surpassing the Nazis with nuclear weapons to defeating the Soviets in the space race. It's time to align tactical action with strategic priorities to ensure America wins the AI race. The United States can start today by bringing its tactical leaders into the fight for AI dominance.

Capt. James Jay Long is an Army infantry officer, National Security Innovation Network (NSIN) Startup Innovation Fellow, and experienced national security innovator. He is currently transitioning from active duty and last served as an operations officer with United Nations Command Security Battalion-Joint Security Area.

Image: U.S. Army Graphic

Read this article:

From Our Foxhole: Empowering Tactical Leaders to Achieve Strategic AI Goals - War on the Rocks

The benefits of AI and machine learning – The Guardian

The Guardian is right to express legitimate concerns about the opacity of machine learning systems and attempts to replicate what humans do best (Editorial, 23 September), and we welcome this. However, as founders of the Institute for Ethical AI in Education (IEAIED) we believe these problems must be overcome in order to ensure people are able to benefit from artificial intelligence, not just fear it.

There are highly beneficial applications of machine learning. In education, for example, this innovation will enable personalised learning for all and is already enabling individualised learning support for increasing numbers of students. Well-designed AI can be used to identify learners' particular needs so that everyone, especially the most vulnerable, can receive targeted support. Given the magnitude of what people have to gain from machine learning tools, we feel an obligation to mitigate and counteract the inherent risks so that the best possible outcomes can be realised.

First, we must not accept that machine learning systems have to be black boxes whose decisions and behaviours are beyond the reach of human understanding. Explainable AI (XAI) is a rapidly developing field, and we encourage education stakeholders to demand and expect high levels of transparency. There are also further means by which we can ethically derive benefits from machine learning systems, while retaining human responsibility.

Another approach to benefiting from AI without being undermined by a lack of human oversight is to consider that AI is not bringing about these benefits single-handedly. Genuine advancement arises when AI augments and assists human-driven processes and skills. Machine learning is a powerful tool for informing strategy and decision-making, but people remain responsible for how that information is harnessed.

Incorporating ethics into the design and development of AI-driven technology is vital, and we currently rely on programmes such as UCL Educate, an accelerator for education SMEs and startups, to instil that ethos in innovation from the concept stage.

Crucially, though, we must inform the public at large about AI, what it is and what benefits can be derived from its use, or we risk alienating people from the technology that already forms part of their everyday lives. Worse still, we risk causing alarm and making them fearful.

Prof Rose Luckin Professor of learner-centred design at UCL Institute of Education and director of UCL Educate
Sir Anthony Seldon Vice-chancellor, University of Buckingham
Priya Lakhani Founder and CEO, Century Tech


See more here:

The benefits of AI and machine learning - The Guardian

Self-driving AI clinic reimagines healthcare for the 21st century – New Atlas

The self-driving clinic not only offers AI diagnostics but could transport high-risk patients to hospitals (Credit: Artefact Group)

Seattle-based design firm Artefact Group has revealed a comprehensive concept that would make the future of healthcare mobile. Integrating passive monitoring technologies in the home, a smartphone app, AI diagnostics and a self-driving clinic, the system combines a variety of innovations for a new spin on healthcare.

While many sectors of society are being dramatically disrupted by rapidly evolving digital innovations, healthcare seems to be responding more slowly, with many hospitals still largely relying on paper to record patient data. Earlier in the year we saw a gadget-filled, subscription-based medical clinic open in San Francisco, and several fascinating advances are occurring in the field of artificial intelligence diagnostics. But the Aim concept envisions a fundamentally different healthcare approach than what we have been used to for the past 100 years.


The system begins with a series of active testing and passive monitoring devices in the home, capturing data from several sources, such as the bathroom scale, toilet and medicine cabinet. The goal is to create an interconnected set of devices, including health-monitoring wearables, that can create a unified, patient-owned health record.

A constantly learning AI would then monitor a person's health data and flag unusual results. When needed, a self-driving mini clinic could navigate to your location for more comprehensive diagnostics, such as thermography, breath analysis, and monitoring of respiration or cardiac rhythm.
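Artefact has not published how Aim's monitoring would work, but a minimal sketch of the general idea, flagging a reading that deviates sharply from a patient's own rolling baseline, might look like this; the readings and threshold are invented for illustration.

```python
# Hypothetical anomaly flag: compare a new reading against the patient's
# own baseline and alert on large deviations.
from statistics import mean, stdev

def flag_unusual(history, reading, threshold=3.0):
    """Return True if the reading is > threshold std devs from baseline."""
    baseline, spread = mean(history), stdev(history)
    if spread == 0:
        return reading != baseline
    return abs(reading - baseline) / spread > threshold

resting_hr = [62, 64, 61, 63, 65, 62, 64]  # a week of made-up readings
print(flag_unusual(resting_hr, 66))  # False: within normal variation
print(flag_unusual(resting_hr, 95))  # True: worth a closer look
```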

Inside this mobile clinic, an AI could offer its diagnosis, and even deliver common pharmaceuticals such as antibiotics or contraceptives. If a health condition is flagged as serious or escalating, the Aim system would then connect the patient to an on-call specialist or even transport them directly to a hospital emergency room.

"The mission of Aim is to close the data, experience and logistical gaps between home and clinical environments," the designers say.

The concept may be slightly pie-in-the-sky right now, but rapid advances in personal health monitoring and AI mean it's not necessarily that far from feasible, and much of the Aim system feels like it could be pragmatically implemented into our current healthcare processes without too much trouble. With the current burden on patients to get to doctors' clinics, which can sometimes be quite far away, an integrated monitoring system such as this could lighten the load for overworked healthcare workers.

AI-driven diagnostic tools are also set to become increasingly useful for low-risk patient monitoring, and a mobile autonomous clinic could significantly reduce the drain on hospital resources by catching conditions early, before they become serious enough to require admission.

Cost is of course a major consideration here, and developing such a sophisticated system wouldn't be cheap, but as the costs of healthcare continue to skyrocket, maybe some outside-the-box thinking such as this should be encouraged. Much like the San Francisco Forward clinic, a cost-effective subscription-based system could offer many people who currently can't afford big health insurance premiums greater access to medical care.

Source: Artefact Group

View post:

Self-driving AI clinic reimagines healthcare for the 21st century - New Atlas

Google Test Of AI’s Killer Instinct Shows We Should Be Very Careful – Gizmodo

If climate change, nuclear weapons or Donald Trump don't kill us first, there's always artificial intelligence waiting in the wings. It's been a long-held worry that when AI gains a certain level of autonomy, it will see no use for humans or even perceive them as a threat. A new study by Google's DeepMind lab may or may not ease those fears.

The researchers at DeepMind have been working with two games to test whether neural networks are more likely to understand motivations to compete or cooperate. They hope that this research could lead to AI being better at working with other AI in situations that contain imperfect information.

In the first game, two AI agents (red and blue) were tasked with gathering the most apples (green) in a rudimentary 2D graphical environment. Each agent had the option of tagging the other with a laser blast that would temporarily remove them from the game.

The game was run thousands of times, and the researchers found that red and blue were content to simply gather apples when they were abundant. But as the little green dots became scarce, the dueling agents were more likely to light each other up with ray-gun blasts to get ahead.

Using a smaller network, the researchers found a greater likelihood of co-existence. But with a larger, more complex network, the AI was quicker to start sabotaging the other player and hoard the apples for itself.

In the second, more optimistic game, called Wolfpack, the agents were tasked with playing wolves attempting to capture prey. Greater rewards were offered when the wolves were in close proximity during a successful capture. This incentivised the agents to work together rather than heading off to the other side of the screen to pull a lone-wolf attack on the prey. The larger network was much quicker to understand that in this situation cooperation was the optimal way to complete the task.
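DeepMind's actual agents are deep reinforcement learners, but the incentive shift in the apple-gathering game can be illustrated with a toy payoff model; the collection-capacity figure below is an invented assumption, not a parameter from the paper.

```python
# Toy model of the Gathering game's incentives: each agent can physically
# collect at most `capacity` apples per episode, and active agents split
# the available pool evenly.
def harvest(apples, agents, capacity=50):
    return min(apples / agents, capacity)

def gain_from_tagging(apples, capacity=50):
    """Extra apples earned by temporarily removing the rival."""
    alone = harvest(apples, agents=1, capacity=capacity)
    shared = harvest(apples, agents=2, capacity=capacity)
    return alone - shared

for apples in (200, 80, 30):
    print(f"apples={apples:3d} -> gain from tagging = {gain_from_tagging(apples):4.1f}")
```

When apples are plentiful, both agents hit their collection cap and tagging gains nothing; as apples grow scarce, removing the rival captures the whole pool, which is roughly the dynamic the researchers observed.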

While all of that might seem obvious, this is vital research for the future of AI. More and more complex scenarios will be needed to understand how neural networks learn based on incentives, as well as how they react when they're missing information.

The most practical short-term application of the research is to be able to better understand and control complex multi-agent systems such as the economy, traffic systems, or the ecological health of our planet - all of which depend on our continued cooperation.

For now, DeepMind's research is focused on games with strict rules like the ones above and Go, a strategy game at which it famously beat the world's top champion. But it has recently partnered with Blizzard to start learning StarCraft II, a more complex game in which reading an opponent's motivations can be quite tricky. Joel Leibo, the lead author of the paper, tells Bloomberg: "Going forward it would be interesting to equip agents with the ability to reason about other agents' beliefs and goals."

Let's just be glad the DeepMind team is taking things very slowly, methodically learning what does and does not motivate AI to start blasting everyone around it.

[DeepMind Blog via Bloomberg]

Read more here:

Google Test Of AI's Killer Instinct Shows We Should Be Very Careful - Gizmodo

WWT Named Partner of the Year for Deep Learning AI by NVIDIA – Business Wire

ST. LOUIS--(BUSINESS WIRE)--World Wide Technology (WWT) today announced that it has been selected by the NVIDIA Partner Network (NPN) as the 2019 Deep Learning AI Partner of the Year for the Americas. This is the third year that WWT has been honored in this category.

The NPN selected WWT for its ongoing AI research and development program. To help customers develop AI leadership, WWT published six white papers on leveraging the compute power of NVIDIA DGX systems to develop Machine Learning and Deep Learning models for real-time edge video analytics and network optimization, and on performance comparisons of multiple reference architectures for ML model development. WWT's research into ML and Deep Learning is tied to real-world business outcomes, including improvements in mining safety, utilities grid optimization, and resource management for manufacturing.

In addition, the WWT Advanced Technology Center (ATC) offers Lab-as-a-service environments for AI development, MLOps, Deep Learning, and testing of storage and networking with GPU-accelerated compute. WWT [also] engineered and deployed some of the largest clusters of DGX-2 servers in North America and China for production of Natural Language Processing applications at massive scale.

"It's due to the strength of our engineering and data science partnership with NVIDIA that WWT's customers are today realizing strategic value from Deep Learning and ML solutions that WWT has deployed," said Tim Brooks, Managing Director of AI Solutions for World Wide Technology. "Our customers are leveraging Natural Language Processing, computer vision, robotics, and geospatial analysis for intelligent agents, autonomous vehicles, retail loss prevention, mining safety, and manufacturing QA."

"NVIDIA has long worked with WWT to deliver AI solutions for data center and cloud-hosted environments across numerous industries," said Craig Weinstein, Vice President of the Americas Partner Organization at NVIDIA. "Together with NVIDIA and our OEM partners, WWT provides customers with AI solutions that leverage the power of NVIDIA GPUs and 30 years of engineering and global deployment reliability of WWT."

The NPN honors its top North American partners who have grown their GPU business through the leadership and investments made throughout the year.

About World Wide Technology

World Wide Technology (WWT) is a technology solution provider with $12 billion in annual revenue that provides digital strategy, innovative technology and supply chain solutions to large public and private organizations around the globe. While most companies talk about delivering business and technology outcomes, WWT does it. Based in St. Louis, WWT employs more than 6,000 people and operates approximately 4 million square feet of warehousing and integration space in more than 20 facilities throughout the world.

For more information about World Wide Technology, visit http://www.wwt.com.

Connect with WWT: Twitter | Instagram | Facebook | LinkedIn

Read the rest here:

WWT Named Partner of the Year for Deep Learning AI by NVIDIA - Business Wire

Why We Need To Focus On The Positives Of AI – Forbes

Getty

AI is everybody's pet bogeyman these days.

There's a lot of apprehension about AI, and for good reason. Some of the talk is about predictions of automation and job loss. Some of it is about the possibility of AI consciousness. And some of it is just off-the-wall conspiracy-theory stuff most of us would be best off ignoring altogether.

Many people tend to think of AI as purely an automation technology, but that just skims the surface of what these tools and techniques can do for us. While I do acknowledge that AI has some potential downsides, especially as they relate to ethics and misinformation, here I'd like to focus on the many positives of AI. Because truly, for every negative associated with AI, there are multiple good things already improving the quality of business for companies, and the quality of life for humans, worldwide.

The key to understanding AI's potential for good is to look at its contributions to our lives in three key areas: automating processes, augmenting human decision-making, and providing greater sensor-fed awareness of environmental context in real time.

Here's where I put in a good word for AI-facilitated automation. It has made the lives of employees in my company so much easier. They can pass off tedious, repetitive tasks and focus on things that really matter. From small things like creating a bit.ly for a blog article we've created, to larger things like email database management, we have programs powered by AI-automated processes that help us do all of that.

Some observers think that AI-automated processes are inherently biased, but that's not true. Most of us know humans can be biased, which can create problems at work. We tend to want to hire people who are like us, who agree with us, who want the same things we do. However, we also know that diversity is what strengthens an organization. Differing opinions make ideas stronger. Differing viewpoints make products better. Using AI, companies are able to remove human bias from the hiring process, at least to a degree.

Yes, algorithms can be biased, as we've learned. But after some false starts, I think we'll see some improvements in AI-based recruitment, especially in the area of recruitment for diversity.

One of the most obvious positives of AI, to me, is the fact that it is empowering businesses everywhere to gain an even deeper understanding of their customers. It's allowing companies to meet their customers where they are, in the most literal sense, be it outside the movie theater, on their drive home from work, or on their couch in the family room. And companies are able to offer their customers deals, products, and opportunities that truly matter to them. Yes, there is bad AI.

Yes, I do get personalized marketing that is off. But by and large, as AI quality and data processing improve, businesses will be able to do much more effective marketing, and customers will be able to receive only the marketing that actually pertains to them. Which is pretty awesome.

Plus, from the consumer standpoint, isn't it pretty great to get a discount for something you buy regularly? I think so!

As the Internet of Things, smartphones, smart cameras, and the like come into our lives, AI is an essential tool for helping us stay aware of what's going on in our surroundings, for predicting what might happen, and for taking appropriate responses.

Making Life Safer

On almost every level, AI has the potential to make life safer for people. This is especially true in the area of dangerous jobs, such as military work, engineering and construction work, and policing. As robots take on a first-responder role in the jobs for which people have risked their lives for thousands of years, we can all be grateful for the lives it saves. In addition to frontline work, AI is helping people train for jobs in a safer way, for instance, preparing for flight, space travel, and even performing medical procedures.

Outside of the work arena, AI is also making it safer to live in our neighborhoods and communities. Smart cameras can be found all over cities throughout the world, helping alert law enforcement to potential hazards and helping to identify suspects in certain cases.

Democratizing Healthcare

Currently, those who live in rural areas have far less access to quality healthcare. In fact, information from the Centers for Disease Control and Prevention shows 20% of Americans have no access to a doctor at all. One of the positives of AI is that it helps democratize healthcare by making remote and mobile health a reality. Using AI, doctors in distant cities can be notified in real time about the health of their patients and give insights, advice, diagnoses, etc., to those who need them.

Aging in Place

Research shows most older people want to age in their own homes. And thanks to AI and other new technologies, that's increasingly possible. Because AI can help remind those suffering from dementia to take their medications, eat their meals, and even find their TV remote, it's offering new levels of independence to those who may traditionally have had to live with friends or in assisted-living communities. Other positives of AI include being able to remotely monitor our loved ones' health, arrange food and transportation, and even notify loved ones when an older person is walking and may be in danger of a fall. That type of freedom is priceless for aging people.

Coding for Disaster Response

You may have been following the annual Call for Code event, which allows developers around the world to gather together to build apps and new technologies to help with global disasters. Though it's only been around a few years, some of its solutions are already being put to work, creating real-life answers to problems like finding victims following a disaster, accessing the Internet when most Wi-Fi options would be inaccessible, and divvying up resources following an emergency. This isn't just cool; it will literally save lives.

It's important that we focus on the positives that AI can bring to our society. While the technology will require a tremendous amount of management in areas like policy, ethics, privacy and security, the potential of the technology to enhance our lives is significant, and I'm excited to watch it become a bigger part of our lives with each passing day.

See more here:

Why We Need To Focus On The Positives Of AI - Forbes

Introducing AI to Marketing in 5 Steps – AdAge.com

What happens when tech is asked to make decisions that are more creative in nature? Credit: iStock

Brands might not be replacing their existing ad-tech and martech stacks with artificial intelligence just yet, but many are experimenting with AI solutions that focus on isolated tasks, like recommendations, targeted ad buying, sentiment-driven actions and so on.

The coming wave of AI in advertising will be defined by the autonomous execution of cohesive, multistep campaigns. For brands, this will mean relinquishing control and trusting the technology to do what they've traditionally relied on complex technologies and teams to handle -- but at a far greater pace and scale.

Before handing over the reins, it's helpful to understand how AI works. Here's a look at five overarching steps that go into converting human thought processes into algorithms, and algorithms into digital marketing programs that run autonomously, from start to finish.

1. Understand why marketers do what they do.

Creating AI for "self-driving" marketing technology is not so different from creating AI for a self-driving car. In the case of the car, it must know how close it is to other cars, how to make a turn and end up in the right position after the turn, when to hit the gas pedal, what the road conditions are like and so on -- all without the driver telling it what to do.

As with driving a car, many of the decisions that drive the day-to-day execution of marketing programs happen largely on the subconscious level. Transforming these processes into algorithms requires understanding why each decision was made, by acutely observing marketers as they execute them and then asking them to verbalize the reasoning behind the decisions they made:

"Why did you keep these words and ditch those?" "How did you decide bid size?" "Say you increase spend on a specific keyword by 20% ... why did you choose 20%?" "What's the best time to send stuff to that person?" "What about that other person?"

2. Teach technology how to understand abstract information, such as creative.

Data is unquestionably the domain of AI, but what happens when technology is asked to process and make decisions that are more creative in nature?

For a human, understanding why certain images and text make more sense as a first interaction with a consumer rather than as a secondary or final interaction is almost second nature. A machine, on the other hand, needs to be told (or programmed) with this knowledge in order to be able to judge images and text and determine where they should appear along the journey, without relying on a human.

3. Program it to consider all scenarios and outcomes before each and every move.

At any moment, there are several variables -- and combinations of variables -- that influence an exponential number of outcomes in a campaign. "If I do this, that will happen. But if I do that, this will happen."

Take deciding which headline to use with which creative on a specific channel. It could be that there are 10 creative options and six headline options. The technology must create a real-time model to predict which of several thousand possible headline/creative combinations will perform best in relation to all other combinations, considering variables such as known audience, past behaviors of similar audiences, the specific channel, geographic region, time of day and so forth. Once it's predicted every possible outcome, it must execute the headline/creative combo that's determined by the model to perform best.
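As a rough sketch of what this step implies, the code below enumerates every headline/creative combination and serves the one a predictive model scores highest. The `predict_ctr` function is a hypothetical placeholder for whatever real-time model a platform would actually train on audience, channel, and timing features.

```python
# Hypothetical sketch: score all headline/creative combinations and pick
# the one the model predicts will perform best in this context.
from itertools import product

headlines = [f"headline_{i}" for i in range(6)]
creatives = [f"creative_{i}" for i in range(10)]
context = {"channel": "display", "region": "US-NE", "hour": 18}

def predict_ctr(headline, creative, context):
    # Placeholder: a real system would use a trained model over known
    # audience, past behavior, channel, geography, and time of day.
    # Hashing the inputs just gives the sketch a score so it runs.
    return (hash((headline, creative, tuple(context.items()))) % 1000) / 1000

scored = [(predict_ctr(h, c, context), h, c)
          for h, c in product(headlines, creatives)]
best_score, best_headline, best_creative = max(scored)
print(f"serve {best_headline} + {best_creative} "
      f"(predicted CTR {best_score:.3f})")
```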

4. Make individual building blocks work together as a holistic system.

A major issue in digital marketing is the fact that different aspects of the program -- Facebook, search, Twitter, display, email, SMS and so on -- are handled by different people and technologies.

Each is privy to different insights and uses them to calibrate their respective efforts, but it's impossible to manually gather insights from one channel and apply them across all relevant channels at the rate and efficiency of a machine.

For AI to do this, such systems will require an understanding of the interplay of all the moving pieces that go into a cohesive, holistic program -- plus the ability to sequence them to create a whole that's greater than its individual parts.

5. Introduce checks and balances so the AI doesn't go rogue.

Making sure AI doesn't go rogue is a huge concern for companies, so it's necessary to introduce built-in rules that prevent it from making decisions that are at cross purposes with the people or organization it's serving.

This is especially critical when it comes to budget-related decisions. Imagine, for instance, that the machine predicts you should triple your regular ad spend. In this case, checks and balances would kick in to give the team the opportunity to understand the market conditions and potential outcomes before agreeing to let the machine act on its recommendations.
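A minimal sketch of such a check, assuming a simple policy layer with an invented spend-multiplier threshold, might look like this:

```python
# Hypothetical guardrail: auto-approve modest spend changes, escalate
# anything above a multiplier cap to a human before the machine acts.
from dataclasses import dataclass

@dataclass
class SpendRecommendation:
    campaign: str
    current_daily_spend: float
    recommended_daily_spend: float

def review(rec: SpendRecommendation, max_multiplier: float = 2.0) -> str:
    if rec.recommended_daily_spend <= rec.current_daily_spend * max_multiplier:
        return "auto-approve"
    return "escalate-to-human"

rec = SpendRecommendation("summer-sale", 1_000.0, 3_000.0)  # machine wants 3x
print(review(rec))  # -> escalate-to-human: the team reviews before acting
```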

Finally, all of this must happen autonomously, with little to no input from marketers, and at a far greater scale than is possible by even the largest teams.

Read the original post:

Introducing AI to Marketing in 5 Steps - AdAge.com

Facebook is reportedly developing AI to summarize news what could go wrong? – The Next Web

Facebook has been trying to get a foothold in the news space for many years. Last year, the company launched a dedicated section on its site called Facebook News for users in the US. It also wants to expand this program to other countries such as Brazil, Germany, and India.

But that's not the only project the social network is working on in the news space. According to a report from BuzzFeed News, Facebook is testing an AI-powered tool called TL;DR (Too Long; Didn't Read) to summarize news pieces, so you don't even have to click through to read those articles.

The report noted that the company showed off this tool in an internal meeting last night. It's also planning to add features such as voice narration and an assistant to answer queries about an article.

At the outset, this seems like a great idea: getting a short summary of an article you don't have time to read. There are already some similar tools, such as the AutoTLDR bot on Reddit. However, given Facebook's sketchy history with news and publishers, there are many ways this could go wrong.
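Facebook has not said how TL;DR works, but a crude extractive summarizer in the spirit of AutoTLDR, scoring sentences by word frequency and keeping the top few, shows how easily such tools can lift sentences out of context:

```python
# Naive extractive summarization: rank sentences by average word frequency.
# Purely illustrative; Facebook's actual model is not public.
import re
from collections import Counter

def tldr(text: str, n_sentences: int = 2) -> str:
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"[a-z']+", text.lower()))
    def score(sentence):
        tokens = re.findall(r"[a-z']+", sentence.lower())
        return sum(freq[t] for t in tokens) / max(len(tokens), 1)
    top = sorted(sentences, key=score, reverse=True)[:n_sentences]
    # Keep the chosen sentences in their original order.
    return " ".join(s for s in sentences if s in top)

article = ("Facebook is testing a summarization tool. The tool condenses "
           "news articles. Critics worry summaries may distort articles. "
           "Distorted summaries could spread misinformation.")
print(tldr(article))
```

Because the scorer knows nothing about meaning, it can surface a quote or caveat stripped of its framing, which is precisely the failure mode described below.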

At best, the AI makes silly mistakes in parsing the article content, so you can't make sense of the summary it spits out. After all, we've seen many incidents where bots picked up problematic content from their training data and spewed racist gibberish.

At worst, there's potential to create or distribute misinformation. There are a ton of news sources on Facebook that are not known for their accuracy. If a skewed summary of those articles starts floating around, it might create more trouble.

Facebook will also need to train its algorithm to avoid taking quotes or sentences from articles out of context. A seemingly non-problematic summary could contradict the article or its subject, and vice versa.

In the past, researchers have successfully tricked AI systems designed to detect toxic comments on the internet with positive words. If the people behind propaganda operations could crack Facebook's algorithm for summarizing articles, they could write stories in such a way that the summaries include the messages they want to spread.

Many reports have pointed out the social network's massive misinformation problem, and a lot of it was because of poorly designed software. While Facebook's TL;DR product is not public yet, it already sounds like it could be a disaster.

Published December 16, 2020 06:16 UTC

Read more here:

Facebook is reportedly developing AI to summarize news what could go wrong? - The Next Web

Covid-19 drug development to include AI by Iktos and SRI. – Pharmaceutical Technology

The companies plan to use AI to identify potential Covid-19 drug candidates. Credit: SRI International.


Artificial intelligence (AI) technology provider Iktos and research centre SRI International have partnered to discover and develop drugs to treat various viruses, including the novel coronavirus that causes Covid-19 and influenza.

Iktos will combine its generative modelling technology with SRI's fully automated synthetic chemistry platform, SynFini, to design compounds and speed up the identification of drug candidates.

Iktos' AI technology leverages deep generative models to accelerate the drug discovery process through the automatic design of virtual molecules with the characteristics required of a new drug candidate.
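Iktos' models are proprietary, so the following is only a generic sketch of the generate-score-filter loop that generative molecular design implies; the generator, scorer, and loop parameters are all invented stand-ins.

```python
# Hypothetical generate-score-filter loop for molecular design.
import random

def generate_candidates(n):
    # Stand-in generator: a real system samples SMILES strings from a
    # trained deep generative model; here we just assemble fragments.
    fragments = ["C", "N", "O", "c1ccccc1", "C(=O)O"]
    return ["".join(random.choices(fragments, k=5)) for _ in range(n)]

def score(molecule):
    # Stand-in scorer: a real system predicts potency, selectivity, ADME
    # properties, and synthesizability; here we return a random proxy.
    return random.random()

def design_loop(rounds=3, batch=100, keep=5):
    shortlist = []
    for _ in range(rounds):
        ranked = sorted(generate_candidates(batch), key=score, reverse=True)
        shortlist.extend(ranked[:keep])
        # In practice, top candidates would feed back into model fine-tuning
        # and, via a platform like SynFini, into automated synthesis.
    return shortlist

print(design_loop()[:3])
```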

Iktos co-founder and CEO Yann Gaston-Mathé said: "Iktos' generative AI technology has proven its value and potential to accelerate drug discovery programs in multiple collaborations with renowned pharmaceutical companies.

"We are eager to apply it to SRI's endonuclease programme and hope our collaboration can make a difference and speed up the identification of a promising new therapeutic option for the treatment of Covid-19."

The SynFini platform is intended to speed up chemical discovery and development, advancing drugs to the clinic quickly and affordably, said SRI.

The closed-loop platform is said to automate the design, reaction screening and optimisation (RSO), as well as generation of target molecules.

SRI's ongoing programme is working towards drugs that can block endonuclease enzymes, which are known to be prevalent in several viruses.

These enzymes are associated with viral replication and inhibition of host resistance to infection.

Covid-19 sequence analysis suggests the presence of an endonuclease that is nearly 97% genetically similar to that of the SARS virus.

According to findings from recent studies, inhibition of the SARS virus endonuclease blocks the virus's pathogenesis and is said to demonstrate a 100% survival rate in preclinical models.

Based on this research, the Covid-19 endonuclease should be a beneficial therapeutic target.

Read the original here:

Covid-19 drug development to include AI by Iktos and SRI. - Pharmaceutical Technology