A college kid's fake, AI-generated blog fooled tens of thousands. This is how he made it. – MIT Technology Review

GPT-3 is OpenAI's latest and largest language AI model, which the San Francisco-based research lab began drip-feeding out in mid-July. In February of last year, OpenAI made headlines with GPT-2, an earlier version of the algorithm, which it announced it would withhold for fear it would be abused. The decision immediately sparked a backlash, as researchers accused the lab of pulling a stunt. By November, the lab had reversed position and released the model, saying it had detected no strong evidence of misuse so far.

The lab took a different approach with GPT-3; it neither withheld it nor granted public access. Instead, it gave the algorithm to select researchers who applied for a private beta, with the goal of gathering their feedback and commercializing the technology by the end of the year.

Porr submitted an application. He filled out a form with a simple questionnaire about his intended use. But he also didn't wait around. After reaching out to several members of the Berkeley AI community, he quickly found a PhD student who already had access. Once the graduate student agreed to collaborate, Porr wrote a small script for him to run. It gave GPT-3 the headline and introduction for a blog post and had it spit out several completed versions. Porr's first post (the one that charted on Hacker News), and every post after, was copy-and-pasted from one of the outputs with little to no editing.
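The script itself would have been only a few lines. Here is a minimal sketch of that kind of workflow, assuming the 2020-era `openai` Python client; the engine name, prompt text, and sampling parameters are illustrative guesses, not Porr's actual settings:

```python
# Minimal sketch of the workflow Porr describes: feed GPT-3 a headline and
# introduction, request several completions, and pick one to publish.
# Assumes the 2020-era `openai` Python client; parameters are illustrative.
import openai

openai.api_key = "YOUR_API_KEY"  # hypothetical placeholder

headline = "Feeling unproductive? Maybe you should stop overthinking"
intro = "We all overthink. Here is why that habit quietly kills your output."

response = openai.Completion.create(
    engine="davinci",          # the original GPT-3 base engine
    prompt=f"{headline}\n\n{intro}\n\n",
    max_tokens=700,            # roughly a full blog post
    temperature=0.8,           # some creativity, but on-topic
    n=3,                       # several candidate versions to choose from
)

for i, choice in enumerate(response.choices):
    print(f"--- Candidate {i + 1} ---")
    print(choice.text.strip())
```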

"From the time that I thought of the idea and got in contact with the PhD student to me actually creating the blog and the first blog going viral, it took maybe a couple of hours," he says.

SCREENSHOT / LIAM PORR

The trick to generating content without the need for much editing was understanding GPT-3's strengths and weaknesses. "It's quite good at making pretty language, and it's not very good at being logical and rational," says Porr. So he picked a popular blog category that doesn't require rigorous logic: productivity and self-help.

From there, he wrote his headlines following a simple formula: he'd scroll around on Medium and Hacker News to see what was performing in those categories and put together something relatively similar. "Feeling unproductive? Maybe you should stop overthinking," he wrote for one. "Boldness and creativity trumps intelligence," he wrote for another. On a few occasions, the headlines didn't work out. But as long as he stayed on the right topics, the process was easy.

After two weeks of nearly daily posts, he retired the project with one final, cryptic, self-written message. Titled "What I would do with GPT-3 if I had no ethics," it described his process as a hypothetical. The same day, he also posted a more straightforward confession on his real blog.

SCREENSHOT / LIAM PORR

Porr says he wanted to prove that GPT-3 could be passed off as a human writer. Indeed, despite the algorithm's somewhat weird writing pattern and occasional errors, only three or four of the dozens of people who commented on his top post on Hacker News raised suspicions that it might have been generated by an algorithm. All those comments were immediately downvoted by other community members.

For experts, this has long been the worry raised by such language-generating algorithms. Ever since OpenAI first announced GPT-2, people have speculated that it was vulnerable to abuse. In its own blog post, the lab focused on the AI tool's potential to be weaponized as a mass producer of misinformation. Others have wondered whether it could be used to churn out spam posts full of relevant keywords to game Google.

Porr says his experiment also shows a more mundane but still troubling alternative: people could use the tool to generate a lot of clickbait content. "It's possible that there's gonna just be a flood of mediocre blog content because now the barrier to entry is so easy," he says. "I think the value of online content is going to be reduced a lot."

Porr plans to do more experiments with GPT-3. But he's still waiting to get access from OpenAI. "It's possible that they're upset that I did this," he says. "I mean, it's a little silly."

Update: Additional details have been added to the text and photo captions to explain how Liam Porr created his blog and got it to the top of Hacker News.


This article discusses how AI has become a vital tool to the industry | Security News – SourceSecurity.com

Artificial intelligence (AI) is more than a buzzword. AI is increasingly becoming part of our everyday lives, and a vital tool in the physical security industry. In 2020, AI received more attention than ever, and expanded the ways it can contribute value to physical security systems. This article revisits some of those developments at year-end, including links back to the originally published content.

In the security market today, AI is expanding the use cases, making technologies more powerful and saving money on manpower costs - and today represents just the beginning of what AI can do for the industry. What it will never do, however, is completely take the place of humans in operating security systems. There is a limit to how much we are willing to turn over to machines - even the smartest ones.

"Apply AI to security and now you have an incredibly powerful tool that allows you to operate proactively rather than reactively," said Jody Ross of AMAG Technology, one of our Expert Roundtable Panelists.

AI made its initial splash in the physical security market by transforming the effectiveness of video analytics. However, now there are many other applications, too, as addressed by our Expert Panel Roundtable in another article. Artificial intelligence (AI) and machine learning provide useful tools to make sense of massive amounts of Internet of Things (IoT) data. By helping to automate low-level decision-making, the technologies can make security operators more efficient.

Intelligent capabilities can expand integration options, such as increasing the use of biometrics with access control. AI can also help to monitor mechanics and processes. Intelligent systems can help end users understand building occupancy and traffic patterns, and even help enforce physical distancing. These are just a few of the possible uses of the technologies - in the end, the sky is the limit.

AI is undoubtedly one of the bigger disrupters in the physical security industry, and adoption is growing at a rapid rate. And it's not just about video analytics. Rather, it is data AI, which is completely untapped by the security industry. Bottom line: AI can change up your security game by automatically deciphering information to predict the future, using a wide range of sources and data that have been collected, whether past or present. That's right. You can look into the future.

Now, Intrusion Detection (Perimeter Protection) systems with cutting-edge, built-in AI algorithms can recognise a plethora of different object types and distinguish objects of interest, significantly decreasing the false-positive intrusion rate. The more advanced AI-based systems enable users to draw regions of interest (ROIs) based on break-in points, areas containing high-value items, and anywhere else alerts may be beneficial.
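As a rough illustration of how such ROI filtering works, here is a minimal sketch in Python. The upstream detector, class names, and thresholds are all assumptions for illustration; real systems run a neural-network detector on every frame:

```python
# Minimal sketch of ROI-based intrusion filtering: only detections of
# interesting object classes whose centre falls inside a user-drawn region
# raise an alert. The upstream detector and class names are placeholders.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str          # e.g. "person", "vehicle", "animal"
    x: float            # centre of the bounding box
    y: float
    confidence: float

ALERT_CLASSES = {"person", "vehicle"}    # objects of interest
ROI = (100, 50, 400, 300)                # x1, y1, x2, y2 around a break-in point

def in_roi(det, roi):
    x1, y1, x2, y2 = roi
    return x1 <= det.x <= x2 and y1 <= det.y <= y2

def should_alert(det, min_conf=0.6):
    # Ignoring low-confidence detections and uninteresting classes is what
    # drives down the false-positive intrusion rate described above.
    return det.label in ALERT_CLASSES and det.confidence >= min_conf and in_roi(det, ROI)

print(should_alert(Detection("person", 150, 120, 0.9)))   # True
print(should_alert(Detection("animal", 150, 120, 0.95)))  # False: not an alert class
```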

Similarly, AI Loitering Detection can be used to receive alerts on suspicious activity outside any given store. The loitering time and region of interest are customisable in particular systems, which allows for a range of detection options. Smart security is advancing rapidly. As AI and 4K rise in adoption on smart video cameras, these higher video resolutions are driving the demand for more data to be stored on-camera. AI and smart video promise to extract greater insights from security video.
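At its core, a loitering detector reduces to tracking dwell time inside a region of interest. The following is a toy sketch under the assumption that an upstream tracker supplies stable track IDs; the threshold is illustrative:

```python
# Toy sketch of loitering detection: track how long each object has stayed
# inside a region of interest and alert once a dwell threshold is exceeded.
# Track IDs are assumed to come from an upstream tracker.
import time

LOITER_SECONDS = 120.0   # customisable loitering time
first_seen = {}          # track_id -> first timestamp seen inside the ROI

def update(track_id, inside_roi, now=None):
    """Return True when a tracked object has loitered past the threshold."""
    if now is None:
        now = time.time()
    if not inside_roi:
        first_seen.pop(track_id, None)   # reset once the object leaves the ROI
        return False
    first_seen.setdefault(track_id, now)
    return now - first_seen[track_id] >= LOITER_SECONDS

# Simulated observations: track 7 stays inside the ROI for three minutes.
print(update(7, True, now=0.0))     # False: just arrived
print(update(7, True, now=180.0))   # True: loitering alert
```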

Complex, extensive camera networks will already require a large amount of data storage, particularly if this is 24/7 monitoring from smart video-enabled devices. Newer edge computing will play an important role in capturing, collecting, and analysing data. There are many more types of cameras being used today, such as body cameras, dashboard cameras, and new Internet of Things (IoT) devices and sensors.

Video data is so rich nowadays, you can analyse it and deduce a lot of valuable information in real time, instead of post-event. In smart cities applications, the challenge of identifying both physical and invisible threats to meet urban citizens' needs will demand a security response that is proactive, adaptable and dynamic.

As we look ahead to the future of public safety, it's clear that new technologies, driven by artificial intelligence (AI), can dramatically improve the effectiveness of today's physical security space. For smart cities, the use of innovative AI and machine learning technologies has already started to help optimise security solutions.

In sports stadium applications, AI's role in getting fans and spectators back after the COVID pandemic is huge, through capabilities such as social distance monitoring, crowd scanning/metrics, facial recognition, fever detection, track and trace and providing behavioural analytics. Technologies such as AI-powered collaboration platforms now work alongside National Leagues, Franchises and Governing Bodies to implement AI surveillance software into their CCTV/surveillance cameras.

This is now creating a more collaborative effort from the operations team in stadiums, rather than purely security. AI surveillance software, when implemented into the surveillance cameras, can be accessed by designated users on any device and on any browser platform. One of the biggest advantages of using AI technology is that it's possible to integrate this intelligent software into building smarter, safer communities and cities.

Essentially, this means developing a layered system that connects multiple sensors for the detection of visible and invisible threats. Integrated systems mean that threats can be detected and tracked, with onsite and law enforcement notified faster, and possibly before an assault begins to take place. In many ways, it's the equivalent of a neighbourhood watch programme made far more intelligent through the use of AI.

Using technology in this way means that thousands of people can be screened seamlessly and quickly, without invading their civil liberties or privacy. AI's ability to detect visible or invisible threats or behavioural anomalies will prove enormously valuable to many sectors across our global economy. Revolutionary AI-driven technologies can help to fight illicit trade across markets. AI technologies in this specific application promise to help build safer and more secure communities in the future.

AI can support the ongoing fight against illicit trade on a global scale in a tangible way. For financial transactions at risk of fraud and money laundering, for example, tracking has become an increasing headache if done manually. As a solution to this labour-intensive process, AI technology can be trained to follow all the compliance rules and process a large number of documents - often billions of pages of documents - in a short period of time.


World’s First AI-Solution for Primary Diagnosis of Breast Cancer Deployed by Ibex Medical Analytics and KSM, the Research and Innovation Center of…

TEL AVIV, Israel, Dec. 16, 2020 /PRNewswire/ -- Ibex Medical Analytics, a pioneer in artificial intelligence (AI)-based cancer diagnostics, and KSM, the Research and Innovation Center of Maccabi Healthcare Services, Israel's leading HMO, announced today a first-of-a-kind pilot of Ibex's Galen Breast solution for AI-powered primary diagnosis of breast cancer at Maccabi's Pathology Institute.

Breast cancer is the most common malignant disease in women worldwide, with over 2 million new cases each year. Early and accurate detection is critical for effective treatment and saving women's lives.

The pilot at Maccabi's Pathology Institute includes 2,000 breast biopsies on which pathologists will use Galen Breast as a First Read application. It is the first-ever deployment of an AI application for primary diagnosis of breast cancer.

During the pilot, all breast biopsies examined at Maccabi will be digitized using a digital pathology scanner, and automatically analyzed by the Galen Breast solution prior to review by a pathologist. The solution detects suspicious findings on biopsies, such as regions with high probability of including cancer cells, and classifies them to one of three risk levels, ranging from high risk of cancer to benign. The Galen Breast First Read is designed to help pathologists diagnose breast biopsies more accurately, more efficiently, and at a considerably faster turnaround time compared to diagnosis on a microscope.
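The triage step described here amounts to mapping a model score onto a worklist category. A minimal sketch follows, with thresholds that are purely illustrative rather than Ibex's actual operating points:

```python
# Minimal sketch of three-level triage: map a model's per-slide malignancy
# score to a risk level for the pathologist's worklist. Thresholds are
# illustrative, not Ibex's real operating points.
def risk_level(cancer_probability):
    if cancer_probability >= 0.8:
        return "high risk"      # review first, shortest turnaround
    if cancer_probability >= 0.2:
        return "intermediate"   # routine review
    return "likely benign"     # review last

for p in (0.95, 0.4, 0.05):
    print(p, "->", risk_level(p))
```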

Ibex's AI solution has been used at Maccabi's Pathology Institute since 2018, and already today, all breast and prostate biopsies undergo AI-based second read, supporting improved accuracy and quality control. The solution alerts when discrepancies between the pathologist's diagnosis and the AI algorithm's findings are detected, thus providing a safety net in case of error or misdiagnosis.

"We are proud to use AI as an integral part of breast cancer diagnosis," said Judith Sandbank, MD and Director of the Pathology Institute at Maccabi. "We have already had a successful experience with Ibex's AI solution, enabling us to implement quality control and perform second read on biopsies, and now we are making a significant leap forward with the integration of AI into primary cancer diagnosis."

"Artificial intelligence is revolutionizing healthcare, and its integration into clinical practice will significantly improve the ability to diagnose cancer quickly and efficiently," said Dr. Chaim Linhart, Co-founder and CTO of Ibex. "Our solutions are used in routine practice in pathology laboratories worldwide, and have already helped detect breast and prostate cancers that were misdiagnosed by pathologists as benign. It is now time to take AI to the next level and employ its capabilities across a broader range of the diagnostic workflow."

About Ibex Medical Analytics

Ibex uses AI to develop clinical-grade solutions that help pathologists detect and grade cancer in biopsies. The Galen Prostate and Galen Breast are the first-ever AI-powered cancer diagnostics solutions in routine clinical use in pathology and deployed worldwide, empowering pathologists to improve diagnostic accuracy, integrate comprehensive quality control and enable more efficient workflows. Ibex's solutions are built on deep learning algorithms trained by a team of pathologists, data scientists and software engineers. For more information go to http://www.ibex-ai.com.

About KSM

KSM (Kahn-Sagol-Maccabi), the Maccabi Research and Innovation Center, was founded in 2016 in cooperation with Morris Kahn and Sami Sagol. KSM has unique access to Maccabi's professional abilities and wealth of medical knowledge, including a large database of 2.5 million members with 30 years of data collection. We are a strong force in multiple global health areas. Our Innovation & Big Data team utilizes advanced data sources and AI technologies. We have founded Israel's largest Biobank (over 450K samples collected and analyzed), Clinical Research activities, and a highly awarded Epidemiological Research department. KSM is leading advanced global health improvements by partnering with well-known scientists, researchers, academic institutions, pharmaceutical companies, startups, and tech companies to create and expedite medical breakthroughs. Our co-operations within the global health eco-system allow us to deliver groundbreaking discoveries and solutions - shaping the future of health. http://www.ksminnovation.com.

Media Contact: Laura Raanan, GK for Ibex, [emailprotected]

SOURCE Ibex Medical Analytics


Revealed: Where TfL Is Deploying 20 AI Cameras Around London, and Why – Gizmodo UK

London's CCTV cameras are about to get a lot smarter, thanks to a new partnership between Transport for London, the capital's transport agency, and VivaCity Labs. Together, the pair are rolling out 20 new artificial-intelligence-enabled cameras across the centre of the city.

But why? The reason TfL is interested in the cameras is a bit of a no-brainer: as the organisation is responsible for making sure Londoners can get around the city, the more data it has, the better. If it can more accurately monitor crowding and congestion, and understand the journeys people are actually taking, that can inform both how TfL plans infrastructure improvements for the future (would extra cycle lanes here be a good idea?) and immediate traffic management challenges (keep the lights green for 3 seconds longer on this road after a football match).

Last July, TfL rolled out its Tube tracking full time - which uses the wifi signals from our phones to follow us around the Tube network for similar reasons. But taking the Tube is only one type of journey; what about cars, buses, bikes and pedestrians? TfL already has hundreds of traffic cameras placed around London (you can even watch them in close to real time), but these cameras are dumb, and to understand what is going on in the pictures requires a human operator to take a look and decide what the pictures are telling us.

Hence, enter stage left VivaCity Labs. The VivaCity Sensor Platform makes use of an artificial intelligence layer on top of the cameras, to analyse images and reveal insights that TfL might find useful.

Image: VivaCity

For example, point it at a road and it will count all of the vehicles that pass by - but instead of just counting vehicles like many existing systems, it will classify the vehicles by type, giving TfL a breakdown on the number of cars, vans, lorries and so on. It will also estimate the speed of each road user (though the company's documentation points out this is not for law enforcement). According to the launch press release, VivaCity's cameras are up to 98 per cent accurate.
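Conceptually, the counting and speed-estimation layer on top of a detector is straightforward. Here is a toy sketch; the detector output format and the metres-per-pixel calibration are assumptions, and as noted above, such naive speed estimates are not suitable for enforcement:

```python
# Toy sketch of classified traffic counting: tally detections by vehicle
# type and estimate speed from positions across frames. The detector output
# format and the calibration constant are assumptions.
from collections import Counter

counts = Counter()

def record(detections):
    """detections: list of (label, ...) tuples from an upstream classifier."""
    counts.update(label for label, *_ in detections)

def estimate_speed(x0, x1, dt_seconds, metres_per_pixel=0.05):
    # Naive speed estimate from pixel displacement; real systems calibrate
    # each camera individually. Not suitable for enforcement.
    return abs(x1 - x0) * metres_per_pixel / dt_seconds * 3.6  # km/h

record([("car",), ("car",), ("van",), ("cycle",)])
print(dict(counts))                               # {'car': 2, 'van': 1, 'cycle': 1}
print(round(estimate_speed(100, 380, 1.0), 1), "km/h")
```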

In a response to a Freedom of Information request from Gizmodo UK, TfL also revealed that one unique feature of the camera is that it can be trained to spot new and specific vehicle types, such as London buses or cargo bikes, to differentiate them from similar vehicles. In other words, TfL could soon have data on London's busiest Deliveroo routes.

The cameras are also able to identify how people are moving within a camera's field of vision - so it could conceivably be used to, for example, see how long it takes people to cross the road or how the road space is being used.

Judging by the map of locations, it appears that for this initial rollout, cameras are being placed around London's inner ring road, following basically the Congestion Charge zone, as well as on a number of bridges and pinch points in the central core of the city.

This makes a lot of sense: presumably the eventual plan is to replace the actual Congestion Zone cameras with the VivaCity sensors. This is wild speculation, but not only would that mean they can provide more detailed analytics; by being generic cameras running software, it could ultimately mean saved money and more flexibility. Specialist cameras would not be required for Congestion Zone enforcement, and if TfL wanted to expand or retract the zone, it'd simply be a case of pressing a button in some software to switch on number plate logging at other VivaCity camera locations, rather than a bigger manual process. (Again, wild speculation, but we can imagine a future where the Congestion Zone moves dynamically - maybe covering a wider area on weekdays than weekends, say.)

Overall, 43 cameras are being placed around 20 locations for a trial period of two years. Here's a full list of locations:

Image: VivaCity

The rollout will, of course, provoke privacy concerns. After all, these cameras are not just taking pictures, but are interpreting them too. Though in all of its communications on the new cameras, TfL has downplayed any privacy concerns, saying in its original press release that "All video captured by the sensors is processed and discarded within seconds, meaning that no personal data is ever stored."

The data, it emphasises, is processed in the camera units themselves - and it is only the outputted data, such as counts of the number of vehicles, that is being sent back to TfL for storage. All of the images collected by the cameras are apparently discarded within seconds.
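The privacy architecture TfL describes boils down to a simple rule: inference runs on the device, and only aggregates leave it. A minimal sketch of that pattern, with a placeholder standing in for the on-camera network:

```python
# Minimal sketch of the on-device privacy model: frames are analysed inside
# the camera unit and immediately discarded; only aggregate counts are sent.
# analyse() stands in for the on-camera neural network.
def analyse(frame):
    # Placeholder for the in-camera model: returns per-class counts.
    return {"car": 3, "pedestrian": 5}

def process_stream(frames, send):
    for frame in frames:
        counts = analyse(frame)   # processed inside the camera unit
        send(counts)              # only the outputted data leaves the device
        del frame                 # the image itself is discarded within seconds

process_stream([object(), object()], send=print)
```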

What's most illustrative of TfL's careful approach is that it has confirmed to Gizmodo UK that it does not intend to use one other feature of VivaCity's product: the ability to track vehicles as they travel across the city using number-plate recognition.

In response to our FOI, TfL says this:

Our requirements for this data include cycle and pedestrian counts, full traffic classified counts (13 types of traffic), turning movement counts, link delays, queue length monitoring and pedestrian crowding.

Notice how all of these tasks can be carried out with just a single camera - rather than requiring data to be stored and matched up elsewhere. We've asked TfL to confirm whether or not our hunch is correct.

However, there are still gaps that might leave privacy experts asking questions. According to TfL, VivaCity Labs has produced a number of Privacy Impact Assessments, but because they were carried out by VivaCity, TfL says that it does not hold the assessments - and of course, being a private company, VivaCity is not subject to Freedom of Information laws. This also implies that VivaCity, not TfL, is the data controller of this data. We've asked TfL for a more detailed explanation of its rationale for not releasing these private assessments.

Ultimately, this camera rollout sits at the nexus of a very familiar debate. As we saw with TfL's WiFi tracking system, there is a very real trade-off between giving planners better data and respecting privacy. TfL does, once again, appear to have behaved broadly responsibly - but it is still an important debate to have. The future is one where every CCTV camera will, by default, have this sort of functionality baked in - so better to debate now whether the gains are worth it, rather than risk waiting until it is too late.

Featured photo by Paweł Czerwiński on Unsplash


There's Nothing Fake About These 10 Artificial Intelligence Stocks to Buy – InvestorPlace

Artificial intelligence is one of those catchy phrases that continues to grab investors' attention. Like 5G, it tugs on the sleeves of those looking to get in on cutting-edge technology. While it is a very important sector of technology, investors need to be wary of hype and focus on reality before buying AI stocks.

Take, for example, International Business Machines (NYSE:IBM). IBM has been on the front line of AI with its Watson-branded products and services. Sure, it did a bang-up job on Jeopardy! and it partners with dozens of companies. But for IBM shareholders, Watson is not a portfolio favorite.

Over the past five years, IBM has lost 28.7% in price compared to the S&P 500's gain of 37.5% and the S&P Information Technology Index's gain of 130%. And over the past 10 years, IBM's AI leadership has generated a shareholder loss of 3.4%.

[Chart: IBM (white), S&P 500 (red) and S&P 500 Information Technology (gold) indexes, total return. Source: Bloomberg]

But AI is more than just a party trick like Watson. AI brings algorithms into computers. These algorithms then take internal and external data, and in turn process decisions behind all sorts of products and services. Think, for example, of something as simple as targeted ads. Data is gathered and processed while you simply shop online.

But AI can go much further. Think, of course, of autonomous vehicles. AI takes all sorts of input data and the central processor makes calls to how the vehicle moves and at what speed and direction.

Or in medicine, AI brings quicker analysis of symptoms, diagnostic data and tests.

And the list goes on.

So then what do I bring to the table as a human? I have found ten AI stocks that aren't just companies using AI. These are companies to own and follow for years, complete with dividends along the way.

Let's start with the index of the best technology companies found inside that S&P Information Technology index cited earlier. The Vanguard Information Technology ETF (NYSEARCA:VGT) synthetically invests in the leaders of that index. It should be the starting point for all technology investing, as it offers a solid foundation for portfolios.

[Chart: Vanguard Information Technology ETF (VGT), total return. Source: Bloomberg]

The exchange-traded fund continues to perform well. Its return for just the past five years runs at 141.1% for an average annual equivalent return of 19.2%. This includes the major fall in March 2020.

Before I move to the next of my AI stocks, it's important to note that data doesn't just get collected. It also has to be communicated quickly and efficiently to make processes work.

Take the AI example mentioned earlier for autonomous vehicles. AI driving needs to know not just what is in front of the vehicle, but what is coming around the next corner. This means having dependable data transmission. And the two leaders that make this happen now and will continue to do so with 5G are AT&T (NYSE:T) and Verizon (NYSE:VZ).

[Chart: AT&T (white) and Verizon (VZ), total return. Source: Bloomberg]

Much like other successful AI stocks, AT&T and Verizon have lots of communications services and content. This provides some additional opportunities and diversification but can limit investor interest in the near term. This is the case with AT&T and its Time Warner content businesses. But this also means that right now, both of these stocks are good bargains.

And they have a history of delivering to shareholders. AT&T has returned 100% over the past 10 years, while Verizon has returned 242%.

AI takes lots of equipment. Chips, processors and communications gear all go into making AI computers and devices. And you should buy these two companies for their role in equipment: Samsung Electronics (OTCMKTS:SSNLF) and Ericsson (NASDAQ:ERIC).

Samsung is one of the global companies that is essential for nearly anything that involves technology and hardware. Nearly every device out there is either a Samsung product or contains components invented and produced by Samsung.

And Ericsson is one of the leaders in communications gear and systems. Its products make AI communications and data transmission work, including on current 4G and 5G.

[Chart: Samsung Electronics (white) and Ericsson (red), total return. Source: Bloomberg]

Over the past 10 years Samsung has delivered a return of 235.4% in U.S. dollars while Ericsson has lagged, returning a less-than-stellar 6.5%.

Both have some challenges in their stock prices. Samsung's shares are more challenging to buy in the U.S. And Ericsson faces economic challenges as it's deep in the European market. But in both cases, you get great products from companies that are still value buys.

Samsung is valued at a mere 1.2 times book and 1.3 times trailing sales, which is significantly cheaper than its global peers. And Ericsson is also a bargain, trading at a mere 1.3 times trailing sales.

To make AI work you need lots of software. This brings in Microsoft (NASDAQ:MSFT). The company is one of the cornerstones of software, and its products have all sorts of tech uses.

And AI, especially on the move, needs quick access to huge amounts of data in the cloud. Microsoft and its Azure-branded cloud fit the bill.

[Chart: Microsoft (MSFT), total return. Source: Bloomberg]

Microsoft, to me, is the poster child of successful technology companies. It went from one-off unit sales of packaged products to recurring income streams from software subscriptions. Now it's pivoting to cloud services. And shareholders continue to see rewards. The company's stock has returned 702.7% over the past 10 years alone.

AI and the cloud are integral in their processing and storage of data. But beyond software and hardware, you need somewhere to house that hardware, complete with power and climate controls, transmission lines and wireless capabilities.

This means data centers. And there are two companies set up as real estate investment trusts (REITs) that lead the way with their real estate and data centers. These are Digital Realty Trust (NYSE:DLR) and Corporate Office Properties (NYSE:OFC).

Digital Realty has the right name, as Corporate Office Properties doesn't tell the full story. The latter company has Amazon (NASDAQ:AMZN) and its Amazon Web Services (AWS) as exclusive clients in core centers, including the vital hub in Northern Virginia.

And the stock-price returns show the power of the name. Digital Realty has returned 310.9% against a loss of 0.9% for Corporate Office Properties.

[Chart: Corporate Office Properties (white) and Digital Realty (red), total return. Source: Bloomberg]

But this means that while both are good buys right now, Corporate Office Properties is a particular bargain. The stock price is at a mere 1.7 times the company's book value.

Now I'll get to the newer companies in the AI space. These are the companies that are in various stages of development. Some are private now, or are pending public listings. Others are waiting for larger companies to snap them up.

Most individual investors, unless they have a net worth nearing $1 billion, don't get access. But I have a company that brings this access, and it's my stock for the InvestorPlace Best Stocks for 2020 contest.

Hercules Capital (NYSE:HTGC) is set up as a business development company (BDC) that provides financing to all levels of technology companies. Along the way, it takes equity participation in these companies.

It supports hundreds of current technology companies using or developing AI for products and services along with a grand list of past accomplishments. The current portfolio can be found here.

I have followed this company since its early days. I like that it is very investor focused, complete with big dividend payments throughout the many years. And it has returned 184.3% over just the past 10 years alone.

[Chart: Hercules Capital (HTGC), total return. Source: Bloomberg]

Who doesn't buy goods and services from Amazon? I am a Prime member with video, audio and book services. And I also have many Alexa devices that I use throughout the day. While I don't contract directly with its AWS, I use its cloud storage as part of other services. Few major companies that are part of daily life make use of AI more than Amazon.

The current lockdown mess has made Amazon a further necessity. Toilet paper, paper towels, cleaning supplies, toothpaste, soap and so many other items are sold and delivered by Amazon.

And I also use the platform for additional digital information from the Washington Post. Plus, I get food and other household goods from Whole Foods, and products for my miniature dachshund, Blue, come from Amazon.

This is a company that I have always liked as a consumer, but didn't completely get as an investor. Growth for growth's sake was what it appeared to be from my perspective. But I have been coming to a different understanding of what Amazon means as an investment.

It really is more of an index of what has been working in the U.S. for cloud computing and goods and services. And the current mess makes it not just more relevant but a necessity. Its proof comes from the sales that keep rolling up for the company on real GAAP terms.

[Chart: Amazon sales revenue (GAAP). Source: Bloomberg]

I know that my subscribers to my Profitable Investing don't pay to have me tell them about Amazon. But I am recommending buying shares, as the company is really a leading index of the evolving U.S. It is fully engaged in benefitting from AI, like my other AI stocks.

Neil George was once an all-star bond trader, but now he works morning and night to steer readers away from traps and into safe, top-performing income investments. Neil's new income program is a cash-generating machine, one that can help you collect $208 every day the markets open. Neil does not have any holdings in the securities mentioned above.


Cloak your photos with this AI privacy tool to fool facial recognition – The Verge

Ubiquitous facial recognition is a serious threat to privacy. The idea that the photos we share are being collected by companies to train algorithms that are sold commercially is worrying. Anyone can buy these tools, snap a photo of a stranger, and find out who they are in seconds. But researchers have come up with a clever way to help combat this problem.

The solution is a tool named Fawkes, created by scientists at the University of Chicago's Sand Lab. Named after the Guy Fawkes masks donned by revolutionaries in the V for Vendetta comic book and film, Fawkes uses artificial intelligence to subtly and almost imperceptibly alter your photos in order to trick facial recognition systems.

The way the software works is a little complex. Running your photos through Fawkes doesn't make you invisible to facial recognition exactly. Instead, the software makes subtle changes to your photos so that any algorithm scanning those images in future sees you as a different person altogether. Essentially, running Fawkes on your photos is like adding an invisible mask to your selfies.

Scientists call this process "cloaking," and it's intended to corrupt the resource facial recognition systems need to function: databases of faces scraped from social media. Facial recognition firm Clearview AI, for example, claims to have collected some three billion images of faces from sites like Facebook, YouTube, and Venmo, which it uses to identify strangers. But if the photos you share online have been run through Fawkes, say the researchers, then the face the algorithms know won't actually be your own.

According to the team from the University of Chicago, Fawkes is 100 percent successful against state-of-the-art facial recognition services from Microsoft (Azure Face), Amazon (Rekognition), and Face++ by Chinese tech giant Megvii.

What we are doing is using the cloaked photo in essence like a Trojan Horse, to corrupt unauthorized models to learn the wrong thing about what makes you look like you and not someone else, Ben Zhao, a professor of computer science at the University of Chicago who helped create the Fawkes software, told The Verge. Once the corruption happens, you are continuously protected no matter where you go or are seen.
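In outline, cloaking is an adversarial-perturbation optimisation. The sketch below is a heavily simplified, conceptual version using PyTorch, assuming some pretrained face-embedding model `embed`; the real Fawkes system bounds the change with a perceptual (DSSIM) budget and targets several feature extractors at once:

```python
# Heavily simplified, conceptual sketch of "cloaking". `embed` is an assumed
# pretrained face-embedding model (anything mapping an image tensor to a
# feature vector); all parameters here are illustrative.
import torch

def cloak(image, target_image, embed, steps=100, lr=0.01, budget=0.03):
    """image, target_image: float tensors in [0, 1] of the same shape."""
    delta = torch.zeros_like(image, requires_grad=True)
    optimizer = torch.optim.Adam([delta], lr=lr)
    target_feat = embed(target_image).detach()   # another identity's features
    for _ in range(steps):
        optimizer.zero_grad()
        # Pull the cloaked photo's embedding toward the target identity, so
        # models trained on it learn the wrong notion of your face.
        loss = torch.nn.functional.mse_loss(embed(image + delta), target_feat)
        loss.backward()
        optimizer.step()
        with torch.no_grad():
            delta.clamp_(-budget, budget)   # keep the change imperceptible
    return (image + delta).clamp(0, 1).detach()
```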

The group behind the work (Shawn Shan, Emily Wenger, Jiayun Zhang, Huiying Li, Haitao Zheng, and Ben Y. Zhao) published a paper on the algorithm earlier this year. But late last month they also released Fawkes as free software for Windows and Macs that anyone can download and use. To date they say it's been downloaded more than 100,000 times.

In our own tests we found that Fawkes is sparse in its design but easy enough to apply. It takes a couple of minutes to process each image, and the changes it makes are mostly imperceptible. Earlier this week, The New York Times published a story on Fawkes in which it noted that the cloaking effect was quite obvious, often making gendered changes to images like giving women mustaches. But the Fawkes team says the updated algorithm is much more subtle, and The Verge's own tests agree with this.

But is Fawkes a silver bullet for privacy? It's doubtful. For a start, there's the problem of adoption. If you read this article and decide to use Fawkes to cloak any photos you upload to social media in future, you'll certainly be in the minority. Facial recognition is worrying because it's a society-wide trend, and so the solution needs to be society-wide, too. If only the tech-savvy shield their selfies, it just creates inequality and discrimination.

Secondly, many firms that sell facial recognition algorithms created their databases of faces a long time ago, and you can't retroactively take that information back. The CEO of Clearview, Hoan Ton-That, told the Times as much. "There are billions of unmodified photos on the internet, all on different domain names," said Ton-That. "In practice, it's almost certainly too late to perfect a technology like Fawkes and deploy it at scale."

Naturally, though, the team behind Fawkes disagrees with this assessment. They note that although companies like Clearview claim to have billions of photos, that doesn't mean much when you consider they're supposed to identify hundreds of millions of users. "Chances are, for many people, Clearview only has a very small number of publicly accessible photos," says Zhao. And if people release more cloaked photos in the future, he says, sooner or later the amount of cloaked images will outnumber the uncloaked ones.

On the adoption front, however, the Fawkes team admits that for their software to make a real difference it has to be released more widely. They have no plans to make a web or mobile app due to security concerns, but are hopeful that companies like Facebook might integrate similar tech into their own platform in future.

Integrating this tech would be in these companies' interest, says Zhao. After all, firms like Facebook don't want people to stop sharing photos, and these companies would still be able to collect the data they need from images (for features like photo tagging) before cloaking them on the public web. And while integrating this tech now might only have a small effect for current users, it could help convince future, privacy-conscious generations to sign up to these platforms.

"Adoption by larger platforms, e.g. Facebook or others, could in time have a crippling effect on Clearview by basically making [their technology] so ineffective that it will no longer be useful or financially viable as a service," says Zhao. "Clearview.ai going out of business because it's no longer relevant or accurate is something that we would be satisfied [with] as an outcome of our work."


How AI and tech could strengthen America’s border wall – Fox News

The best approach for border security and immigration control is a layered strategy, experts tell Fox News. This harnesses artificial intelligence, aerial drones, biometrics and other sophisticated technologies in addition to existing or future fencing or walls along U.S. borders.

Dr. Brandon Behlendorf, a noted border security expert and professor at the University of Albany, New York, told Fox News that advancements in technology have made virtual border security much more feasible. "Motion sensors, surveillance systems, drone cameras, thermal imaging -- they help form a barrier that is fed into operations centers all across the border."

"[This hinges on] the use of physical and virtual infrastructure, combined with patrol and response capabilities of agents, to provide multiple opportunities for detecting and interdicting illegal border crossings not just at the border, but also some distance from the border," he said. "You need to leverage the benefits of each with properly trained and outfitted agents to provide the most effective approach to border security. Neither a wall nor technology itself will suffice."

One of the most interesting innovations is called the Edgevis Shield, a surveillance platform originally developed for use in Afghanistan. The platform uses ground-based sensors that detect activity, and they are self-healing. The sensors form a mesh network, so if one of them is compromised, the entire network can self-correct and keep functioning. The shield can detect whether someone is moving on foot or in a vehicle, and it uses a low-latency wireless network.

Charles King, principal analyst of the Hayward, Calif.-based tech research firm Pund-IT, says other advancements are helping create a virtual border. Because a physical wall only stops illegal border crossings above ground, the U.S. Customs and Border Protection plans to deploy surveillance robots called Marcbots that can explore tunnels, similar to what the military uses today for bomb detections, he says.

The AVATAR (or Automated Virtual Agent for Truth Assessments in Real-time) is a kiosk being developed at San Diego State University. The kiosk uses artificial intelligence to ask questions at a border crossing and can detect physiological changes in expression, voice, and gestures.

For example, the kiosk might ask an immigrant if he or she is carrying any weapons, then look for signs of deception. The kiosk is currently being tested at Canadian border crossings.

Behlendorf says some of the most interesting work related to border patrol is in development at computer labs in the U.S., not at the actual border. Today, there are reams of data from the past that show how illegal immigrants have moved across the border and are then apprehended. This data provides a rich trove for machine learning to look for patterns and even predict likely behavior in the future. It's more than just tracking or blocking one individual crossing.

Developments in other fields related to pattern recognition, machine learning, and predictive analytics could greatly enhance the information with which sector and station commanders have to decide on allocations of key resources, Behlendorf said. Those efforts are starting to develop, and in my opinion over the next few years will form a cornerstone of virtual fence development.

One example of this: using analytics data, border patrol agents could determine where to allocate the most resources to augment a physical wall. There's already a precedent for this, he says. Los Angeles International Airport uses game theory to randomize how security guards go on patrol, rather than relying on the same set pattern that criminals and terrorists could predict.
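The game-theoretic idea is simply that patrols are sampled from a weighted mixed strategy rather than a fixed rotation. A toy sketch follows, with weights that are purely illustrative (real deployments derive them from a Stackelberg security game model):

```python
# Toy sketch of randomised patrols in the spirit of the LAX example: sample
# each shift's route from a weighted mixed strategy so the schedule cannot
# be predicted. Route names and weights are illustrative.
import random

routes = {"terminal_A": 0.4, "terminal_B": 0.35, "perimeter": 0.25}

def pick_route(rng=random):
    return rng.choices(list(routes), weights=routes.values(), k=1)[0]

random.seed(42)
print([pick_route() for _ in range(5)])  # an unpredictable sequence of routes
```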

"The technologies required for supporting a virtual wall, from sensors to surveillance drones to wireless networks and communications to advanced analytics, are more capable and mature today than they have ever been in the past," said Pund-IT's King. "The stars are better aligned for the development and deployment of virtual border security today than in the past."

In the future, border patrols could rely more on a virtual infrastructure -- the technology on the back end that looks for patterns, the facial recognition technology at borders -- for security.

In the end, it's all of the above that will help protect U.S. borders.


Ai | Pretty Cure Wiki | FANDOM powered by Wikia

This article is about the Doki Doki! Pretty Cure character Ai also known as Ai-chan. For the Yes! Pretty Cure 5 and GoGo! character, please go to Natsuki Ai.

Ai (アイ, Ai?) (or Dina in Glitter Force Doki Doki), also called Ai-chan (アイちゃん, Ai-chan?) by the girls, is a baby fairy mascot that appears in Doki Doki! Pretty Cure. She hatches from an egg. She used to be Cure Ace's partner, until they were separated when Ai was turned into an egg. They saw each other again in episode 23. In episode 46, it was revealed that Ai is in fact Princess Marie Ange, who was reverted back into an egg after she split her good and bad parts; Joe found her later on. Though she mispronounces her sentences, she ends them with "~kyupi".

Ai is a fair-skinned baby with big blue eyes that have a curled yellow marking at the lower corner, pale blue hearts on her cheeks, and a button nose. Her pink hair is worn in heart-shaped buns held by a yellow flower, and her bangs have a heart formed on the right side. She wears a yellow onesie with a white bib lined in light blue frills with a fuchsia heart on it, along with light purple booties. She has small angel wings.

According to Joe, he found her egg in a river and never found out her true origin. (DDPC18) However, long before that, she used to be Cure Ace's partner. When Cure Ace fought the Selfish King and lost, the two were separated: Cure Ace went to Earth, while Ai was changed back into an egg. (DDPC27)

Ai-chan hatches from a giant egg in front of the Cures. Joe was nearby and was the one who had the egg to start with. He explains that the girls need to use other Cure Loveads to take care of her. (DDPC08)

Ai has powers similar to Chiffon from Fresh Pretty Cure!. Ai can summon Loveads, which can help the Cures or help her. She can also make a barrier to protect herself, as seen in episode 11. She is also the partner of Aguri.

Aida Mana and Kenzaki Makoto: Joe calls them Ai-chan's mother and father, respectively. Both of them find Ai-chan cute, and promised to protect her.

Okada Joe: Joe seems to know a lot about Ai-chan. He tells the girls about other Cure Loveads to take care of Ai.

Madoka Aguri: She is Ai's transformation partner. She was also once part of her heart, representing her good half.

Regina: She was once part of Ai's heart, representing her bad half.

Ai (愛) - Ai means "love" in Japanese, as well as being a common girls' name in Japan.

Dina - A Hebrew name meaning "judged", given to the daughter of the biblical figures Jacob and Leah.[1] Dina could also be short for Adelina, meaning "noble"[2]; or Augustina[3], a feminine form of Augustus, which means "great" or "venerable".[4]

Ai's voice actress, Imai Yuka, has participated in one character song for the character she voices.


Sony's AI subsidiary is developing smarter opponents and teammates for PlayStation games – The Verge

In 2019, Sony quietly established a subsidiary dedicated to researching artificial intelligence. What exactly the company plans to do with this tech has always been a bit unclear, but a recent corporate strategy meeting offers a little more information.

"Sony AI [...] has begun a collaboration with PlayStation that will make game experiences even richer and more enjoyable," say notes from a recent strategy presentation given by Sony CEO Kenichiro Yoshida. "By leveraging reinforcement learning, we are developing Game AI Agents that can be a player's in-game opponent or collaboration partner."

This is pretty much what you'd expect from a partnership between PlayStation and Sony's AI team, but it's still good to have confirmation! Reinforcement learning, which relies on trial and error to teach AI agents how to carry out tasks, has proved to be a natural fit for video game environments, where agents can run at high speeds under close observation. It's been the focus of heavy-hitting research, like DeepMind's StarCraft II AI.
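For a sense of what reinforcement learning involves at its simplest, here is a tabular Q-learning sketch; the `env` interface is an assumption for illustration, and game AI agents of the kind Sony describes replace the table with deep neural networks:

```python
# Tabular Q-learning, the simplest form of reinforcement learning: improve a
# policy purely from reward signals through trial and error. The `env`
# interface is assumed: reset() -> state, actions(state) -> list of actions,
# step(state, action) -> (next_state, reward, done).
import random

def q_learning(env, episodes=500, alpha=0.1, gamma=0.99, epsilon=0.1):
    q = {}  # (state, action) -> estimated long-term return
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            actions = env.actions(state)
            if random.random() < epsilon:                 # explore sometimes
                action = random.choice(actions)
            else:                                         # otherwise exploit
                action = max(actions, key=lambda a: q.get((state, a), 0.0))
            next_state, reward, done = env.step(state, action)
            best_next = max((q.get((next_state, a), 0.0)
                             for a in env.actions(next_state)), default=0.0)
            old = q.get((state, action), 0.0)
            # Temporal-difference update toward reward + discounted future value.
            q[(state, action)] = old + alpha * (reward + gamma * best_next * (not done) - old)
            state = next_state
    return q
```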

Other big tech companies with gaming interests, such as Microsoft, are also exploring this space. But while Microsoft's efforts are tilted towards pure research, Sony's sound like they're more focused on getting this research out of the lab and into video games, pronto. The end result should be smarter teammates as well as opponents.

This tidbit was just one point in the presentation, though, in which Sony laid out numerous plans for its future growth. Here are some of the other ambitions mentioned:

For more details you can check out Sony's presentation for yourself here. Though, be prepared to wade through some absolutely incredible corporation-speak. We particularly liked the opening declaration that the company has now implemented structural reform that "liberated us from a loss-making paradigm." In other words: they changed things so that Sony makes money instead of losing it! Got to dress that up somehow, I guess.


New AI test ‘can identify Covid-19 within one hour’ – Aberdeen Evening Express

A new test powered by artificial intelligence (AI) could be capable of identifying coronavirus within one hour, according to new research.

Its developers say it can rapidly screen people arriving at hospitals for Covid-19 and accurately predict whether or not they have the disease.

The Curial AI test has been developed by a team at the University of Oxford and assesses data typically gathered from patients within the first hour of arriving in an emergency department, such as blood tests and vital signs, to determine the chance of a patient testing positive for Covid-19.

Testing for the virus currently involves the molecular analysis of a nose and throat swab, with results having a typical turnaround time of between 12 and 48 hours.

However, the Oxford team said their tool could deliver near-real-time predictions of a patient's Covid-19 status.

In a study running since March, the researchers have tested the AI tool on data from 115,000 visits to A&E at Oxford University Hospitals (OUH).

Study lead Dr Andrew Soltan said the tool had accurately predicted a patient's Covid-19 status in more than 90% of cases, and argued that it could be a useful tool for the NHS.

"Until we have confirmation that patients are negative, we must take additional precautions for patients with coronavirus symptoms, which are very common," he said.

"The Curial AI is optimised to quickly give negative results with high confidence, safely excluding Covid-19 at the front door and maintaining flow through the hospital.

"When we tested the Curial AI on data for all patients coming to OUH's emergency departments in the last week of April and the first week of May, it correctly predicted patients' Covid status more than 90% of the time."

He added that the researchers now hope to carry out real-world trials of the technology.

"The next steps are to deploy our AI into the clinical workflow and assess its role in practice," he said.

"A strength of our AI is that it fits within the existing clinical care pathway and works with existing lab equipment. This means scaling it up may be relatively fast and cheap.

"I hope that our AI may help keep patients and staff safer while waiting for results of the swab test."
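The "rule-out" behaviour Soltan describes corresponds to choosing a classifier's decision threshold for very high sensitivity, so that a negative prediction can be trusted. A toy sketch on synthetic data follows, assuming scikit-learn; nothing here reflects the Curial model itself:

```python
# Toy sketch of "optimised to quickly give negative results with high
# confidence": pick the decision threshold that keeps sensitivity very high,
# so a score below it is a confident negative. Synthetic data; scikit-learn
# is assumed to be available.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))                     # stand-in for bloods + vitals
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=1000) > 1).astype(int)

model = LogisticRegression().fit(X, y)
scores = model.predict_proba(X)[:, 1]
fpr, tpr, thresholds = roc_curve(y, scores)

# Highest threshold that still catches >= 95% of positives: below it, the
# model can safely "rule out" the disease.
threshold = thresholds[np.argmax(tpr >= 0.95)]
print("rule-out threshold:", round(float(threshold), 3))
```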


Playing a piano duet with Google’s new AI tool is fun – CNET

The yellow notes are those played by the A.I. Duet.

Wanna play a piano duet but nobody's around? No worries; you still can, courtesy of Google's new interactive experiment called A.I. Duet. Basically, you play a few notes and the computer plays other notes in response to your melody.

What's special about A.I. Duet is that it plays with you using machine learning, and not just as a machine that's programmed to play music with notes and rules hard-coded into it.

According to Yotam Mann, a member of Google's Creative Lab team, A.I. Duet has been exposed to a lot of examples of melodies. Over time, it learns the relationships between notes and timing and builds its own music maps based on what it's "listened" to. These maps are saved in the A.I.'s neural networks. As you play music to the computer, it compares what you're playing with what it's learned and responds with the best match in real time. This results in "natural" responses, and the computer can even produce something it was never programmed to do.
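The simplest stand-in for this kind of learned note map is a Markov chain over note transitions; the real system uses neural networks trained on melodies (via Magenta), but a toy sketch shows the idea of responding by sampling likely continuations:

```python
# Toy stand-in for A.I. Duet's idea: learn note-to-note transition statistics
# from example melodies, then reply to a player's phrase by sampling likely
# continuations. Notes are MIDI pitches; melodies here are made up.
import random
from collections import Counter, defaultdict

transitions = defaultdict(Counter)

def train(melodies):
    for melody in melodies:
        for a, b in zip(melody, melody[1:]):
            transitions[a][b] += 1   # count how often b follows a

def respond(phrase, length=4):
    note, reply = phrase[-1], []
    for _ in range(length):
        options = transitions.get(note)
        if not options:
            break
        note = random.choices(list(options), weights=options.values(), k=1)[0]
        reply.append(note)
    return reply

train([[60, 62, 64, 65, 67], [67, 65, 64, 62, 60], [60, 64, 67, 72]])
random.seed(1)
print(respond([60, 62, 64]))   # e.g. a short answering phrase
```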

You can try A.I. Duet here. You don't need to be a musician to use it, because the A.I. responds even if you just smash on the keyboard. And in that case, its notes definitely sound better than yours.

A.I. Duet is part of a project called Magenta that's being run by the Google Brain unit. It's an open-source effort that's available for download.


5 Fintech Companies Using AI to Improve Business – Singularity Hub

Artificial intelligence may be all the craze in Silicon Valley, but on Wall Street, well, there's a lot of skepticism.

High-powered algorithms are not a new phenomenon in finance, and for this industry, the name of the game is efficiency and precision.

Quite frankly, finance executives want systems that, in one way or another, make money. Because of this, new wild and flashy AI systems that just make something smart won't fly.

The fintech companies that are successfully leveraging AI today are the ones that have found a very concrete way to apply the technology to an existing business problem. For example, technologies such as specialized hardware, big data analytics, and machine learning algorithms are being used in fintech to augment tasks that people already perform.

At the Singularity University Exponential Finance Summit this week, Neil Jacobstein, faculty chair of Artificial Intelligence and Robotics at SU, shared some of the most interesting AI companies in fintech right now.

Not surprisingly, these companies each have a clear market application and reduce friction in the business problems they address.

Numerai is a new kind of hedge fund that is built by crowdsourcing knowledge through a massive network of hedge funds: the system collects hundreds of thousands of financial models and individual predictions. With this information, Numerai is building its own financial models that incorporate the algorithms submitted through the crowdsourced community. Numerai has already secured funding from First Round Capital and Union Square Ventures, which is no small feat.

In 2010, AlphaSense launched its intelligent search engine, which uses AI, natural language processing algorithms, and advanced linguistic search tools to provide researchers with critical insights with serious accuracy and speed. Financial analysts can pose questions to AlphaSense's systems and get insights that are significantly more customized and accurate than a simple Google search would provide. It's a great example of an AI augmenting a critical task in finance: research.

Opera is helping companies turn their big data into predictive insights and business intelligence. The company uses pattern recognition to identify what they call "signals," meaning actionable insights from data. Their signals help researchers understand conditions that may be happening in the market, or the world at large, so they can act quickly on these changes.

AppZen is a very practical solution to one of every executive's most arduous tasks: submitting expense reports. The system uses AI to audit 100 percent of employee expenses and then generates an expense report in real time. Automating this process saves companies hours of lost productivity. AppZen also gives companies more confidence in their ability to flag suspicious charges. So, if you've been considering expensing that pricey night out with clients, don't, because AppZen will likely flag it.

CollectAI is a cloud-based software system that's shaking up the collection business. The system is able to mimic the voice and tone of a collection agent to gather important information over the phone about a collections case. With this information, CollectAI uses a self-learning algorithm to learn about the case, and then pulls knowledge from previous successful cases and applies those insights to decide how to best approach the situation at hand. The system gets better and better over time, which is pretty incredible.

Image Credit: Pond5


5 AI-powered companies gaining traction for 2017 – VentureBeat

AI is becoming a way of life for many of us. We check on flights using a chatbot like Mezi, we benefit from the AI within the booking engine used at Hopper's website, and we are sending messages to businesses more easily thanks to the machine learning at Yelp.

It should not come as a big surprise when the AI improves, advances, and becomes even more helpful. After all, taking a cue from the human brain, AI is always adapting, looking for new ways to help us on a constant iteration cycle. The engineers behind AI are keen to make the technology more powerful and integrated into our daily workflow, even when things get really complex.

That's why several companies are not interested in spinning their wheels when it comes to AI. Today at MB 2017, four companies made a splash with announcements that are intended to make their services even more competitive and help make your life easier.

One interesting upgrade has to do with the Mezi chatbot. The app uses AI algorithms to help with flight searches and other duties but is also powered by human agents. Today, the company announced Mezi for Business. The new service, intended for travel agents and corporate travel reps, will improve efficiency and productivity.

Similar to the consumer app, it employs algorithms to help with travel booking and management and much more.

"We have decided to go all-in on travel," says Swapnil Shinde, the CEO and founder of Mezi, speaking at MB 2017. "We empower businesses with a suite of travel bots that automate requests. For travel agents we offer a state-of-the-art travel dashboard."

Another example of gaining traction: Yelp is using machine learning to facilitate and improve the interactions between customers and businesses, fine-tuned behind the scenes by AI. Some 35,000 messages are fed through its machine learning tech, which uses data from service companies to learn geofencing parameters and to extract details about the services themselves. Yelp is also using machine learning to weed through content and verify it, making sure that five-star review of an auto repair business is valid.

The last feature, requesting a quote from a business, is also AI-enabled. For example, it makes sure a business actually matches the request.

"We estimate that every month, Yelp sends billions of dollars of leads to local service businesses listed on our site through the Request A Quote feature," says Jim Blomo, the director of engineering at Yelp. "Growth of this feature has been through the roof, and a lot of that progress can be attributed to the machine learning work on this product, allowing us to surface the most useful and relevant businesses when a consumer types 'iPhone 7 screen repair' or 'overflowing toilet' into Yelp."

Another company, GobTech, is using AI in its iOS and Android app Neural Sandbox, which lets you experiment with neural networks. At MB 2017, the startup is launching Gauntlet, a way to compare neural networks. Users can compare their scores against other users on the Google Play leaderboard.

"GobTech is exploring new frontiers in AI for gaming using a unique combination of neural networks and genetic algorithms," says Gabriel Kauffman, the CEO of GobTech. "This combo, known as neuroevolution, is a way for neural networks to evolve through natural selection, in our case to learn to play a game by itself."
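
GobTech's implementation isn't described beyond the quote, but the neuroevolution idea Kauffman mentions is straightforward: maintain a population of network weights, score each on the task, keep the fittest, and mutate them to form the next generation. Here is a minimal, self-contained Python sketch, with a toy fitness function standing in for a game score.

    import numpy as np

    rng = np.random.default_rng(0)
    TARGET = np.array([1.0, -0.5, 0.25, 0.0])  # hidden "ideal" weights

    def fitness(weights):
        """Toy stand-in for a game score: higher when the one-layer
        network's outputs track a hidden target policy."""
        obs = rng.normal(size=(100, 4))
        return -np.mean((np.tanh(obs @ weights) - np.tanh(obs @ TARGET)) ** 2)

    # Evolve: score the population, select the fittest, mutate survivors.
    population = [rng.normal(size=4) for _ in range(50)]
    for generation in range(30):
        elites = sorted(population, key=fitness, reverse=True)[:10]  # selection
        population = [e + rng.normal(scale=0.1, size=4)              # mutation
                      for e in elites for _ in range(5)]
    print("best fitness:", max(fitness(w) for w in population))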

Meanwhile, Hopper is using machine learning to improve its back-end booking agent, an effort to make booking feel more like having a human help you find the best travel deals. Maggie Moran, the Head of Product at Hopper, explained how the app's AI bunny helps travelers find the best deals.

GoPro also revealed how it is using AI. Meghan Laffey, the VP of product at GoPro, explained how the app is central to the company's product offering. "The phone has made it easy to go from capturing to sharing," she says. "It's been a challenge to go from the experience to the actual playback."

A new feature called Quik Stories allows users to film and edit videos without the hassle of watching all of their footage. With a single tap, stories are generated automatically. Algorithms analyze content and find the best moments, syncing them to music.

These announcements show how AI will ultimately gain traction by iterating, improving, and capturing new audiences.

The ability to use AI within an app is nothing new. What will create a differentiator in the long run is when companies keep enhancing the AI, when the machine learning powering an app or website is so compelling that it attracts new users.

See more here:

5 AI-powered companies gaining traction for 2017 - VentureBeat

Anyscale raises $20.6 million to simplify writing AI and ML applications with Ray – VentureBeat

Anyscale, a company promising to let application developers more easily build so-called distributed applications that are behind most AI and machine learning efforts, has raised $20.6 million from investors in a first round of funding.

The company has some credibility off the bat because it's cofounded by Ion Stoica, a professor of computer science at the University of California, Berkeley, who played a significant role in building out some successful big data frameworks and tools, including Apache Spark and Databricks.

The new company is based on an open source framework called Ray, also developed in a lab that Stoica co-directs, that focuses on allowing software developers to more easily write compute-intensive applications by simplifying the hardware decisions made underneath.

Ray's emergence is significant because it aims to solve a growing problem in the industry, Stoica said in an interview with VentureBeat. On one hand, developers are writing more and more applications (for example, AI- and ML-driven applications) that are increasingly intensive in their number-crunching needs. The amount of compute for the largest AI applications has doubled every three to four months since 2012, according to OpenAI, an astonishing exponential rate.

On the other hand, the processing hardware underneath needed to do this number-crunching is falling behind. Application developers are thus being forced to distribute their applications across thousands of CPU and GPU cores to spread out the processing workload in a way that allows hardware to keep up with their needs. And that process is complex and labor intensive. Companies have to hire specialized engineers to build this architecture, linking things like AWS or Azure cloud instances with Spark and distribution management tools like Kubernetes.

"The tools required for this have been kind of jerry-rigged in a way they shouldn't be," said Ben Horowitz, a partner at venture firm Andreessen Horowitz, which led the round of funding. That's effectively meant large barriers to entry for building scaled applications, and it's kept companies from reaping the promised benefits of AI.

Ray was developed at UC Berkeley, in the RISELab, successor to the AMPLab, which created Apache Spark and Databricks. Stoica was a cofounder of Databricks, a company that helped commercialize Apache Spark, a dominant open source framework that helps data scientists and data engineers process large amounts of data quickly. Databricks was founded in 2013 and is already valued at $6.2 billion. Whereas Spark and Databricks targeted data scientists, Ray is targeting software developers.

"From a developer standpoint, you write the code in a way that it talks to Ray," said Horowitz, "and you don't have to worry about a lot of that [infrastructure]."

"Ray is one of the fastest-growing open source projects we've ever tracked, and it's being used in production at some of the largest and most sophisticated companies," Horowitz added. Intel has used Ray for things like AutoML, hyperparameter search, and training models, whereas startups like Bonsai and Skymind have used it for reinforcement learning projects. Amazon and Microsoft are also users.

Another Anyscale cofounder, Robert Nishihara, who is also the CEO, likens Anyscale's mission with Ray to what Microsoft did when it built Windows. The operating system let developers build applications much more rapidly. "We want to make it as easy to program clusters [of thousands of cores] and scalable applications as it is to program on your laptop."

Stoica and Nishihara say applications built with Ray can easily be scaled out from a laptop to a cluster, eliminating the need for in-house distributed computing expertise and resources.
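
The open source Ray API makes that pitch concrete. Below is a minimal sketch using Ray's core primitives (the function body is a stand-in for real number-crunching); the same script that runs on a laptop can later be pointed at a multi-node cluster without code changes.

    import ray

    ray.init()  # starts a local Ray runtime; a cluster address can be passed later

    @ray.remote
    def square(x):
        # Ray schedules each call onto an available core (or remote machine)
        return x * x

    # .remote() returns futures immediately; ray.get() collects the results
    futures = [square.remote(i) for i in range(8)]
    print(ray.get(futures))  # [0, 1, 4, 9, 16, 25, 36, 49]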

To be sure, developing a company around an open source framework can be challenging. There's no guarantee that the company can make money from an open framework that other companies can build around, too. Witness what happened with Docker, the company built around the open source Docker container project, which hasn't been able to commercialize it. Other companies stepped in and did it instead.

Stoica and Nishihara said they were confident they would avoid Docker's fate, given Stoica's background with Databricks, which he gave as an example of knowing how to commercialize smartly and aggressively. They said that they know more about Ray than anyone else, and so are in the best position to build a company around it.

Moreover, the pair said they aren't afraid of other companies that have been building so-called serverless computing offerings (for example, Google with Cloud Functions and Amazon with AWS Lambda) that are tackling the same problem of letting people develop scalable applications without thinking about infrastructure. "That's a very different approach, a very limited programming model, and restricted in terms of the things you can do," Nishihara said of serverless. "What we're doing is much more general."

"These serverless platforms are notoriously bad at supporting scalable AI," added Stoica. "We are excelling in that aspect."

The two founded the company in June alongside Philipp Moritz and UC Berkeley professor Michael Jordan, and Anyscale has no product or revenue yet. Besides Andreessen Horowitz, investors in the round include Intel Capital, Ant Financial, Amplify Partners, and The House Fund. With the funding, Anyscale's founders said, they will expand the company's leadership team (the company has 12 employees) and continue to commit to expanding Ray.

See the original post here:

Anyscale raises $20.6 million to simplify writing AI and ML applications with Ray - VentureBeat

Google launches its own AI Studio to foster machine intelligence … – TechCrunch

A new week brings a fresh Google initiative targeting AI startups. We started the month with the announcement of Gradient Ventures, Google's on-balance-sheet AI investment vehicle. Two days later we watched the finalists of Google Cloud's machine learning competition pitch to a panel of top AI investors. And today, Google's Launchpad is announcing a new hands-on Studio program to feed hungry AI startups the resources they need to get off the ground and scale.

The thesis is simple: not all startups are created the same. AI startups love data and struggle to get enough of it. They often have to go to market in phases, iterating as new data becomes available. And they typically have highly technical teams and a dearth of product talent. You get the picture.

The Launchpad Studio aims to address these needs head-on with specialized data sets, simulation tools and prototyping assistance. Another selling point of the Launchpad Studio is that startups accepted will have access to Google talent, including engineers, IP experts and product specialists.

"Launchpad, to date, operates in 40 countries around the world," explains Roy Geva Glasberg, Google's Global Lead for Accelerator efforts. "We have worked with over 10,000 startups and trained over 2,000 mentors globally."

This core mentor base will serve as a recruiting pool for mentors that will assist the Studio. Barak Hachamov, board member for Launchpad, has been traveling around the world with Glasberg to identify new mentors for the program.

The idea of a startup studio isn't new. It has been attempted a handful of times in recent years, but seems to have finally caught on with Andy Rubin's Playground Global. Playground offers startups extensive services and access to top talent to dial in products and compete with the largest of tech companies.

On the AI Studio front, Yoshua Bengio's Element AI raised a $102 million Series A to create a similar program. Bengio, one of, if not the, most famous AI researchers, can help attract top machine learning talent to enable recruiting parity with top AI groups like Google's DeepMind and Facebook's FAIR. Launchpad Studio won't have Bengio, but it will bring Peter Norvig, Dan Ariely, Yossi Matias and Chris DiBona to the table.

But unlike Playground's $300 million accompanying venture capital arm and Element's own coffers, Launchpad Studio doesn't actually have any capital to deploy. On one hand, capital completes the package. On the other, I've never heard a good AI startup complain about not being able to raise funding.

Launchpad Studio sits on top of the Google Developer Launchpad network. The group has been operating an accelerator at global scale for some time. Now on its fourth class of startups, the team has had time to flesh out its vision and build relationships with experts within Google to ease startup woes.

"Launchpad has positioned itself as the Google global program for startups," asserts Glasberg. "It is the most scalable tool Google has today to reach, empower, train and support startups globally."

With all the resources in the world, Google's biggest challenge with its Studio won't be vision or execution, but this doesn't guarantee everything will be smooth sailing. Between GV, CapitalG, Gradient Ventures, GCP and Studio, entrepreneurs are going to have a lot of potential touch-points with the company.

On paper, Launchpad Studio is the Switzerland of Google's programs. It doesn't aim to make money or strengthen Google Cloud's positioning. But from the perspective of founders, there's bound to be some confusion. In an ideal world we will see a meeting of the minds between Launchpad's Glasberg, Gradient's Anna Patterson and GCP's Sam O'Keefe.

The Launchpad Studio will be based in San Francisco, with additional operations in Tel Aviv and New York City. Eventually Toronto, London, Bangalore and Singapore will host events locally for AI founders.

Applications to the Studio are now open; if you're interested, you can apply here. The program itself is stage-agnostic, so there are no restrictions on size. Ideally, early and later-stage startups can learn from each other as they scale machine learning models to larger audiences.

View post:

Google launches its own AI Studio to foster machine intelligence ... - TechCrunch

France is using AI to check whether people are wearing masks on public transport – The Verge

France is integrating new AI tools into security cameras in the Paris metro system to check whether passengers are wearing face masks.

The software, which has already been deployed elsewhere in the country, began a three-month trial in the central Châtelet-Les Halles station of Paris this week, reports Bloomberg. French startup DatakaLab, which created the program, says the goal is not to identify or punish individuals who don't wear masks, but to generate anonymous statistical data that will help authorities anticipate future outbreaks of COVID-19.

"We are just measuring this one objective," DatakaLab CEO Xavier Fischer told The Verge. "The goal is just to publish statistics of how many people are wearing masks every day."

The pilot is one of a number of measures cities around the world are introducing as they begin to ease lockdown measures and allow people to return to work. Although France, like the US, initially discouraged citizens from wearing masks, the country has now made them mandatory on public transport. It's even considering introducing fines of €135 ($145) for anyone found not wearing a mask on the subway, trains, buses, or taxis.

The introduction of AI software to monitor and possibly enforce these measures will be closely watched. The spread of AI-powered surveillance and facial recognition software in China has worried many privacy advocates in the West, but the pandemic is an immediate threat that governments may feel takes priority over dangers to individual privacy.

DatakaLab, though, insists its software is privacy-conscious and compliant with the EU's General Data Protection Regulation (GDPR). The company has sold AI-powered video analytics for several years, using the technology to generate data for shops and malls about the demographics of their customers. "We never sell for security purposes," says Fischer. "And that is a condition in all our sales contracts: you can't use this data for surveillance."

The software is lightweight enough to run on location wherever it's installed, meaning no data is ever sent to the cloud or to DatakaLab's offices. Instead, the software generates statistics about how many individuals are seen wearing masks in 15-minute intervals.

The company has already integrated the software into buses in the French city of Cannes in the south of the country. It added small CPUs to existing CCTV cameras installed in buses, which process the video in real time. When the bus returns to the depot at night, it connects to Wi-Fi and sends the data on to the local transport authorities. "Then if we say, for example, that 74 percent of people were wearing a mask in this location, then the mayor will understand where they need to deliver more resources," says Fischer.
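
DatakaLab hasn't released its detector, but the aggregation step the company describes, anonymous statistics in 15-minute windows, is easy to sketch. In the Python sketch below, the detector's per-person outputs are an assumed input; only tallies are kept, never identities or images.

    from collections import Counter
    from datetime import datetime

    def quarter_hour(ts):
        """Bucket a timestamp into its 15-minute window, e.g. '2020-05-07 09:15'."""
        return ts.strftime("%Y-%m-%d %H:") + f"{(ts.minute // 15) * 15:02d}"

    def aggregate(detections):
        """Turn (timestamp, wearing_mask) detections into per-window mask rates."""
        masked, total = Counter(), Counter()
        for ts, wearing_mask in detections:
            window = quarter_hour(ts)
            total[window] += 1
            masked[window] += wearing_mask  # True counts as 1, False as 0
        return {w: masked[w] / total[w] for w in total}

    detections = [
        (datetime(2020, 5, 7, 9, 3), True),
        (datetime(2020, 5, 7, 9, 12), False),
        (datetime(2020, 5, 7, 9, 20), True),
    ]
    print(aggregate(detections))  # {'2020-05-07 09:00': 0.5, '2020-05-07 09:15': 1.0}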

Although technology like DatakaLab's is only being tested right now, it's likely to become a staple of urban life in the near future. As countries begin to weigh the economic damage of a lockdown against the loss of life caused by more COVID-19 infections, greater pressure will be put on mitigating measures like mandatory masks. In countries in the West where mask-wearing is more unfamiliar, software like DatakaLab's can help authorities understand whether their messaging is convincing the public.

Fischer says that although the pandemic has certainly created new use cases for AI, it doesn't mean that countries like France need to abandon their values of privacy and embrace invasive surveillance software. "We respect the rules of Europe," says Fischer. "This technology is very useful but can be very dangerous ... [But] we have our values and they are part of our company."

See more here:

France is using AI to check whether people are wearing masks on public transport - The Verge

China’s Didi Chuxing opens US lab to develop AI and self-driving car tech – TechCrunch

China's Uber rival Didi Chuxing has officially opened its U.S.-based research lab. The new center is part of a move to suck up talent beyond Didi's current catchment pool in China, particularly in the areas of AI and self-driving vehicles, but it doesn't signal an expansion of its service into North America.

The existence of the research center itself isn't new. Last September, TechCrunch wrote that Didi had hired a pair of experienced security experts based in the U.S., Dr. Fengmin Gong and Zheng Bu, to lead the center, which works closely with another, China-based facility that opened in late 2015. But now it is officially open.

Dr. Gong will lead the facility in Mountain View, and his team of dozens of leading data scientists and researchers will include former Uber researcher Charlie Miller. Miller rose to fame in 2015 when he hacked a journalist's vehicle from a laptop 10 miles away in a pre-arranged stunt to demonstrate vulnerabilities within the automotive industry.

Miller's job seems much like his role at Uber, according to tweets he sent out today. His defection is noteworthy since it appears to be the first major poach that Didi has made from Uber, and it falls in the self-driving car space, where Uber has made a huge push.

Didi is looking to make an early impact in Silicon Valley through a partnership with Udacity around self-driving vehicles. The two companies announced a joint contest inviting teams to develop an Automated Safety and Awareness Processing Stack (ASAPS) to increase driving safety for both manual and self-driving vehicles. The five finalists chosen will get a shot at the $100,000 grand prize and the opportunity to work more closely with Didi and Udacity on automotive projects.

Go here to see the original:

China's Didi Chuxing opens US lab to develop AI and self-driving car tech - TechCrunch

Cloudera built its conversational AI chops by keeping things simple – VentureBeat

When enterprise data software company Cloudera looked into using conversational AI to improve its customer support question-and-answer experience, it didn't want to go slow, said senior director of engineering Adam Warrington in a conversation at Transform 2020. When your company is new to conversational AI, conventional wisdom says you might gradually ease into it with a simple use case and an off-the-shelf chatbot that learns over time.

But Cloudera is a data company, which gives it a head start. "We were kind of interested in how we could possibly use our own data sets and technologies that we had internally to do something a little bit more than just dipping our toes into the water," Warrington said. "We were more interested in getting off-the-shelf chatbot software that was extensible through APIs," he added. Warrington said Cloudera already had an internally stored wealth of data in the form of customer interactions, support cases, community posts, and so on. The idea was to answer customer support questions with a high degree of accuracy without having to wait for the chatbot to acquire domain knowledge.

Because Cloudera maintained records (again, this is a data company) of past customer issues and solutions, it had its own corpus to feed the chatbot. In order to teach the chatbot, the company wanted to extract the semantic context of things like the back-and-forth chatter between a support person and a customer, as well as the specifics of the actual problem being solved.

To ensure that they knew what was relevant, the Cloudera team relied on their own subject matter experts to manually label and classify the data set. "The work can be a little bit tedious, as is the case with many machine learning projects, but you don't need in this particular case millions and millions of things categorized and labeled," Warrington said. He added that after about a week of work, they ended up with a labeled data set they could use for training and testing. And, Warrington said, they achieved their goal of 90% accuracy.
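
Cloudera hasn't published its models, but the relevance classifier Warrington describes, trained on a modest expert-labeled set, can be sketched with off-the-shelf tools. The labels and example sentences below are invented; a real training set would be the week's worth of expert-labeled support-case text.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    sentences = [
        "NameNode fails to start after upgrade",       # technically relevant
        "thanks for the quick reply!",                 # back-and-forth chatter
        "stack trace shows OutOfMemoryError in YARN",  # technically relevant
        "hope you had a good weekend",                 # back-and-forth chatter
    ]
    labels = [1, 0, 1, 0]  # 1 = relevant to the problem, 0 = chatter

    clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
    clf.fit(sentences, labels)

    print(clf.predict(["getting OutOfMemoryError when the job runs"]))  # likely [1]
    print(clf.predict(["no worries, talk soon"]))                       # likely [0]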

The company now had models that could understand which words and sentences within a given support case were technically relevant to that case. Then the models could extract the right solution from the best source, be it a knowledge base article, product documentation, community post, or what have you.

But the team needed to go a step further. "Now there's the derivative problem downstream, which is [that] what we actually want to do is provide answers to the customers that are relevant to their problems. It's not just about understanding what's technically relevant and what's not," Warrington said. Here again, the team relied on subject matter experts, specifically support engineers, to ensure customers were receiving the best solutions.

Warrington said that although Cloudera is currently using its subject matter experts internally, more data is coming in from real interactions. "As this project continues to go on in the public space, we expect to get more signals from our customers that are actually using the chatbot," he said. "And so we'll start to use those inputs, those signals, from our customers to really expand on our test sets and our training set, to improve the quality from where it's at today."

What's perhaps most surprising is the short time to market. "From inception of the problem statement, of trying to use our own data sets and our own technology to augment chatbot software to return relevant results based on customer problem descriptions, this took under a month," Warrington said. Why so fast? It certainly helped that Cloudera has its data already set up in its own data lake. "All of our processing capabilities already exist on top of this, so everything from analytics to operational databases to our machine learning systems and things like Spark were able to access these data sets through these different technologies."

More to the point, Warrington said that in the course of researching chatbot software they could use, the team discovered they already had some pertinent models. They had previously built models to help their internal engineers more efficiently find and address customer support issues. "It turns out when you're running all these machine learning projects on an architecture like this, you can share work that has been done in the past that you didn't necessarily expect to use in this way," Warrington noted. He also said the fact that they had a modern data structure, meaning the data was already unsiloed, was a huge advantage.

In addition to the wisdom of relying on subject matter experts, focusing on a specific problem or set of problems, and starting with data architectures that grant you agility, Warrington's advice is to keep things simple. "As we grow and mature, this particular approach in this particular implementation, we very well could go and explore more advanced techniques [and] more advanced models as we add more types of signals into the system," he said. "But out of the gate, to hit the ground running, use something simple. We found that you can actually provide very useful results to the customers, very quickly, using these kinds of approaches."

Read the rest here:

Cloudera built its conversational AI chops by keeping things simple - VentureBeat

Outfoxed by a bot? Facebook is teaching AI to negotiate – CNET

Facebook is teaching chat bots a new skill.

One day, the art of the deal might just involve letting artificial intelligence do your dirty work for you.

Researchers from Facebook Artificial Intelligence Research (FAIR) have created AI models, or what they call dialog agents, that can negotiate, according to a blog post Wednesday. They're publishing open-source code as well as research on those dialog agents, the result of about six months' work on the project.

The idea is that negotiation is a basic part of life whether you're picking a restaurant with friends or deciding on a movie to watch. But current chat bots aren't capable of much complexity. Their state of the art is to do simple tasks like book a restaurant or have short conversations of limited scope.

FAIR worked on the problem of how to get dialog agents to operate like people -- that is, come into a situation with different goals and eventually reach a compromise.

The effort is part of a broader push by Facebook to get us to use chat bots. At its developer conference in 2016, founder and CEO Mark Zuckerberg walked through scenarios in which you might use a bot to interact with a business, for example, to order a product or get customer service help. While tech giants like Facebook, Google and Apple are keen to build the personal digital assistant of the future, today's helpers still lack the necessary skills.

It's just one stitch in the larger fabric of work by Silicon Valley, academic researchers and the business community in the area of artificial intelligence, driven by powerful chips, fast networks and access to massive amounts of data about how people lead their digital lives. That's showing up in everything from sorting photos on Facebook to beating Go champions and diagnosing medical conditions.

FAIR didn't delve too far into what applications might be appropriate for bot-bargaining or whether this capability will surface in any Facebook products. But the post did mention this could be an advantage for bot developers working on chat bots with the ability to "reason, converse and negotiate, all key steps toward building a personalized digital assistant."

Negotiation, the FAIR post explains, is both a linguistic and reasoning problem. In other words, you've got to know what you want several steps down the road and be able to communicate it.

In one example, dialog agents were tasked with dividing up a collection of items like five books, three hats and two balls. Each agent had different priorities and each item carried a different value for each agent. The AIs were taught, in a sense, that walking away from the negotiation wasn't an option.

The ability to think ahead is crucial, and with the introduction of something called dialog rollouts, which simulate future conversations, the bots were able to do so.

Or as FAIR scientist Mike Lewis put it: "If I say this, you might say that, and then I'll say something else." Lewis said those rollouts are the key innovation in this project.
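
As a rough illustration of the rollout idea in Lewis's quote, here is a toy Python sketch: for each candidate utterance, simulate many continuations and pick the one with the best average outcome. The reply model and reward function are invented stand-ins for FAIR's learned components, not their actual code.

    import random

    def simulate_reply(dialogue):
        # Stand-in for sampling the partner's response from a learned model:
        # here, generous-sounding offers are more likely to be accepted.
        p_accept = 0.7 if "you take" in dialogue[-1] else 0.2
        return dialogue + ["accept" if random.random() < p_accept else "refuse"]

    def deal_value(dialogue):
        # Stand-in for the agent's reward once the negotiation ends.
        return 10 if dialogue[-1] == "accept" else 0

    def choose_utterance(dialogue, candidates, n_rollouts=200):
        """Dialog rollouts: 'if I say this, you might say that', played out
        many times, then pick the utterance with the best average outcome."""
        def expected(utt):
            return sum(deal_value(simulate_reply(dialogue + [utt]))
                       for _ in range(n_rollouts)) / n_rollouts
        return max(candidates, key=expected)

    print(choose_utterance(["i want the books"],
                           ["you take the hats", "i need everything"]))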

The research has boosted performance in using various negotiation tactics, like being able to negotiate until there's a successful outcome, propose more final deals and produce novel sentences. The agents even started pretending to be interested in an item so they could later concede it as if it were a compromise.

Humans had a chance to try out the agents, and the researchers said the people couldn't tell they were chatting with bots.

See the original post here:

Outfoxed by a bot? Facebook is teaching AI to negotiate - CNET

Super Smash Borg Melee: AI takes on top players of the classic … – TechCrunch

You can add the cult classic Super Smash Bros Melee to the list of games soon to be dominated by AIs. Research at MIT's Computer Science and Artificial Intelligence Laboratory has produced a computer player superior to the drones you can already fight in the game. It's good enough that it held its own against globally ranked players.

In case you're not familiar with Smash, it's a fighting game series from Nintendo that pits characters from the company's various franchises against each other. Its cutesy appearance belies its strategic depth: "The SSBM environment has complex dynamics and partial observability, making it challenging for human and machine alike. The multiplayer aspect poses an additional challenge," reads the paper's abstract.

Its playing style, as so often seems to be the case with these models, is a mixed bag of traditional and odd:

"It uses a combination of human techniques and some odd ones too, both of which benefit from faster-than-human reflexes," wrote Firoiu in an email to TechCrunch. "It is sometimes very conservative, being unwilling to attack until it sees there's an opening. Other times it goes for risky off-stage acrobatics that it turns into quick kills."

That's the system playing against several players ranked in the top 100 globally, against which it won more than it lost. Unfortunately it's no good with projectiles (hence playing Captain Falcon), and it has a secret weakness:

"If the opponent crouches in the corner for a long period of time, it freaks out and eventually suicides," Firoiu wrote. ("This should be a warning against releasing agents trained in simulation into the real world," he added.)

It's not going to win the Nobel Prize, but as with Go, Doom, and others, this type of research is a good way to see how existing learning models and techniques stack up in a new environment.

You can read the details in the paper on arXiv; it's been submitted for consideration at the International Joint Conference on Artificial Intelligence in Melbourne, so best of luck to Firoiu et al.

Read more from the original source:

Super Smash Borg Melee: AI takes on top players of the classic ... - TechCrunch