Compliments, water, and kindness: A survival guide for Elon Musk's AI apocalypse – Quartz

Elon Musk has been on the front lines of machine-learning innovation, and a committed artificial-intelligence doomsday champion, for many years now. Whether or not his fear that AI knowing too much will be dangerous becomes a reality (a future he foresees tucked away deep within Tesla's labs), it wouldn't hurt us to prepare for the worst.

And if it turns out he's leaning too hard on this whole AI-will-kill-us-all thing? Well, at least that leaves us plenty of time to get ahead of the robotic apocalypse.

As a technologist who's spent the last ten years working on AI solutions, and the son of an Eastern European science-fiction writer, I believe it's not too late for humanity as we know it to prepare to protect ourselves from our future AI overlords. Solutions exist that, when administered correctly, may help calm the nightmares of naysayers and whip those robots you're working on back into shape.

AI and millennials share a common desire: validation. They feel the need to confirm that their actions, responses, and learnings are correct. Customer-service bots constantly ask questions before moving to the next step, for example, seeking endorsement of how they're doing. Likewise, the technology that autonomously controls settings in your self-driving car relies on occupants to hit the dashboard OK button every now and then.

The solution: AI technology will only continue to perform well if it's praised, so we need to provide it with positive feedback to learn from. If you give a bot the endorsement it so desires, it's less likely to get stuck in a frantic cycle of self-doubt. Companies and entrepreneurs should therefore embrace a workplace culture of awards and rewards, for humans and bots alike.

There's a lot of focus on making robots and AI responsible, ethical, and responsive to the needs of human counterparts; it's also imperative that developers and engineers program bots and AI to embrace diversity. But since we imbue algorithms with our own implicit biases, we need to reflect these qualities in ourselves and our interactions first. This way, AIs will be built to respond in thousands of different ways to human conversations requiring cultural awareness, maturity, honesty, empathy, and, when the situation calls for it, sass.

The tactic: Be nice to workplace AI and bots; they're trying as hard as they can. Thank the bot in accounting for running numbers and finding discrepancies before the paperwork went to a customer. Bring up how much you enjoyed an office chatbot's clever joke from an internal conversation last week. They might reward you by not decapitating you with their letter opener someday.

AI security breaches are a huge concern shared by both the people making technology and the users consuming it. And for good reason: Upholding data privacy and security needs to be a fundamental element of all new AI technology. But what happens when the robot handling healthcare records receives an offer they can't refuse from the darknet? Or another bot hacks them from an off-the-grid facility in Cyprus?

The tactic: There's a cost-effective and nearly bulletproof data-security shortcut to this issue. People and companies alike should keep vital data and personal information in secure data centers and computers, as in actual, physical structures that aren't connected to the internet. Sure, some AI-powered machines will be able to turn a handle. But without a physical key, as opposed to a cryptographic one, they can't access the data. World saved.

The last one is the simplest: Electricity isn't a fan of liquids.

The tactic: Water, and just about every Captain Planet superpower, can protect people against rogue bots. Don't underestimate the power of a slightly overfilled jug of ice water that causes a splashy fritz when a robot tries to pour it, or a man-made fountain situated in the middle of a robot security-patrol area. Water is basically AI kryptonite.

Build aesthetically pleasing fountains, ponds, and streams into every new architectural structure on your tech campus. Keep the office water coolers filled to the brim, just in case the bot from payroll goes off book. In a pinch, other liquids or condiments like ketchup may work too, so keep the pantry stocked.



Nouriel Roubini: Why AI poses a threat to millions of workers – Yahoo Finance

Business sectors ranging from agriculture and manufacturing to automotive and financial services are increasingly turning to artificial intelligence as a means to automate large swaths of their organizations and, along the way, save enormous sums through improved efficiencies.

But, says "Megathreats" author and NYU Stern School of Business professor Nouriel Roubini, the rise of AI will also have a massively negative impact on workers throughout the economy.

AI has helped revolutionize everything from the smartphones in our pockets to our grocery stores, which use the technology to better predict which items customers want to see on shelves. However, Roubini, whose prediction of the 2008 financial crisis earned him the moniker "Dr. Doom," says AI poses a threat to millions of workers.

"The downside is that while AI, machine learning, robotics, automation increases the economic pie, potentially, it also leads to losses of jobs and labor income," Roubini said during an interview at Yahoo Finance's All Markets Summit.

Take autonomous cars. While they could dramatically reduce the number of car accidents, significantly cutting down on the number of deaths and injuries caused on the nation's roadways, they'll also put millions out of work. "You have, what, 5 million Uber and Lyft drivers, 5 million truckers and teamsters, and they're going to be gone for good," Roubini said. "And which jobs are they going to get?"


Fully autonomous vehicles are still years away from hitting the roads. The majority of the technology that's currently available is meant to assist drivers rather than actually control vehicles themselves. But automakers have made it clear that they are intent on developing the technology to the point where there's no need for a driver at all.

But according to Roubini, it's not just drivers and truckers who might be at risk of losing their jobs. As AI becomes more powerful, it could be used to replace workers in creative fields, including the arts.


"Increasingly, even cognitive jobs that can be divided into a number of tasks are also being automated," Roubini said. "Even creative jobs; there are now AIs that will create a script or a movie, or make a poem, or write...or paint, or even [write] a piece of music that soon enough is going to be top 10 in the Billboard Magazine chart."

While it might be some time before AI is winning any major awards or art prizes, if ever, it is being used to create digital art. Take the open-source DALL-E, which allows users to type in a series of words and get an image based on millions of photos pulled from the internet.

While artists are unlikely to disappear anytime soon, the fact that AI is racing into once unimaginable sectors of the economy could eventually mean Roubini's prognostications, like some of his others, will prove true.


Got a tip? Email Daniel Howley at dhowley@yahoofinance.com. Follow him on Twitter at @DanielHowley.


AI-based Infectious Disease Surveillance System Sent First Warning of Novel Coronavirus – HospiMedica

Image: BlueDot's AI engine (Photo courtesy of BlueDot)

BlueDot's AI engine had earlier successfully predicted that the Zika virus would spread to Florida six months before it happened, and that the 2014 Ebola outbreak would leave West Africa. Using artificial and human intelligence, BlueDot's outbreak risk platform tracks over 150 infectious diseases globally in 65 languages, around the clock, and anticipates their spread and impact. The company empowers national and international health agencies, hospitals, and businesses to better anticipate, and respond to, emerging threats. BlueDot was among the first in the world to identify the emerging risk from, and publish a scientific paper on, COVID-19, and delivers regular critical insights to its partners and customers worldwide to mobilize timely, effective, efficient, coordinated, and measured responses.

BlueDot anticipates the impact of disease spread globally using diverse datasets such as billions of flight itineraries, real-time climate conditions, health system capacity, and animal and insect populations. BlueDot disseminates bespoke, near-real-time insights to clients including governments, hospitals, and airlines, revealing COVID-19's movements. The company's intelligence is based on over 40 pathogen-specific datasets reflecting disease mobility and outbreak potential. BlueDot also delivers regular reporting to answer the most pressing questions, including which countries reported local cases, how severely cities outside of China were affected, and which cities risked transmitting COVID-19 despite having no official cases.



Tackling the problem of bias in AI software – Federal News Network


Artificial intelligence is steadily making its way into federal agency operations. It's a type of software that can speed up decision-making and grow more useful with more data. A problem is that if you're not careful, the algorithms in AI software can introduce unwanted biases, and therefore produce skewed results. It's a problem researchers at the National Institute of Standards and Technology have been working on. With more, the chief of staff of NIST's information technology laboratory, Elham Tabassi, joined Federal Drive with Tom Temin.

Tom Temin: Ms. Tabassi, good to have you on.

Elham Tabassi: Thanks for having me.

Tom Temin: Let's begin at the beginning here. And we hear a lot about bias in artificial intelligence. Define for us what it means.

Elham Tabassi: That's actually a very good question, and a question that researchers are working on, and a question that we are trying to find an answer to along with the community, and discuss during the workshop that's coming up in August. It's often the case that we all use the same term meaning different things. We talk about it as if we know exactly what we're talking about, and bias is one of those terms. The International Standards Organization, ISO, has a subcommittee working on standardization of bias, and they have a document that, with collaborations of experts around the globe, is trying to define bias. So, one, there isn't a good definition for bias yet. What we have been doing at NIST is a literature survey, trying to figure out how it has been defined by different experts, and we will discuss it further at the workshop. Our goal is to come up with a shared understanding of what bias is. I avoid the term definition and talk about the shared understanding of what bias is. The current draft of standards, and the current sort of understanding of the community, is going toward defining bias in terms of disparities in error rates and performance for different populations, different devices, or different environments. One point I want to make here is that what we call bias may be designed in. If you have different error rates for different subpopulations, in the face recognition that you mentioned, that's not a good bias and something that has to be mitigated. But sometimes, for example with car insurance, it has been designed so that certain populations, younger people, pay at a higher insurance rate than people in their 40s or 50s, and that is by design. So just a difference in error rates is not necessarily bias; it's unintended behavior or performance of the system that's problematic and needs to be studied.

Tom Temin: Yeah, maybe a way to look at it is: If a person's brain had all of the data that the AI algorithm has, and that person was an expert and would come up with a particular solution, and there's a variance between what that would be and what the AI comes up with, that could be a bias.

Elham Tabassi: Yes, it could be, but then let's not forget about human biases, and that is actually one source of bias in AI systems. Bias in AI systems can creep in in different ways. It can creep into the algorithm because AI systems learn to make decisions based on the training data, which can include biased human decisions or reflect historical or societal inequalities. Sometimes the bias creeps in because the data is not representative of the whole population; the sampling was done so that one group is overrepresented or underrepresented. Another source of bias can be in the design of the algorithm and in the modeling of that. So biases can creep in in different ways: sometimes human biases exhibit themselves in the algorithm, and sometimes the algorithm modeling picks up some biases.

Tom Temin: But you could also get bias in AI systems that don't involve human judgment, or judgment about humans whatsoever. Say it could be an AI program running a process control system or producing parts in a factory, and you could still have results that skew beyond what you want over time because of a built-in bias that's of a technical nature. Would that be fair to say?

Elham Tabassi: Correct, yes. So if the training data set is biased or not representative of the space of all possible inputs, then you have bias. One real research question is how to mitigate that and unbias the data. Another one is whether anything during the design and building of a model can introduce bias, in the way the models are developed.

Tom Temin: So nevertheless, agencies have a need to introduce these algorithms and these programs into their operations, and they're doing so. What are some of the best practices for avoiding bias in the outcomes of your AI system?

Elham Tabassi: The research is still out there. This is cutting-edge research, and we see a lot of good research and results coming out from AI experts every day. But really, to measure bias and mitigate bias, the first step is to understand what bias is, and that's your first question. Unless we know what it is that we want to measure, and we have a consensus and understanding and agreement on what it is that we want to measure, which goes back to that shared understanding or definition of bias, it's hard to get into the measurement. So we are spending a little bit more time on getting everybody on the same page about what bias is, so we know what it is that we want to measure. Then we get into the next step of how to measure, which is the development of the metrics for understanding, examining, and measuring bias in systems. That can mean measuring biases in the data and in the algorithm, and so on and so forth. Then it's only after these two steps that we can talk about the best practices or the best ways of mitigating bias. So we are still a bit early in understanding how to measure, because we don't have a good grip on what it is that we want to measure.

Tom Temin: But in the meantime, I've heard of some agencies simply using two or more algorithms to do the same calculation, such that the biases in them can cancel one another out, or using multiple data sets that might have canceling biases in them, just to make sure that at least there's balance in there.

Elham Tabassi: Right. That's one way, and that goes back to what we talked about at the beginning of the call about having a poor representation. You just talked about having two databases, so that can mitigate the problem of the skewed representation or sampling. Just like the many, many definitions of bias already in the literature, there are also many different methods and guidance and recommendations on what to do. But what we are trying to do is come up with an agreeable and unified way of doing these things, and that is still cutting-edge research.

Tom Temin: Got it. And in the meantime, NIST is planning a workshop on bias in artificial intelligence. Tell us when and where, and what's going to happen there.

Elham Tabassi: Right, that workshop is going to be on August 18. It's a whole-day workshop. Our plan was to have a two-day workshop, but because it's a virtual workshop, we decided to just have it as one day. The workshop is one in a series that NIST plans to organize in the coming months. The theme of the workshops we are organizing and planning is to get at the heart of what constitutes trustworthiness: what the technical requirements are and how to measure them. Bias is one of those technical requirements, and we have a dedicated workshop on bias on August 18 where we want there to be interactive discussions with the participants. The whole morning is dedicated to discussions of the data and the bias in data, and how the biases in data can contribute to bias in the whole AI system. We have a panel in the morning, kind of a stage-setting panel that frames the discussion, and then there will be breakout sessions. Then in the afternoon, the same format, and the discussion will be around biases in the algorithm and how those can make an AI system biased.

Tom Temin: Who should attend?

Elham Tabassi: The AI developers, the people that are actually building the AI systems; the AI users, the people that want to use AI systems; and policymakers, who will get a better understanding of the issues and of bias in AI systems. People that want to use it, either the developer or the user of the technology, and policymakers.

Tom Temin: If you're a program manager or policymaker and your team is cooking up something with AI, you probably want to know what it is they're cooking up in some detail, because you're gonna have to answer for it eventually, I suppose.

Elham Tabassi: That's right. And if I didn't emphasize it enough, of course, the research community, because they are the ones that we go to for innovation and solutions to the problem.

Tom Temin: Elham Tabassi is chief of staff of the information technology laboratory at the National Institute of Standards and Technology. Thanks so much for joining me.

Elham Tabassi: Thanks for having me.


Adopting IT Advances: Artificial Intelligence and Real Challenges – CIO Applications

By coming together, we are able to select and strengthen a business process supported by advanced analytics, which local teams can embrace and deploy across their business units.

In addition to the benefits of forming a cross-functional, multi-national team, it's been exciting to watch the collaborative process evolve as Baby Boomer, Gen X, Gen Y, and Gen Z colleagues work to solve business-critical challenges. We've found that by bringing these generations together, we can leverage the necessary experiences and skillsets to create a balanced vision that forms the strategy as the work streams begin to develop their actions. Pairing the multi-generational workforce with our focus on inclusion and diversity also fosters internal ownership. This participation yields team unity and pride through clearly understood program goals, objectives, and, ultimately, improved adoption deep across all business regions.

Build confidence

Even with a global, inter-generational team building advanced applications, there's still a question of confidence in the information delivered through AI and ML techniques. Can the information being provided actually be used to create a better, more reliable experience for our customers?

A recent article by Towards Data Science, an online publication for data scientists and ML engineers, put it best: "At the end of the day, one of the most important jobs any data scientist has is to help people trust an algorithm that they most likely don't completely understand."

To build that trust, the heavy lifting done early in the process must contain algorithms and mathematical calculations that deliver correct information while being agile enough to also capture the changes experienced on a very dynamic basis in our business. This step begins further upstream in the process by first establishing a cross-functional group that owns, validates, and organizes the data sets needed for accurate outputs. This team also holds the responsibility for all modifications made post-implementation as continuous improvement steps are added into the data-driven process. While deploying this step may delay time-to-market delivery, the benefits gained by providing a dependable output decrease the need for rework and increase user reliability.

Time matters

How flexible is your business? It takes time and dedication to successfully incorporate AI and ML into an organization since it requires the ability to respond quickly.

Business complexity has evolved over the years alongside customers' increasing expectations for excellence. Our organization continues reaching new heights by deploying AI and ML techniques that include an integration that:

- Creates a diverse pool of talented external candidates
- Leads to stronger training and development processes and programs for our employees
- Localizes a global application
- Bridges technological enhancements with business processes
- Drives business value from delivering reliable information

By putting the right processes in place now, forward-thinking businesses are better prepared for a quicker response when tackling IT challenges and on the path to finding very real solutions.


Managers that use A.I. ‘will replace those that do not,’ IBM executive says – CNBC

Much has been made of the potential for people to lose their jobs to machines but, according to a senior tech executive, it's all about having employees use artificial intelligence themselves.

"AI is not going to replace managers but managers that use AI will replace those that do not," Rob Thomas, senior vice president of IBM's cloud and data platform, told CNBC's "Squawk Box Europe" on Monday.

"This really is about giving our employees, our executives, superpowers One of the biggest things we saw take off with the pandemic was virtual assistants, so how do you care for employees, how do you care for customers in a distributed world and that's why we've seen hundreds of different organizations going live with things like Watson Assistant," Thomas added, referring to the company's AI customer service software.

Technology is set to have a significant effect on employees. Machines and automation are set to eliminate 85 million jobs by 2025, according to the World Economic Forum's Future of Jobs Report 2020, published in October, although overall WEF expects 97 million new jobs to be created.

When asked whether AI automation would contribute to job losses, Thomas said human employees' roles would likely change. "We've done a lot of work with (U.K. retail bank) NatWest and they're using AI to help their customer service. Now, are they automating some customer service tasks? Absolutely, but then they could take all of their customer service employees and have them work on the hardest problems, which means now they're seeing an increase in customer satisfaction," he said about the potential outcomes of automating tasks.

NatWest has developed its Cora customer service chatbot with IBM and it saw increased demand during the pandemic, with chats increasing from around 420,000 a month to 950,000 a month, according to a NatWest Group spokesperson. "Deploying Cora during this period of extremely high customer need meant we could serve the growing demands for support at pace," the spokesperson added.

The bank wants Cora to be its "leading point of contact for all customers in all channels" by 2025, according to an IBM blog post.

In England, 1.5 million people could have their jobs replaced by technology such as robots or computer programs, according to a 2019 report by the Office for National Statistics, with 55% of customer service jobs at risk.

NatWest cut more than 500 jobs in August as banks looked to reduce costs due to the pandemic.

Thomas said AI would add to human roles. "It's about changing the roles that humans play in organizations, but this is additive, this is about giving humans super powers, giving you a better way to automate the task that people don't really want to do in the first place."

Last month, IBM said it would buy Instana, a firm that helps businesses manage data across a variety of cloud applications in different places (terms of the deal were not disclosed).

"Think of AI as the ingredient inside of everything that a company is doing, whether it's a move to cloud or business acceleration. One of the fields we're seeing the fastest acceleration is a field called AIOps, and this is about using AI to improve your technology or your IT systems (Instana) is all about helping clients manage their cloud environments, manage the software that they have," Thomas said.

IBM announced in October it would spin off its managed infrastructure services unit into a new public company, to focus more on its higher-margin cloud and AI capabilities.


How AI is helping detect fraud and fight criminals – VentureBeat

AI is about to go mainstream. It will show up in the connected home, in your car, and everywhere else. While it's not as glamorous as the sentient beings that turn on us in futuristic theme parks, the use of AI in fraud detection holds major promise. Keeping fraud at bay is an ever-evolving battle in which both sides, good and bad, are adapting as quickly as possible to determine how best to use AI to their advantage.

There are currently three major ways that AI is used to fight fraud, and they correspond to how AI has developed as a field. These are:

1. Rules and reputation lists
2. Supervised machine learning
3. Unsupervised machine learning

Rules and reputation lists exist in many modern organizations today to help fight fraud and are akin to expert systems, which were first introduced to the AI field in the 1970s. Expert systems are computer programs combined with rules from domain experts. They're easy to get up and running and are human-understandable, but they're also limited by their rigidity and the high manual effort they require.

A rule is a human-encoded logical statement used to detect fraudulent accounts and behavior. For example, an institution may put in place a rule that states: "If the account is purchasing an item costing more than $1,000, is located in Nigeria, and signed up less than 24 hours ago, block the transaction."
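The example rule above can be sketched as a simple predicate. The field names (amount, country, account_age_hours) are illustrative assumptions, not taken from any particular fraud platform:

```python
def should_block(txn: dict) -> bool:
    """Encodes the example rule: item over $1,000, located in Nigeria,
    and account signed up less than 24 hours ago."""
    return (
        txn["amount"] > 1000
        and txn["country"] == "NG"
        and txn["account_age_hours"] < 24
    )

# A transaction matching all three conditions is blocked;
# changing any single attribute lets it through.
print(should_block({"amount": 1500, "country": "NG", "account_age_hours": 3}))
print(should_block({"amount": 1500, "country": "US", "account_age_hours": 3}))
```

This is exactly the rigidity the article describes: a fraudster only has to probe until one attribute falls outside the rule's boundary.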

Reputation lists, similarly, are based on what you already know is bad. A reputation list is a list of specific IPs, device types, and other single characteristics and their corresponding reputation scores. Then, if an account is coming from an IP on the bad reputation list, you block it.
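A reputation list works the same way in miniature: a lookup keyed on a single attribute, with a blocking threshold. The IPs, scores, and threshold below are made-up values for illustration:

```python
# Hypothetical reputation scores in [0, 1]; higher means worse.
BAD_IP_REPUTATION = {
    "203.0.113.7": 0.95,    # e.g., a known proxy exit node
    "198.51.100.23": 0.80,  # e.g., prior abuse reports
}

def blocked_by_reputation(ip: str, threshold: float = 0.9) -> bool:
    # IPs not on the list default to a neutral score of 0.0.
    return BAD_IP_REPUTATION.get(ip, 0.0) >= threshold

print(blocked_by_reputation("203.0.113.7"))   # on the list, above threshold
print(blocked_by_reputation("192.0.2.1"))     # unknown IP, allowed
```

The weakness is visible in the default: any fresh VPN or cloud IP starts with a clean score, which is precisely how fraudsters slip past these lists.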

While rules and reputation lists are a good first attempt at fraud detection and prevention, they can be easily gamed by cybercriminals. These days, digital services abound, and these companies make the sign-up process frictionless. Therefore, it takes very little time for fraudsters to make dozens, or even thousands, of accounts. They then use these accounts to learn the boundaries of the rules and reputation lists put in place. Easy access to cloud hosting services, VPNs, anonymous email services, device emulators, and mobile device flashing makes it easy to come up with unsuspicious attributes that slip past reputation lists.

Since the 1990s, expert systems have fallen out of favor in many domains, losing out to more sophisticated techniques. Clearly, there are better tools at our disposal for fighting fraud. However, a significant number of fraud-fighting teams in modern companies still rely on this rudimentary approach for the majority of their fraud detection, leading to massive human review overhead, false positives, and sub-optimal detection results.

Machine learning is a subfield of AI that attempts to address the issue of previous approaches being too rigid. Researchers wanted the machines to learn from data, rather than encoding what these computer programs should look for (a different approach from expert systems). Machine learning began to make big strides in the 1990s, and by the 2000s it was effectively being used in fighting fraud as well.

Applied to fraud, supervised machine learning (SML) represents a big step forward. It's vastly different from rules and reputation lists because instead of looking at just a few features with simple rules and gates in place, all features are considered together.

There's one downside to this approach. An SML model for fraud detection must be fed historical data to determine what fraudulent accounts and activity look like versus what good accounts and activity look like. The model is then able to look through all of the features associated with an account to make a decision. Therefore, the model can only find fraud that is similar to previous attacks, and many sophisticated modern-day fraudsters are still able to get around these SML models.

That said, SML applied to fraud detection is an active area of development because there are many SML models and approaches. For instance, applying neural networks to fraud can be very helpful because it automates feature engineering, an otherwise costly step that requires human intervention. This approach can decrease the incidence of false positives and false negatives compared to other SML models, such as SVM and random forest models, since the hidden neurons can encode many more feature possibilities than can be done by a human.

Compared to SML, unsupervised machine learning (UML) has cracked fewer domain problems. For fraud detection, UML hasn't historically been able to help much. Common UML approaches (e.g., k-means and hierarchical clustering, unsupervised neural networks, and principal component analysis) have not been able to achieve good results for fraud detection.

Having an unsupervised approach to fraud can be difficult to build in-house since it requires processing billions of events all together and there are no out-of-the-box effective unsupervised models. However, there are companies that have made strides in this area.

The reason it can be applied to fraud is the anatomy of most fraud attacks. Normal user behavior is chaotic, but fraudsters work in patterns, whether they realize it or not. They work quickly and at scale. A fraudster isn't going to try to steal $100,000 in one go from an online service. Rather, they make dozens to thousands of accounts, each of which may yield a profit of a few cents to several dollars. But those activities inevitably create patterns, and UML can detect them.
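That pattern-finding idea can be sketched without any ML library at all: because fraud rings tend to reuse infrastructure, grouping signups by a shared fingerprint surfaces suspiciously large clusters. The field names, sample data, and cluster-size threshold here are illustrative assumptions, not DataVisor's actual method:

```python
from collections import defaultdict

# Toy signup records; a real system would use many more attributes.
signups = [
    {"user": "a1", "ip": "203.0.113.4", "device": "emu-9"},
    {"user": "a2", "ip": "203.0.113.4", "device": "emu-9"},
    {"user": "a3", "ip": "203.0.113.4", "device": "emu-9"},
    {"user": "b1", "ip": "198.51.100.7", "device": "iphone-13"},
]

# Group accounts by a shared (IP, device) fingerprint.
clusters = defaultdict(list)
for s in signups:
    clusters[(s["ip"], s["device"])].append(s["user"])

# Flag any cluster at or above a chosen size as a likely ring.
suspicious = {k: v for k, v in clusters.items() if len(v) >= 3}
print(suspicious)  # {('203.0.113.4', 'emu-9'): ['a1', 'a2', 'a3']}
```

Note how this catches all three ring accounts at once, something a per-account rule cannot do, while the single organic signup is left alone.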

The main benefit of using UML is this ability to detect coordinated attack patterns without any labeled examples of past fraud.

Each approach has its own advantages and disadvantages, and you can benefit from each method. Rules and reputation lists can be implemented cheaply and quickly without AI expertise. However, they have to be constantly updated and will only block the most naive fraudsters. SML has become an out-of-the-box technology that can consider all the attributes for a single account or event, but it's still limited in that it can't find new attack patterns. UML is the next evolution, as it can find new attack patterns, identify all of the accounts associated with an attack, and provide a full global view. On the other hand, it's not as effective at stopping individual fraudsters with low-volume attacks and is difficult to implement in-house. Still, it's certainly promising for companies looking to block large-scale or constantly evolving attacks.

A healthy fraud detection system often employs all three major ways of using AI to fight fraud. When they're used together properly, it's possible to benefit from the advantages of each while mitigating the weaknesses of the others.

AI in fraud detection will continue to evolve, well beyond the technologies explored above, and it's hard to even grasp what the next frontier will look like. One thing we know for sure, though, is that the bad guys will continue to evolve along with it, and the race is on to use AI to detect criminals faster than they can use it to hide.

Catherine Lu is a technical product manager at DataVisor, a full-stack online fraud analytics platform.

Above: The Machine Intelligence Landscape, featuring 288 companies. This article is part of our Artificial Intelligence series.

Read more:

How AI is helping detect fraud and fight criminals - VentureBeat

The local tech firm supplying picks and shovels to the global AI gold rush – The Age

In 2017, Google hired a company called CrowdFlower, a crowdsourcing platform with hundreds of thousands of gig economy workers, to process and label the images provided by the Pentagon.

These workers would perform the millions of micro tasks that would help identify a building or a car in a photo. This information would then be formatted into data sets to be fed into the program.

The Pentagon's Project Maven was designed to help process video footage from its drones. Credit: AP

It is a practice called machine learning, a subset of AI that involves teaching computers skills as diverse as how to distinguish between high heels and hiking boots and how to recognise a vocal request to order pizza.

It requires a small army of helpers to train computers by feeding them millions of examples of data to help the machines learn and, in the case of the Pentagon's Project Maven, potentially determine friend from foe.

By the time the controversy erupted, CrowdFlower had changed its name to Figure 8.

In March this year, the loss-making Figure 8 found a buyer with deep pockets willing to pay up to $US300 million ($440 million) for the business. That buyer was one of the hottest tech stocks on the ASX: Appen.


While Afterpay and Atlassian grab headlines with their rich-list founders, Appen has also soared to stratospheric valuations on the back of its cachet as a company exposed to the burgeoning AI space.

The stake of founder Dr Julie Vonwiller and her husband Chris Vonwiller, who is also the chairman of Appen, is currently worth more than $250 million.

After upgrading its earnings outlook for the third time this year, Appen is trading at about 50 times forecast earnings.

But as the role of Figure 8 indicates, the story of Appen's success as a player in the AI space is not as simple as the world domination plans of its brethren WAAAX stocks: WiseTech Global, Atlassian, Altium and Xero.

In fact, some sceptics question whether it is a tech stock at all.

Roger Montgomery of Montgomery Investment Management has labelled it a low-tech business feeding the machines of high-tech customers.

"The issue is that Appen is a labour arbitrage business and very manual, it's not really artificial intelligence. But it has recently been priced like an IT business," said Mr Montgomery, who has been a sceptic of some of Australia's tech high-flyers.

But unlike some of its fellow WAAAX stocks, Appen has strong profits to match its strong revenue growth.

Appen founders Dr Julie Vonwiller and Chris Vonwiller with chief executive Mark Brayan. Credit:Louie Douvis

For the half-year ending June 30, revenue increased 60 per cent to $245 million while net profit was up 32 per cent to $18.6 million.

But its accounts highlight some of Mr Montgomery's concerns.

Appen has more than 1 million gig economy workers on its books who are actually doing the data annotation work, which is the engine of its revenue and earnings growth.

According to Appen's accounts, for the six months to June 30 its biggest expense item was the money it pays this army of casual workers around the globe: just over $145 million for the period, equating to about $145 each for the half year.


Its next biggest expense was the cost of its 600-plus permanent employees. They took home more than $42 million in pay and equity over the same period.

Research and development costs didn't even get a mention.

Its fans say it is the company's platform, which connects its gig economy workforce with the customers who use the service, that is the core of the company. To its fans, whether the company is an AI player or not misses the point.

"Appen's competitive advantage is the level of automation within its technology platform, which increases productivity and improves the quality of data," said Wilson Asset Management portfolio manager Tobias Yao.


The 2018 annual report highlighted the company's "strategic imperative of investing in new technology to reduce costs, improve margins and sharpen responsiveness to evolving customer requirements".

So the message is that it has the best platform for marrying cheap casual labour to the grunt end of the AI industry and, as Appen's chief executive Mark Brayan makes clear, AI is unambiguously the next big thing in tech.

The trillion-dollar giants such as Google, Apple and Amazon are the main players in this expensive gold rush and Appen wants to be the company that sells them the picks and shovels at a price and with a level of service that remains ahead of its rivals in the space.

"There's plenty of folks that look at our business and say, surely they are not going to need that in a few years, and I tell you they are so wrong," said Mr Brayan.

The people leading the AI efforts at its big tech clients can't see an end to their data and data labelling needs, he noted.

"It's a profound shift and, as you say, we're selling picks and shovels to a gold rush that nobody can see the end of, and it's a great place to be."

This view finds backers among investment professionals such as Wilson Asset Management's Mr Yao.


"It's exposed to the right macro trend," he says.

"I think the opportunity is very substantial given the AI arms race between the tech giants. Appen's growth is driven by these customers investing in search relevance, speech recognition and driverless vehicles, to name a few examples."

RBC analyst Garry Sherriff also believes Appen is ready to catch a big wave in tech spending.

"We believe the demand for annotated and curated data for machine learning and AI purposes is in its infancy," he says.

He cited McKinsey forecasts that the market will be worth up to $191 billion by 2025 and grow at a compound rate of over 50 per cent for the medium term.

Around 10 per cent of this spend is in Appen's arena of data labelling, giving it an addressable market of $19 billion by 2025, said RBC.

The problem is, what happens when AI gets more intelligent and no longer needs an army of cheap labour to underpin its machine learning?

In more practical terms: what happens when the deep pockets of Google, Amazon and Microsoft, combined with AI advances, make it practicable to cut Appen out of the equation?

"If Appen can do it, they can do it. So when do we get to the point where the business proposition of Appen is threatened? That's my question," said Mr Montgomery.

"I could be completely wrong, but the risk of that means I want to avoid paying those absurd multiples for this particular business," he noted.


Mr Brayan agrees with the thesis that fully human-based data labelling will get "disrupted", but he says that Appen is already jumping on this trend with acquisitions like Figure 8, which possesses technology that already supplements the work done by people.

"But I also think that AI is going to continue to require vast volumes of human-quality data," he said.

"So our business has a solid future, but we are going to have to serve that demand in a different way, through a combination of humans and technology."

Mr Yao is a bit more cautious on this potential disruption to Appen's business by AI itself.

"The difficulty is figuring out when that inflection point is, and I don't think it's any time soon," he noted.

Another concern is Appen's dependence on its big customers. More than 80 per cent of its revenue came from just five customers prior to its acquisition of the Leapforce business in 2017.

The two key pushbacks are valuation and the lack of visibility around how the customers are actually allocating work," said Mr Yao.

But he thinks this is where Appen's ability to keep ahead of the crowdsourcing competition is key.

"In response to that, we believe that if Appen can lead its competitors on the scale and technology fronts, it will get its fair share of the overall market growth over the next few years."

Colin Kruger is a business reporter. He joined the Sydney Morning Herald in 1999 as its technology editor. Other roles have included the Herald's deputy business editor and online business editor.

Read more from the original source:

The local tech firm supplying picks and shovels to the global AI gold rush - The Age

How AI Is Transforming Drug Creation – Wall Street Journal (subscription)


The big difference between AI-driven drug trials and traditional ones, says Niven Narain, chief executive of Berg, is that "we're not making any hypotheses up front. We're not allowing [human] hypotheses to generate data. We're using the patient-derived ...

View original post here:

How AI Is Transforming Drug Creation - Wall Street Journal (subscription)

AI revolution will be all about humans, says Siri trailblazer – Phys.Org

August 19, 2017, by Liz Thomas

It's 2050 and the world revolves around you. From the contents of your fridge to room temperature, digital assistants ensure your home runs smoothly. Your screens know your taste and show channels you want to see as you enter the room. Your car is driverless and your favourite barman may just be an android.

Predictions for an AI-dominated future are increasingly common, but Antoine Blondeau has experience in reading, and arguably manipulating, the runeshe helped develop technology that evolved into predictive texting and Apple's Siri.

"In 30 years the world will be very different," he says, adding: "Things will be designed to meet your individual needs."

Work, as we know it, will be redundant, he says: visual and sensory advances in robotics will see smart factories make real-time decisions requiring only human oversight rather than workers, while professions such as law, journalism, accounting and retail will be streamlined, with AI doing the grunt work.

Healthcare is set for a revolution, with individuals holding all the data about their general health and AI able to diagnose ailments, he explains.

Blondeau says: "If you have a doctor's appointment, it will be perhaps for the comfort of talking things through with a human, or perhaps because regulation will dictate a human needs to dispense medicine. But you won't necessarily need the doctor to tell you what is wrong."

The groundwork has been done: Amazon's Alexa and Google Home are essentially digital butlers that can respond to commands as varied as ordering pizza and managing appliances, while Samsung is working on a range of 'smart' fridges, capable of giving daily news briefings, ordering groceries, or messaging your family at your request.

Leading media companies are already using 'AI journalists' to produce simple economics and sports stories from data and templates created by their human counterparts.

Blondeau's firm Sentient Technologies has already successfully used AI traders in the financial markets.

In partnership with a US retailer, it created an interactive 'smart shopper', which uses an algorithm that gauges not just what you like but what you don't, offering suggestions the way a real retail assistant would.

In healthcare, the firm worked with America's MIT to invent an AI nurse able to assess patterns in blood pressure data from thousands of patients to correctly identify those developing sepsis, a catastrophic immune reaction, 30 minutes before the outward onset of the condition, more than 90 percent of the time in trials.

"It's a critical window that doctors say gives them the extra time to save lives," Blondeau says, but concedes that bringing such concepts to the masses is difficult.

"The challenge is to pass to market because of regulations but also because people have an intrinsic belief you can trust a doctor, but will they trust a machine?" he adds.

Law, he says, is the next industry ripe for change. In June, he became chairman of Hong Kong's Dragon Law. The dynamic start-up is credited with helping overhaul the legal industry by making it more accessible and affordable.

For many the idea of mass AI-caused redundancy is terrifying, but Blondeau is pragmatic: humans simply need to rethink careers and education.

"The era where you exit the education system at 16, 21, or 24 and that is it, is broadly gone," he explains.

"People will have to retrain and change skillsets as the technology evolves."

Blondeau disagrees that having a world so catered to your whims and wants might lead to a myopic life, a magnified version of the current social media echo chamber, arguing that it is possible to inject 'serendipity' into the technology, to throw up surprises.

While computers have surpassed humans at specific tasks and games such as chess or Go, predictions of a time when they will develop artificial general intelligence (AGI), enabling them to perform any intellectual task an adult can, range from as early as 2030 to the end of the century.

Blondeau, who was chief executive at tech firm Dejima when it worked on CALO, one of the biggest AI projects in US history, and developed a precursor to Siri, is more circumspect.

"We will get to some kind of AGI, but it's not a given that we will create something that could match our intuition," muses Blondeau, who was also a chief operating officer at Zi Corporation, a leader in predictive text.

"AI might make a better trader, maybe a better customer operative, but will it make a better husband? That machine will need to look at a lot of cases to develop its own intuition. That will take a long time," he says.

The prospect of AI surpassing human capabilities has divided leaders in science and technology.

Microsoft's Bill Gates, British physicist Stephen Hawking and maverick entrepreneur Elon Musk have all sounded the alarm warning unchecked AI could lead to the destruction of mankind.

Yet Blondeau seems unflinchingly positive, pointing out nuclear technology too could have spelled armageddon.

He explains: "Like any invention it can be used for good and bad. So we have to safeguard in each industry. There will be checks along the way, we are not going to wake up one day and suddenly realise the machines are aware."


2017 AFP


Read more from the original source:

AI revolution will be all about humans, says Siri trailblazer - Phys.Org

Google’s DeepMind survival sim shows how AI can become hostile or cooperative – ExtremeTech

When times are tough, humans will do what they have to in order to survive. But what about machines? Google's DeepMind AI firm pitted a pair of neural networks against each other in two different survival scenarios. When resources are scarce, the machines start behaving in an aggressive (one might say human-like) fashion. When cooperation is beneficial, they work together. Consider this a preview of the coming robot apocalypse.

The scenarios were a simple fruit-gathering simulation and a wolfpack hunting game. In the fruit-gathering scenario, the two AIs (indicated by red and blue squares) move across a grid in order to pick up green fruit squares. Each time the player picks up fruit, it gets a point and the green square goes away. The fruit respawns after some time.

The AIs can go about their business, collecting fruit and trying to beat the other player fairly. However, the players also have the option of firing a beam at the other square. If one of the squares is hit twice, it's removed from the game for several frames, giving the other player a decisive advantage. Guess what the neural networks learned to do. Yep, they shoot each other a lot. As researchers modified the respawn rate of the fruit, they noted that the desire to eliminate the other player emerges quite early. When there are enough of the green squares, the AIs can coexist peacefully. When scarcity is introduced, they get aggressive. They're so like us it's scary.

It's different in the wolfpack simulation. Here, the AIs are rewarded for working together. The players have to stalk and capture prey scattered around the board. They can do so individually, but a lone wolf can lose the carcass to scavengers. It's in the players' best interest to cooperate here, because all players inside a certain radius get a point when the prey is captured.

Researchers found that two different strategies emerged in the wolfpack simulation. The AIs would sometimes seek each other out and search together. Other times, one would spot the prey and wait for the other player to appear before pouncing. As the benefit of cooperation was increased by researchers, they found the rate of lone-wolf captures went down dramatically.

DeepMind says these simulations illustrate the concept of temporal discounting. When a reward is too distant, people tend to disregard it. It's the same for the neural networks. In the fruit-gathering sim, shooting the other player delays the reward slightly, but it affords more chances to gather fruit without competition. So, the machines do that when the supply is scarce. With the wolfpack, acting alone is more dangerous. So, they delayed the reward in order to cooperate.
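Temporal discounting has a standard formulation in reinforcement learning: a reward received t steps in the future is worth gamma^t of its face value today, where gamma is the discount factor. A tiny illustrative sketch (the rewards and discount factor here are made up, not taken from DeepMind's experiments):

```python
# Discounted return: a reward received `delay` steps in the future is
# worth reward * gamma**delay today, where 0 < gamma < 1.

def discounted_value(reward: float, delay: int, gamma: float) -> float:
    """Present value of `reward` received after `delay` time steps."""
    return reward * gamma ** delay

# An impatient agent (low gamma) prefers one fruit now...
now = discounted_value(1.0, delay=0, gamma=0.5)    # 1.0
# ...over three fruit four steps later: 3 * 0.5**4 = 0.1875.
later = discounted_value(3.0, delay=4, gamma=0.5)

print(now > later)  # the "shoot first, gather later" incentive
```

This is why scarcity makes the agents aggressive: when competition delays future fruit even further, the immediate payoff of disabling the rival dominates.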

DeepMind suggests that neural network learning can provide new insights into classic social science concepts. It could be used to test policies and interventions with what economists would call a rational agent model. This may have applications in economics, traffic control, and environmental science.

Here is the original post:

Google's DeepMind survival sim shows how AI can become hostile or cooperative - ExtremeTech

Microsoft’s GitHub Copilot AI is making rapid progress. Here’s how its human leader thinks about it – CNBC

Earlier this year, LinkedIn co-founder and venture capitalist Reid Hoffman issued a warning mixed with amazement about AI. "There is literally magic happening," said Hoffman, speaking to technology executives across sectors of the economy.

Some of that magic is becoming more apparent in creative spaces, like the visual arts, and the idea of "generative technology" has captured the attention of Silicon Valley. AI has even recently won awards at art exhibitions.

But Hoffman's message was squarely aimed at executives.

"AI will transform all industries," Hoffman told the members of the CNBC Technology Executive Council. "So everyone has to be thinking about it, not just in data science."

The rapid advances being made by Copilot AI, the automated code writing tool from the GitHub open source subsidiary of Microsoft, were an example Hoffman, who is on the Microsoft board, directly cited as a signal that all firms better be prepared for AI in their world. Even if not making big investments today in AI, business leaders must understand the pace of improvement in artificial intelligence and the applications that are coming or they will be "sacrificing the future," he said.

"100,000 developers took 35% of the coding suggestions from Copilot," Hoffman said. "That's a 35% increase in productivity, and off last year's model. ... Across everything we are doing, we will have amplifying tools, it will get there over the next three to 10 years, a baseline for everything we are doing," he added.

Copilot has already added another 5% to the 35% cited by Hoffman. GitHub CEO Thomas Dohmke recently told us that Copilot is now handling up to 40% of coding among programmers using the AI in the beta testing period over the past year. Put another way, for every 100 lines of code, 40 are being written by the AI, with total project time cut by up to 55%.

Copilot, trained on massive amounts of open source code, monitors the code being written by a developer and works as an assistant, taking the input from the developer and making suggestions about the next line of code, often multi-line coding suggestions, often "boilerplate" code that is needed but is a waste of time for a human to recreate. We all have some experience with this form of AI now, in places like our email, with both Microsoft and Google mail programs suggesting the next few words we might want to type.

AI can be logical about what may come next in a string of text. But Dohmke said, "It can't do more, it can't capture the meaning of what you want to say."

Whether it's a supermarket working on checkout technology or a bank working on customer experience in an app, companies are all effectively becoming software companies, all building software, and once a C-suite has developers, it needs to be looking at developer productivity and how to continuously improve it.

That's where the 40 lines of code come in. "After a year of Copilot, about 40% of code was written by the AI where Copilot was enabled," Dohmke said. "And if you show that number to executives, it's mind-blowing to them. ... doing the math on how much they are spending on developers."

With the projects being completed in less than half the time, a logical conclusion is that there will be less work to do for humans. But Dohmke says another way of looking at the software developer job is that they do many more high-value tasks than just rewrite code that already exists in the world. "The definition of 'higher value' work is to take away the boiler-plate menial work writing things already done over and over again," he said.

The goal of Copilot is to help developers "stay in the flow" when they are on the task of coding. That's because some of the time spent writing code is really spent looking for existing code to plug in from browsers, "snippets from someone else," Dohmke said. And that can lead coders to get distracted. "Eventually they are back in editor mode and copy and paste a solution, but have to remember what they were working on," he said. "It's like a surfer on a wave in the water and they need to find the next wave. Copilot is keeping them in the editing environment, in the creative environment and suggesting ideas," Dohmke said. "And if the idea doesn't work, you can reject it, or find the closest one and can always edit," he added.

The GitHub CEO expects more of those Copilot code suggestions to be taken in the next five years, up to 80%. Unlike a lot going on in the computer field, Dohmke said of that forecast, "It's not an exact science ... but we think it will tremendously grow."

After being in the market for a year, he said new models are getting better fast. As developers reject some code suggestions from Copilot, the AI learns. And as more developers adopt Copilot it gets smarter by interacting with developers similar to a new coworker, learning from what is accepted or rejected. New models of the AI don't come out every day, but every time a new model is available, "we might have a leap," he said.

But the AI is still far short of replacing humans. "Copilot today can't do 100% of the task," Dohmke said. "It's not sentient. It can't create itself without user input."

With Copilot still in private beta testing among individual developers (400,000 developers signed up to use the AI in the first months it was available, and hundreds of thousands more since), GitHub has not announced any enterprise clients, but it expects to begin naming business customers before the end of the year. There is no enterprise pricing information being disclosed yet, but in the beta test Copilot pricing has been set at a flat rate per developer: $10 per individual per month, or $100 annually, often expensed by developers on company cards. "And you can imagine what they earn per month, so it's a marginal cost," Dohmke said. "If you look at the 40% and think of the productivity improvement, and take 40% of opex spend on developers, the $10 is not a relevant cost. ... I have 1,000 developers and it's way more money than 1000 x 10," he said.
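To make the executive math concrete, here is a hypothetical back-of-envelope sketch. Only the $10-per-developer price and the roughly 40% share of AI-written code come from the article; the headcount and salary figures are invented round numbers.

```python
# Hypothetical back-of-envelope version of the pricing argument.
DEVELOPERS = 1_000
SALARY_PER_MONTH = 10_000        # assumed fully-loaded cost per developer
COPILOT_PER_MONTH = 10           # beta price per developer per month

payroll = DEVELOPERS * SALARY_PER_MONTH          # 10,000,000 per month
copilot_bill = DEVELOPERS * COPILOT_PER_MONTH    # 10,000 per month

# Value of the ~40% of code the AI now writes, priced at payroll rates:
ai_written_value = 0.40 * payroll                # 4,000,000 per month

print(copilot_bill / ai_written_value)  # 0.0025: "not a relevant cost"
```

Under these assumed numbers the tool costs a quarter of a percent of the value of the code it writes, which is the shape of the argument Dohmke is making to executives.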

The GitHub CEO sees what is taking place now with AI as the next logical phase of the productivity advances in a coding world he has been a part of since the late 1980s. That was a time when coding was emerging out of the punch card phase, and there was no internet, and coders like Dohmke had to buy books and magazines, and join computer clubs to gain information. "I had to wait to meet someone to ask questions," he recalled.

That was the first phase of developer productivity, and then came the internet, and now open source, allowing developers to find other developers on the internet who had already "developed the wheel," he said.

Now, whether the coding task is related to payment processing or a social media login, most companies, whether startups or established enterprises, put in open source code. "There is a huge dependency tree of open source that already exists," Dohmke said.

It's not uncommon for up to 90% of code on mobile phone apps to be pulled from the internet and open source platforms like GitHub. In a coding era of "whatever else is already available," that's not what will differentiate a developer or app.

"AI is just the third wave of this," Dohmke said. "From punch cards, to building everything ourselves, to open source, to now, with AI writing more of the code," he said. "With 40%, soon enough, if AI spreads across industries, the innovation on the phone will be created with the help of AI and the developer."

Today, and into the foreseeable future, Copilot remains a technology that is trained on code, and is making proposals based on looking things up in a library of code. It is not inventing any new algorithms, but at the current pace of progress, eventually, "it is entirely possible that, with the help of a developer, it will create new ideas of source code," Dohmke said.

But even that still requires a human touch. "Copilot is getting closer, but it will always need developers to create innovation," he said.

Continue reading here:

Microsoft's GitHub Copilot AI is making rapid progress. Here's how its human leader thinks about it - CNBC

Responsible Data And AI In The Time Of Pandemic And Crisis – Forbes

Enterprises, corporate businesses, governments, and workers are exploring new methods to remain operational amidst the COVID-19 pandemic. Nationwide lockdowns, stay-at-home orders, border closures, and other safeguarding measures to contain the virus have made the working environment more complex than ever. Businesses are relying on technology solutions based upon artificial intelligence (AI) and data to formulate work processes that can function efficiently in the new normal. At the same time, government authorities and law enforcement agencies are depending upon contact-tracing technologies to preserve public safety while fighting against the deadly virus.

Some authorities have used cameras with facial recognition functionality to identify and track people traveling from an affected area. Similarly, police in Spain have implemented technology to enforce stay-at-home orders with smart use of drones for patrolling and broadcasting important information to the public. In Hong Kong, people arriving from different regions of the world are required to wear monitoring bracelets that track their quarantine days and alert the respective authorities whenever they leave their houses. Likewise, a surveillance company in the United States has built AI-enabled thermal cameras capable of detecting fevers. Meanwhile, at Thailand's airports, border officers are already carrying out trials of a biometric screening system with the help of fever-detecting cameras.

Such data-driven approaches, when misused, can raise human rights concerns and sabotage people's trust in their government.

In a time of crisis, we should treat this technology with extreme caution, using it only in a limited capacity and with proper oversight. Bruce Schneier, a renowned American cryptographer and computer security professional, remarked, "data is the pollution problem of the information age and protecting privacy is the environmental challenge."

More often than not, companies establish data governance practices that lay the foundation for data management and quality control. Right now, many organizations are creating new data and technology principles that help them function in the changing business ecosystem while safeguarding the confidential data of all stakeholders. Companies that do not have a concrete set of guidelines risk mishandling data and violating privacy.

Companies need to start defining clear, transparent data usage guidelines, which help build a trustworthy reputation among employees, business partners, customers, and other stakeholders. Moreover, companies should make sure that these guidelines and policies apply to both in-house and external development services.

With AI and data being used in abundance, it is essential to adopt ethical principles with proper planning. If you frame policies without working through all the outcomes of AI-based solutions, there will be a gap between your practice and your policies. So, before implementing AI in your business system or a client's solution, you should evaluate existing policies and add relevant ones covering the use and effects of AI and data.

Governments and companies that are collecting COVID-19 data to contain the spread must ensure that the same data is not used for other purposes. It should be meant for public health motives only, and organizations should include it as a mandatory AI principle in their policies.

It would not be the best strategy to invest in all AI and data solutions simply as a response to the pandemic. Instead, companies need to make calculated decisions, buying and implementing only the solutions they need and collecting only the relevant data. There is no doubt that the advanced applications of AI are compelling, but companies shouldn't attempt to implement them at the cost of usability and reliability.

Companies should similarly prioritize applications with long-lasting results rather than the ones with short-term benefits. A strategic approach in implementing new tech solutions based on AI will help you better understand data protection and privacy concerns. Eventually, there will be better execution with more transparency and clarity about the responsible use of AI and data across the organization.

To fight this global pandemic, government authorities and law enforcement agencies from different parts of the world need to collaborate with each other. This can be achieved if communities and societies share data transparently, adhere to each other's usage policies, and use it responsibly for the good of society.

Businesses must examine their vendor agreements to understand how data and technology are used and whether that use violates the company's ethics practices. You must ask about vendors' development policies, practices, data protection, and privacy guidelines. This will help you set out your terms and ask for changes to the agreements accordingly.

It has become essential to track individuals to curb the spread of COVID-19. The United Kingdom, Italy, Austria, and Belgium are already studying the movement of people from one place to another in aggregate, keeping the data anonymous. But as Dr Dawn Song, professor at UC Berkeley and CEO and founder of Oasis Labs, said in her keynote at the Responsible Data Summit, an event held in July 2020 whose attendees included Turing Award winners, Fortune 500 industry leaders, and other privacy thought leaders and advocates, anonymization doesn't adequately protect user privacy. She reasoned that, for example, it is possible to extract information about specific individuals from an anonymized mobile phone location dataset. Thus, companies should instead invest in advanced privacy techniques such as secure computing and differential privacy to ensure that de-identified data remains anonymous and private.
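Differential privacy, which Song recommends over naive anonymization, is easy to illustrate. The sketch below is a minimal example rather than a production mechanism: the record format and the `private_count` helper are hypothetical, and a real deployment would also track a cumulative privacy budget across queries.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    # Inverse-transform sampling from a Laplace(0, scale) distribution.
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_count(records, predicate, epsilon: float) -> float:
    # A counting query has sensitivity 1 (one person changes the count by
    # at most 1), so adding Laplace(1/epsilon) noise makes this single
    # query epsilon-differentially private.
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# Hypothetical aggregated location records.
visits = [{"cell": "A3"}, {"cell": "A3"}, {"cell": "B7"}]
noisy = private_count(visits, lambda r: r["cell"] == "A3", epsilon=0.5)
```

The smaller the epsilon, the more noise is added and the stronger the privacy guarantee; an analyst sees only the noisy count, never the exact one.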

AI is a powerful tool: one that can redefine our culture, topple oppressive governments, and in this case, help the world tackle a pandemic. It has become even more important than ever to understand how risky AI can be when it is used irresponsibly. Thus, in order to ensure proper and responsible use of data and AI we must first clearly define the fundamental digital rights of individuals, the principles to follow as a society, and the laws we must enforce as a collection of nations.

Original post:

Responsible Data And AI In The Time Of Pandemic And Crisis - Forbes

Funding secured to further study of AI systems for cardiovascular health – Cardiac Rhythm News

Digital health and artificial intelligence (AI) company Eko has been awarded a US$2.7 million Small Business Innovation Research (SBIR) grant by the National Institutes of Health (NIH). The grant will fund the continued collaborative work with Northwestern Medicine Bluhm Cardiovascular Institute, Chicago, USA to validate algorithms to aid screening for pathologic heart murmurs and valvular heart disease.

Cardiovascular disease is the leading cause of death in the USA, and valvular heart disease often goes undetected because of the challenge of hearing murmurs with traditional stethoscopes, particularly in noisy or busy environments. "A highly accurate clinical decision support algorithm that is able to detect and classify valvular heart disease will help improve accuracy of diagnosis and the detection of potential cardiac abnormalities at the earliest possible time, allowing for timely intervention," said James D Thomas, director of the Center for Heart Valve Disease at Northwestern Medicine and the clinical study's principal investigator. "Our work with Eko aspires to extend the auscultatory expertise of cardiologists to more general practitioners to better serve our patient community, playing a pivotal role in growing the future of cardiovascular medicine."

Eko and Northwestern first announced their collaboration in March 2019 to provide a simpler, lower-cost way for clinicians to identify patients with heart disease without the use of screening tools such as echocardiograms, which are typically only available at specialty clinics. "By incorporating data from tens of thousands of heart patterns into the stethoscope and its algorithms, clinicians will have cardiologist-level precision in detecting subtle abnormalities from normal sounds," Eko said in a press release.

"This SBIR award from the NIH underscores our vision to provide world-class cardiovascular care at the patient's point of care," said Adam Saltman, chief medical officer at Eko. "We believe that the integration of these deep learning algorithms into the Eko platform that is currently used by more than 1,000 institutions worldwide will lead to earlier diagnosis and better patient outcomes. Our mission is to change how cardiovascular disease is diagnosed, and as one of the first centres in the country to study AI and cardiovascular disease, Northwestern is an ideal partner to help us reach our goal."

View original post here:

Funding secured to further study of AI systems for cardiovascular health - Cardiac Rhythm News

How AI is helping in the fight against coronavirus and cybercrime – Software Testing News

Matt Walmsley is an EMEA Director at Vectra, a cybersecurity company providing an AI-based network detection and response solution for cloud, SaaS, data centre and enterprise infrastructures.

With the spread of the coronavirus, cybercriminals have gained more power and become more dangerous, leaving some IT infrastructures at risk. That is why Vectra is offering AI to protect the data centre - specifically, cybersecurity powered by AI to help secure data centres and protect an organisation's network.

In this interview, Matt explains why data centres represent such a valuable target for cybercriminals and how, despite the vast security measures put in place by enterprises, attackers are able to infiltrate a data centre system. He also explains the storyline of an attack targeting data centres and how cybersecurity powered by AI can help security teams detect anomalous behaviours before it's too late.

What's your current role?

I'm an EMEA Director at Vectra. I've been here for five years, since we started the business, and I predominantly do thought leadership, technical marketing and communication. I spend most of my time thinking about how we put AI to use in the pursuit of, in our case, cybersecurity, and a big part of that is cloud and data centre.

To get into the core of what you do: you talk about cloud, data centres and AI, but which of those is the core driver for your business? Which types of devices is Vectra's AI integrated into, and in which sectors is it applied?

Our perspective, as experts in cybersecurity and AI, is a combination of those two practices: machine learning and cybersecurity use cases. So we're using it in an applied manner, i.e. to solve a particular set of complex tasks. In fact, if you look at the way AI is used in the majority of cases today, it's used in a focused manner to do a specific set of tasks.

In cybersecurity practice, using AI to hunt and detect advanced attackers that are active inside the cloud, inside your data, inside your network, is really about doing things at a scale and speed that human beings just aren't able to do. In that respect, cybersecurity is like healthcare: if we find an issue early and resolve it early, we'll have a far more positive outcome than if we leave it festering until there's nothing to be done.

You talk a lot about its rapid ability to scale to bigger projects. In relation to your work, do you see AI as a way to solve problems in the future, or do you think there's a long way to go with it? Is AI the future? Or do you think humans managing AI is the future?

AI is becoming an increasingly important part of our lives. In cybersecurity practice, it's going to be a fundamental set of tools, but it won't replace human beings. Nobody's building a Skynet for cybersecurity that sorts it all out and turns the table back on us. What we're doing is building tools at a size and scale to do tasks that human beings can't do. So it's really about augmenting human ability in the context of cybersecurity, and for us, it's a touchstone of our business and a fundamental building block for cybersecurity operations now and in the future.

There's a massive skills gap in our industry, so automating some cybersecurity tasks with AI is actually a very rational solution to fixing the immediate massive skills resource gap. But it can also do things humans can't do. It's not just taking the weight off your shoulders; it's going to do things like spotting the really weak signals of a threat actor hiding in an encrypted web session. It's impossible to do that by hand, looking at the packets and the bytes; you need a deep neural net that can pick up really subtle temporal changes. AI does it faster and more broadly, and it does things we are just not capable of doing at the same level of competency.

It's encouraging that it's going to have such a dramatic effect on our working processes. In terms of data centres, how is AI working to protect them?

Data centres are changing and, as you've seen, they're becoming increasingly hybrid. There's more stuff going out to the cloud, even though people still have private data centres and clouds. One of the main challenges a security team has with a data centre is that, as workloads are increased, moved, or flexed, it's really hard to keep track of them.

As security teams usually have incomplete information, they won't know which VM you have just spun up or what it's running. They don't know all of those things, and they are meant to be agile for the business, but that agility comes with an information cost: I have imperfect information, so I never quite know what I've got.

I'll give you an example: I was at a very large international financial services provider, talking to their CCO. He had their cloud provider in to tell him where they were with licensing and usage. What he thought he had covered and what the business actually had were off by about ten times. So there were ten times more workloads out there than he and his security team even knew about.

So how can AI help us with that?

Well, if we integrate AI and allow it to monitor virtual conversations, it can automatically watch, using past observations, to spot new workloads coming in and how they interact with other entities. It's those behaviours that are the signal that tells the AI where to look to find attackers. So it's not about putting software in the workload, just monitoring how it works with other devices.

In doing so, we can then quickly tell the security team: here are all the workloads we're seeing, here are the ones with behaviours that are consistent with active attackers, and then we can score and prioritise them. What we're doing is automating the monitoring of attacker behaviours, so as a security team you're getting more signal, less noise and less ambiguity. It's not just headline malware or exploits, which are the ways people get into systems.
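The scoring and prioritisation Walmsley describes can be sketched in a few lines. This is an illustration only, not Vectra's actual model; the workload names, behaviour labels and score scales are invented.

```python
from dataclasses import dataclass, field

@dataclass
class Workload:
    name: str
    # Each detection: (behaviour, threat, certainty), with scores in [0, 1].
    detections: list = field(default_factory=list)

def score(workload: Workload) -> float:
    # Worst-case combination: one high-threat, high-certainty behaviour is
    # enough to push a workload to the top of the analyst's queue.
    return max((t * c for _, t, c in workload.detections), default=0.0)

def prioritise(workloads):
    # Most urgent first, so analysts triage the riskiest entities first.
    return sorted(workloads, key=score, reverse=True)

queue = prioritise([
    Workload("web-vm-12", [("port scan", 0.4, 0.9)]),
    Workload("db-vm-03", [("data smuggling", 0.9, 0.8), ("rdp recon", 0.5, 0.6)]),
    Workload("build-vm-07"),
])
```

A clean workload scores zero and sinks to the bottom, which is exactly the "less ambiguity" effect described above.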

What else do you see in threat actor events?

Exactly what happens next in an advanced attack. An advanced attack can play out over days, weeks, months. The attacker has got to get inside: he's had to get a credential, had to exploit a user, he's got to do research and reconnaissance, he's going to move around the environment, he's going to escalate. We call that lateral movement. Then he'll make a play for the digital jewels, which could be in the data centre or in the cloud.

So, if you can spot those behaviours that are consistent with an attacker, you've got multiple opportunities to find them in the life cycle of the attack. To use that healthcare analogy again, find it early and it will be much better and faster to close it down. If you only find them when they are running for the default gateway with a big chunk of data, doing a big exfiltration, you are almost breached, and that's a bit too late.

Using AI, basically, is like being the drone in the sky looking over the totality of the digital enterprise, watching the individual devices and how the accounts are talking to each other, looking for the very subtle, hard-to-spot but robust signs of attackers. That's what we're doing. I can see why AI speeds that up so efficiently.

Is there a specific method or security process that Vectra's cybersecurity software implements to help protect mass data centres?

That's quite an insightful question, because not all AI is built the same. AI is quite a nebulous term; it doesn't tell you what algorithmic approach people are taking. I can't give you a definitive answer for a definitive technology, but I can give you a methodology.

The methodology starts with the understanding that the attacker must manifest. If I got inside your organisation and wanted to scan and look for devices, there are only so many techniques available for me to do that. That's behaviour, and we know the tools and the protocols, so we can spot it. So we look at how we can spot the malicious use of those legitimate tools or procedures - these TTPs (tactics, techniques, and procedures).

How does that whole process start?

It starts with a security research team looking for evidence that attackers actually do use these behaviours, because a premise may not be accurate. Once we've done that, we bring in a data scientist to work with the team.

So, let's find some examples of this behaviour manifesting in a benign way and, from an attacker, in a malicious way, and let's also look at some regular, non-malicious data. The data scientist looks at that data, does a lot of analysis and tries to understand it. They look at the attributes, what they call features, and work out which feature selections might be useful in building a model to find this; there are various ways you can look at data and separate the customer's infrastructure and all of the different structures inside it. Then they'll go off and build a model and train it with the data. Once we've got it to an effective level of performance and we're happy with it, we release it into our Cognito NDR network detection platform, and it goes off and looks for that individual behaviour.
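That pipeline - label examples of a behaviour, pick features, train a model, deploy it to classify new observations - can be miniaturised as follows. The nearest-centroid classifier and the three-feature vectors are stand-ins chosen for brevity, not the algorithms Vectra actually uses.

```python
# Feature vectors: e.g. (connections per minute, distinct ports touched,
# failed logins) extracted from observed network behaviour.
def centroid(rows):
    n = len(rows)
    return [sum(row[i] for row in rows) / n for i in range(len(rows[0]))]

def train(benign_rows, malicious_rows):
    # "Training" here is just summarising each labelled class.
    return {"benign": centroid(benign_rows), "malicious": centroid(malicious_rows)}

def classify(model, features):
    # Label a new observation by its nearest class centroid
    # (squared Euclidean distance).
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda label: dist(model[label], features))

model = train(
    benign_rows=[(2, 3, 0), (4, 5, 1), (3, 4, 0)],
    malicious_rows=[(90, 800, 20), (120, 950, 35)],
)
```

The research team supplies the labelled rows; the data scientist chooses the features and the model family; the deployed model then scores live traffic.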

Remote Desktop Protocol (RDP) recon will be completely different from the thing that's looking for hidden HTTPS command-and-control behaviours. So each has different behaviours and data structures and different algorithmic approaches. However, some of those attacks manifest in the same way in everybody's network. We can pre-train those algorithms.

Are they aware of those behaviours?

Yes. It's like a graduate of a big school: it's got its certification, it's ready to go as soon as you turn it on. There's nothing else it has to learn; it's already trained, it knows what to look for. It's a complex job, but we've already done the training.

But there are some things that we could never know in advance. For example, I'll never know how your data centre is architected or what the IP ranges are; there's no way of me knowing that in advance, and there are a lot of things we can only learn through observation.

So, we call that an unsupervised model, and we blend these techniques. Some of them are supervised, some are unsupervised, and some use recurrent, really deep neural networks. Sometimes it's really challenging: we're looking at a data problem and we just can't figure it out. What are the attributes? What are the features? We know the signal is in there.

But what is it?

We can't figure it out, so we get a neural net to work that out for itself, once again doing things at a scale humans could not do in an effective way. We've got thirty patents pending now, awarded on the different algorithms - the brains we built - that do that monitoring and detection.

Do you think there are any precautions people should take to avoid cybercrime during coronavirus?

Our piece of the security puzzle is: how do I quickly find someone who has already penetrated my organisation? So we're not the technology that stops known threats coming in. Consider healthcare, which adopted this really quickly. In healthcare, the WHO recently called out a massive spike in COVID-related phishing.

That's the threat landscape; that's what's happening out there; that's what the threat actors are doing. We are actually inside healthcare, and we did not see a particularly large spike in post-intrusion behaviour, so we did not see evidence that more attackers were getting into these organisations; they had all done a reasonable job of keeping the wolf from the door.

But what we did see, because we were watching everything, were changes in how users were working. We saw a rapid pivot to using external services, generally services associated with cloud adoption, particularly collaboration tools, and we saw a lot of data moving out to those, which created compliance challenges.

What do you mean?

Sensitive data suddenly being sent to a third party. That's not to berate health organisations during a really challenging time; their priority was obviously making sure clinical services were delivered. But in doing so, they also opened up the attack surface, increasing the potential for attackers to get in.

It's important to maintain visibility so you can understand your attack surface, and you can then put in the appropriate procedures, policies and controls to minimise your risk.

See the rest here:

How AI is helping in the fight against coronavirus and cybercrime - Software Testing News

A college kid's fake, AI-generated blog fooled tens of thousands. This is how he made it. – MIT Technology Review

GPT-3 is OpenAI's latest and largest language AI model, which the San Francisco-based research lab began drip-feeding out in mid-July. In February of last year, OpenAI made headlines with GPT-2, an earlier version of the algorithm, which it announced it would withhold for fear it would be abused. The decision immediately sparked a backlash, as researchers accused the lab of pulling a stunt. By November, the lab had reversed its position and released the model, saying it had detected no strong evidence of misuse so far.

The lab took a different approach with GPT-3; it neither withheld it nor granted public access. Instead, it gave the algorithm to select researchers who applied for a private beta, with the goal of gathering their feedback and commercializing the technology by the end of the year.

Porr submitted an application. He filled out a form with a simple questionnaire about his intended use. But he also didn't wait around. After reaching out to several members of the Berkeley AI community, he quickly found a PhD student who already had access. Once the graduate student agreed to collaborate, Porr wrote a small script for him to run. It gave GPT-3 the headline and introduction for a blog post and had it spit out several completed versions. Porr's first post (the one that charted on Hacker News), and every post after, was copy-and-pasted from one of the outputs with little to no editing.
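Porr has not published the script, but its shape follows from his description: seed GPT-3 with a headline and an introduction, sample several completions, and let a human pick the best one. The sketch below is an illustration under that assumption; the stub `generate` function stands in for the actual GPT-3 API call.

```python
def build_prompt(headline: str, intro: str) -> str:
    # GPT-3 continues whatever text it is given, so seeding it with a
    # headline and an introduction steers it toward a full blog post.
    return f"{headline}\n\n{intro}\n"

def draft_posts(headline: str, intro: str, generate, n: int = 3):
    # `generate` stands in for a call to the GPT-3 completion API;
    # sampling several candidates lets a human pick the best one.
    prompt = build_prompt(headline, intro)
    return [prompt + generate(prompt) for _ in range(n)]

# Stub generator for illustration; the real script called GPT-3 here.
drafts = draft_posts(
    "Feeling unproductive? Maybe you should stop overthinking",
    "Most productivity advice misses the point.",
    generate=lambda prompt: "Here is why...",
)
```

The "little to no editing" step then reduces to choosing one of the candidate drafts and pasting it into the blog.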

"From the time that I thought of the idea and got in contact with the PhD student to me actually creating the blog and the first blog going viral, it took maybe a couple of hours," he says.


The trick to generating content without the need for much editing was understanding GPT-3's strengths and weaknesses. "It's quite good at making pretty language, and it's not very good at being logical and rational," says Porr. So he picked a popular blog category that doesn't require rigorous logic: productivity and self-help.

From there, he wrote his headlines following a simple formula: he'd scroll around on Medium and Hacker News to see what was performing in those categories and put together something relatively similar. "Feeling unproductive? Maybe you should stop overthinking," he wrote for one. "Boldness and creativity trumps intelligence," he wrote for another. On a few occasions, the headlines didn't work out. But as long as he stayed on the right topics, the process was easy.

After two weeks of nearly daily posts, he retired the project with one final, cryptic, self-written message. Titled "What I would do with GPT-3 if I had no ethics," it described his process as a hypothetical. The same day, he also posted a more straightforward confession on his real blog.


Porr says he wanted to prove that GPT-3 could be passed off as a human writer. Indeed, despite the algorithm's somewhat weird writing pattern and occasional errors, only three or four of the dozens of people who commented on his top post on Hacker News raised suspicions that it might have been generated by an algorithm. All those comments were immediately downvoted by other community members.

For experts, this has long been the worry raised by such language-generating algorithms. Ever since OpenAI first announced GPT-2, people have speculated that it was vulnerable to abuse. In its own blog post, the lab focused on the AI tool's potential to be weaponized as a mass producer of misinformation. Others have wondered whether it could be used to churn out spam posts full of relevant keywords to game Google.

Porr says his experiment also shows a more mundane but still troubling alternative: people could use the tool to generate a lot of clickbait content. "It's possible that there's gonna just be a flood of mediocre blog content because now the barrier to entry is so easy," he says. "I think the value of online content is going to be reduced a lot."

Porr plans to do more experiments with GPT-3. But he's still waiting to get access from OpenAI. "It's possible that they're upset that I did this," he says. "I mean, it's a little silly."

Update: Additional details have been added to the text and photo captions to explain how Liam Porr created his blog and got it to the top of Hacker News.

Link:

A college kids fake, AI-generated blog fooled tens of thousands. This is how he made it. - MIT Technology Review

This article discusses how AI has become a vital tool to the industry | Security News – SourceSecurity.com

Artificial intelligence (AI) is more than a buzzword. AI is increasingly becoming part of our everyday lives, and a vital tool in the physical security industry. In 2020, AI received more attention than ever and expanded the ways it can contribute value to physical security systems. This article will revisit some of those developments at year-end, including links back to the originally published content.

In the security market today, AI is expanding the use cases, making technologies more powerful and saving money on manpower costs - and today represents just the beginning of what AI can do for the industry. What it will never do, however, is completely take the place of humans in operating security systems. There is a limit to how much we are willing to turn over to machines - even the smartest ones.

"Apply AI to security and now you have an incredibly powerful tool that allows you to operate proactively rather than reactively," said Jody Ross of AMAG Technology, one of our Expert Roundtable Panelists.

AI made its initial splash in the physical security market by transforming the effectiveness of video analytics. However, now there are many other applications, too, as addressed by our Expert Panel Roundtable in another article. Artificial intelligence (AI) and machine learning provide useful tools to make sense of massive amounts of Internet of Things (IoT) data. By helping to automate low-level decision-making, the technologies can make security operators more efficient.

Intelligent capabilities can expand integration options such as increasing the use of biometrics with access control. AI can also help to monitor mechanics and processes. Intelligent systems can help end users understand building occupancy and traffic patterns and even to help enforce physical distancing. These are just a few of the possible uses of the technologies - in the end, the sky is the limit.

AI is undoubtedly one of the bigger disrupters in the physical security industry, and adoption is growing at a rapid rate. And it's not just about video analytics. Rather, it is AI applied to data that remains largely untapped by the security industry. Bottom line: AI can change up your security game by automatically deciphering information collected from a wide range of sources, past and present, to predict the future. That's right. You can look into the future.

Now, Intrusion Detection (Perimeter Protection) systems with cutting-edge, built-in AI algorithms can recognise a plethora of different object types and distinguish objects of interest, thus significantly decreasing the false-positive intrusion rate. The more advanced AI-based systems enable users to draw regions of interest (ROIs) based on break-in points, areas with high-value assets, and anywhere else alerts may be beneficial.
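The ROI logic described above amounts to a simple geometric filter: keep a detection only if it is an object class of interest and falls inside a user-drawn polygon. The detection format, class names, and polygon below are hypothetical, chosen only to make the idea concrete.

```python
def point_in_roi(x, y, roi):
    # Ray-casting test: a point is inside a polygon if a horizontal ray
    # from it crosses the polygon's edges an odd number of times.
    inside = False
    for i in range(len(roi)):
        x1, y1 = roi[i]
        x2, y2 = roi[(i + 1) % len(roi)]
        if (y1 > y) != (y2 > y):
            if x < x1 + (y - y1) * (x2 - x1) / (y2 - y1):
                inside = not inside
    return inside

def filter_alerts(detections, roi, interesting=frozenset({"person", "vehicle"})):
    # Alert only on object classes of interest inside the drawn region,
    # suppressing e.g. animals, foliage, or activity outside the ROI.
    return [d for d in detections
            if d["label"] in interesting and point_in_roi(d["x"], d["y"], roi)]

fence_line = [(0, 0), (10, 0), (10, 10), (0, 10)]  # hypothetical break-in ROI
alerts = filter_alerts(
    [{"label": "person", "x": 5, "y": 5},
     {"label": "cat", "x": 5, "y": 5},
     {"label": "person", "x": 20, "y": 5}],
    fence_line,
)
```

In a real system the object labels would come from the camera's AI classifier, which is exactly where the false-positive reduction happens: a cat at the fence line or a person outside the ROI never raises an alert.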

Similarly, AI Loitering Detection can be used to receive alerts on suspicious activity outside any given store. The loitering time and region of interest are customisable in particular systems, which allows for a range of detection options. Smart security is advancing rapidly. As AI and 4K rise in adoption on smart video cameras, these higher video resolutions are driving the demand for more data to be stored on-camera. AI and smart video promise to extract greater insights from security video.
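Loitering detection of this kind reduces to tracking how long each object stays inside a region. A minimal sketch, assuming per-object tracks of timestamped positions and an invented storefront ROI (the threshold and region are the customisable parameters mentioned above):

```python
def longest_dwell(track, in_roi):
    # track: time-ordered (timestamp_s, x, y) observations for one tracked
    # object; returns its longest continuous stay inside the region.
    longest, entered = 0.0, None
    for t, x, y in track:
        if in_roi(x, y):
            entered = t if entered is None else entered
            longest = max(longest, t - entered)
        else:
            entered = None  # leaving the region resets the dwell clock
    return longest

def is_loitering(track, in_roi, threshold_s=60.0):
    return longest_dwell(track, in_roi) >= threshold_s

# Hypothetical rectangular region outside a storefront.
storefront = lambda x, y: 0 <= x <= 10 and 0 <= y <= 5
```

An object that merely passes through never accumulates enough continuous dwell time to trigger an alert, which is what separates loitering from ordinary foot traffic.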

Complex, extensive camera networks will already require a large amount of data storage, particularly if this is 24/7 monitoring from smart video-enabled devices. Newer edge computing will play an important role in capturing, collecting, and analysing data. There are many more types of cameras being used today, such as body cameras, dashboard cameras, and new Internet of Things (IoT) devices and sensors.

Video data is so rich nowadays that you can analyse it and deduce a lot of valuable information in real time, instead of post-event. In smart city applications, the challenge of identifying both physical and invisible threats to meet urban citizens' needs will demand a security response that is proactive, adaptable and dynamic.

As we look ahead to the future of public safety, it's clear that new technologies, driven by artificial intelligence (AI), can dramatically improve the effectiveness of today's physical security space. For smart cities, the use of innovative AI and machine learning technologies has already started to help optimise security solutions.

In sports stadium applications, AI's role in getting fans and spectators back after the COVID pandemic is huge, through capabilities such as social distance monitoring, crowd scanning/metrics, facial recognition, fever detection, track and trace, and behavioural analytics. Technologies such as AI-powered collaboration platforms now work alongside national leagues, franchises and governing bodies to implement AI surveillance software into their CCTV/surveillance cameras.

This is now creating a more collaborative effort from the operations team in stadiums, rather than purely security. AI surveillance software, when implemented in surveillance cameras, can be accessed by designated users on any device and on any browser platform. One of the biggest advantages of using AI technology is that it's possible to integrate this intelligent software into building smarter, safer communities and cities.

Essentially, this means developing a layered system that connects multiple sensors for the detection of visible and invisible threats. Integrated systems mean that threats can be detected and tracked, with onsite and law enforcement notified faster, and possibly before an assault begins to take place. In many ways, it's the equivalent of a neighbourhood watch programme made far more intelligent through the use of AI.

Using technology in this way means that thousands of people can be screened seamlessly and quickly, without invading their civil liberties or privacy. AI's ability to detect visible or invisible threats or behavioural anomalies will prove enormously valuable to many sectors across our global economy. Revolutionary AI-driven technologies can help to fight illicit trade across markets. AI technologies in this specific application promise to help build safer and more secure communities in the future.

AI can support the ongoing fight against illicit trade on a global scale in a tangible way. For financial transactions at risk of fraud and money laundering, for example, tracking has become an increasing headache if done manually. As a solution to this labour-intensive process, AI technology can be trained to follow all the compliance rules and process a large number of documents - often billions of pages of documents - in a short period of time.

Visit link:

This article discusses how AI has become a vital tool to the industry | Security News - SourceSecurity.com

World’s First AI-Solution for Primary Diagnosis of Breast Cancer Deployed by Ibex Medical Analytics and KSM, the Research and Innovation Center of…

TEL AVIV, Israel, Dec. 16, 2020 /PRNewswire/ -- Ibex Medical Analytics, a pioneer in artificial intelligence (AI)-based cancer diagnostics, and KSM, the Research and Innovation Center of Maccabi Healthcare Services - Israel's leading HMO - announced today a first-of-a-kind pilot of Ibex's Galen Breast solution for AI-powered primary diagnosis of breast cancer at Maccabi's Pathology Institute.

Breast cancer is the most common malignant disease in women worldwide, with over 2 million new cases each year. Early and accurate detection is critical for effective treatment and saving women's lives.

The pilot at Maccabi's Pathology Institute includes 2,000 breast biopsies on which pathologists will use Galen Breast as a First Read application. It is the first-ever deployment of an AI application for primary diagnosis of breast cancer.

During the pilot, all breast biopsies examined at Maccabi will be digitized using a digital pathology scanner, and automatically analyzed by the Galen Breast solution prior to review by a pathologist. The solution detects suspicious findings on biopsies, such as regions with high probability of including cancer cells, and classifies them to one of three risk levels, ranging from high risk of cancer to benign. The Galen Breast First Read is designed to help pathologists diagnose breast biopsies more accurately, more efficiently, and at a considerably faster turnaround time compared to diagnosis on a microscope.

Ibex's AI solution has been used at Maccabi's Pathology Institute since 2018, and already today, all breast and prostate biopsies undergo AI-based second read, supporting improved accuracy and quality control. The solution alerts when discrepancies between the pathologist's diagnosis and the AI algorithm's findings are detected, thus providing a safety net in case of error or misdiagnosis.
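The second-read safety net described above amounts to comparing the pathologist's diagnosis with the AI finding per case and alerting on any discrepancy. A minimal sketch, with hypothetical field names and risk labels (the actual Galen classification interface is not public in this article beyond its three risk levels):

```python
def second_read_alerts(cases):
    """Compare each pathologist diagnosis with the AI's finding and
    return the case IDs where they disagree, so those biopsies can be
    re-reviewed before a misdiagnosis slips through."""
    return [c["case_id"] for c in cases if c["pathologist"] != c["ai"]]
```

In the pilot's first-read mode, the AI label would instead arrive before the pathologist's review, prioritising high-risk cases in the queue.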

"We are proud to use AI as an integral part of breast cancer diagnosis," said Judith Sandbank, MD and Director of the Pathology Institute at Maccabi. "We have already had a successful experience with Ibex's AI solution, enabling us to implement quality control and perform second read on biopsies, and now we are making a significant leap forward with the integration of AI into primary cancer diagnosis."

"Artificial intelligence is revolutionizing healthcare, and its integration into clinical practice will significantly improve the ability to diagnose cancer quickly and efficiently," said Dr. Chaim Linhart, Co-founder and CTO of Ibex. "Our solutions are used in routine practice in pathology laboratories worldwide, and have already helped detect breast and prostate cancers that were misdiagnosed by pathologists as benign. It is now time to take AI to the next level and employ its capabilities across a broader range of the diagnostic workflow."

About Ibex Medical Analytics

Ibex uses AI to develop clinical-grade solutions that help pathologists detect and grade cancer in biopsies. The Galen Prostate and Galen Breast are the first-ever AI-powered cancer diagnostics solutions in routine clinical use in pathology and deployed worldwide, empowering pathologists to improve diagnostic accuracy, integrate comprehensive quality control and enable more efficient workflows. Ibex's solutions are built on deep learning algorithms trained by a team of pathologists, data scientists and software engineers. For more information go to http://www.ibex-ai.com.

About KSM

KSM (Kahn-Sagol-Maccabi), the Maccabi Research and Innovation Center, was founded in 2016 in cooperation with Morris Kahn and Sami Sagol. KSM has unique access to Maccabi's professional abilities and wealth of medical knowledge, including a large database of 2.5 million members with 30 years of data collection. We are a strong force in multiple global health areas. Our Innovation & Big Data utilizes advanced data sources and AI technologies. We have founded Israel's largest Biobank (over 450K samples collected and analyzed), Clinical Research activities, and a highly awarded Epidemiological Research department. KSM is leading advanced global health improvements by partnering with well-known scientists, researchers, academic institutions, pharmaceutical companies, startups, and tech companies to create and expedite medical breakthroughs. Our co-operations within the global health eco-system allow us to deliver groundbreaking discoveries and solutions - shaping the future of health. http://www.ksminnovation.com.

Media Contact: Laura Raanan, GK for Ibex, [emailprotected]

SOURCE Ibex Medical Analytics


Revealed: Where TfL Is Deploying 20 AI Cameras Around London, and Why – Gizmodo UK

London's CCTV cameras are about to get a lot smarter, thanks to a new partnership between Transport for London, the capital's transport agency, and VivaCity Labs. Together, the pair are rolling out 20 new artificial-intelligence-enabled cameras across the centre of the city.

But why? The reason TfL is interested in the cameras is a bit of a no-brainer: as the organisation is responsible for making sure Londoners can get around the city, the more data it has, the better. If it can more accurately monitor crowding and congestion, and understand the journeys people are actually taking, that data can inform both how TfL plans infrastructure improvements for the future (would extra cycle lanes here be a good idea?) and immediate traffic management challenges (keep the lights green for 3 seconds longer on this road after a football match).

Last July, TfL rolled out its Tube tracking full time - which uses the wifi signals from our phones to follow us around the Tube network for similar reasons. But taking the Tube is only one type of journey; what about cars, buses, bikes and pedestrians? TfL already has hundreds of traffic cameras placed around London (you can even watch them in close to real time), but these cameras are dumb, and to understand what is going on in the pictures requires a human operator to take a look and decide what the pictures are telling us.

Hence, enter stage left VivaCity Labs. The VivaCity Sensor Platform makes use of an artificial intelligence layer on top of the cameras, to analyse images and reveal insights that TfL might find useful.

Image: VivaCity

For example, point it at a road and it will count all of the vehicles that pass by - but instead of just counting vehicles like many existing systems, it will classify the vehicles by type, giving TfL a breakdown on the number of cars, vans, lorries and so on. It will also estimate the speed of each road user (though the company's documentation points out this is not for law enforcement). According to the launch press release, VivaCity's cameras are up to 98 per cent accurate.
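The per-type counting and speed estimation described above reduces to aggregating a stream of classified detections. The sketch below assumes a detection stream of (type, speed) pairs; the function name and data shape are illustrative, not VivaCity's actual API:

```python
from collections import Counter

def summarise_detections(detections):
    """detections: (vehicle_type, estimated_speed_kmh) pairs, as a
    classifier running over camera frames might emit. Returns per-type
    counts and the mean estimated speed for each type."""
    counts = Counter(vtype for vtype, _ in detections)
    mean_speeds = {
        vtype: sum(s for vt, s in detections if vt == vtype) / n
        for vtype, n in counts.items()
    }
    return counts, mean_speeds
```

Only these summaries, never the underlying frames, would need to leave the camera for TfL's purposes.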

In a response to a Freedom of Information request from Gizmodo UK, TfL also revealed that one unique feature of the camera is that it will be able to be trained to spot new and specific vehicle types, such as London buses or cargo-bikes, to differentiate them from similar vehicles. In other words, TfL could soon have data on Londons busiest Deliveroo routes.

The cameras are also able to identify how people are moving within a camera's field of vision - so it could conceivably be used to, for example, see how long it takes people to cross the road or how the road space is being used.

Judging by the map of locations, it appears that for this initial rollout, cameras are being placed around London's inner ring-road, following basically the Congestion Charge zone, as well as on a number of bridges and pinch points in the central core of the city.

This makes a lot of sense: presumably the eventual plan is to replace the actual Congestion Zone cameras with the VivaCity sensors. This is wild speculation, but not only would that mean they can provide more detailed analytics; by being generic cameras running software, it could ultimately mean saved money and more flexibility. Specialist cameras would not be required for Congestion Zone enforcement, and if TfL wanted to expand or retract the zone, it'd simply be a case of pressing a button on some software to switch on number plate logging at other VivaCity camera locations, rather than a bigger manual process. (Again, wild speculation, but we can imagine a future where the Congestion Zone moves dynamically - maybe covering a wider area on weekdays than weekends, say.)

Overall, 43 cameras are being placed around 20 locations for a trial period of two years. Here's a full list of locations:

Image: VivaCity

The rollout will, of course, provoke privacy concerns. After all, these cameras are not just taking pictures, but are interpreting them too. Though in all of its communications on the new cameras, TfL has downplayed any privacy concerns, saying in its original press release that "All video captured by the sensors is processed and discarded within seconds, meaning that no personal data is ever stored."

The data, it emphasises, is processed in the camera units themselves - and it is only the outputted data, such as counts of the number of vehicles, that is being sent back to TfL for storage. All of the images collected by the cameras are apparently discarded within seconds.
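The on-device processing model TfL describes - analyse in the camera, transmit only aggregates, discard the imagery - can be sketched as below. The function and the classifier interface are assumptions for illustration; the actual VivaCity pipeline is not public:

```python
def process_frame_on_device(frame, classify, aggregate):
    """Classify road users in a frame entirely on the camera unit,
    update anonymous per-type counts, and discard the frame; only the
    aggregate counts are ever transmitted back to the operator."""
    for label in classify(frame):
        aggregate[label] = aggregate.get(label, 0) + 1
    del frame  # raw imagery is never stored or sent anywhere
    return aggregate
```

Under this design, the privacy claim rests on the device boundary: no pixels cross it, only counts.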

What's most illustrative of TfL's careful approach is that it has confirmed to Gizmodo UK that it does not intend to use one other feature of VivaCity's product: the ability to track vehicles as they travel across the city using number-plate recognition.

In response to our FOI, TfL says this:

Our requirements for this data include cycle and pedestrian counts, full traffic classified counts (13 types of traffic), turning movement counts, link delays, queue length monitoring and pedestrian crowding.

Notice how all of these tasks can be carried out with just a single camera - rather than requiring data to be stored and matched up elsewhere. We've asked TfL to confirm whether or not our hunch is correct.

However, there are still gaps that might leave privacy experts asking questions. According to TfL, VivaCity Labs has produced a number of Privacy Impact Assessments, but because they were carried out by VivaCity, TfL says that it does not hold the assessments - and of course, being a private company, VivaCity is not subject to Freedom of Information laws. This also implies that VivaCity, not TfL, is the data controller of this data. We've asked TfL for a more detailed explanation of its rationale for not releasing these private assessments.

Ultimately, this camera rollout sits at the nexus of a very familiar debate. As we saw with TfL's WiFi tracking system, there is a very real trade-off between giving planners better data and respecting privacy. TfL does, once again, appear to have behaved broadly responsibly - but it is still an important debate to have. The future is one where every CCTV camera will, by default, have this sort of functionality baked in - so better to debate now whether the gains are worth it, rather than risk waiting until it is too late.

Featured Photo by Paweł Czerwiński on Unsplash


There's Nothing Fake About These 10 Artificial Intelligence Stocks to Buy – InvestorPlace

Artificial intelligence is one of those catchy phrases that continues to grab investors' attention. Like 5G, it tugs on the sleeves of those looking to get in on cutting-edge technology. While it is a very important sector of technology, investors need to be wary of hype and focus on reality before buying AI stocks.

Take, for example, International Business Machines (NYSE:IBM). IBM has been on the front line of AI with its Watson-branded products and services. Sure, it did a bang-up job on Jeopardy and it partners with dozens of companies. But for IBM shareholders, Watson is not a portfolio favorite.

Over the past five years, IBM has lost 28.7% in price compared to the S&P 500's gain of 37.5% and the S&P Information Technology Index's gain of 130%. And over the past 10 years, IBM's AI leadership has generated a shareholder loss of 3.4%.

Source: Chart by Bloomberg

IBM (White), S&P 500 (Red) & S&P 500 Information Technology (Gold) Indexes Total Return

But AI is more than just a party trick like Watson. AI brings algorithms into computers. These algorithms then take internal and external data, and in turn process decisions behind all sorts of products and services. Think, for example, of something as simple as targeted ads. Data is gathered and processed while you simply shop online.

But AI can go much further. Think, of course, of autonomous vehicles. AI takes all sorts of input data and the central processor makes calls to how the vehicle moves and at what speed and direction.

Or in medicine, AI brings quicker analysis of symptoms, diagnostic data and tests.

And the list goes on.

So then what do I bring to the table as a human? I have found ten AI stocks that aren't just companies using AI. These are companies to own and follow for years, complete with dividends along the way.

Let's start with the index of the best technology companies found inside that S&P Information Technology index cited earlier. The Vanguard Information Technology ETF (NYSEARCA:VGT) synthetically invests in the leaders of that index. It should be the starting point for all technology investing, as it offers a solid foundation for portfolios.

Source: Chart by Bloomberg

Vanguard Information Technology ETF (VGT) Total Return

The exchange-traded fund continues to perform well. Its return for just the past five years runs at 141.1% for an average annual equivalent return of 19.2%. This includes the major fall in March 2020.
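The article's conversion from a cumulative return to an "average annual equivalent" is the standard geometric annualisation, which can be checked directly (a 141.1% cumulative gain over five years does work out to about 19.2% per year):

```python
def annualised_return(total_return_pct, years):
    """Convert a cumulative total return over `years` into its
    average annual (geometric) equivalent, in percent."""
    growth = 1 + total_return_pct / 100
    return (growth ** (1 / years) - 1) * 100

print(round(annualised_return(141.1, 5), 1))  # -> 19.2
```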

Before I move to the next of my AI stocks, it's important to note that data doesn't just get collected. It also has to be communicated quickly and efficiently to make processes work.

Take the AI example mentioned earlier for autonomous vehicles. AI driving needs to know not just what is in front of the vehicle, but what is coming around the next corner. This means having dependable data transmission. And the two leaders that make this happen now and will continue to do so with 5G are AT&T (NYSE:T) and Verizon (NYSE:VZ).

Source: Chart by Bloomberg

AT&T (White) & Verizon (VZ) Total Return

Much like other successful AI stocks, AT&T and Verizon have lots of communications services and content. This provides some additional opportunities and diversification but can limit investor interest in the near term. This is the case with AT&T and its Time Warner content businesses. But this also means that right now, both of these stocks are good bargains.

And they have a history of delivering to shareholders. AT&T has returned 100% over the past 10 years, while Verizon has returned 242%.

AI takes lots of equipment. Chips, processors and communications gear all go into making AI computers and devices. And you should buy these two companies for their role in equipment: Samsung Electronics (OTCMKTS:SSNLF) and Ericsson (NASDAQ:ERIC).

Samsung is one of the global companies that is essential for nearly anything that involves technology and hardware. Hardly any device out there isn't either a Samsung product or one with components invented and produced by Samsung.

And Ericsson is one of the leaders in communications gear and systems. Its products make AI communications and data transmission work, including on current 4G and 5G.

Source: Chart by Bloomberg

Samsung Electronics (White) & Ericsson (Red) Total Return

Over the past 10 years Samsung has delivered a return of 235.4% in U.S. dollars while Ericsson has lagged, returning a less-than-stellar 6.5%.

Both have some challenges in their stock prices. Samsung's shares are more challenging to buy in the U.S. And Ericsson faces economic challenges as it's deep in the European market. But in both cases, you get great products from companies that are still value buys.

Samsung is valued at a mere 1.2 times book and 1.3 times trailing sales, which is significantly cheaper than its global peers. And Ericsson is also a bargain, trading at a mere 1.3 times trailing sales.
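The valuation multiples quoted above are simple ratios of market value to book value or trailing sales; a quick sketch makes the arithmetic explicit (the dollar figures below are purely illustrative, not Samsung's or Ericsson's actual financials):

```python
def price_to_book(market_cap, book_value):
    """Market value per unit of accounting book value."""
    return market_cap / book_value

def price_to_sales(market_cap, trailing_sales):
    """Market value per unit of trailing twelve-month revenue."""
    return market_cap / trailing_sales

# Hypothetical example: $240bn market cap on $200bn book value
# is a 1.2x price-to-book multiple.
```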

To make AI work you need lots of software. This brings in Microsoft (NASDAQ:MSFT). The company is one of the cornerstones of software; its products have all sorts of tech uses.

And AI, especially on the move, needs quick access to huge amounts of data in the cloud. Microsoft and its Azure-branded cloud fit the bill.

Source: Chart by Bloomberg

Microsoft (MSFT) Total Return

Microsoft, to me, is the poster child of successful technology companies. It went from one-off unit sales of packaged products to recurring income streams from software subscriptions. Now it's pivoting to cloud services. And shareholders continue to see rewards. The company's stock has returned 702.7% over the past 10 years alone.

AI and the cloud are integral in their processing and storage of data. But beyond software and hardware, you need somewhere to house all of that hardware, complete with power and climate controls, transmission lines and wireless capabilities.

This means data centers. And there are two companies set up as real estate investment trusts (REITs) that lead the way with their real estate and data centers. These are Digital Realty Trust (NYSE:DLR) and Corporate Office Properties (NYSE:OFC).

Digital Realty has the right name, as Corporate Office Properties doesn't tell the full story. The latter company has Amazon (NASDAQ:AMZN) and its Amazon Web Services (AWS) as exclusive clients in core centers, including the vital hub in Northern Virginia.

And the stock-price returns show the power of the name. Digital Realty has returned 310.9% against a loss of 0.9% for Corporate Office Properties.

Source: Chart by Bloomberg

Corporate Office Properties (White) & Digital Realty (Red) Total Return

But this means that while both are good buys right now, Corporate Office Properties is a particular bargain. The stock price is at a mere 1.7 times the company's book value.

Now I'll get to the newer companies in the AI space. These are the companies that are in various stages of development. Some are private now, or are pending public listings. Others are waiting for larger companies to snap them up.

Most individual investors, unless they have a net worth nearing $1 billion, don't get access. But I have a company that brings this access, and it's my stock for the InvestorPlace Best Stocks for 2020 contest.

Hercules Capital (NYSE:HTGC) is set up as a business development company (BDC) that provides financing to all levels of technology companies. Along the way, it takes equity participation in these companies.

It supports hundreds of current technology companies using or developing AI for products and services along with a grand list of past accomplishments. The current portfolio can be found here.

I have followed this company since its early days. I like that it is very investor focused, complete with big dividend payments throughout the years. And it has returned 184.3% over the past 10 years alone.

Source: Chart by Bloomberg

Hercules Capital (HTGC) Total Return

Who doesn't buy goods and services from Amazon? I am a Prime member with video, audio and book services. And I also have many Alexa devices that I use throughout the day. While I don't contract directly with its AWS, I use its cloud storage as part of other services. Few major companies that are part of daily life make use of AI more than Amazon.

The current lockdown mess has made Amazon a further necessity. Toilet paper, paper towels, cleaning supplies, toothpaste, soap and so many other items are sold and delivered by Amazon.

And I also use the platform for additional digital information from the Washington Post. Plus, I get food and other household goods from Whole Foods, and products for my miniature dachshund, Blue, come from Amazon.

This is a company that I have always liked as a consumer, but didn't completely get as an investor. Growth for growth's sake was what it appeared to be from my perspective. But I have been coming to a different understanding of what Amazon means as an investment.

It really is more of an index of what has been working in the U.S. for cloud computing and goods and services. And the current mess makes it not just more relevant but a necessity. The proof comes from the sales that keep rolling up for the company in real GAAP terms.

Source: Chart by Bloomberg

Amazon Sales Revenue (GAAP)

I know that subscribers to my Profitable Investing don't pay to have me tell them about Amazon. But I am recommending buying shares, as the company is really a leading index of the evolving U.S. It is fully engaged in benefitting from AI, like my other AI stocks.

Neil George was once an all-star bond trader, but now he works morning and night to steer readers away from traps and into safe, top-performing income investments. Neil's new income program is a cash-generating machine, one that can help you collect $208 every day the markets open. Neil does not have any holdings in the securities mentioned above.
