Grid4C and Trilliant Partner to Deliver Next Generation Smart Meter Analytics Technologies, Powered by AI – Business Wire

AUSTIN, Texas--(BUSINESS WIRE)--Trilliant Networks, a global provider of leading Smart Grid and Smart City IoT solutions, today announced a groundbreaking partnership with Grid4C to deploy its AI-powered solutions to Trilliant's smart metering platform. Utilizing Trilliant's expanded IIoT platform for Smart Grid and Smart City solutions, the partnership will enhance Trilliant's ability to deliver real-time intelligence and analytics across all layers of technology to new and existing customers.

Grid4C's AI grid edge solutions provide energy managers with real-time predictions and actionable insights for both their operations and customer-facing applications. When combined with Trilliant's robust communications and edge device integrations for metering, distribution automation, street lighting, environmental monitoring, and much more, the platform delivers valuable aggregated data for applications. The data analytics further enhance the visibility of operational efficiency goals, improve proactive management of devices and assets, deliver insights into new revenue streams, and enhance security, enabling business outcomes and customer success.

"Partnering with Grid4C enhances Trilliant's ability to provide accurate, real-time data and analytics to our customers," says Steven Lupo, managing director for Trilliant, Canada. "Grid4C's AI technology takes our solutions to the next level, enabling Trilliant to provide an even deeper level of customer service to our many trusted partners."

At its core, Trilliant's partnership with Grid4C will further enable a continuous stream of intelligent data from all layers of a utility's or city's deployed technologies. Unlike other products on the market, Trilliant's platform will now provide managers with unlimited bandwidth to monitor the performance of applications in real time.

"For more than 20 years Trilliant has been supporting some of the world's largest utilities with their AMI and smart meter networking technologies, and we are excited to partner with them," says Dr. Noa Ruschin-Rimini, Grid4C founder and CEO. "Our machine learning insights will be embedded into Trilliant's technology, providing enhanced reliability for providers and customers while simultaneously conserving energy and saving pivotal resources."

About Trilliant

Trilliant empowers the energy industry with the only purpose-built communications platform that enables utilities and cities to securely and reliably deploy any application - on one powerful network. With the most field-proven, globally compliant solution in the market, Trilliant empowers you by connecting the world of things. http://www.trilliant.com

About Grid4C

Selected as a leader in AI solutions for the energy industry by Greentech Media and Navigant Research, Grid4C empowers energy providers and consumers by enabling the power to foresee, leveraging advanced machine learning capabilities to deliver accurate, granular predictions, which are crucial for tackling the rising challenges of today's energy industry. Grid4C's plug-and-play solutions analyze the massive amounts of sub-hourly data collected from millions of smart meters and IoT devices and, together with customer data, pricing information and more, deliver new revenue streams, enhance customer value, improve the efficiency of energy operations, and maximize profit. Its portfolio consists of Predictive Home Advisor, which includes non-intrusive household appliance fault prediction and load disaggregation capabilities; Predictive Operational Analytics, which enables better decisions for coordination of distributed energy resources with meter, sub-meter, and asset-level forecasting; Predictive Customer Analytics, which targets and predicts adoption of new rate plans and utility programs; and more. For more information, please visit http://www.grid4c.com

Report on AI in UK public sector: Some transparency on how government uses it to govern us would be nice – The Register

A new report from the Committee on Standards in Public Life has criticised the UK government's stance on transparency in AI governance and called for ethics to be "embedded" in the frameworks.

The 74-page treatise noted that algorithms are currently being used or developed in healthcare, policing, welfare, social care and immigration. Despite this, the government doesn't publish any centralised audit on the extent of AI use across central government or the wider public sector.

Most of what is in the public realm at present is thanks to journalists and academics making Freedom of Information requests or rifling through the bins of public procurement data, rather than public bodies taking the proactive step of releasing information about how they use AI.

The committee said the public should have access to the "information about the evidence, assumptions and principles on which policy decisions have been made".

In focus groups assembled for the review, members of the public themselves expressed a clear desire for openness, as you'd expect.

"This serious report sadly confirms what we know to be the case that the Conservative government is failing on openness and transparency when it comes to the use of AI in the public sector," shadow digital minister Chi Onwurah MP said in a statement.

"The government urgently needs to get a grip before the potential for unintended consequences gets out of control," said Onwurah, who argued that the public sector should not accept further AI algorithms in decision-making processes without introducing further regulation.

Simon Burall, senior associate with the public participation charity Involve, commented: "It's important that these debates involve the public as well as elected representatives and experts, and that the diversity of the communities that are affected by these algorithms are also involved in informing the trade-offs about when these algorithms should be used and not."

Predictive policing programmes are already being used to identify crime "hotspots" and make individual risk assessments where police use algorithms to determine the likelihood of someone committing a crime.

But human rights group Liberty has urged police to stop using these programmes because they entrench existing biases. Using inadequate data and indirect markers for race (like postcodes) could perpetuate discrimination, the group warned. There is also a "severe lack of transparency" with regard to how these techniques are deployed, it said.

The committee's report noted that the "application of anti-discrimination law to AI needs to be clarified".

In October 2019, the Graun reported that one in three local councils were using algorithms to make welfare decisions. Local authorities have bought machine learning packages from companies including Experian, TransUnion, Capita and Peter Thiel's data-mining biz Palantir (which has its fans in the US public sector) to support a cost-cutting drive.

These algorithms have already caused cock-ups. North Tyneside council was forced to drop TransUnion, whose system it used to check housing and council tax benefit claims, when welfare payments to an unknown number of people were delayed thanks to the computer's "predictive analytics" wrongly classifying low-risk claims as high risk.

The report stopped short of recommending an independent AI regulator. Instead it said: "All regulators must adapt to the challenges that AI poses to their specific sectors."

The committee endorsed the government's intention to establish the Centre for Data Ethics and Innovation as "an independent, statutory body that will advise government and regulators in this area". So that's all right then.

Why eBay believes in open-sourcing Krylov, its AI platform – VentureBeat

It's hard to find a tech company that isn't attempting some sort of AI-related product, service, or initiative these days, but eBay went all-in by building its own AI platform, called Krylov. Sanjeev Katariya, eBay's VP and chief architect of AI and platforms, described Krylov in an interview with VentureBeat: "At the very highest level, Krylov is a machine learning platform that enables data scientists and machine learning engineers to ship all different kinds of models for all kinds of data quickly into production, which gets integrated into user experiences that eBay ships globally."

It's a multi-tenant, cloud-based platform that involves technologies like computer vision and natural language processing (NLP), techniques including distributed training and hyperparameter tuning, and tools germane to eBay's services, like merchandising recommendations, buyer personalization, seller price guidance, and shipping estimates.

eBay representatives would not share how much money the company has put into Krylov. Even if they did, the hard costs, however significant they may or may not be, wouldn't fully capture what eBay has invested to build the platform over years of internal organizational efforts around the globe. And after all that, eBay is now open-sourcing Krylov. Katariya described how the company built the platform and explained why he believes in open-sourcing it.

It's fair to ask why eBay bothered to create an entire platform in the first place. There are other AI platforms out there already: An eBay blog post mentions some, including Google, Facebook's FBLearner, and Uber's Michelangelo.

But, Katariya said, "When you take a look at stuff that lives in open source and stuff that lives in public cloud, one of the things is that they are components. It's not a well-stitched system that puts it all together."

"You've got the data pieces separately. You have bits and pieces of machine learning systems, and training systems. You have runtimes like TensorRT. But when you need to ship a production system and make something work for your customers, you need to operationalize them, stitch them together, and build innovation into that system," he said.

Katariya said that some of what eBay built in-house doesn't exist anywhere else. And eBay does use some of the publicly available tools, "but we add our secret sauces to it."

The company has a massive amount of data to handle: some 1.4 billion user interactions globally (up from 1.2 billion in March) across different languages, in about 190 markets. And then it has to train systems in a distributed fashion and enable both experimentation and production. With so much invested in hiring data scientists and engineers (again, the company wouldn't share numbers), eBay needed to create a platform to maximize their efficiency and output.

Developing AI within a company is not a simple task; it's been well documented that many AI initiatives fail. There are various reasons for this, along with plenty of advice on how to succeed. Yet there's no real guidance for a company the size of eBay, with such grand ambitions. Katariya discussed how eBay approached the challenge of creating a unified AI platform in a company that wasn't yet sufficiently unified. It's a little bit of a chicken-and-egg problem, and it took years.

The first step was breaking down silos across the company. "In the beginning, things were done relatively in some degree of isolation. I won't say that they were completely isolated, but there was a degree of isolation," he said.

But eBay recognized that it had to bring together people who were distributed globally, first and foremost, and then set them up for success. "You have to provide the right tools, the right forum, the right practices, the right mechanics to bring these ideas together. It's not only about code, but it's about collaboration. It's about agile practices," he said.

eBay created what it called the Unified AI Initiative Core Team (ICT). It included people representing all parts of the eventual platform (hardware, compute, network, storage, and data services) as well as the AI domain teams, which eBay defines as internal customers of the platform. These included people within eBay doing AI research and engineering in ads, computer vision, NLP, risk, trust, and marketing.

The company's approach was what it describes as "internal open source," promoting collaboration across geographies and departments that kept all parties close to their domain-specific tasks and problems while remaining connected to the larger effort.

As an additional incentive, eBay created the ML Engineering Fellowship program. It was essentially a way for engineers within the company to do a sort of internship, working on the AI platform under the tutelage of senior domain experts in machine learning engineering.

After all that effort and investment in an AI platform that should ostensibly start giving eBay some return on investment, it's fair to ask why the company is going the open source route. Katariya said that diversity, of multiple kinds, is the impetus.

"One of the fundamental principles that we know, [that] we have leveraged, is that we have a global internal audience building a machine learning platform for us. With that comes the strength of diversity [of] thought and execution," he said, pointing out that many of the components of the platform came from collaboration with global teams from Europe and Asia and other regions.

"Imagine doing that at the company level, of discovering that in diversity lies great power. Now let's extend that to planet Earth and open-source it," he said. "Not only will we be solving an eBay issue or challenge, we will be extending the goodness to the rest of the planet, and we expect that to just evolve kind of [at a] very rapid rate," he said.

Katariya comes off as completely believable and earnest when he says these things. But serving humanity is never the sole goal of a for-profit company. There are business reasons for eBay's decision to open-source Krylov.

"At the end of the day, we at eBay are all about bringing about a global marketplace with computer vision and machine translation and natural language processing and personalization recommendations. We want to bring our best to our users. We want to revive retail stores, mom-and-pop businesses. We want to give them AI," he said. And the company wants to constantly grow its base of users and customers.

"When [we] increase the velocity of our sellers, we can give them new technologies to sell faster and really discover the value that we have to offer. This is the platform that makes that happen," he added.

Future of AI in video games focuses on the human connection – TechTarget

App developers, students and researchers are using the transformative power of AI technologies to develop people's emotional connection to video games.

Since the 2001 introduction of the first AI digital helper, Cortana, in Halo, technology and AI have become pivotal to gameplay. With all the buzz around the release of GPT-3, the newest iteration of OpenAI's popular language model, IT developers are more in tune than ever with the need for creative deployments of popular AI technology. The future of AI in video games lies in the ability of the technology to increase the human connection.

Since the dawn of chatbots and digital assistant creation, one critique has been universal: the helper is not human-like enough. This issue spans enterprises, and IT developers and startups are now developing AI that is human-like, emotional and responsive.

Christian Selchau-Hansen, CEO of enterprise software company Formation and former manager of product at social game development company Zynga, said that one of the major uses of AI in video games is the implementation of generative adversarial network (GAN) technology, image recognition and replication in character design. The ability of an algorithm to read emotion, generate emotion from text and accurately portray emotion enables a heightened level of gameplay.

"Whether it's GPT-3 or the processing and techniques of developments like deepfakes the good things that come from [these developments] are more immersive worlds," Selchau-Hansen said.

"For people to be able to interact with more immersive and complex characters, and not just have the ability to interact with them -- but create new responses based on interactions through facial expressions, language, dialogue and actions," Selchau-Hansen said.

Danny Tomsett, CEO of UneeQ, a digital assistant platform creator, said emotional connection is the creation of a feeling between you and the story or character, and AI allows for the closest representation of visual humans.

Visual representations of humans are not as good as meeting in real life, but a model that can see your emotion and vice versa lets you respond dynamically, Tomsett said.

When looking toward the future of gameplay, Selchau-Hansen imagines a world where you have something akin to a physics engine during the game -- one that controls gravity, wind-resistance and thermal conductivity -- but for emotional interactions.

"You could have an emotional engine where your interactions with a [character] can make them sad, confused, scared, jealous -- and their dialogue would spring from those emotions," Selchau-Hansen said.

The gamification of AI has been a driver of technology, with iterations of Deep Blue and AlphaGo teaching developers that perhaps the most important part of augmenting gameplay is the ability to find the spot between competition and demolition. Gamers want to be challenged but still have a chance to win because their competitors are making human-like decisions.

This idea of competition between humans and computers, a friendly tussle between players, is central to creating brand loyalty -- returning players need to be challenged with dynamic, human-like bots on the other side of the game.

Creating brand loyalty in gaming is also about eschewing flat, two-dimensional, text-based digital interfaces to unlock the power of emotion and story, Tomsett said.

Another crossover between AI and gameplay is the ability to personalize. Much like marketing campaigns and personalized promotions, the future of AI in the video game industry depends on monetizing the emotional connection between the game and the consumer. Algorithms collect data from the game -- what the player collects, what quests they follow, what skins they use -- and suggest and alter additional downloads that have the highest chance of winning over the player.
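
As a deliberately simplified illustration of that idea, the sketch below scores candidate add-ons by how well their tags overlap a player's observed play style. The player profile, tags, and catalog are hypothetical, not any studio's real schema.

```python
# Toy content-based scoring of add-on offers from observed gameplay tags.
# Player tags and catalog entries are hypothetical examples.
def rank_addons(player_tags: set, catalog: list) -> list:
    """Return add-on names ordered by tag overlap with the player, best first."""
    scored = sorted(
        ((len(player_tags & item["tags"]), item["name"]) for item in catalog),
        reverse=True,
    )
    return [name for overlap, name in scored if overlap > 0]

player_tags = {"stealth", "archery", "night_quests"}   # derived from play history
catalog = [
    {"name": "Shadow Armor Skin", "tags": {"stealth", "night_quests"}},
    {"name": "Dragon Mount",      "tags": {"flying", "combat"}},
    {"name": "Ranger Quest Pack", "tags": {"archery", "stealth"}},
]

print(rank_addons(player_tags, catalog))  # highest-overlap add-ons listed first
```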

From gameplay to retail to IT personalization, AI is being used to create and strengthen the idea of product value. That value -- monetary, recreational or business-related -- is offered to the consumer to increase the likelihood of brand loyalty, Selchau-Hansen said.

While the future of AI in video games would naturally point to automation and generated text, the AI-generated video games now testing the fringes of current gaming technology also highlight their limits.

Independent designers are toying with open-source technology to use natural language generation to create virtual games without a gaming studio. Developer Nick Walton's AI Dungeon storytelling game throws you into the development of the decision tree -- your choices change the outcome and help train the game for future players. This interactive virtual role-playing game is modeled on OpenAI's machine learning-based GPT-2 natural language generator. Walton fine-tuned the model, whose smallest released version has roughly 117 million parameters, to output this unique story text.
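
AI Dungeon's training setup isn't detailed here, but the general technique it relies on, sampling continuations from a pretrained GPT-2 model given a story prompt, can be sketched with the publicly released GPT-2 weights via the Hugging Face transformers library. This is an assumed tooling choice for illustration, not Walton's actual code.

```python
# Minimal sketch of prompt-conditioned story generation with pretrained GPT-2,
# using the Hugging Face transformers pipeline. Illustrative only; this is not
# AI Dungeon's fine-tuned model or code.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # smallest released GPT-2

prompt = "You are a knight exploring a ruined castle. You push open the door and"
result = generator(prompt, max_length=80, do_sample=True, temperature=0.9)

print(result[0]["generated_text"])
```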

But the game reflects many of the major issues of language generation. The game is a chaotic story as the program cannot tell what you know or if you have seen a character before. Some of the language is nonsensical. There is no human emotion or human decision making.

Michael Cook, a research fellow at the Royal Academy of Engineering, developed Angelina, an AI digital assistant that is trained to develop intelligently designed video games.

Angelina is designed to make games based on simple theme inputs and is the first system to make 3D games within the game design engine Unity. Despite the nonsensical gameplay, somewhat comical instability and terrible UX, games by Angelina are an interesting foray into what it means to train an AI or machine learning system -- it's a peek into the mechanics of how to train computational creativity. When you input a word or phrase, Angelina accesses a word association database to create a framework for creation. A "secret" theme leads to word associations like "crypt," "dark," "hidden" and "dungeon," but it can also lead to a tangled web of characters, color and ineffective jump-scares.
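
The article describes that flow only at a high level; as a toy sketch of the theme-to-framework step, the snippet below seeds a rough design from a word-association lookup. The association table and output fields are invented for the example and are not Angelina's actual data or code.

```python
# Toy illustration of seeding a game design from theme word associations.
# The association table and design fields are invented for this example.
ASSOCIATIONS = {
    "secret": ["crypt", "dark", "hidden", "dungeon"],
    "storm":  ["rain", "thunder", "shipwreck", "gray"],
}

def seed_game(theme: str) -> dict:
    """Map a one-word theme to a rough design framework."""
    words = ASSOCIATIONS.get(theme, [theme])
    return {
        "mood": theme,
        "palette": "low-light" if "dark" in words else "muted",
        "level_props": words,
    }

print(seed_game("secret"))
# {'mood': 'secret', 'palette': 'low-light', 'level_props': ['crypt', 'dark', 'hidden', 'dungeon']}
```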

It's clear that the future of AI in video games lies somewhere between generated text and finely crafted human emotion to wrangle consumers.

What's the best way to measure the smarts of AI systems? Researchers are developing an IQ test – GeekWire

WSU Professor Diane Cook is one of the university's researchers working to create a test for measuring AI as part of a project funded by DARPA. (WSU Photo)

Artificial intelligence can do a lot of impressive things, like find snow leopards among Himalayan grasses captured by remote cameras, maneuver self-driving cars through traffic, and defeat world-class opponents in the game Go.

But are these systems actually intelligent, as humans perceive the concept?

Researchers at Washington State University in Pullman are developing an IQ test to challenge AI systems to see what they really know.

"We have AI systems out there that are getting really good at a variety of tasks," said WSU regents professor Diane Cook. "But those feats tend to be narrow within each system. Is it really intelligent because it's just learned to do that one task?"

Cook and Larry Holder, both of whom are professors in WSU's School of Electrical Engineering and Computer Science, recently received a $1 million grant that will run up to five years to tackle the question. The money comes from the U.S. military's Defense Advanced Research Projects Agency, or DARPA.

The funding began a month ago, and the researchers are starting with basic questions about the scope of intelligence, which could include recognizing images, understanding and generating natural language, reasoning, and using planning in problem solving. The scientists want to use rigorous measures, such as the ability to respond to novel experiences and transfer knowledge to different situations. They also want to test for bias in a machines knowledge; bias can lead to racial, gender and other forms of discrimination, depending on the algorithms application.

It's a difficult task to define and measure intelligence. Just look at how hard it has been to come up with effective standardized tests to measure the full range of smarts for students or job applicants.

"If you're trying to see if your machine has general intelligence, you have to define what you mean by general intelligence and make sure your test is really testing that," said Melanie Mitchell, a Portland State University professor in the Department of Computer Science who is not part of the DARPA project.

One of the challenges in the field is the way in which machines learn. Mitchell gave an example of a student in her lab who was teaching a program to recognize photos that contain animals. It appeared to be learning the skill until the researchers realized that it wasn't the image of the creature that the algorithm was keying into, but rather the background blurriness. It turned out that the animals were typically in focus while the background was not, while landscape-only scenes had crisp backgrounds.

"A lot of misunderstanding is that the machine learned to do a certain thing like play Go or recognizing objects, so we assume it learned it in the same way we do," Mitchell said. "We're surprised when it didn't learn in the way we do, and it can't transfer its knowledge."

The WSU project is part of DARPA's Science of Artificial Intelligence and Learning for Open-world Novelty (SAIL-ON) program.

"For AI systems to effectively partner with humans across a spectrum of military applications, intelligent machines need to graduate from closed-world problem solving within confined boundaries to open-world challenges characterized by fluid and novel situations," says the SAIL-ON website. The program's goal is to create and test high-performing AI systems to meet the military's needs.

There are other organizations working to expand and understand AI abilities. In the Northwest that includes Seattle's Allen Institute for Artificial Intelligence (AI2) and the AI group at the University of Washington's Paul G. Allen School of Computer Science and Engineering. In September, AI2 announced that it had built an AI program called Aristo that is smart enough to pass an eighth-grade, multiple-choice science test.

WSU's Holder has an Artificial Intelligence Quotient, or AIQ, website with some initial tests for AI developers to quiz their systems. The site is a publicly available tool that will also provide data to the researchers.

"We are focused on testing and improving systems that can be more general-purpose, like a robot assistant that can help you with many of your day-to-day tasks," Holder said in a prepared release.

The WSU scientists aim to create a test that will grade AI technology according to the difficulty of the problems it can solve. Initial plans for tests include video games, answering multiple-choice problems and solving a Rubik's Cube.

"It's an opportunity," said Cook, "to get back to the grassroots and say what AI is."

The journey that organizations should embark on to realize the true potential of AI – The Indian Express

New Delhi | Updated: July 13, 2020 4:33:10 pm

Implementing Artificial Intelligence (AI) in an organization is a complex undertaking as it involves bringing together multiple stakeholders and different capabilities. Many companies make the mistake of treating AI as a pure-play technology implementation project and hence end up encountering many challenges and complexities peculiar to AI. There are three big reasons for increased complexity in an AI program implementation: (1) AI is a portfolio-based technology (comprising, for example, sub-categories such as Natural Language Processing (NLP), Natural Language Generation (NLG), and Machine Learning), as compared to many standalone technology solutions; (2) these sub-category technologies (for example, NLP) in turn have many different products and tool vendors with their own unique strengths and maturity cycles; and (3) these sub-category technologies (for example, NLG) are specialists in their functionality and can solve only certain specific problems (for example, NLG technology helps create written texts similar to how a human would create them). Hence, organizations need to do three important things to realize the true potential of AI: Define Ambitious and Achievable Success Criteria, Develop the Right Operating Rhythm, and Create and Celebrate Success Stories.

Most companies define the success criteria of their AI program narrowly or ambiguously. These success criteria are not defined holistically and hence may end up providing sub-optimal benefits to the organization. We suggest that the success criteria of an AI program need to be not only ambitious, achievable, and actionable but also tightly integrated with the overall key strategic objectives and priorities of the organization. For example, a bank whose key strategic goals are to reduce the number of customer complaints and improve the customer experience can benefit immensely from integrating its AI program goals with the goals of this important program (for example, leveraging machine learning and analytics to analyze past complaints data and better understand customer complaint patterns, journeys, and decision points). This interlocking of success criteria will give AI program leaders the right yardsticks to align and measure their progress and contribution. Additionally, it also helps them get the right visibility and sponsorship at the senior leadership levels in the organization, which further improves the chances of success of the AI program.

A successful AI program requires four key ingredients: Right Data, Diverse Skills, Scalable Infrastructure, and Seamless Stakeholder Alignment. It is said that data is the food of an AI program, and hence having the right data (for example, the right volume, type, and quality of data) at the right time is critical to ensure AI programs have the required fuel and energy to complete their intended journey. While good AI skills are in short supply, leveraging constructs such as a nimble CoE (Center of Expertise) increases the chances of optimal utilization of these rare and expensive skills across the organization. Finally, getting various important stakeholders (for example, Global Process Owners, IT Leaders, Internal Control & Risks, Continuous Improvement, and HR) to work together seamlessly is important to reduce friction and increase AI program velocity.

It is said that success breeds more success. While AI programs typically focus a lot on efficiency and productivity improvements, many AI programs also generate significant benefits that are not directly quantifiable (for example, improvement in stakeholder experience, improvement in employee engagement and morale). A recent Deloitte survey indicates that 44 per cent of organizations felt AI has increased the quality of their products/services, while 35 per cent of organizations found that AI has enabled them to make better decisions in their organizations. Successful companies find a way to identify these simple, holistic stories and narrate them compellingly and consistently in multiple forums at all levels in their organizations. Humans, by design, are inspired better by stories (than by just numbers), and hence creating a powerful story that combines the quantifiable (for example, the number of hours saved) with other benefits (for example, better decision-making ability) can galvanize the entire organization and facilitate rapid and increased adoption of AI at all levels and in all units of the organization.

The revered Chinese sage Lao Tzu once famously remarked that "a journey of a thousand miles begins with a single step." The AI journey in an organization is no exception. While AI implementations are typically more complex and nuanced, companies can leverage the 3-pronged approach mentioned above to realize the true and full potential of AI. While a successful AI program implementation can bestow significant financial benefits on an organization, it also activates the divine journey of freeing up humans to do what they do best: leverage their sophisticated brains to introspect, explore, learn, love, empathize, and solve the most intricate and defining problems of our generations.

The authors are Ravi Mehta, Partner; Sushant Kumaraswamy, Director; Sudhi. H, Associate Director; and Prashant Kumar, Senior Consultant, Deloitte India.

How will AI shape the workforce of the future? – Utah Business

Will artificial intelligence bring a utopia of plenty? Or a dystopic hellscape? Will we, jobless and destitute, scavenge for scraps outside the walls of a few techno-trillionaires? Or will we work alongside machines, achieving new levels of productivity and fulfillment? The tech world has no lack of prognosticators: Bill Gates and Elon Musk, for example, see in AI an existential threat to the human species, while Ray Kurzweil thinks it can't come soon enough.

Silicon Slopes and big data

In fact, artificial intelligence is already here, and has been for some time. While many mistakenly equate AI with consciousness (Hollywood has done the robot-gains-consciousness plot to death), the two are distinct phenomena. As Yuval Noah Harari discusses in Homo Deus, AI need not be conscious to possess superhuman intelligence. Nor is it likely to be. Already, in domain-specific tasks, non-conscious computers are far beyond humans in intelligence. Watson beat humans at Jeopardy back in 2011; more recently, Google's AlphaGo AI beat Korean Grandmaster Lee Sedol four games to one at the incredibly complex game of Go. And, to those who point out the narrow scope within which such AIs can function, just remember how rapidly the scope has expanded in only a few years.

AI depends on intelligent algorithms, and such algorithms depend on the analysis of vast amounts of data. Which is why Utah is on the map with regard to AI advancement. The so-called Silicon Slopes has become, per Mark Gorenberg, a world leader in data analytics. Gorenberg should know. He serves as managing director of Zetta Venture Partners, an AI-focused venture capital firm based in San Francisco, and has invested in a number of Utah companies. "The notion of analytics has become a cornerstone of Utah technology," he says.

Utah boasts high-profile data firms like Domo, Omniture (now part of Adobe) and Qualtrics, to be sure. But it also has an ecosystem of lesser-known players. "Teem, for example, started by putting software on an iPad so that corporate teams could book conference rooms," Gorenberg explains. "In the process, they gathered a ton of data that allows them to predict the digital workplace of the future." One Click Retail (my employer, full disclosure) uses machine learning and Amazon.com data points to help sellers optimize ecommerce operations. InsideSales employs data analytics to accelerate sales productivity by identifying the highest ROI accounts, contacts and action steps. Verscend, a healthcare analytics company, utilizes data "in meaningful ways to bring our customers smarter and more effective analytics," per the company website.

But will robots dispossess us of gainful employment?

Utah's tech sector is clearly positioned to benefit from the emergence of data-driven intelligent algorithms. Well and good, but we're still left with the trillion-dollar question: Will smart machines eventually take our jobs? Are we fated to be like the typewriter (the human being as obsolete technology) while artificial intelligence becomes, metaphorically, the word processor and laser printer? In many areas, yes. According to Gorenberg, however, there will be just as many areas in which the new AI frontier creates jobs. "Sure, we'll lose jobs," he says. "But what people aren't seeing is the jobs we'll be gaining."

"Take autonomous vehicles," Gorenberg continues by way of example. Sure, a lot of people who drive for a living (taxi drivers, truckers, etc.) will no longer be needed. At the same time, think of the downtown areas of cities. Traffic-congested urban centers no longer need be congested; sophisticated algorithms will route traffic for maximum flow. Intelligent cars, free from human error (and human distraction), will travel faster and in tighter formations, with far fewer accidents.

Then there's the issue of parking. "The average downtown area uses 30 percent of its space for parking," Gorenberg notes. Those cars just sit there all day while their owners work. If the hive mind of the autonomous vehicle system knows exactly what transit is needed, and when, it can provide it at a moment's notice. Fewer cars will be needed, and they can be kept outside the city center and brought in to meet demand.

Thirty percent of a city's downtown is a lot of area. Gorenberg describes the construction frenzy that will occur as the whole nature of downtowns changes thanks to all that prime acreage suddenly becoming available. He imagines a city center could include gardens, urban manufacturing and much more.

"Sure, we'll lose jobs. But what people aren't seeing is the jobs we'll be gaining." Mark Gorenberg, managing director, Zetta Venture Partners

And, in his vision of urban reconfiguration, Gorenberg sees beyond the myriad blue-collar jobs that such massive projects will create. Not only will you need construction workers and the like; "you'll need architects, city designers and planners, software development and IoT implementation," he says. "There will be a need for energy experts and water experts and all of the various disciplines it takes to make a city highly functional." In short, the reuse of urban space for the next generation of cities will be a multi-trillion-dollar opportunity and will create millions of jobs at all levels.

Would you like your automation full or partial?

Economist James Bessen would agree with Gorenberg. In his article "How computer automation affects occupations: Technology, jobs and skills," he concedes that full automation might indeed result in job losses. However, most automation is partial: only some tasks are automated. In fact, as he details in his study, out of 270 occupations listed in the 1950 Census, only one (that of elevator operator) has disappeared. Bessen claims that most job losses are not the result of machines replacing humans, but of humans using machines to replace other humans, as graphic designers with computers replaced typesetters. Or, as Mark Gorenberg puts it, this [artificial intelligence revolution] is "no different than any other technology wave."

Are Bessen and Gorenberg overly optimistic, perhaps even naïve, about the potential of artificial intelligence to replace humans? Or are AI alarmists a bunch of Luddites? Such questions can only be answered retrospectively. In the present, however, the incontrovertible fact is that intelligent algorithms are helping humans get better at their jobs. We don't know whether, as Alibaba CEO Jack Ma predicts, algorithms will one day be CEOs. What we do know, in the words of Gorenberg, is that "a [human] CEO empowered with data is a better CEO."

So the short-to-medium-term prognosis is that human plus machine equals a better work unit than either on its own. Humans empowered by machine learning, data and sophisticated algorithms can outcompete regular old humans in the knowledge economy.

"InsideSales has a dataset of over 100 billion sales interactions," says CEO Dave Elkington. The firm's intelligent algorithms use this ocean of data to guide salespeople. "Often, the lift provided by our software is so extreme as to make our users wonder if there might have been a reporting error." Data-powered, AI-guided salespeople. How can regular salespeople, doing things the old-fashioned way, compete? Most likely, they won't be able to.

Intelligent machines will also extend human abilities in important ways. To illustrate: the developed world (to say nothing of the developing world) faces a shortage of doctors, both generalists and specialists. "I believe that AI augmenting healthcare will allow more people to perform healthcare services that today only a few can do," says Gorenberg, adding that, for example, an AI could work side by side with nurses and allow them to take expert ultrasounds and other medical images that today have to be done by a select set of experts. Thousands of high-skill nursing jobs would open up. What's more, if lower-level professionals can do advanced medical work that is currently the exclusive domain of doctors, doctors will be free to focus on aspects of medicine for which a human with 7 to 10 years of medical training is uniquely suited.

"Often, the lift provided by our software is so extreme as to make our users wonder if there might have been a reporting error." Dave Elkington, CEO, InsideSales.com

The third wave of tech revolution

If steam power was the first technological wave, and software/internet the second, artificial intelligence could well be the third. In Gorenberg's vision, the huge number of new data science and analytics positions that this upheaval will demand will compare with the millions of developer jobs created by software 25 years ago.

Over the next 5 to 10 years and beyond, we'll see in exactly which ways AI revolutionizes industry and business. One thing, however, is clear: It's happening, and it's going to be big. And, here in Utah, we're smack in the technological middle.

Jacob Andra is a writer and content marketing consultant in Salt Lake City, Utah. You can find him on LinkedIn and Twitter.

AI File Extension – What is a .ai file and how do I open it?

An AI file is a drawing created with Adobe Illustrator, a vector graphics editing program. It is composed of paths connected by points, rather than bitmap image data. AI files are commonly used for logos and print media.

AI file open in Adobe Illustrator CC 2019

Since Illustrator image files are saved in a vector format, they can be enlarged without losing any image quality. Some third-party programs can open AI files, but they may rasterize the image, meaning the vector data will be converted to a bitmap format.

To open an Illustrator document in Photoshop, the file must first have PDF Content saved within the file. If it does not contain the PDF Content, then the graphic cannot be opened and will display a default message, stating, "This is an Adobe Illustrator file that was saved without PDF Content. To place or open this file in other applications, it should be re-saved from Adobe Illustrator with the "Create PDF Compatible File" option turned on."
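
Because an Illustrator file saved with "Create PDF Compatible File" embeds a PDF representation, one rough way to check a .ai file before handing it to another application is to look for a PDF header in its opening bytes. The snippet below is a heuristic sketch rather than an Adobe-documented method, and the file name is hypothetical.

```python
# Heuristic sketch: PDF-compatible .ai files typically begin with a "%PDF"
# header, like an ordinary PDF. This is not an official Adobe check.
import sys

def looks_pdf_compatible(path: str) -> bool:
    """Return True if the file's first bytes contain a PDF header."""
    with open(path, "rb") as f:
        return b"%PDF" in f.read(1024)  # scan the first 1 KB

if __name__ == "__main__":
    path = sys.argv[1] if len(sys.argv) > 1 else "logo.ai"  # hypothetical file
    print(f"{path}: PDF-compatible? {looks_pdf_compatible(path)}")
```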

AI Weekly: UiPath wants you to know its job cuts are strategic, despite the optics – VentureBeat

It's usually at least mildly newsworthy when a large or important or particularly hot company cuts a chunk of its workforce, as UiPath did this week when it cut about 400 jobs from its total of about 3,200. But the timing and optics of this particular round of cuts from this particular company were jarring, especially with the WeWork debacle still fresh in people's minds.

UiPath is a unicorn, and the sturdy robotic process automation (RPA)-powered steed has been galloping fast. The company added ProcessGold and StepShot to its stable earlier this month, after a fantastical $568 million funding round earlier this year pushed its total funding to over a billion dollars and a $7 billion valuation. It would seem that UiPath has cash to spend, and its recent Las Vegas event, which reportedly cost an estimated $8 million, reinforced the image that the company's coffers overflow.

And then came the job cuts. The timing and price tag of the event make for poor optics (even though $8 million wouldn't pay 400 people for more than a few months), but on its face it's also just rather alarming that a company freshly flush with gobs of cash would need to make cuts of any kind, let alone to a chunk of its workforce.

Shouldn't the opposite be happening? Shouldn't UiPath be on a breathless hiring spree right now? It just doesn't sit right.

A blog post from UiPath CEO and founder Daniel Dines arguably made things worse. The audience for Dines' post is clearly investors; he spins the layoffs as a net positive. He confusingly pats UiPath on the back for increasing its workforce by 60% over the last ten months, when it's just dumped around 400 of those people, and he wraps up a quick bullet list of good news for the company by saying "we will still end 2019 with almost 50% more employees than when we started the year." It's a mite tone-deaf for the average reader.

On top of that, there's some in-the-wind concern about cash burn at the company, per an unnamed source cited by Information Age.

And on top of that, the optics of a job-eliminating automation company eliminating its own jobs makes people feel panicky. What if it's not a unicorn at all, but an ouroboros, a snake eating its own tail? And what if that's indicative of a larger problem in the RPA market?

Deep breaths. Deep breaths. A good bit of the sturm und drang around these layoffs really does appear to come down to optics and timing (most likely).

(And it is a little unfair to paint companies like UiPath, Blue Prism, and Automation Anywhere as job-eliminating, because although automation will of course displace jobs, it ostensibly will replace them with new ones. But that's a topic for another day.)

In an interview with VentureBeat, UiPath CMO Bobby Patrick reiterated some of the general themes of Dines' blog post: that these cuts are about getting more efficient as a company, as it pushes towards profitability and potentially going public. UiPath is at the end of its manic push for growth, growth, growth. "It's in 30+ countries now," he said, and the company is focusing on becoming more effective, instead of just bigger.

"The company is truly healthy," he assured us. "We're only going down to 2,860 employees. That's still amazing growth," he said, echoing Dines. "I think the story is that we're going to go into next year focused not only on growth, but on productivity and efficiency."

That's a certain amount of spin, but Patrick also said that UiPath has 90 open job postings (which would make up for about 23% of the lost jobs), and some are in different areas, like on the tech side of things rather than sales. He also said that the job cuts had nothing to do directly with recent acquisitions and confirmed that none of the jobs were lost due to UiPath's own automation tools. (We had to ask.)

More to the point, Patrick said that UiPath's annual recurring revenue (ARR) is now at $300 million, up from $25 million two years ago.

None of the above speaks to the potential cash burn inside UiPath, nor does it salve the wounds of the hundreds of now-former UiPath employees. But it's not unreasonable to believe UiPath when it says that these job cuts are in service of focusing on Act 2 of the unicorn's play: worrying more about developing long-term sustainability rather than just growth. If that's the case, the cuts could be considered shrewd, regardless of the iffy optics and timing.

HIMSS 2017 buzz ranges from patient engagement to AI, machine … – TechTarget

ORLANDO -- HIMSS 2017 buzz centered on health data cybersecurity, but that hot topic of recent years' gatherings of the health IT universe simmered alongside emerging trends such as patient engagement and artificial intelligence and machine learning.

Opening day at the Orange County Convention Center, Feb. 20, was marked by news that a major medical center had suspended its joint project with cognitive computing giant IBM Watson; ironically, IBM CEO Ginni Rometty gave the opening keynote.

While that surprising development cast a shadow at HIMSS 2017 on the widely heralded promise of cognitive computing and AI in healthcare, AI and machine learning remain at the core of technology strategies of many health IT companies.

One of them, Salt Lake City-based Health Catalyst, long known as a data warehouse provider, has grown into a diverse health IT services and systems vendor with a wide portfolio of care management, analytics, population health and financial decision support products.

"We know that machine learning and AI are having an impact on healthcare and it's going to continue and grow and it's going to be big," Dale Sanders, executive vice president at Health Catalyst, said at the firm's expansive booth on the bustling HIMSS 2017 floor. "I have never seen an acceleration of technology in my 33-year career like I've seen in just the last two years."

Meanwhile, technologies for patient engagement, which have been bubbling under the surface of other health IT systems for several years, appear to be taking a more prominent role.

A survey of 500 patients and 400 physicians released at HIMSS 2017 by West Corporation, a vendor of communication and network infrastructure services, reported that 91% of patients say they need help managing their chronic disease, and 75% want their providers to check in with them regularly.

Executives at the company, which sells tools such as automated text, email and phone messaging to patients for medication adherence and regular check-ins, said the survey showed that patient engagement is critical.

"Patient engagement is the core of everything," said Allison Hart, vice president of marketing communications for interactive services and healthcare at West. "We really strongly feel that organizations that are prioritizing patient engagement are going to be the organizations best positioned to reduce their cost of care, increase care quality and reduce readmissions."

At an offsite media and industry breakfast hosted by health IT research and consulting firm Chilmark Research, patient engagement analyst Brian Eastwood said he views patient engagement as a kind of patient, or consumer, relationship management approach.

"Engagement is not just the process, but a step in the process, one that requires more than just these individualized touchpoints related to specific care episodes, but much more as a longitudinal process between care episodes," Eastwood said.

In addition to showcasing major health IT players from the EHR, data analytics, cybersecurity and population health worlds and up-and-coming health IT firms, HIMSS 2017 was popular among big non-health IT-specific tech and communications companies making aggressive forays into healthcare.

Among these were AT&T, Cannon, Honeywell, Mitre, Oracle, SAP and Verizon.

AT&T, for example, has been busy in recent years putting its earth-spanning communications networks and mobility services to use for a range of internet of things applications in healthcare.

A year ago, the communications giant opened a connected health "foundry," or lab, at the Texas Medical Center Innovation Institute in Houston; it is among several such AT&T labs for various niche industries.

At HIMSS 2017, AT&T proudly displayed some IoT in healthcare devices made by companies with which AT&T has either partnered with or simply sold its communications and mobile services to.

These included Google Glass eyeglasses for the blind that allow an "agent" to assist a blind person who wears them by directing them remotely and "seeing" what the blind person would see if he or she could; a connected wheelchair with remote diagnostics and analytics; and personal alert systems that look like wrist watches.

As for cybersecurity, Mac McMillan, co-founder and CEO of health data security and privacy company CynergisTek, said healthcare providers will continue to be bombarded by cyber hackers in 2017, as they were the last two years.

Top defenses include a culture of security in the healthcare organization, and using advanced tools such as identity analytics, McMillan said.

This New Atari-Playing AI Wants to Dethrone DeepMind – WIRED

Frost & Sullivan Recognizes Berkshire Grey for Its Innovation and Leadership in AI-Based Robotic Fulfillment – Business Wire

BOSTON--(BUSINESS WIRE)--Berkshire Grey (www.berkshiregrey.com), a robotics and artificial intelligence (AI) company developing retail, eCommerce, and logistics fulfillment automation for global companies, was recently named a 2020 Enabling Technology Leader in industrial fulfillment systems by Frost & Sullivan.

The COVID-19 pandemic has dramatically changed behavior as consumers move online in search of more convenient, contactless, and safer shopping methods. This has strained eCommerce and retail supply chains, forcing structural changes in fulfillment operations as they double down on omnichannel strategies and reconfigure existing space to meet new demands. In selecting Berkshire Grey, Frost & Sullivan called out its holistic approach to delivering robotic solutions that pick, pack, and sort products and packages for delivery to consumers and retail establishments.

In addition to gaining more capacity from existing distribution centers to meet surging demand for faster shipments of smaller, more frequent orders, retail and third-party logistics fulfillment operations require systems that provide immediate value upon deployment. Berkshire Grey's offerings are complete systems that reimagine critical workflows and processes rather than piece parts that organizations must source and integrate to build systems that automate actual work.

Frost & Sullivan applauds Berkshire Greys holistic approach to robotic automation, said David Frigstad, chairman of Frost & Sullivan. Businesses that adopt Berkshire Grey solutions are seeing fast ROI, enabling them to improve operational capacity and be more agile in serving their customers. As a result, Berkshire Grey customers can innovate, adapt quickly to changing market conditions, and meet evolving consumer preferences amid intensifying competition.

Frost & Sullivan also lauded Berkshire Grey for its AI-guided automated picking capabilities, which continue to improve in performance the longer the system is in use. Berkshire Grey solutions excel at identifying, perceiving, and grasping while achieving the reach, speed, and conveyance necessary to automate accurate picking at high speed and with great efficiency.

Berkshire Grey's autonomous, AI-powered systems can efficiently handle millions of stock-keeping units (SKUs) in a variety of form factors, including instantly recognizing and handling goods or items that the systems have never processed or even seen before.

"Consumers have steadily embraced omnichannel shopping practices for some time. Today we are seeing retailers and fulfillment houses facing two to three times the demand compared to previous periods, driven by online purchasing for safety and convenience reasons," said Steve Johnson, president and COO of Berkshire Grey. "Our solutions enable supply chain leaders to scale to meet these challenges while positioning them to successfully dominate their sector."

Download a complimentary copy of Frost & Sullivan's analysis of Berkshire Grey.

In contrast to traditional robotic solutions, Berkshire Grey's holistic approach is multidisciplinary. It combines software and hardware, threading AI technologies through the solution for perception (computer vision), learning (machine learning), motion planning, grasping, and sensing; AI-enabled robotics; flexible systems integration; and mobility. Taken together, these technologies deliver the fastest, most accurate picking and placing of objects, packaged for fast deployment into customer operations and rapid realization of ROI. Berkshire Grey customers have optimized picking labor costs, overcome worker shortage constraints, and seen improvements in throughput ranging from 25% to 50%.

About Frost & Sullivan

For almost six decades, Frost & Sullivan has been world-renowned for its role in helping investors, corporate leaders, and governments navigate economic changes and identify disruptive technologies, Mega Trends, new business models, and companies to action, resulting in a continuous flow of growth opportunities to drive future success. Contact us: Start the discussion.

Each year, Frost & Sullivan presents a Company of the Year award to the organization that demonstrates excellence in terms of growth strategy and implementation in its field. The award recognizes a high degree of innovation with products and technologies and the resulting leadership in customer value and market penetration. Industry analysts compare market participants and measure performance through in-depth interviews, analyses, and extensive secondary research to identify best practices in the industry.

About Berkshire Grey

RADICALLY ESSENTIAL. FUNDAMENTALLY FORWARD.

Berkshire Grey helps customers radically change the essential way they do business by delivering game-changing technology that combines AI and robotics to automate omnichannel fulfillment. Berkshire Grey solutions are a fundamental engine of change that transforms pick, pack, and sort operations to deliver competitive advantage for enterprises serving today's connected consumers. Berkshire Grey customers include Global 100 retailers and logistics service providers. More information is available at http://www.berkshiregrey.com.

See the article here:

Frost & Sullivan Recognizes Berkshire Grey for Its Innovation and Leadership in AI-Based Robotic Fulfillment - Business Wire

VIDEO: Role of AI in breast imaging with radiomics, detection of breast density and lesions – Health Imaging

"I think AI is still in its relatively early phase of adoption," Lehman said. "We do have some centers that are not academic centers that are very forward thinking and really wanting to bring AI into their practices. However, we are also seeing a story that is very familiar when we are bringing computer-aided detection (CAD) into both academic and community centers. The technology is being incorporated into clinical care, but we are still studying what the actual outcomes are on patients who are being screened with mammography where AI tools are or are not being used."

This includes AI for automated detection of breast cancer lesions, flagging areas of interest on mammogram images or flagging studies that need closer attention. AI can also take a first-pass look at mammograms to determine whether they appear normal, so radiologists can prioritize which exams need to be read first and which may be more complex.

This technology will likely become more important as breast imaging switches over from traditional four-image mammogram studies to much larger 3D digital breast tomosynthesis (DBT) exams of 50 or more images, which are more time consuming to read. AI is already being used to flag images that deserve a closer look in these datasets.
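
At its core, the triage workflow described here is a sort of the reading worklist by model output. A minimal sketch, assuming hypothetical exam IDs, image counts, and model scores (no vendor's actual product is implied):

    from dataclasses import dataclass

    @dataclass
    class Exam:
        exam_id: str
        num_images: int           # e.g. 4 for a 2D mammogram, 50+ for a DBT study
        abnormality_score: float  # hypothetical AI model output in [0, 1]

    # Illustrative worklist; in practice the scores would come from the AI model.
    worklist = [
        Exam("MAMMO-001", 4, 0.03),
        Exam("DBT-017", 64, 0.81),
        Exam("MAMMO-008", 4, 0.42),
    ]

    # Read likely-abnormal studies first; likely-normal studies fall to the bottom.
    for exam in sorted(worklist, key=lambda e: e.abnormality_score, reverse=True):
        print(exam.exam_id, exam.num_images, exam.abnormality_score)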

AI is also finding use as an automated way to grade breast density to help eliminate the variation of grading the same patient by human readers.

However, the most exciting area of AI for breast imaging is the potential of radiomics, where AI views medical images in ways that human readers cannot, identifying very complex and subtle patterns that help better assess patient risk scores or predict which outcomes are likely under various cancer treatments.

"What I am really excited about is the domain where investigators are considering the power of artificial intelligence to do things that humans cannot or are not very good at, and then to allow the humans to really focus on those tasks where humans excel. As of today, these AI tools have not even really scratched the surface," Lehman explained.

She said this area of research using radiomics moves beyond training AI to look at images like a human radiologist and to instead pull out details that are usually hidden from the human eye. This includes rapid computer segmentation and analysis of the morphology of disease or tissue patterns seen in images, looking for minute regional structures that can be detected by AI.
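
As a toy illustration of the kind of quantitative features such analysis extracts, the sketch below computes simple intensity and texture statistics over a region of interest. This is far simpler than real radiomics pipelines, and the synthetic image and region are assumptions made purely for demonstration:

    import numpy as np

    rng = np.random.default_rng(0)
    image = rng.normal(loc=100, scale=15, size=(256, 256))  # stand-in for a scan
    roi = image[100:140, 80:130]                            # hypothetical lesion region

    gy, gx = np.gradient(roi)
    features = {
        "mean_intensity": float(roi.mean()),
        "intensity_variance": float(roi.var()),
        "skewness": float(((roi - roi.mean()) ** 3).mean() / roi.std() ** 3),
        "mean_gradient_magnitude": float(np.hypot(gx, gy).mean()),
    }
    print(features)  # such features would feed a downstream risk or response model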

"This is not to train AI to look at mammograms like I do, but to train the AI to look for patterns and signals that my human eyes and human brain cannot detect or process," Lehman said.

She said that today we are just scratching the surface of the data potential of AI analysis of cancers in imaging. Deeply embedded patterns within cancers on imaging may be able to tell us a lot about which cancers will or will not respond to different drugs or therapies. AI may be able to tell us this from a much deeper analysis of the imaging, including the subtypes of that particular cancer. This would enable much better tailored, personalized medicine and treatments for each patient.

Read the rest here:

VIDEO: Role of AI in breast imaging with radiomics, detection of breast density and lesions - Health Imaging

Ai-Da the robot painter, Iranian epics and a gaze at God – the week in art – The Guardian

Exhibition of the week

Ai-Da: Portrait of the Robot. Enter the uncanny valley with this realistic humanoid robot who can draw herself. Is that art? So what is art? Plenty to think about. Read more. Design Museum, London, until 29 August

Epic Iran. There's enough beauty here to fill several exhibitions, but this trip through 5,000 years of cultural history works because of the sheer quality of the exhibits. An eye-opener. Read our five-star review. V&A, London, 29 May-12 September

Royal Portraits: From Tudors to Windsors. We seem as fascinated by the monarchy as ever, one way or another. This exhibition reveals how the images of British royals have been shaped since the Renaissance. National Maritime Museum, Greenwich, 28 May-31 October

Conversations With God. A free exhibition about the 19th-century Polish artist Jan Matejko's history painting of the revolutionary astronomer Copernicus, this is the first time the National Gallery has ever shown Polish art. National Gallery, London, until 22 August

Nero: The Man Behind the Myth. Some wonderful things here, from statues of Nero and other members of the imperial family to Pompeiian frescoes, whatever you think of the exhibition's thesis that Nero was not the monster history has made of him. Read more. British Museum, London, until 24 October.

An enormous space rocket could be next up on Trafalgar Square's fourth plinth, or a Ghanaian grain silo, a bobbly man, a giant jewellery tree, missionaries in Africa, or a memorial to murdered transgender women. Six shortlisted ideas have been unveiled at London's National Gallery for the sculpture commission, which normally rotates every 18 months, and the public can help pick two winners, to be installed in 2022 and 2024.

Google is changing its photo algorithms to better reflect the skin tones of people of colour

Celebrity merchandise is flooding art auctions

while the rush for digital NFTs comes at an environmental cost

Tacita Dean was baffled by the pandemic lockdown

Will London's vast 22 Bishopsgate office block ever be full?

British mosques have a starring role at this year's Venice Architecture Biennale

Art historian Laurence des Cars is the first female president of the Louvre

Ming Smith was one of the few women in Kamoinge, a collective of black photographers

while a Beverley church is putting inspiring women up near the rafters

Melbourne's venerable Flinders Street station is reopening as a gallery space

but the Rising festival it forms part of paused after one day as Melbourne closed down

Female sculptors challenge art world sexism in joint show Breaking the Mould

Beware wolves and bears in Matthew Barney's intriguing new film

Kenyan artist Michael Armitage is reinventing the European oil-painting tradition

Matador academies, Ukrainian prom night and other adolescent rites of passage have caught the eye of photographer Michal Chelbin

Australia's Archibald portrait prize is 100 and still controversial

This year's finalists were unveiled in Sydney

and we looked back at some past highlights

The race is on for the best Milky Way portrait

Derby's new Museum of Making is a temple to manufacturing

Award-winning film-maker Ayo Akingbade is charting London's changing face

Nero was framed for the burning of Rome

Coal and Georgian terraces were inextricably interlinked, according to a new book on architectures environmental impact

Wynn Bullock made the Monterey peninsula look mythic

There's been an outbreak of public art on the UK's south-east coast

while Hastings is full of FILTH (failed in London, try Hastings) with beautiful homes

Scotland needs knitters

Art loves a crowd

Technology is not doing David Hockney many favours

New York's Spring Valley suburb, photographed by Al J Thompson, is another victim of gentrification

Tony Hall has resigned from the National Gallery following the fallout from the Martin Bashir row

Jen Orpin has painted the motorway journey she took to visit her dying father

Heather Phillipson worships the UK weather forecast

Amish girls like to paddle at the beach

Memento mori applies to animals too

Paul Graham returned us to Thatcher's Britain

Eric Carle, writer-illustrator of The Very Hungry Caterpillar, has died

The late Mary Beth Edelson was a key figure in feminist art

We also remembered Brazilian architect Paulo Mendes da Rocha

avant garde emissary Mark Lancaster

and landscape painter Leslie Marr

The Abbé Scaglia adoring the Virgin and Child, 1634-35, by Anthony van Dyck. Two centuries of Flemish art lie behind this emotional encounter between a man and the mother of God. Van Dyck portrayed his patron Scaglia for a church in Antwerp, putting his fretful and careworn praying presence in a direct and intimate reciprocal relationship with Mary and Jesus. It's a move that epitomises the passionate, unbuttoned baroque style that flourished in 17th-century Catholic Europe. Yet it is also a nod to Van Dyck's local Flemish forerunners; 200 years earlier, Jan van Eyck was painting wealthy people in similar close encounters with the Virgin, including in his great Madonna of Chancellor Rolin in the Louvre. Van Dyck updates the genre with a waft of Baroque silks and a breath of sky. National Gallery, London

To follow us on Twitter: @GdnArtandDesign.

If you don't already receive our regular roundup of art and design news via email, please sign up here.

If you have any questions or comments about any of our newsletters please email newsletters@theguardian.com

Here is the original post:

Ai-Da the robot painter, Iranian epics and a gaze at God – the week in art - The Guardian

Legal Research And AI: Looking Toward The Future – Above the Law

I read with interest the recent post by my fellow Above the Law columnist, Bob Ambrogi, on a study about the disparity of results found when using various legal research tools. Those findings caught my attention because I'd encountered that very phenomenon when conducting research for this article. As I tested the built-in AI features of Westlaw and LexisNexis, I noticed that identical queries entered into each platform typically led to very different sets of results.

Of course, that's one of the legal research problems artificial intelligence (AI) has the potential to solve. When natural language processing is based not just on the words entered into the search box, but on the past behavior of the user and other users who've made similar inquiries, the results should ultimately be more uniform across the board. The idea is that since the results are based on a broad set of data analytics rather than just an analysis of the terms entered, the results will be more precisely aligned with the information the user was seeking to obtain.

Or, as Jamie Buckley, Chief Product Officer at LexisNexis, explained to me when we discussed the newly released AI feature, Lexis Answers: "One primary area of focus is using our content, along with natural language processing and customer data, to provide better search results. Our goal is to use customer data in aggregate to make the information we provide better. So we try to help the user to understand what the user really needs."

Lexis Answers is the first step toward achieving this goal. Using natural language processing and AI, this tool, which is built into the Lexis Advance legal research platform, identifies key phrases from the user's query and provides responsive results, sometimes including a Lexis Answers Card in cases where there is a single best result, followed by the more familiar list of additional relevant results.

Lexis Answers is currently limited to five types of queries: legal definitions, standards of review, burdens of proof, legal doctrine, and elements of a claim, although there are plans to expand the number of categories in the future.
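
LexisNexis has not published how Lexis Answers routes queries; as a minimal sketch of the general idea, simple pattern matching can stand in for the production NLP models. The patterns, category names, and fallback below are illustrative assumptions, not the actual implementation:

    import re

    # Hypothetical routing rules for the five answer categories named above.
    CATEGORY_PATTERNS = {
        "legal definition": r"\b(what is|define|definition of)\b",
        "standard of review": r"\bstandards? of review\b",
        "burden of proof": r"\bburdens? of proof\b",
        "legal doctrine": r"\bdoctrine\b",
        "elements of a claim": r"\belements? of\b",
    }

    def classify_query(query: str) -> str:
        q = query.lower()
        for category, pattern in CATEGORY_PATTERNS.items():
            if re.search(pattern, q):
                return category
        return "general search"  # fall back to the ordinary results list

    print(classify_query("What are the elements of negligence in New York?"))
    # -> "elements of a claim"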

Not surprisingly, Thomson Reuters[1] is also incorporating AI technologies into its legal research platform. In 2015, it rolled out Westlaw Answers, which provides a specific response to certain types of queries, just as Lexis Answers does. The Westlaw Answers results are triggered if the user clicks on a suggested search query.

Westlaw also includes two other features powered by AI: Research Recommendations and Folder Analysis. Both are driven by the user's interaction with the search results. Research Recommendations are made as you click on results and are intended to point you in the right direction by suggesting certain documents or Key Numbers that may be relevant to your research.

Folder Analysis is a feature that I find to be particularly useful. After you've placed a few documents into a folder, the folder contents are analyzed and additional cases are recommended to you based on the issues identified as a result of the folder analysis.
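
Westlaw's internals are not public, but the general shape of such a feature can be sketched with off-the-shelf text similarity: score candidate documents against what the researcher has already saved in the folder and surface the closest matches. The document texts, scoring rule, and names below are illustrative assumptions:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    # Placeholder text standing in for documents already saved to a folder.
    folder_docs = [
        "negligence duty of care breach causation damages",
        "standard of care owed by landowners to invitees",
    ]
    # Placeholder candidates that might be recommended.
    candidate_docs = {
        "Case A": "breach of duty and proximate causation in slip-and-fall claims",
        "Case B": "formation of contracts and the statute of frauds",
    }

    vectorizer = TfidfVectorizer()
    matrix = vectorizer.fit_transform(folder_docs + list(candidate_docs.values()))
    folder_vecs = matrix[: len(folder_docs)]
    candidate_vecs = matrix[len(folder_docs):]

    # Score each candidate by its best similarity to any folder document.
    scores = cosine_similarity(candidate_vecs, folder_vecs).max(axis=1)
    for (name, _), score in sorted(zip(candidate_docs.items(), scores),
                                   key=lambda x: -x[1]):
        print(f"{name}: {score:.2f}")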

When I spoke to Thomson Reuters representatives (Mike Dahn, Senior Vice President, Westlaw Product Management; Khalid Al-Kofahi, Vice President, Research & Development; and Erik Lindberg, Senior Director, Westlaw Product Management) about their AI features and plans for the future, I learned that one of their top goals is to continue along this vein and use AI tools to analyze user interaction to provide the most relevant and efficient search results. As Mike Dahn explained, "We want to disambiguate user queries by listening throughout the session for signals of relevance so that we can identify the topics they're most interested in, and that's where research recommendations and folder analysis come in."

Following these calls, I found myself excited about the future of legal research. While legal research is not a particularly sexy topic, AI is. AI has the potential to dramatically change the practice of law in a number of different ways in the very near future, and legal research is one of the areas that will be affected the most.

Legal research has been and continues to be a relatively mundane and tedious aspect of our daily lives as lawyers. If AI can drastically reduce the amount of time lawyers spend conducting research by providing increasingly relevant results, much more quickly, then lawyers can (happily) focus on the more interesting aspects of practicing law.

We're on the cusp of an AI revolution, and each company has access to hoards of data, both in terms of content and user interaction with their products. In other words, each of these legal industry behemoths is uniquely positioned to take advantage of the next stage of technological advancement.

As a result, it was heartening to hear representatives from both LexisNexis and Thomson Reuters discuss AI and its potential impact on the future of law with such passion, precision, and vision. Of course, the real question is: can their visions come to fruition? As we all know, it's not always easy for large, established companies (think Kodak and Xerox) to shift gears and pivot with the times. Will Thomson Reuters and LexisNexis be different? That remains to be seen. Tune in tomorrow and see.

[1] By way of disclaimer, I am a Thomson West author and as a result I receive complimentary access to Westlaw. I also received a 30-day complimentary pass to Lexis Advance for purposes of this post.

Nicole Black is a Rochester, New York attorney and the Legal Technology Evangelist at MyCase, web-based law practice management software. She's been blogging since 2005, has written a weekly column for the Daily Record since 2007, is the author of Cloud Computing for Lawyers, co-authors Social Media for Lawyers: the Next Frontier, and co-authors Criminal Law in New York. She's easily distracted by the potential of bright and shiny tech gadgets, along with good food and wine. You can follow her on Twitter @nikiblack and she can be reached at niki.black@mycase.com.

The rest is here:

Legal Research And AI: Looking Toward The Future - Above the Law

Octane AI boldly bets that Convos are the future of content – TechCrunch

One billion people use Facebook Messenger every month. And no matter how bad the current perceptions of the bot scene are, that number is hard to ignore. Octane AI is counting on celebrity content creators to build conversational experiences that ...

View post:

Octane AI boldly bets that Convos are the future of content - TechCrunch

Turning AI onto itself: AI algorithm detects when medical images will be difficult for radiologists or AI to make an effective diagnosis – PRNewswire

When applied to x-ray images used to detect pneumonia, errors by radiologists were rare when the images had clear features. However, UDC found the diagnosis (or label) for several x-ray images to be neither clearly correct nor clearly an error. An independent radiologist who verified these images agreed that they were indeed difficult to diagnose, and their independent assessment often disagreed with the original diagnosis provided in the public dataset. Similarly, AI that was trained to diagnose pneumonia also found these images difficult to assess.

Removing the poor-quality (difficult) images identified by UDC from the training dataset improved AI accuracy for diagnosing pneumonia in x-ray images by over 10%, as measured on a held-out blind test set, and the AI was shown to be more scalable (generalizable). The accuracy also exceeded benchmarks set by the current literature for that public dataset.
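
Presagen has not released UDC's details, but the general recipe the article describes (identify unreliable training labels, drop them, retrain) can be sketched with standard tools. The dataset, model, and threshold here are illustrative assumptions, not the company's method:

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import cross_val_predict, train_test_split

    # Synthetic stand-in for a labeled image-feature dataset with some label noise.
    X, y = make_classification(n_samples=2000, n_features=30, flip_y=0.1, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

    # Out-of-fold predicted probability assigned to each sample's own label.
    proba = cross_val_predict(LogisticRegression(max_iter=1000), X_train, y_train,
                              cv=5, method="predict_proba")
    own_label_conf = proba[np.arange(len(y_train)), y_train]

    # Treat low-confidence samples as "difficult" (possible label noise) and drop them.
    keep = own_label_conf > 0.3  # illustrative threshold

    baseline = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    cleaned = LogisticRegression(max_iter=1000).fit(X_train[keep], y_train[keep])

    print("baseline accuracy:", accuracy_score(y_test, baseline.predict(X_test)))
    print("cleaned accuracy: ", accuracy_score(y_test, cleaned.predict(X_test)))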

The AI Scientist that led the project, Dr Milad Dakka, said "Our results suggest these poor-quality images are uninformative, counter-productive or confusing when used in training AI. The ability to identify when new images are poor-quality is important to prevent an inaccurate AI clinical assessment, but also to alert the radiologist when the scan is likely to be difficult to diagnose or when a new scan should be taken."

Presagen Co-Founder and Chief Strategy Officer, Dr Don Perugini said "Many AI practitioners believe that AI performance and scalability can be solved with more data. This is not true, and we call it the AI data fallacy. Even 1% poor-quality data can impact the performance of the AI. Building accurate and scalable AI is about using the right data."

Presagen has recently developed a range of patent-pending AI technologies that drive a fundamental paradigm shift in developing commercially scalable AI products for real-world problems, which apply beyond healthcare and to AI more generally.

Dr Michelle Perugini said "We are excited to present to the world the suite of technologies, which we believe advance the field of AI. These technologies will allow Presagen to build scalable 'out of the box' AI products that are more commercially viable and technically superior, and thus market dominating. This is vital in Presagen's journey to become world-leaders in AI Enhanced Healthcare and a dominant player in the AI-in-Femtech market globally. More importantly, we see it as an opportunity to change, lead, and dominate the AI industry."

SOURCE Presagen

https://www.presagen.com/

Go here to see the original:

Turning AI onto itself: AI algorithm detects when medical images will be difficult for radiologists or AI to make an effective diagnosis - PRNewswire

Facebook battles bad content ‘cloaking’ with AI – Marketing Dive

Dive Brief:

Protecting News Feed users from inadvertently accessing unwanted content is a noble goal, but Facebook also has a vested interest in ensuring that content meets its stated policies. Any offensive content that makes it through the review process because of cloaking could lead to a scenario similar to what Google faced earlier this year, when a number of advertisers boycotted YouTube for fear of their ads appearing next to content filled with hate speech or that supported terrorists.

It is unclear how big an impact the boycott had on YouTube from a monetization standpoint, with Google reporting strong Q2 results for the platform even as other reports suggest big advertisers like P&G and Unilever have accelerated their ad spend reductions across the digital landscape. However, it is clear that brands are taking the safety issue seriously, resulting in a flurry of announcements from Google, Facebook and others delineating how they are trying to close the gaps through which offensive content may be slipping.

Providing a good user experience has been a long-stated goal of Facebook, particularly with its advertising, and because cloaking can lead users to pages filled with unwanted ads, scams and offensive material, it is an issue the platform needs to deal with. Given the attention being paid to brand safety by advertisers and the media, Facebook is wise to get ahead of its cloaking problem as well as make it known it is doing so.
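
Facebook has not published its detection pipeline, but the core idea behind catching cloaking can be illustrated with a toy check: fetch the same landing page with a reviewer-like and a user-like request fingerprint and flag large differences in what comes back. The URL, user-agent strings, and threshold below are illustrative assumptions, not Facebook's method:

    import difflib
    import requests

    REVIEWER_UA = "facebookexternalhit/1.1"  # crawler-style user agent
    USER_UA = "Mozilla/5.0 (iPhone; CPU iPhone OS 14_0 like Mac OS X)"

    def fetch(url: str, user_agent: str) -> str:
        resp = requests.get(url, headers={"User-Agent": user_agent}, timeout=10)
        return resp.text

    def cloaking_score(url: str) -> float:
        """Return 1 - similarity; higher means the two fetches diverge more."""
        as_reviewer = fetch(url, REVIEWER_UA)
        as_user = fetch(url, USER_UA)
        similarity = difflib.SequenceMatcher(None, as_reviewer, as_user).ratio()
        return 1.0 - similarity

    if __name__ == "__main__":
        score = cloaking_score("https://example.com/landing-page")  # hypothetical URL
        print("possible cloaking" if score > 0.5 else "content looks consistent", score)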

See the rest here:

Facebook battles bad content 'cloaking' with AI - Marketing Dive

Frost & Sullivan Radar Ranks Wolters Kluwer as a Top 20 AI Innovation Leader in Healthcare IT – Business Wire

WALTHAM, Mass.--(BUSINESS WIRE)--Wolters Kluwer, Health, a leading global provider of trusted clinical technology and evidence-based solutions, is recognized by Frost & Sullivan as a Frost Radar global leader in artificial intelligence (AI) for healthcare IT. The independent analysis evaluated a field of more than 200 healthcare IT companies and ranked Wolters Kluwer among the top 20 for continuous innovation and growth focusing on areas where AI solutions are most relevant for hospitals, physicians and payers.

"In a market forecasted to reach more than $34 billion globally by 2025, Wolters Kluwer is one of the top growth performers in AI for healthcare IT and poised to move higher on the Radar," commented Koustav Chatterjee, report author and analyst for Frost & Sullivan's Global Transformational Health team. "In innovation metrics, Wolters Kluwer delivered remarkable results at scale for both payers and providers."

Wolters Kluwer is coupling the expansive knowledge of its trusted clinical experts with impactful AI solutions that target complex problems in healthcare. According to the Frost report, the top-right Radar positioning of Wolters Kluwer, adjacent to well-recognized tech giants, highlights its superior deep learning and NLP capabilities, and showcases how Wolters Kluwer is reimagining predictive clinical surveillance.

A Global Powerhouse for AI in Healthcare

Frost & Sullivan forecasts Wolters Kluwer will expand its AI-enabled healthcare IT footprint, working closely with large health systems, government agencies, and leading start-ups from Europe, the Middle East, and Southeast Asia in the next 2 to 3 years. Frost sees growth from healthcare stakeholders with the need and incentive to embrace full-fledged AI to improve clinical efficacy, augment financial performance, and streamline operational agility.

"AI is deeply woven into the Wolters Kluwer DNA and it fully spans our Health solutions," commented Jean-Claude Saghbini, Chief Technology Officer, Wolters Kluwer, Health. "The Frost Radar report validates years of effort and investment building a world-class AI ecosystem with a unique combination of data scientists, clinicians, and product teams that can make a meaningful impact on healthcare."

Indicators of this AI innovation in Health solutions include:

Other rapid response innovations in clinical content and data science:

To learn more, download the Frost Radar report: Artificial Intelligence for Healthcare IT, Global, 2020.

Read this story on our website.

About Wolters Kluwer

Wolters Kluwer (WKL) is a global leader in professional information, software solutions, and services for the clinicians, nurses, accountants, lawyers, and tax, finance, audit, risk, compliance, and regulatory sectors. We help our customers make critical decisions every day by providing expert solutions that combine deep domain knowledge with advanced technology and services.

Wolters Kluwer reported 2019 annual revenues of €4.6 billion. The group serves customers in over 180 countries, maintains operations in over 40 countries, and employs approximately 19,000 people worldwide. The company is headquartered in Alphen aan den Rijn, the Netherlands.

Wolters Kluwer provides trusted clinical technology and evidence-based solutions that engage clinicians, patients, researchers and students in effective decision-making and outcomes across healthcare. We support clinical effectiveness, learning and research, clinical surveillance and compliance, as well as data solutions.

For more information about our solutions, visit https://www.wolterskluwer.com/en/health and follow us on LinkedIn and Twitter @WKHealth.

For more information, visit http://www.wolterskluwer.com, follow us on Twitter, Facebook, LinkedIn, and YouTube.

Link:

Frost & Sullivan Radar Ranks Wolters Kluwer as a Top 20 AI Innovation Leader in Healthcare IT - Business Wire

Adobe CTO says AI will ‘democratize’ creative tools – TechCrunch

Adobe CTO Abhay Parasnis sees a shift happening.

A shift in how people share content and who wants to use creative tools. A shift in how users expect these tools to work, especially how much time they take to learn and how quickly they get things done.

I spoke with Parasnis in December to learn more about where Adobe's products are going and how they'll get there, even if it means rethinking how it all works today.

"What could we build that makes today's Photoshop, or today's Premiere, or today's Illustrator look irrelevant five years from now?" he asked.

In many cases, that means a lot more artificial intelligence: AI to flatten the learning curve, allowing the user to command apps like Photoshop not only by digging through menus, but by literally telling Photoshop what they want done (as in, with their voice). AI to better understand what the user is doing, helping to eliminate mundane or repetitive tasks. AI to, as Parasnis puts it, democratize Adobe's products.

We've seen some hints of this already. Back in November, Adobe announced Photoshop Camera, a free iOS/Android app that repurposes the Photoshop engine into a lightweight but AI-heavy interface that allows for fancy filters and complex effects with minimal effort or learning required of the user. I see it as Adobe's way of acknowledging (flexing on?) the Snapchats and Instas of the world, saying oh, don't worry, we can do that too.

But the efforts to let AI do more and more of the heavy lifting won't stop with free apps.

"We think AI has the potential to dramatically reduce the learning curve and make people productive, not at the edges, but 10x, 100x improvement in productivity," said Parasnis.

"The last decade or two decades of creativity were limited to professionals, people who really were high-end animators, high-end designers. Why isn't it for every student or every consumer that has a story to tell? They shouldn't be locked out of these powerful tools only because they're either costly, or they are more complex to learn. We can democratize that by simplifying the workflow."

Read more:

Adobe CTO says AI will 'democratize' creative tools - TechCrunch