AI is not yet a slam dunk with sentiment analytics – ZDNet

When we look at how big data analytics has enhanced Customer 360, one of the first disciplines that comes to mind is sentiment analytics. It provided the means for expanding the traditional CRM interaction view of the customer with statements and behaviors voiced on social networks.

And with advancements in natural language processing (NLP) and artificial intelligence (AI)/machine learning, one would think that this field is pretty mature: marketers should be able to decipher with ease what their customers are thinking by turning on their Facebook or Twitter feeds.

One would be wrong.

While sentiment analytics is one of the most established forms of big data analytics, there's still a fair share of art to it. Our take from this year's Sentiment Analytics Symposium held last week in New York is that there are still plenty of myths about how well AI and big data are adding clarity to analyzing what consumers think and feel.

Sentiment analytics descended from text analytics, which was all about pinning down the incidence of keywords to give an indicator of mood. That spawned the word clouds that at one time were quite ubiquitous across the web.

However, with languages like English, where words have double and sometimes triple meanings, keywords alone weren't adequate for the task. The myth emerged that if we assembled enough data, we should be able to get a better handle on what people are thinking or feeling. By that rationale, advances in NLP and AI should've been icing on the cake.

Not so fast, said Troy Janisch, who leads the social insights team at US Bank. NLP won't necessarily differentiate whether iPhone mentions represent buzz or customers looking for repairs. You'd think that AI could ferret out the context, yet none of the speakers indicated that it was yet up to the task. Janisch stated you'll still need human intuition to parse context by formulating the right Boolean queries.
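
As a rough illustration of the kind of Boolean filtering Janisch describes, a rule-based pass over mentions might look like the sketch below; the keyword lists and sample posts are invented, and a production query would run against a social listening platform rather than a Python list:

```python
# Sketch: separating iPhone "buzz" from repair-seeking mentions with Boolean-style rules.
# Keyword lists and sample posts are invented for illustration only.
REPAIR_TERMS = {"repair", "fix", "cracked", "broken", "battery", "warranty"}
BUZZ_TERMS = {"love", "excited", "preorder", "amazing", "upgrade"}

def classify_mention(text: str) -> str:
    """Tag a mention as 'repair', 'buzz', or 'unclear' (left for a human analyst)."""
    lowered = text.lower()
    is_repair = any(term in lowered for term in REPAIR_TERMS)
    is_buzz = any(term in lowered for term in BUZZ_TERMS)
    if is_repair and not is_buzz:
        return "repair"
    if is_buzz and not is_repair:
        return "buzz"
    return "unclear"

posts = [
    "Where can I get my cracked iPhone screen fixed cheaply?",
    "So excited to upgrade to the new iPhone!",
    "Love my iPhone but the battery is failing again",
]
for post in posts:
    print(classify_mention(post), "->", post)
```

Note how the third post trips both rule sets and falls back to "unclear", which is exactly where the human intuition Janisch mentions still has to step in.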

The contribution of big data is that it frees analysts from the constraints of sampling, so we take for granted that you can ingest the entire Twitter firehose if you need it. But for many marketers, big data is still intimidating.

Tom H.C. Anderson, founder of text analytics firm OdinText, observed that many firms were blindly collecting data and throwing queries at it without a clear objective for making the results actionable. He pointed to the shortcomings of social media analytics technologies and methodologies in providing reliable feedback loops tied to actual events or occurrences.

For that reason, said Anderson, social media analytics have fallen short in predicting future behavior. There's still plenty of human intuition rather than AI involved in connecting the dots and making reliable predictions.

Many firms are still overwhelmed by big data and being overly "reactive" to it, according to Kirsten Zapiec, co-founder of market research consulting firm bbb Mavens. Admittedly, big data has largely made sampling and reliance on focus groups or detailed surveys obsolete. But, warned Zapiec, as data sets get bigger, it becomes all too easy to lose the human context and story behind the data. That surprised us, as it runs counter to the party line of data science.

Zapiec made several calls to action that sounded all too familiar. First, validate the source, and then cross-validate it with additional sources. For instance, a Twitter feed alone won't necessarily tell the full story. Then you need to pinpoint the roles of actors with social graphs to determine whether a voice is a thought leader, a follower, or a bot.
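
A toy version of that role-pinpointing step, assuming a follower graph is available, might use simple degree heuristics with networkx; the graph, thresholds, and role labels below are invented placeholders, and real systems combine many more signals:

```python
# Sketch: rough actor-role guesses from a follower graph using networkx.
# The toy graph and thresholds are made up for illustration only.
import networkx as nx

g = nx.DiGraph()  # an edge u -> v means "u follows v"
g.add_edges_from([
    ("alice", "maven"), ("bob", "maven"), ("carol", "maven"), ("dave", "maven"),
    ("bot1", "maven"), ("bot1", "alice"), ("bot1", "bob"), ("bot2", "maven"),
    ("maven", "alice"),
])

def guess_role(account: str) -> str:
    followers = g.in_degree(account)   # accounts following this one
    following = g.out_degree(account)  # accounts this one follows
    if followers >= 4 and followers > 2 * following:
        return "likely thought leader"
    if following >= 2 and followers == 0:
        return "possible bot"
    return "follower"

for account in g.nodes:
    print(account, "->", guess_role(account))
```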

Zapiec then made a pitch for data quality: companies should shift from data collection to data integration mode. We could have heard the same line of advice coming out of data warehousing conferences of the 1990s. Some things never change.

Of course, there is concern over whether social marketers are totally missing the signals from their customers where they live. For instance, the "camera company" Snapchat only provides APIs for advertising, not for listening. So could other sources or data elements make up the difference? Keisuke Inoue, VP of data science at Emogi, made the case that emojis are often far more expressive about sentiment than words.

But that depends on whether you can understand them in the first place.
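
As a rough illustration of treating emojis as sentiment signals, a minimal scorer could be a hand-built lexicon lookup; the emojis and scores below are invented placeholders, whereas a production system would learn such weights from labeled data:

```python
# Sketch: scoring sentiment from emojis with a tiny hand-made lexicon.
# The scores are arbitrary placeholders, not a trained model.
EMOJI_SENTIMENT = {"😍": 1.0, "😊": 0.7, "😂": 0.4, "😐": 0.0, "😭": -0.6, "😡": -0.9}

def emoji_sentiment(text: str) -> float:
    """Average the scores of any known emojis found in the text; 0.0 if none."""
    scores = [score for emoji, score in EMOJI_SENTIMENT.items() if emoji in text]
    return sum(scores) / len(scores) if scores else 0.0

print(emoji_sentiment("the new update 😍😊"))    # positive
print(emoji_sentiment("waited two hours 😡"))    # negative
print(emoji_sentiment("no emojis, no signal"))   # 0.0
```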

See the original post here:

AI is not yet a slam dunk with sentiment analytics - ZDNet

How AI will revolutionize manufacturing – MIT Technology Review

Ask Stefan Jockusch what a factory might look like in 10 or 20 years, and the answer might leave you at a crossroads between fascination and bewilderment. Jockusch is vice president for strategy at Siemens Digital Industries Software, which develops applications that simulate the conception, design, and manufacture of products like cell phones or smart watches. His vision of a smart factory is abuzz with independent, moving robots. But they don't stop at making one or three or five things. No, this factory is self-organizing.

This podcast episode was produced by Insights, the custom content arm of MIT Technology Review. It was not produced by MIT Technology Review's editorial staff.

"Depending on what product I throw at this factory, it will completely reshuffle itself and work differently when I come in with a very different product," Jockusch says. "It will self-organize itself to do something different."

Behind this factory of the future is artificial intelligence (AI), Jockusch says in this episode of Business Lab. But AI starts much, much smaller, with the chip. Take automaking. The chips that power the various applications in cars today, and the driverless vehicles of tomorrow, are embedded with AI, supporting real-time decision-making. They're highly specialized, built with specific tasks in mind. The people who design chips then need to see the big picture.

"You have to have an idea if the chip, for example, controls the interpretation of things that the cameras see for autonomous driving. You have to have an idea of how many images that chip has to process or how many things are moving on those images," Jockusch says. "You have to understand a lot about what will happen in the end."

This complex way of building, delivering, and connecting products and systems is what Siemens describes as "chip to city": the idea that future population centers will be powered by the transmission of data. Factories and cities that monitor and manage themselves, Jockusch says, rely on continuous improvement: AI executes an action, learns from the results, and then tweaks its subsequent actions to achieve a better result. Today, most AI is helping humans make better decisions.
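
A minimal sketch of that act-observe-adjust loop, with a simulated process standing in for a real factory or building, is a simple hill climber; nothing here reflects an actual Siemens system, and all numbers are invented:

```python
# Sketch: the continuous-improvement loop - act, observe the result, keep tweaks that help.
# simulated_output() is a pretend process; all numbers are illustrative.
import random

def simulated_output(setting: float) -> float:
    """Pretend process quality: peaks when the setting is near 42, with noise."""
    return -(setting - 42.0) ** 2 + random.gauss(0, 1.0)

setting, step = 30.0, 1.0
best_quality = simulated_output(setting)
for _ in range(500):
    candidate = setting + random.choice([-step, step])  # act: try a small tweak
    quality = simulated_output(candidate)               # observe the result
    if quality > best_quality:                          # adjust: keep what worked
        setting, best_quality = candidate, quality

print(f"learned setting ~ {setting:.1f} (target was 42)")
```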

"We have one application where the program watches the user and tries to predict the command the user is going to use next," Jockusch says. "The longer the application can watch the user, the more accurate it will be."
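
A bare-bones version of that command-prediction idea, assuming only a log of past commands, can be built from bigram counts; the command names below are hypothetical and the real Siemens feature is certainly more sophisticated:

```python
# Sketch: predict the user's next command from bigram frequencies over their history.
# The command history is invented; a real assistant keeps learning per user.
from collections import Counter, defaultdict
from typing import Optional

history = ["sketch", "extrude", "fillet", "sketch", "extrude", "pattern",
           "sketch", "extrude", "fillet", "save"]

bigrams = defaultdict(Counter)
for prev, nxt in zip(history, history[1:]):
    bigrams[prev][nxt] += 1

def predict_next(command: str) -> Optional[str]:
    """Return the most frequent follower of `command` seen so far, if any."""
    followers = bigrams.get(command)
    return followers.most_common(1)[0][0] if followers else None

print(predict_next("extrude"))  # 'fillet' (seen twice after 'extrude' vs 'pattern' once)
print(predict_next("save"))     # None - nothing has followed 'save' yet
```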

Applying AI to manufacturing can result in cost savings and big gains in efficiency. Jockusch gives an example from a Siemens factory that makes printed circuit boards, which are used in most electronic products. The milling machine used there has a tendency to goo up over time, to get dirty. The challenge is to determine when the machine has to be cleaned so it doesn't fail in the middle of a shift.

"We are using actually an AI application on an edge device that's sitting right in the factory to monitor that machine and make a fairly accurate prediction when it's time to do the maintenance," Jockusch says.
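
As a much-simplified sketch of such an edge-side maintenance predictor (not the actual Siemens application), a rolling average over a drifting sensor signal can already flag when cleaning is due; the readings, window size, and threshold below are fabricated:

```python
# Sketch: edge-style predictive maintenance from a drifting sensor signal.
# Readings, window size, and threshold are illustrative placeholders.
from collections import deque

WINDOW = 20          # number of recent readings to average
ALERT_LEVEL = 0.8    # smoothed vibration level that triggers maintenance

recent = deque(maxlen=WINDOW)

def ingest(reading: float) -> bool:
    """Add one sensor reading; return True when maintenance should be scheduled."""
    recent.append(reading)
    smoothed = sum(recent) / len(recent)
    return len(recent) == WINDOW and smoothed > ALERT_LEVEL

# Simulated shift: the machine slowly "goos up", so vibration creeps upward.
readings = [0.4 + 0.01 * t for t in range(80)]
for t, reading in enumerate(readings):
    if ingest(reading):
        print(f"schedule cleaning before the next shift (t={t})")
        break
```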

The full impact of AI on business, and the full range of opportunities the technology can uncover, is still unknown.

"There's a lot of work happening to understand these implications better," Jockusch says. "We are just at the starting point of doing this, of really understanding what can optimization of a process do for the enterprise as a whole."

Business Lab is hosted by Laurel Ruma, director of Insights, the custom publishing division of MIT Technology Review. The show is a production of MIT Technology Review, with production help from Collective Next.

This podcast episode was produced in partnership with Siemens Digital Industries Software.

Siemens helps Vietnamese car manufacturer produce first vehicles, Automation.com, September 6, 2019

Chip to city: the future of mobility, by Stefan Jockusch, The International Society for Optics and Photonics Digital Library, September 26, 2019

Laurel Ruma: From MIT Technology Review, I'm Laurel Ruma, and this is Business Lab, the show that helps business leaders make sense of new technologies coming out of the lab and into the marketplace. Our topic today is artificial intelligence and physical applications. AI can run on a chip, on an edge device, in a car, in a factory, and ultimately, AI will run a city with real-time decision-making, thanks to fast processing, small devices, and continuous learning. Two words for you: smart factory.

My guest is Dr. Stefan Jockusch, who is vice president for strategy for Siemens Digital Industries Software. He is responsible for strategic business planning and market intelligence, and Stefan also coordinates projects across business segments and with Siemens Digital Leadership. This episode of Business Lab is produced in association with Siemens Digital Industries. Welcome, Stefan.

Stefan Jockusch: Hi. Thanks for having me.

Laurel: So, if we could start off a bit, could you tell us about Siemens Digital Industries? What exactly do you do?

Stefan: Yeah, in the Siemens Digital Industries, we are the technical software business. So we develop software that supports the whole process from the initial idea of a product like a new cell phone or smartwatch, to the design, and then the manufactured product. So that includes the mechanical design, the software that runs on it, and even the chips that power that device. So with our software, you can put all this into the digital world. And we like to talk about what you get out of that, as the digital twin. So you have a digital twin of everything, the behavior, the physics, the simulation, the software, and the chip. And you can of course use that digital twin to basically do any decision or try out how the product works, how it behaves, before you even have to build it. That's in a nutshell what we do.

Laurel: So, staying on that idea of the digital twin, how do we explain the idea of chip to city? How can manufacturers actually simulate a chip, its functions, and then the product, say, as a car, as well as the environment surrounding that car?

Stefan: Yeah. Behind that idea is really the thought that in the future, and today already we have to build products, enabling the people who work on that to see the whole, rather than just a little piece. So this is why we make it as big as to say from chip to city, which really means, when you design a chip that runs in a vehicle of today and more so in the future, you have to take a lot of things into account while you are designing that chip. You have to have an idea if the chip, for example, controls the interpretation of things that the cameras see for autonomous driving, you have to have an idea how many images that chip has to process or how many things are moving on those images and obvious pedestrians, what recognition do you have to do? You have to understand a lot about what will happen in the end. So the idea is to enable a designer at the chip level to understand the actual behavior of a product.

And what's happening today, especially is that we don't develop cars anymore just with a car in mind, we more and more are connecting vehicles to the environment, to each other. And one of the big purposes, as we all know, that is of course, to improve the contamination in cities and also the traffic in cities, so really to make these metropolitan areas more livable. So that's also something that we have to take into account in this whole process chain, if we want to see the whole as a designer. So this is the background of this whole idea, chip to city. And again, the way it should look like for a designer, if you think about, I'm designing this vision module in a car, and I want to understand how powerful it has to be. I have a way to immerse myself into a simulation, a very accurate one, and I can see what data my vehicle will see, what's in them, how many sensor inputs I get from other sources, and what I have to do. I can really play through all of that.

Laurel: I really like that framing of being able to see the whole, not just the piece of this incredibly complex way of thinking, building, delivering. So to get back down to that piece level, how does AI play a role at the chip level?

Stefan: AI is a lot about supporting or even making the right decision in real time. And that's I think where AI and the chip level become so important together, because we all know that a lot of smart things can be done if you have a big computer sitting somewhere in a data center. But AI at the chip level is really very targeted at these applications that need real-time performance and a performance that doesn't have time to communicate a lot. And today it's really evolving to the point that the chips that do AI applications are now designed already in a very specialized way, whether they have to do a lot of compute power or whether they have to conserve energy as best as they can, so be very low power consumption, or whether they need more memory. So yeah, it's becoming a more and more commonplace thing that we see AI embedded in tiny little chips, and then probably in future cars, we will have a dozen or so semiconductor-level AI applications for different things.

Laurel: Well, that brings up a good point because it's the humans who are needing to make these decisions in real time with these tiny chips on devices. So how does the complexity of something like continuous learning with AI, not just help the AI become smarter but also affect the output of data, which then eventually, even though very quickly, allows the human to make better decisions in real time?

Stefan: I would say most applications of AI today are rather designed to help a human make a good decision rather than making the decision. I don't think we trust it quite that much yet. So as an example, in our own software, like so many makers of software, we are starting to use AI to make it easier and faster to use. So for example, you have these very complex design applications that can do a lot of things, and of course they have hundreds of menus. So we have one application where the program watches the user and tries to predict the command the user is going to use next. So just to offer it and just say, "Aren't you about to do this?" And of course, you talked about the continuous improvement, continuous learning: the longer the application can watch the user, the more accurate it will be.

It's currently already at a level of over 95%, but of course continuous learning improves it. And by the way, this is also a way to use AI not just to help a single user but to start encoding a knowledge, an experience, a varied experience of good users and make it available to other users. If a very experienced engineer does that and uses AI and you basically take those learned lessons from that engineer and give it to someone less experienced who has to do a similar thing, that experience will help the new user as well, the novice user.

Laurel: That's really compelling because you're right, you're building a knowledge database, an actual database of data. And then also this all helps the AI eventually, but then also really does help the human because you are trying to extend this knowledge to as many people as possible. Now, when we think about that and AI at the edge, how does this change opportunities for the business, whether you're a manufacturer or the person using the device?

Stefan: Yeah. And in general, of course, it's a way for everyone who makes a smart product to differentiate, to create differentiation because all these, the functions enabled by AI of course are smart, and they give some differentiation. But the example I just mentioned where you can predict what a user will do, that of course is something that many pieces of software don't have yet. So it's a way to differentiate. And it certainly opens lots of opportunities to create these very highly differentiated pieces of functionality, whether it's in software or in vehicles, in any other area.

Laurel: So if we were actually to apply this perhaps to a smart factory and how people think of a manufacturing chain, first this happens, and then that happens and a car door is put on and then an engine is put in or whatever. What can we apply to that kind of traditional way of thinking of a factory and then apply this AI thinking to it?

Stefan: Well, we can start with the oldest problem a factory has had. I mean, factories have always been about producing something very efficiently and continuously and leveraging the resources. So any factory tries to be up and running whenever it's supposed to be up and running, have no unpredicted or unplanned downtime. So AI is starting to become a great tool to do this. And I can give you a very hands-on example from a Siemens factory that does printed circuit boards. And one of the steps they have to do is milling of these circuit boards. They have a milling machine and any milling machine, especially one like that that's highly automated and robotic, it has a tendency to goo up over time, to get dirty. And so one challenge is to have the right maintenance because you don't want the machine to fail right in the middle of a shift and create this unplanned downtime.

So one big challenge is to figure out when this machine has to be maintained, without of course, maintaining it every day, which would be very expensive. So we are using actually an AI application on an edge device that's sitting right in the factory, to monitor that machine and make a fairly accurate prediction when it's time to do the maintenance and clean the machine so it doesn't fail in the next shift. So this is just one example, and I believe there is hundreds of potential applications that may not be totally worked out yet in this area of really making sure that factories produce consistent high quality, that there's no unplanned downtime of the machines. There's of course, a lot of use already of AI in visual quality inspections. So there's tons and tons of applications on the factory floor.

Laurel: And this has massive implications for manufacturers, because as you mentioned, it saves money, right? So is this a tough shift, do you think, for executives to think about investing in technology in a bit of a different way to then get all of those benefits?

Stefan: Yeah. It's like with every technology, I wouldn't think it's a big block, there's a lot of interest at this point and there's many manufacturers with initiatives in that space. So I would say it's probably going to create a significant progress in productivity, but of course, it also means investment. And I can say since it's fairly predictable to see what the payback of this investment will be. As far as we can see, there's a lot of positive energy there, to make this investment and to modernize factories.

Laurel: What kind of modernizations do you need for the workforce in the factories when you are installing and applying, kind of retooling to have AI applications in mind?

Stefan: That's a great question because sometimes I would say many users of artificial intelligence applications probably don't even know they're using one. So you basically get a box and it will tell you it is recommended to maintain this machine now. The operator probably will know what to do, but not necessarily know what technology they're working with. But that said, of course there probably will be some, I would say, almost emerging specialties or emerging skills for engineers to really, how to use and how to optimize these AI applications that they use on the factory floor. Because as I said, we have these applications that are up and running and working today, but to get those applications to be really useful, to be accurate enough, that of course, to this point needs a lot of expertise, at least some iteration as well. And there's probably not too many people today who really are experienced enough with the technologies and also understand the factory environment well enough to do this.

I think this is a fairly, pretty rare skill these days and to make this a more commonplace application of course we will have to create more of these experts who are really good at making AI factory-floor-ready and getting it to the right maturity.

Laurel: That seems to be an excellent opportunity, right? For people to learn new skills. This is not an example of AI taking away jobs and the more negative connotations that you get when you talk about AI and business. In practice, if we combine all of this and talk about VinFast, the Vietnamese car manufacturer that wanted to do things quite a bit differently than traditional car manufacturing. First, they built a factory, but then they applied that kind of overarching thinking of chip to factory and then eventually to city. So coming back full circle, why is this thinking unique, especially for a car manufacturer and what kind of opportunities and challenges do they have?

Stefan: Yeah. VinFast is an interesting example because when they got into making vehicles, they basically started on a green field. And that is probably the biggest difference between VinFast and the vast majority of the major automakers. That all of them are a hundred or more years old and have of course a lot of history, which then translates into having existing factories or having a lot of things that were really built before the age of digitalization. So VinFast started from a greenfield, and that of course is a big challenge, it makes it very difficult. But the advantage was that they really have the opportunity to start off with a full digitalized approach, that they were able to use software. Because they were basically constructing everything, and they could really start off with this fairly complete digital twin of not only their product but also they designed the whole factory on a computer before even starting to build it. And then they built it in record time.

So that's probably the big, unique aspect that they have this opportunity to be completely digital. And once you are at that state, once you can already say my whole design, of course, my software running on the vehicle, but also my whole factory, my whole factory automation. I already have this in a fully digital way and I can run through simulations and scenarios. That also means you have a great starting point to use these AI technologies to optimize your factory or to help the workers with the additional optimizations and so on.

Laurel: Do you think it's impossible to be one of those hundred-year-old manufacturers and slowly adopt these kinds of technologies? You probably don't have to have a greenfield environment, it just makes everything easy or I should say easier, right?

Stefan: Yeah. All of them, I mean the auto industry has traditionally been one of the ones that invested most in productivity and in digitalization. So all of them are on that path. Again, they don't have this very unique situation that you, or rarely have this unique situation that you can really start from a blank slate. But a lot of the software technology of course, also is adapted to that scenario. Where for example, you have an existing factory, so it doesn't help you a lot to design a factory on the computer if you already have one. So you use these technologies that allow you to go through the factory and do a 3D scan. So you know exactly what the factory looks like from the inside without having it designed in a computer, because you essentially produce that information after the fact. So that's definitely what the established or the traditional automakers do a lot and where they're also basically bringing the digitalization even into the existing environment.

Laurel: We're really discussing the implications when companies can use simulations and scenarios to apply AI. So when you can, whether or not it's greenfield or you're adopting it for your own factory, what happens to the business? What are the outcomes? Where are some of the opportunities that are possible when AI can be applied to the actual chip, to the car, and then eventually to the city, to a larger ecosystem?

Stefan: Yeah. When we really think about the impact to the business, I frankly think we are at the beginning of understanding and calculating what the value of faster and more accurate decisions really is, which are enabled by AI. I don't think we have a very complete understanding at this point, and it's fairly obvious to everybody that digitalizing like the design process and the manufacturing process. It not only saves R&D effort and R&D money, but it also helps optimize the supply chain inventories, the manufacturing costs, and the total cost of the new product. And that is really where different aspects of the business come together. And I would frankly say, we start to understand the immediate effects, we start to understand if I have an AI-driven quality check that will reduce my waste, so I can understand that kind of business value.

But there is a whole dimension of business value of using this optimization that really translates to the whole enterprise. And I would say there's a lot of work happening to understand these implications better. But I would say at this point, we are just at the starting point of doing this, of really understanding what can optimization of a process do for the enterprise as a whole.

Laurel: So optimization, continuous learning, continuous improvement, this makes me think of, and cars, of course, The Toyota Way, which is that seminal book that was written in 2003, which is amazing, because it's still current today. But with lean manufacturing, is it possible for AI to continuously improve that at the chip level, at the factory level, at the city to help these businesses make better decisions?

Stefan: Yeah. In my view, The Toyota Way, again, the book published in the early 2000s, with continuous improvement, in my view, continuous improvement of course always can do a lot, but there's a little bit of recognition in the last, I would say five to 10 years, somewhere like that, that continuous improvement might have hit the wall of what's possible. So there is a lot of thought since then of what is really the next paradigm for manufacturing. When you stop thinking about evolution and optimization and you think about more revolution. And one of the concepts that have been developed here is called Industry 4.0, which is really the thought about turning upside down the idea of how manufacturing or how the value chain can work. And really think about what if I get to factories that are completely self-organizing, which is kind of a revolutionary step. Because today, mostly a factory is set up around a certain idea of what products it makes and when you have lines and conveyors and stuff like that, and they're all bolted to the floor. So it's fairly static, the original idea of a factory. And you can optimize it in an evolutionary way for a long time, but you'd never break through that threshold.

So the newest thought or the other concepts that are being thought about are, what if my factory consists of independent, moving robots, and the robots can do different tasks. They can transport material, or they can then switch over to holding a robot arm or a gripper. And depending on what product I throw at this factory, it will completely reshuffle itself and work differently when I come in with a very different product and it will self-organize itself to do something different. So those are some of the paradigms that are being thought of today, which of course, can only become a reality with heavy use of AI technologies in them. And we think they are really going to revolutionize at least what some kinds of manufacturing will do. Today we talk a lot about lot size one, and that customers want more options and variations in a product. So the factories that are able to do this, to really produce very customized products, very efficiently, they have to look much different.

So in many ways, I think there's a lot of validity to the approach of continuous improvement. But I think we right now live in a time where we think more about a revolution of the manufacturing paradigm.

Laurel: That's amazing. The next paradigm is revolution. Stefan, thank you so much for joining us today in what has been an absolutely fantastic conversation on the Business Lab.

Stefan: Absolutely. My pleasure. Thank you.

Laurel: That was Stefan Jockusch, vice president of strategy for Siemens Digital Industries Software, who I spoke with from Cambridge, Massachusetts, the home of MIT and MIT Technology Review, overlooking the Charles River. That's it for this episode of Business Lab. I'm your host, Laurel Ruma. I'm the director of Insights, the custom publishing division of MIT Technology Review. We were founded in 1899 at the Massachusetts Institute of Technology. And you can find us in print, on the web, and at events online and around the world. For more information about us and the show, please check out our website at technologyreview.com. The show is available wherever you get your podcasts. If you enjoyed this episode, we hope you'll take a moment to rate and review us. Business Lab is a production of MIT Technology Review. This episode was produced by Collective Next. Thanks for listening.

Read the original here:

How AI will revolutionize manufacturing - MIT Technology Review

Are these the edge-case trends of AI in 2020? – Tech Wire Asia

Artificial intelligence (AI) continues to hold its title as the top buzzword of enterprise tech, but its appeal is well-founded. We now seem to be shifting from the era of businesses simply talking about AI, to actually getting hands-on, exploring the ways it can be used to tackle real-world challenges.

AI is increasingly providing solutions to problems old and new. Then again, while the technology is proving itself incredibly powerful, not all of its potential is necessarily positive. Here, we explore some of the more edge-case applications of AI taking place this year.

Advances in deep learning and AI continue to make deepfakes more realistic. This technology has already proven itself dangerous in the wrong hands; many predict that deepfakes could provide a dangerous new medium for information warfare, helping to spread misinformation or fake news. The majority of its use, however, is in the creation of non-consensual pornography, which most frequently targets celebrities, owing to the large amounts of data samples in the public domain. Deepfake technology has also been used in highly sophisticated phishing campaigns.

Beyond illicit ingenuity in shady corners of cyberspace, the fundamental technology is proving itself a valuable tool in a few other disparate places. Gartner's Andrew Frank called the technology a potential asset to enterprises in personalized content production: "Businesses that utilize mass personalization need to up their game on the volume and variety of content that they can produce, and GANs' [generative adversarial networks] simulated data can help."
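
To make the GAN reference concrete, here is a minimal PyTorch sketch of a generator learning to imitate a toy one-dimensional data distribution; real content-generation GANs are vastly larger, and everything below (data, network sizes, step counts) is illustrative only:

```python
# Sketch: a tiny GAN that learns to imitate a 1-D "real data" distribution.
# Toy data and network sizes are arbitrary; production GANs for content are far larger.
import torch
import torch.nn as nn

torch.manual_seed(0)
real_sampler = lambda n: torch.randn(n, 1) * 0.5 + 3.0   # pretend "real" data: N(3, 0.5)

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))                 # noise -> fake sample
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())   # sample -> P(real)

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = real_sampler(64)
    fake = G(torch.randn(64, 8))

    # Discriminator step: tell real samples from generated ones.
    opt_d.zero_grad()
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    loss_d.backward()
    opt_d.step()

    # Generator step: try to fool the discriminator.
    opt_g.zero_grad()
    loss_g = bce(D(fake), torch.ones(64, 1))
    loss_g.backward()
    opt_g.step()

# The generated samples' mean should drift toward the "real" mean of ~3.0.
print("mean of generated samples:", G(torch.randn(1000, 8)).mean().item())
```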

Last year, a video featuring David Beckham speaking in nine different languages for a Malaria No More campaign was released. The content was a result of video manipulation algorithms and represented how the technology can be used for a positive outcome, reaching a multitude of different audiences quickly with accessible, localized content in an engaging medium.

Meanwhile, a UK-based autonomous vehicle software company has developed deepfake technology that is able to generate thousands of photo-realistic images in minutes, which helps it train autonomous driving systems in lifelike scenarios, meaning the vehicle makers can accelerate the training of systems when off the road.

The Financial Times also reported on a growing divide between traditional computer-generated graphics, which are often expensive and time-consuming, and the recent rise of deepfake tech, while deepfake technology has been used to insert a young Harrison Ford as Han Solo into footage from the recent Star Wars films.

Facial recognition is enabling convenience, whether it's a quick passport check-in process at the airport (remember those?) or the swanky facial software in newer phone models. But AI's use in facial recognition extends now to surveillance, security, and law enforcement. At best, it can cut through some of the noise of traditional policing. At worst, it's susceptible to some of its own in-built biases, with recorded instances of systems trained on misrepresentative datasets leading to gender and ethnicity biases.

Facial recognition has been dragged to the fore of discussion, following its use at BLM protests and the wrongful arrest of Robert Julian-Borchak Williams at the hands of faulty AI algorithms earlier this year. A number of large tech firms, including Amazon and IBM, have withdrawn their technology from use by law enforcement.

AI has a long way to go to match the expertise of our human brains when it comes to recognizing faces. These things on the front of us are complex and changeable; algorithms can be easily confused. There's a roadmap of hope for the format, though, thanks to further advances in deep learning. As an AI machine matches two faces correctly or incorrectly, it remembers the steps and creates a network of connections, picking up past patterns and repeating them or altering them slightly.

Facial recognition's controversies have furthered discussions around ethical AI, allowing us to clearly understand the tangible impact of misrepresentative datasets in training AI models, which are equally worrying in other applications and use cases, such as recruitment. As the technology is deployed into more and more areas in the world around us, its dependability, neutrality, and compliance with existing laws become all the more critical.

With every promising advance in technology comes another challenge, and a recent CBInsights paper warns of AI's role in the rise of new-age hacks.

Sydney-based researchers Skylight Cyber reported finding an inherent bias in an AI model developed by cybersecurity firm Cylance, and were able to create a universal bypass that allowed malware to go undetected. They were able to understand how the AI model works, the features it uses to reach decisions, and create tools to fool it time and again. There's also the potential for a new crop of hackers and malware to poison data, corrupting AI algorithms and disrupting the usual detection of malicious/normal network behaviour. This problematic level of manipulation doesn't do a lot for the plaudits that many cybersecurity firms give to products that use AI.

AI is also being used by the attackers themselves. In March last year, scammers were thought to have leveraged AI to impersonate the voice of a business executive at a UK-based energy business, convincing an employee to transfer hundreds of thousands of dollars to a fraudulent account. More recently, it's emerged that these concerns are valid, and not a whole lot of sophistication is required to pull them off. As seen in the case of Katie Jones, a fake LinkedIn account used to spy on and phish information from its connections, an AI-generated image was enough to dupe unsuspecting businessmen into connecting and potentially sharing sensitive information.

Meanwhile, some believe AI-driven malware could be years away, if on the horizon at all, but IBM has researched how existing AI models can be combined with current malware techniques to create challenging new breeds in a project dubbed DeepLocker. Comparing its potential capabilities to a sniper attack, as opposed to traditional malware's "spray and pray" approach, IBM said DeepLocker was designed for stealth: "It flies under the radar, avoiding detection until the precise moment it recognizes a specific target."

There's no end to innovation when it comes to cybercrime, and we seem set for some sophisticated, disruptive activity to emerge from the murkier shadows of AI.

Automated machine learning, or AutoML (a term coined by Google), reduces or completely removes the need for skilled data scientists to build machine learning models. Instead, these systems allow users to provide training data as an input, and receive a machine learning model as an output.

AutoML software companies may take a few different approaches. One approach is to take the data and train every kind of model, picking the one that works best. Another is to build one or more models that combine the others, which sometimes give better results. Businesses ranging from motor vehicles to data management, analytics and translation are seeking refined machine learning models through the use of AutoML. With a marked shortage of AI experts, this technology will help democratise the tech and cut down computing costs.
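
That "fit many models, keep the best" loop can be sketched in a few lines with scikit-learn; the candidate list and built-in dataset below are deliberately tiny and only stand in for what a real AutoML system would search far more broadly (including hyperparameters and ensembles):

```python
# Sketch: a bare-bones "AutoML" loop - fit several model families, keep the best by cross-validation.
# The model list and toy dataset are deliberately small; real AutoML also tunes hyperparameters.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)

candidates = {
    "logistic_regression": LogisticRegression(max_iter=5000),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "gradient_boosting": GradientBoostingClassifier(random_state=0),
}

scores = {name: cross_val_score(model, X, y, cv=5).mean() for name, model in candidates.items()}
best = max(scores, key=scores.get)
print(scores)
print("selected model:", best)
```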

Despite its name, AutoML has so far relied a lot on human input to code instructions and programs that tell a computer what to do. Users then still have to code and tune algorithms to serve as building blocks for the machine to get started. There are pre-made algorithms that beginners can use, but it's not quite automatic.

Google computer scientists believe they have come up with a new AutoML method that can generate the best possible algorithm for a specific function, without human intervention. The new method, dubbed AutoML-Zero, works by continuously trying algorithms against different tasks and improving upon them using a process of elimination, much like Darwinian evolution.
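
AutoML-Zero evolves entire programs, which is well beyond a short example, but the underlying mutate-and-select mechanic can be shown on a toy curve-fitting task; the sketch below is a simplification for illustration, not Google's method:

```python
# Sketch: mutate-and-select evolution (the mechanic behind AutoML-Zero) on a toy fitting task.
# We evolve the coefficients of a small polynomial toward a hidden target function.
import random

random.seed(0)
TARGET = lambda x: 3 * x * x + 2 * x + 1            # the hidden "task" to be learned
XS = [x / 10 for x in range(-20, 21)]

def fitness(coeffs):
    a, b, c = coeffs
    return -sum((a * x * x + b * x + c - TARGET(x)) ** 2 for x in XS)  # higher is better

def mutate(coeffs):
    new = list(coeffs)
    new[random.randrange(3)] += random.gauss(0, 0.3)  # nudge one coefficient
    return tuple(new)

population = [tuple(random.uniform(-1, 1) for _ in range(3)) for _ in range(20)]
for _ in range(300):
    population.sort(key=fitness, reverse=True)
    parents = population[:5]                                          # keep the fittest
    population = parents + [mutate(random.choice(parents)) for _ in range(15)]

best = max(population, key=fitness)
print("best coefficients:", tuple(round(v, 2) for v in best), "(target was 3, 2, 1)")
```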

AI and machine learning may be streamlining processes, but they are doing so at some cost to the environment.

AI is computationally intensive (it uses a whole load of energy), which explains why a lot of its advances have been top-down. As more companies look to cut costs and utilize AI, the spotlight will fall on the development and maintenance of energy-efficient AI devices, and tools that can be used to turn the tide by pointing AI expertise towards large-scale energy management.

Artificial intelligence also has a role in augmenting energy efficiency.

In 2018, China's data centers produced 99 million metric tons of carbon dioxide (that's equivalent to 21 million cars on the road). Worldwide, data centers consume 3 to 5 percent of total global electricity, and that will continue to rise as we rely more on cloud-based services. Savvy to the need to go green, tech giants are now employing AI systems that can gather data from sensors every five minutes and use algorithms to predict how different combinations of actions will positively or negatively affect energy use. AI tools can also spot issues with cooling systems before they happen, avoiding costly shutdowns and outages for cloud customers.

From low-power AI processors in edge technologies to large-scale renewable energy solutions (that's AI dictating the angle of solar panels, and predicting wind power output based on weather forecasts), there are positive moves happening as we enter the 2020s. More green-conscious, AI-intensive tech firms are popping up all the time, and we look forward to seeing how they navigate the double-edged sword of energy-guzzling AI being used to mitigate the guzzling of energy.

See more here:

Are these the edge-case trends of AI in 2020? - Tech Wire Asia

Researchers at Szeged Use AI to Screen for Coronavirus – Hungary Today

A new technology developed by the Szeged Biological Research Center (SBRC) is using artificial intelligence (AI) to test for coronavirus in blood samples. The new method involves an automated microscope and AI technology; so far, it has been able to identify infections with almost 100% accuracy.

In collaboration with the University of Szeged, the University of Helsinki and Single-Cell Technologies Kft., the SBRC developed a new serological (blood) test to screen for the coronavirus.

The test identifies both currently infected and already recovered patients and calculates the degree of the patient's immunity with exceptional accuracy. Thousands of tests already completed have been almost 100% accurate. The test has not yet produced any false positive results. The new technology enables the completion of five to ten thousand tests per day.

Coronavirus: Szeged Research Team Identifies New COVID-19 Receptor

The technology behind the test relies on the identification of the immunoglobulins produced by the patient's own body. These proteins build up quickly after the contraction of the disease, and they stay in the blood of those recovered for months. The test involves adding the blood sample to cells, which are then studied by the AI; the system uses deep learning to train itself to detect the immunoglobulins more precisely.
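
The article does not describe the SBRC model itself, but a generic deep-learning image classifier of the kind described would look roughly like the PyTorch sketch below; the random tensors stand in for labeled microscope images, and the architecture is illustrative only:

```python
# Sketch: a small CNN for classifying microscope patches as "antibody signal" vs "no signal".
# This is NOT the SBRC model; layers, shapes, and the random stand-in data are illustrative.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 2),          # two classes: signal / no signal
)

images = torch.randn(8, 1, 64, 64)       # stand-in for 64x64 grayscale cell patches
labels = torch.randint(0, 2, (8,))       # stand-in annotations

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
for _ in range(5):                        # a handful of toy training steps
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
print("toy training loss:", loss.item())
```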

The completion of the test requires six to eight hours. Furthermore, it is relatively cost effective and it is also capable of identifying the infection in cases of low levels of immune response.

Featured photo illustration by György Varga/MTI.

Follow this link:

Researchers at Szeged Use AI to Screen for Coronavirus - Hungary Today

LitLingo Advocates AI-driven Prevention as the Key to Modernizing the $45B Litigation and Compliance Industry – Business Wire

AUSTIN, Texas--(BUSINESS WIRE)--LitLingo Technologies, a startup utilizing AI/NLP to manage context-driven communication and prevent conduct risk, announces that it has closed a $2 million seed round led by LiveOak Venture Partners. Krishna Srinivasan, Founding Partner at LiveOak, will join the Board of Directors of the company. The funds will be used to expand its product and engineering teams in order to accelerate growth.

LitLingo was formed in 2019 to develop a new approach to help legal and compliance executives and operational leaders prevent unforced errors in communications and allow companies to enhance value in employee interactions. To solve this challenge, LitLingo developed a machine-learning platform and proprietary, out-of-the-box models focused on training and prevention. LitLingo's approach differs from existing solutions in that it offers the ability to encode existing policies and best practices, enforce them in real-time, and provide corrective action to users prior to the creation of written material that could result in adverse consequences. The company believes that real-time prevention is the key to disrupting the $45B risk & compliance industry.
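
LitLingo's actual models are not described here, but the idea of checking a draft message against encoded policies in real time, before it is sent, can be illustrated with a simple rule-based sketch; the rules and the sample draft are invented, and a real system would use trained NLP models rather than regexes:

```python
# Sketch: flag risky phrasing in a draft message before it is sent and suggest a rewrite.
# The rules below are invented examples; real systems like LitLingo's are ML-based, not this simple.
import re

POLICY_RULES = [
    (re.compile(r"\bguarantee(d)?\b", re.I), "avoid promising outcomes; consider 'we expect'"),
    (re.compile(r"\bkill the competition\b", re.I), "competition language can read as anticompetitive"),
    (re.compile(r"\boff the record\b", re.I), "written messages are never off the record"),
]

def review_draft(message: str) -> list:
    """Return a list of warnings for a draft message; an empty list means no rule fired."""
    return [hint for pattern, hint in POLICY_RULES if pattern.search(message)]

draft = "We guarantee this feature will kill the competition."
for warning in review_draft(draft):
    print("heads up:", warning)
```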

"The traditional solutions to mitigating legal, compliance, or cultural risks with employee communications are retroactive and expensive - engaging outside counsel, hiring more lawyers, company-wide quarterly trainings - we'd like to flip that paradigm on its head and help our customers prevent risk before it is created," said Kevin Brinig, co-founder and CEO of LitLingo. "Fewer HR issues, stronger culture, and improved compliance are all byproducts of the LitLingo solution. If a company can prevent a single lawsuit or regulatory action on its own recognizance, it avoids millions of dollars in costs." He added, "It's counterintuitive, but our favorite analogy is: the average speed of a racecar goes up when you improve the brakes. We'd like all our customers to achieve that."

LitLingo has already helped several companies optimize their business communications while in stealth mode. The company leverages integrations across several email, office chat, and customer service ticketing platforms.

The LitLingo team has deep expertise in NLP/AI, risk management, fraud/waste/abuse, and product development from their careers in the sharing economy, autonomous vehicles, risk management consulting, and the healthcare industries. They have combined all of this expertise in order to benefit the corporate risk industry.

"The inflection point we see within AI and Natural Language Understanding (NLU) offers an incredible opportunity to create solutions that are remarkably powerful and incredibly cost effective. We're entering a new era with what AI can do in areas previously thought impossible," said co-founder and CTO Todd Sifleet.

As LiveOak's Srinivasan noted, "We have seen first-hand the importance of better written communication to drive down risks of litigation, improve compliance and operational KPIs and importantly elevate the overall cultural tone in an organization. Enabling that in real time with a delightful user experience is an incredibly hard challenge. Kevin and Todd with their deep product technology capabilities and background are uniquely qualified to tackle this problem. They have tapped into a rich vein of demand that is particularly relevant for these times. As such, we are really enthusiastic about building a successful company in this arena."

About LitLingo

LitLingo helps organizations minimize risks associated with electronic communications. By providing AI-powered monitoring, prevention, and training solutions in real-time across the industry-leading communication channels, LitLingo allows customers to target known risks, identify blind spots, and maximize the productivity of their workforce. The company provides out-of-the-box or custom-tailored models relating to litigation and compliance risk mitigation, the promotion of inclusive culture, and customer service optimization. Founded in 2019, LitLingo is headquartered in Austin, Texas. For more information, visit http://www.litlingo.com.

About LiveOak Venture Partners

LiveOak Venture Partners is a venture capital fund based in Austin, Texas. With 20 years of successful venture investing in Texas, the founders of LiveOak have helped create nearly $2 billion of enterprise value. While almost all of LiveOak's investments begin at the Seed and Series A stages, LiveOak is a full life cycle investor focused on helping create category leading technology and technology-enabled service companies headquartered in Texas. LiveOak Venture Partners has been the lead investor in over 30 exciting high-growth Texas-based companies in the last seven years including ones such as CS Disco, Digital Pharmacist, OJO Labs, Opcity and TrustRadius.

More here:

LitLingo Advocates AI-driven Prevention as the Key to Modernizing the $45B Litigation and Compliance Industry - Business Wire

A ‘potentially deadly’ mushroom-identifying app highlights the dangers of bad AI – The Verge

There's a saying in the mushroom-picking community that all mushrooms are edible, but some mushrooms are only edible once.

That's why, when news spread on Twitter of an app that used "revolutionary AI" to identify mushrooms with a single picture, mycologists and fungi-foragers were worried. They called it "potentially deadly," and said that if people used it to try and identify edible mushrooms, they could end up very sick, or even dead.

Part of the problem, explains Colin Davidson, a mushroom forager with a PhD in microbiology, is that you can't identify a mushroom just by looking at it. "The most common mushroom near me is something called the yellow stainer," he told The Verge, "and it looks just like an edible horse mushroom from above and the side." But if you eat a yellow stainer there's a chance you'll be violently ill or even hospitalized. "You need to pick it up and scratch it or smell it to actually tell what it is," explains Davidson. "It will bruise bright yellow or it will smell carbolic."

And this is only one example. There are plenty of edible mushrooms with toxic lookalikes, and when identifying them you need to study multiple angles to find features like gills and rings, while considering things like whether recent rainfall might have discolored the cap or not. Davidson adds that there are plenty of mushrooms that live up to their names, like the destroying angel or the death cap.

"One eighth of a death cap can kill you," he says. "But the worst part is, you'll feel sick for a while, then you might feel better and get on with your day, but then your organs will start failing. It's really horrible."

The app in question was developed by Silicon Valley designer Nicholas Sheriff, who says it was only ever intended to be used as a rough guide to mushrooms. When The Verge reached out to Sheriff to ask him about the app's safety and how it works, he said the app "wasn't built for mushroom hunters, it was for moms in their backyard trying to ID mushrooms." Sheriff added that he's currently pivoting to turn the app into a platform for chefs to buy and sell truffles.

When we tried the iOS-only software this morning, we found that Sheriff had changed its preview picture on the App Store to say "identify truffles instantly with just a pic." However, the name of the app remains "Mushroom Instant Mushroom Plants Identification," and the description contains the same claim that so worried Davidson and others: "Simply point your phone at any mushroom and snap a pic, our revolutionary AI will instantly identify mushrooms, flowers, and even birds."

In our own tests, though, the app was unable to identify either common button or chestnut mushrooms, and crashed repeatedly. Motherboard also tried the app and found it couldn't identify a shiitake mushroom. Sheriff says he is planning on adding more data to improve the app's precision, and tells The Verge that his intention was never to try and replace experts, but to supplement their expertise.

And, of course, if you search the iOS or Android app stores, you'll find plenty of mushroom identifying apps, most of which are catalogues of pictures and text. What's different about this one is that it claims to use machine vision and "revolutionary AI" to deliver its results, terms that seem specifically chosen to give people a false sense of confidence. If you're selling an app to identify flowers, then this sort of language is merely disingenuous; when it's mushrooms you're spotting, it becomes potentially dangerous.

As Davidson says: "I'm absolutely enthralled by the idea of it. I would love to be able to go into a field and point my phone at a mushroom and find out what it is. But I would want quite a lot of convincing that it would be able to work." So far, we're not convinced.

Visit link:

A 'potentially deadly' mushroom-identifying app highlights the dangers of bad AI - The Verge

Elon Musk says AI harbors ‘vastly more risk than North Korea’ – CNET

Technically Incorrect offers a slightly twisted take on the tech that's taken over our lives.

He's worried. Very worried.

The mention of several place-names currently invokes shudders.

Whether it be North Korea, Venezuela or even Charlottesville, Virginia, it's easy to get a shivering feeling that something existentially unpleasant might happen, with North Korea still topping many people's lists.

For Tesla and SpaceX CEO Elon Musk, however, there's something far bigger that should be worrying us: artificial intelligence.

In a Friday afternoon tweet, he offered, "If you're not concerned about AI safety, you should be. Vastly more risk than North Korea."

He accompanied this with a poster of a worried woman and the words, "In the end, the machines will win."

The machines always win, don't they? Look how phones have turned us into neck-craning zombies. And, lo, here was Musk also tweeting on Friday about a bot created by OpenAI -- the nonprofit he backs -- beating real humans at eSports.

Still, Musk thinks humanity can do something to fight the robots.

Indeed, he followed his North Korea message with a renewed call for AI regulation: "Nobody likes being regulated, but everything (cars, planes, food, drugs, etc) that's a danger to the public is regulated. AI should be too."

Musk brought up this idea last month at a meeting of the National Governors Association. On Friday, he explained in the Twitter comments that AI really does pose an immediate threat.

"Biggest impediment to recognizing AI danger are those so convinced of their own intelligence they can't imagine anyone doing what they can't," he tweeted.

You really can't trust humans to do good, even supposedly intelligent humans.

Especially in these times when few appear to agree what good even looks like.

Go here to see the original:

Elon Musk says AI harbors 'vastly more risk than North Korea' - CNET

Hollywood Is Banking That a Robot Named Erica Can Be the Next Movie Star – Variety

She can't get sick or be late to the set, and her hair and makeup needs are minimal: Her name is Erica, and Hollywood is hoping that a sophisticated robot can be its next big star. The synthetic actor has been cast in "b," a $70 million science-fiction movie which producer Sam Khoze describes as "a James Bond meets Mission: Impossible story with heart."

Scribe Tarek Zohdy ("1st Born") says the story is about scientists who create an AI robot named Erica and quickly realize the danger of this top-secret program that is trying to perfect a human through a non-human form.

Variety caught up with the filmmakers Zohdy and Khoze to discuss "b," the $70 million film that plans to finish shooting next year, after a director and human star have been brought on.

Tarek Zohdy: The producers, Sam Khoze and Anoush Sadegh, in association with professors Hiroshi Ishiguro and Kohei Ogawa of Osaka University and the Telecommunication Research Institute, took on the task of training Erica to act.

We wanted to create a story, and we wanted to do it in a revolutionary way. A robot doesn't have life experiences, so they created this persona about those experiences, and we taught her how to act.

We found her to be the most capable of performing as an actor. Erica has the ability for natural interaction with people by integrating various technologies such as voice recognition, human tracking, and natural motion generation. She is almost human. Visually, her human-like appearance made her the best-known candidate to play this character in the movie.

We are artists, and we are artists of color who are able to do something with our art. We want to have a very diverse cast and, as a diverse filmmaking group, I think that is essential.

Sam Khoze: We went through several rewrites. VFX supervisor Eric Pham (Sin City) joined us later to help develop the final version of the story. It's a really beautiful story because, at its heart, Erica's father, who spent his life developing her, wants her to serve humanity and change the way people look at AI and robots.

Khoze: It took about two years. She's 23 years old. She has experience. She goes to the museums once a month to meet people, so she's a fun robot.

Khoze: When we started this project in 2018, we had a director who wasn't comfortable with having VFX in the movie.

We went off and started researching and we found Erica's creator. We started training her and she's been performing flawlessly. She's probably the closest AI ever made to being an artist. We wanted to experiment to see if she would learn acting, and we basically started training her and she performs flawlessly and very well.

We don't want to replace actors with AI, but it's an interesting opportunity for the entertainment industry to look at AI and robots in Hollywood.

By creating her, we've learned she's fully capable of communicating with people and interacting.

What does that mean? We've created this algorithm to digitally preserve people. So, if an actor doesn't want to risk their life, this allows you to create a digital version of that human being, and she has her own personality without anyone needing to program her. So now we're using this algorithm to bring actors to set using this AI technology.

Visit link:

Hollywood Is Banking That a Robot Named Erica Can Be the Next Movie Star - Variety

Will AI-as-a-Service Be the Next Evolution of AI? – Motley Fool

With all the excitement surrounding the advent of artificial intelligence (AI), there are still a great many things we don't know. Could it lead to the frightening futures depicted in films like Ex Machina, Terminator, and 2001: A Space Odyssey, or might we see less threatening iterations like Data on Star Trek: The Next Generation, Samantha in Her, or TARS from Interstellar?

The current reality of AI is much less cinematic -- it possesses the learned ability to sift through reams of data in short order and recognize patterns. This has led to breakthroughs in the areas of image recognition, language translation, and beating humans at the age-old game of Go. Some of the biggest advances are ongoing in the areas of medical imaging, cancer research, and self-driving cars.

Still, with plenty of developments thus far, it's hard to know what will be the next groundbreaking application of the technology.

Will AI-as-a-Service be the next killer app? Image source: Getty Images.

Small Canadian start-up Element AI believes it has the answer: It wants to democratize AI by offering "AI-as-a-Service" to businesses that can't afford to develop the systems themselves. Tech giants Microsoft Corp. (NASDAQ:MSFT), Intel Corp. (NASDAQ:INTC), and NVIDIA Corp. (NASDAQ:NVDA) believe that Element is on the right track and have invested millions to back up that belief.

Currently, AI requires massive quantities of data in order to train the system. Element AI wants to improve on this by reducing the size of the data sets required, which would make the technology accessible to a wider range of businesses, not just those with massive budgets. Element is building on the AI practice of leveraging previously trained systems: by starting from a model trained on large data sets and then introducing smaller data sets, the system applies what it learned previously to the new data.
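What the article describes, starting from a system trained on large data sets and adapting it with much smaller ones, is essentially transfer learning. The sketch below illustrates that general pattern in PyTorch; it is a hypothetical example with invented data sizes and class counts, not Element AI's actual code or model.

```python
# A minimal sketch of the "train big once, fine-tune on less data" idea
# (hypothetical illustration; not Element AI's system).
import torch
import torch.nn as nn
from torchvision import models

# Start from a network already trained on a large, generic dataset (ImageNet).
model = models.resnet18(pretrained=True)

# Freeze the previously learned layers so their knowledge is reused, not retrained.
for param in model.parameters():
    param.requires_grad = False

# Replace only the final layer for the new, narrower task.
num_classes = 5  # assumption: the customer's smaller dataset has 5 labels
model.fc = nn.Linear(model.fc.in_features, num_classes)

# Only the new layer is optimized, so far less labeled data is needed.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One illustrative training step on a small (here random) batch.
images = torch.randn(8, 3, 224, 224)   # stand-in for a client's modest dataset
labels = torch.randint(0, num_classes, (8,))
optimizer.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
```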

Element is currently working on a consulting basis with a very small group of large companies that want to leverage AI without developing the systems in-house. In this way, the company can strategically choose its initial customers and train its systems on the larger data sets, which it will later leverage for smaller clients.

The major players investing in AI have primarily been applying the tech to augment their principal businesses. Microsoft has used the technology to improve its Bing search and to power its Cortana virtual assistant, and has built AI into its Azure cloud computing services. Intel has been working to develop an AI-based CPU and has made numerous acquisitions in the field, hoping to get a leg up.

NVIDIA is the only one to date that has been able to quantify the value of AI to its business, as its GPUs have been used to accelerate the training of AI systems. In its most recent quarter, NVIDIA saw revenue of $1.9 billion, which grew 48% year over year, on the back of a 186% increase in its AI-centric data-center revenue.

Element is providing a novel approach to the AI trend. Image source: Pixabay.

Still, none has emerged as a pure play selling AI-as-a-Service. Element hopes to change that by being the first company of its kind to provide predictive modeling, recommendation systems, and consumer-engagement optimization, available to any business, without their having to start their AI efforts from scratch. Providing access to experts in the field who can analyze a business and determine how best to apply AI to solve specific problems will prove beneficial to a wide range of companies without their own AI resources. By filling that void, Element AI hopes to make its mark.

International Business Machines Corp. (NYSE:IBM) provides the closest example, pivoting from its legacy hardware and consulting businesses to selling cloud and cognitive computing solutions via its AI-based Watson supercomputer. Thus far, these newer growth technologies haven't been able to compensate for the shortfall in its legacy business, though the company is applying AI to a wide variety of business processes and has assembled an impressive array of big-name partners. By casting its net into cybersecurity, tax preparation, and a variety of healthcare-related applications, IBM hopes to capitalize on this emerging trend.

It is still early days in AI research and technology, and how the future plays out is yet to be determined. Element AI is taking a unique approach -- and the backing of these three godfathers of tech shows that it might be on the right track.

Teresa Kersten is an employee of LinkedIn and is a member of The Motley Fool's board of directors; LinkedIn is owned by Microsoft. Danny Vena has the following options: long January 2018 $25 calls on Intel. The Motley Fool owns shares of and recommends Nvidia. The Motley Fool recommends Intel. The Motley Fool has a disclosure policy.

Read the original post:

Will AI-as-a-Service Be the Next Evolution of AI? -- The Motley Fool - Motley Fool

Baidu Leads the Way in Innovation With 5,712 AI Patent Applications – AiThority

Baidu, Inc. has filed the most AI-related patent applications in China, a recognition of the company's long-term commitment to driving technological advancement, a recent study from the research unit of China's Ministry of Industry and Information Technology (MIIT) has shown.

Baidu filed a total of 5,712 AI-related patent applications as of October 2019, ranking No.1 in China for the second consecutive year. Baidus patent applications were followed by Tencent (4,115), Microsoft (3,978), Inspur (3,755), and Huawei (3,656), according to the report issued by the China Industrial Control Systems Cyber Emergency Response Team, a research unit under the MIIT.

"Baidu retained the top spot for AI patent applications in China because of our continuous research and investment in developing AI, as well as our strategic focus on patents," said Victor Liang, Vice President and General Counsel of Baidu.

"In the future, we will continue to increase our investments into securing AI patents, especially for high-value and high-quality patents, to provide a solid foundation for Baidu's AI business and for our development of world-leading technology," he said.

The report showed that Baidu is the patent application leader in several key areas of AI. These include deep learning (1,429), natural language processing (938), and speech recognition (933). Baidu also leads in the highly competitive area of intelligent driving, with 1,237 patent applications, a figure that surpasses leading Chinese universities and research institutions, as well as many international automotive companies. With the launch of the Apollo open source autonomous driving platform and other intelligent driving innovations, Baidu has been committed to pioneering the intelligent transformation of the mobility industry.

After years of research, Baidu has developed a comprehensive AI ecosystem and is now at the forefront of the global AI industry. Moving forward, Baidu will continue to conduct research in the core areas of AI, contribute to scientific and technological innovation in China, and actively push forward the application of AI into more vertical industries. Baidu is positioned to be a global leader in a wave of innovation that will transform industries.

Read the original here:

Baidu Leads the Way in Innovation With 5,712 AI Patent Applications - AiThority

5 pitfalls AI healthcare start-ups need to avoid – pharmaphorum

Artificial intelligence (AI) has now moved beyond its initial hype towards becoming a key part of the pharma industry with many companies looking to partner with AI drug discovery start-ups.

Pharma and healthcare are data-rich industries and AI helps by turning data into actionable insights, allowing us to solve complex, intricate problems. Using machine learning, AI algorithms can generate patterns that will enable us to predict toxicity, find potential combination treatments, identify and predict new drugs and expand usage of current drugs in other diseases.

However, only a handful of companies from the swarm of AI start-ups have successfully gained traction within the pharma industry.

Over the past month, I've had conversations with several growing AI drug discovery companies and have analysed some critical strategic shortcomings that can frustrate the upward journey of these start-ups:

1) Living in a technology bubble and failing to understand the language of pharma

AI cannot be built in isolation without understanding the nuances and the complexity of the business needs it will address. A constant challenge faced by all AI drug discovery companies is sanitisation of the unstructured but useful data in life sciences.

Drug discovery is inherently a high-risk endeavour due to poorly understood mechanisms of diseases and lack of negative data (on experiments that do not work) in the public domain.

With reproducibility a persistent issue in the life sciences, nuances such as the validation of experimental data, complex networks of interacting proteins and poor-quality publications add another layer of difficulty.

Companies will spend much of their time filtering the essential insights from a pile of useless data.

Diversifying the team and having pharma executives on board who can understand the relevance of a dataset and the ontologies created by the platform is critical to success.

Without this thorough understanding of the domain, start-ups often fail to attract investors.

The AI drug discovery ecosystem has been only partially successful in addressing this challenge so far. AI depends critically on the quality and standardisation of the data used. "Garbage in, garbage out" holds true for AI.

2) Prioritising technology over business strategy

AI alone is not enough to succeed. A start-up at the interface of technology and pharma needs a solid business strategy to thrive.

Often these start-ups focus on how powerful and transformative their platform is and lack an overall business workflow. Most start-ups struggle to find the right positioning in the market, whether that's providing data analysis as a service, licensing the AI platform, acting as a consultant, or developing new drugs or repurposing existing ones, either alone or in partnership.

Only a handful of these start-ups have internal discovery programmes, while the rest of the companies struggle to find the right business model, often taking a mix of several approaches.

Narrowing down the business strategy while understanding the core strengths early in the journey can address challenges of efficient resource allocation. Likewise, identifying target markets and an effective sales and marketing approach can help clearly define success criteria with defined milestones in the journey.

3) Getting stuck in a never-ending development loop

AI is very expensive to build and maintain, owing to the need for big data and computational infrastructure. The scarcity of data scientists with domain knowledge in pharma and the large overheads required to hire them adds to the cost.

Many start-ups keep working on improving prediction scores with more data and ultimately end up realising that it only adds marginal value for the customer. The benefits of moving to market quickly often outweigh the downsides: with early entry, companies can get immediate and frequent feedback from the end users of the platform, and they can work with academic and industry end users early in the journey to validate the platforms and generate the required proof of concept.

Rather than staying in a continuous development loop, taking one application to market while you build the rest is a stronger approach.

4) Assuming customers are technologically savvy

Developers should keep the end user in mind when designing the UI/UX of the platform while still addressing the key technical components.

End users in pharma are often biologists with minimal exposure to developer tools. While some computational biologists write scripts, most drug discovery scientists are used to one-click software. Moreover, there are certain standard ways of plotting data that scientists are accustomed to, owing to the requirements of international peer-reviewed journals.

An easy, graphical results sheet can communicate information better than the strings and scores preferred by software developers.

5) Taking a do-it-all approach over finding a niche

Start-ups often get starry eyed about the capabilities of their platforms and want to do everything.

The majority of these algorithms can be repurposed for diverse functions depending on the training data and required outcomes. As the Russian proverb goes, "If you chase two rabbits, you will not catch either one."

Building new functions requires resources and attention to detail. Start-ups should begin by pioneering one functionality and then building in other directions. Prioritise offerings based on core strengths, get client feedback and revenue flowing and then continue building on other applications.

While a solid foundation has been laid, the AI community will have to be patient and not interpret pharma's conservatism as a lack of interest. It needs to engage continuously with biopharma leaders and regulatory bodies to build a multi-dimensional strategy involving innovation, privacy, compliance, standardisation and behavioural change management.

About the author

Amandeep Singh is a life science consultant at MP Advisors.

View original post here:

5 pitfalls AI healthcare start-ups need to avoid - pharmaphorum

AI technology will soon replace error-prone humans all over the world but here’s why it could set us all free – Gulf Today

It has been oft-quoted, albeit humorously, that the ideal of medicine is the elimination of the physician. The emergence and encroachment of artificial intelligence (AI) on the field of medicine, however, lends an inconvenient truth to the aforementioned witticism. Over the span of their professional lives, a pathologist may review 100,000 specimens, a radiologist even more; AI can perform this undertaking in days rather than decades.

Visualise your last trip to an NHS hospital; the experience was either one of romanticism or repudiation: the hustle and bustle in the corridors, or the agonising waiting time in A&E; the empathic human touch, or the dissatisfaction of a rushed consultation; a seamless referral, or delays and cancellations.

Contrary to this, our experience of hospitals in the future will be slick and uniform; the human touch all but erased and cleansed in favour of complete and utter digitalisation. Envisage an almost automated hospital: cleaning droids, self-portered beds, medical robotics. "Fiction of today is the fact of tomorrow" doesn't quite apply in this situation, since all of the above-mentioned AI currently exists in some form or other. But then, what comes of the antiquated, human doctor in our future world? Well, they can take consolation that their unemployment would be part of a global trend: the creation displacing the creator, mechanisation of the workforce leading to mass unemployment. This analogy of our friend, the doctor, speaks volumes; medicine is cherished for championing human empathy, and if doctors aren't safe, nobody is. The solution: socialism.

Open revolt against machinery seems a novel concept set in some futuristic dystopian land, though the reality can be found in history: the Luddites of Nottinghamshire, a radical faction of skilled textile workers protecting their employment through machine destruction and riots during the industrial revolution of the early 19th century. The now-satirised term "Luddite" may be more appropriately directed at your father's fumbled attempt at unlocking his iPhone than at a militia.

What lessons are to be learnt from the Luddites? Much. Firstly, the much-fictionalised fight for dominance between man and machine is just that: fictionalised. The real fight is within mankind. The Luddites' fight was always against the manufacturer, not the machine; machine destruction simply acted as the receptacle of dissidence. Secondly, government feeling towards the Luddites is exemplified by the 12,000 British soldiers deployed against them, far exceeding the personnel deployed against Napoleon's forces in the Iberian Peninsula in the same year.

Though history provides clues, the future struggle against AI and its wielders will be tangibly different from the Luddite struggle of the early 19th century; next time, it's personal, it's about soul. Our higher cognitive faculties will be replaced: the diagnostic expertise of the doctor, the decision-making ability of the manager, and (if we're lucky) political matters too.

The monopolising of AI will lead to mass unemployment and mass welfare, reverberating globally. AI efficiency and efficacy will soon replace the error-prone human. It must be the case that AI is to be socialised and the means of production, the AI, redistributed: in other words, brought under public ownership. Perhaps co-operative groups made up of experienced individuals will emerge to undertake managerial functions in their previous, now automated, workplaces. Whatever the structure, such an undertaking will require the full intervention of the state, on a moral basis not realised in the Luddite struggle.

An economic system of nationalised AI labour, with machinery performing laborious as well as lively tasks, shan't be feared. This economic model, one of abundance, provides a platform for the fullest creative expression and artistic flair of mankind. Humans can pursue leisurely passions. Imagine the doctor dedicating generous amounts of time to the golf course, the manager pursuing artistic talents. And what of the politician? Well, that's anyone's guess.

An abundance economy is one of sustenance rather than subsistence; initiating an old form of socialism fit for a futuristic age. AI will transform the labour market by destroying it; along with the feudalistic structure inherent to it.

Thought-provoking questions do arise: what is to become of human aspiration? What exactly will it mean to be human in this world of AI?

Ironically, perhaps it will be the machine revolution that gives us the resolution to age-old problems in society.

More:

AI technology will soon replace error-prone humans all over the world but here's why it could set us all free - Gulf Today

Will Artificial Intelligence Ever Live Up to Its Hype? – Scientific American

When I started writing about science decades ago, artificial intelligence seemed ascendant. IEEE Spectrum, the technology magazine for which I worked, produced a special issue on how AI would transform the world. I edited an article in which computer scientist Frederick Hayes-Roth predicted that AI would soon replace experts in law, medicine, finance and other professions.

That was in 1984. Not long afterward, the exuberance gave way to a slump known as an AI winter, when disillusionment set in and funding declined. Years later, doing research for my book The Undiscovered Mind, I tracked Hayes-Roth down to ask how he thought his predictions had held up. He laughed and replied, "You've got a mean streak."

AI had not lived up to expectations, he acknowledged. Our minds are hard to replicate, because we are very, very complicated systems that are both evolved and adapted through learning to deal well and differentially with dozens of variables at one time. Algorithms that can perform a specialized task, like playing chess, cannot be easily adapted for other purposes. "It is an example of what is called nonrecurrent engineering," Hayes-Roth explained.

That was 1998. Today, according to some measures, AI is booming once again. Programs such as voice and face recognition are embedded in cell phones, televisions, cars and countless other consumer products. Clever algorithms help me choose a Christmas present for my girlfriend, find my daughter's building in Brooklyn and gather information for columns like this one. Venture-capital investments in AI doubled between 2017 and 2018 to $40 billion, according to WIRED. A Price Waterhouse study estimates that by 2030 AI will boost global economic output by more than $15 trillion, more than the current output of China and India combined.

In fact, some observers fear that AI is moving too fast. New York Times columnist Farhad Manjoo calls an AI-based reading and writing program, GPT-3, "amazing," "spooky," "humbling" and "more than a little terrifying." Someday, he frets, he might be put out to pasture by a machine. Neuroscientist Christof Koch has suggested that we might need computer chips implanted in our brains to help us keep up with intelligent machines.

Elon Musk made headlines in 2018 when he warned that superintelligent AI, much smarter than we are, represents "the single biggest existential crisis that we face." (Really? Worse than climate change? Nuclear weapons? Psychopathic politicians? I suspect that Musk, who has invested in AI, is trying to promote the technology with his over-the-top fearmongering.)

Experts are pushing back against the hype, pointing out that many alleged advances in AI are based on flimsy evidence. Last January, for example, a team from Google Health claimed in Nature that their AI program had outperformed humans in diagnosing breast cancer. In October, a group led by Benjamin Haibe-Kains, a computational genomics researcher, criticized the Google Health paper, arguing that the "lack of details of the methods and algorithm code undermines its scientific value."

Haibe-Kains complained to Technology Review that the Google Health report is "more an advertisement for cool technology" than a legitimate, reproducible scientific study. The same is true of other reported advances, he said. Indeed, artificial intelligence, like biomedicine and other fields, has become mired in a replication crisis. Researchers make dramatic claims that cannot be tested, because researchers, especially those in industry, do not disclose their algorithms. One recent review found that only 15 percent of AI studies shared their code.

There are also signs that investments in AI are not paying off. Technology analyst Jeffrey Funk recently examined 40 start-up companies developing AI for health care, manufacturing, energy, finance, cybersecurity, transportation and other industries. Many of them were "not nearly as valuable to society as all the hype would suggest," Funk reports in IEEE Spectrum. Advances in AI are unlikely to be nearly as disruptive, for companies, for workers, or for the economy as a whole, as many observers have been arguing.

Science reports that core progress in AI has stalled in some fields, such as information retrieval and product recommendation. A study of algorithms used to improve the performance of neural networks found no clear evidence of performance improvements over a 10-year period.

The longstanding goal of general artificial intelligence, possessing the broad knowledge and learning capacity to solve a variety of real-world problems, as humans do, remains elusive. "We have machines that learn in a very narrow way," Yoshua Bengio, a pioneer in the AI approach called deep learning, recently complained in WIRED. "They need much more data to learn a task than human examples of intelligence, and they still make stupid mistakes."

Writing in The Gradient, an online magazine devoted to tech, AI entrepreneur and writer Gary Marcus accuses AI leaders as well as the media of exaggerating the field's progress. AI-based autonomous cars, fake-news detectors, diagnostic programs and chatbots have all been oversold, Marcus contends. He warns that if and when the public, governments, and the investment community recognize that they have been sold an unrealistic picture of AI's strengths and weaknesses that doesn't match reality, a new AI winter may commence.

Another AI veteran and writer, Erik Larson, questions the myth that one day AI will inevitably equal or surpass human intelligence. In The Myth of Artificial Intelligence: Why Computers Can't Think the Way We Do, scheduled to be released by Harvard University Press in April, Larson argues that success with narrow applications gets us not one step closer to general intelligence.

Larson says the actual science of AI (as opposed to the pseudoscience of Hollywood and science fiction novelists) has uncovered "a very large mystery at the heart of intelligence, which no one currently has a clue how to solve." Put bluntly: all evidence suggests that human and machine intelligence are radically different. And yet the myth of inevitability persists.

When I first started writing about science, I believed the myth of AI. One day, surely, researchers would achieve the goal of a flexible, supersmart, all-purpose artificial intelligence, like HAL. Given rapid advances in computer hardware and software, it was only a matter of time. And who was I to doubt authorities like Marvin Minsky?

Gradually, I became an AI doubter, as I realized that our minds, in spite of enormous advances in neuroscience, genetics, cognitive science and, yes, artificial intelligence, remain as mysterious as ever. Here's the paradox: machines are becoming undeniably smarter (and humans, it seems lately, more stupid), and yet machines will never equal, let alone surpass, our intelligence. They will always remain mere machines. That's my guess, and my hope.

Further Reading:

How Would AI Cover an AI Conference?

Do We Need Brain Implants to Keep Up with Robots?

The Many Minds of Marvin Minsky (R.I.P.)

The Singularity and the Neural Code

Who Wants to Be a Cyborg?

Mind-Body Problems

Continued here:

Will Artificial Intelligence Ever Live Up to Its Hype? - Scientific American

SimCam 1S security camera review: Superb AI features are the highlight of this affordable camera – TechHive

The SimCam 1S is a relative rarity: a home security camera chockablock with advanced AI features, no cloud subscription requirement for them to work, and a modest price tag. That it works as well as it does almost seems like gravy. The 1S is not only a big improvement over the original Kickstarter-funded SimCam, its features and performance put it in a league with cameras from leading (and more expensive) brands such as Nest.

The SimCam 1S looks nearly identical to the original SimCam I reviewed at its launch. A spherical body houses a 1080p camera with a 120-degree field of view. An LED indicator is set into the face of the camera above the lens, and a light sensor and microphone are beneath that. Ten infrared LED emitters ringing the lens provide up to 50 feet of night vision. A speaker takes up much of the back of the camera, and beneath that is a panel concealing a MicroSD card slot and a reset button.

The camera body can rotate 360 degrees and tilt 22 degrees on its base when you pan it or enable the automatic tracking feature. On the back of the camera are the power cord port and a slot to attach the wall mount. The camera can be used outside, but it carries only an IP54 rating, so you might want to keep it away from too much dust and direct water exposure (click this link for our in-depth explanation of IP codes).

The 1S includes person, animal, and vehicle detection.

Also like the original SimCam, the SimCam 1S's main attraction is its system of AI-powered smart alerts. The camera can detect people at up to 60 feet, vehicles at up to 20 feet, and animals at up to 10 feet. It can also recognize faces at up to 18 feet. You can further hone detection by setting activity zones and object monitoring areas.

The SimCam 1S performs all AI processing on board, so the manufacturer doesn't burden users with the additional ongoing cost of a cloud subscription. That means all event-detected video clips are stored locally as well. The camera comes with a 16GB microSD card installed, but it supports cards up to 128GB.

The camera works with Amazon Alexa and Google Assistant, so you can use voice commands to view your feed on smart displays that support those digital assistants. Additionally, you can automate many of the SimCam 1S's features and integrate it with other smart devices using IFTTT applets.

The SimCam 1S has excellent image quality and its companion app enables easy control of its AI features.

SimCam's companion app walks you through the setup, but it was still one of the more bothersome I've encountered. That's largely because it starts with having to access the camera's reset button, which is secured behind a panel on the back of the camera stand. You must unscrew and remove this panel and then press the recessed button for five seconds to begin the connection process. SimCam provides a small Allen wrench and a reset pin for this task, but I still had to employ a butter knife to pry off the panel (users with long fingernails might fare better with this step).

It gets easier after that. Once you've pressed the reset button, you're asked to log in to your Wi-Fi and scan a QR code in the app, with the camera's voice prompt providing status updates. The SimCam 1S connected immediately and was up and running in minutes.

The camera appears on the SimCam app's home screen as a still shot of its current view, overlaid with a Play button to activate its live stream. Buttons beneath this are used to activate privacy mode, turn off motion detection and Wi-Fi, and access the camera's other settings. The Settings menu is the first place you should go, as this is where you turn on the various forms of AI detection and automatic tracking, choose a video clip length (15, 30, or 60 seconds), set a working schedule (the camera is active 24/7 by default), and set up activity zones and object monitoring areas.

Enabling the various detection features is as simple as toggling a switch in the app for each of them. Facial recognition requires you to enter familiar faces into a database by taking four pictures of a person's visage (two frontal and a three-quarter profile of the left and right sides of their face) and then entering their name and role (Mother, Father, Visitor, and so on). To create activity zones, you place at least three points on the screen by pressing your finger on it; the app automatically connects the dots into a shape and will detect activity only within those areas. Object monitoring operates in a similar fashion, but you simply create a bounding box over the object you wish to monitor by dragging your finger over it. After that, the app will notify you if the object moves or is moved from that area. You can also opt to have activity zones and monitoring areas visible in the live feed.
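For readers curious how a zone defined by a few user-placed points can gate alerts, the sketch below shows one common way such checks are implemented: a point-in-polygon test for activity zones and a box-overlap test for object monitoring. It is a generic illustration with made-up coordinates, not SimCam's actual app or firmware code.

```python
# Generic sketch of polygonal activity zones and object-monitoring boxes
# (illustration of the concept only; not SimCam's implementation).

def point_in_polygon(x, y, polygon):
    """Ray-casting test: is point (x, y) inside the polygon (list of (x, y) vertices)?"""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Count crossings of a horizontal ray extending to the right of the point.
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def boxes_overlap(a, b):
    """Do two (left, top, right, bottom) boxes intersect?"""
    return not (a[2] < b[0] or b[2] < a[0] or a[3] < b[1] or b[3] < a[1])

# An activity zone defined by user-placed points (pixel coordinates).
activity_zone = [(100, 80), (400, 90), (380, 300), (120, 310)]

# A detected person's position: alert only if it falls inside the zone.
person_position = (250, 200)
if point_in_polygon(*person_position, activity_zone):
    print("Motion inside activity zone: send alert")

# An object-monitoring area: alert if a detected object's box leaves it.
monitored_area = (500, 100, 700, 250)          # user-drawn bounding box
detected_object_box = (720, 110, 800, 240)     # object's current detection box
if not boxes_overlap(monitored_area, detected_object_box):
    print("Monitored object has left its area: send alert")
```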

While you can view the camera's feed in the home screen's streaming pane, you must enter full-screen mode to see the camera's controls. Across the bottom are buttons for muting the speaker, recording video and taking screenshots, triggering the camera's microphone, and sounding the camera's siren to ward off an intruder.

You can pan the camera using swiping gestures on your phone's screen. Each swipe only moves the camera a few inches, though, and there's no way to pan continuously. That makes using this feature a slow and somewhat noisy affair, as the camera's motor is audible with each swipe.

You can easily add familiar faces to the SimCam's database to have them identified in alerts.

That small criticism was really the only shortcoming I encountered. The SimCam 1S's smart detection and alerts worked well in my tests, as did automatic tracking. I created an object monitoring area around my dog's bed so that I could be alerted when he strayed from it and use the two-way talk feature to tell him to return. It worked perfectly. The camera's image quality was excellent, displaying bright, accurate color in day mode and strong contrast and illumination in night mode.

I found the SimCam 1S to be a much more consistent performer than the original SimCam. Though its panning feature could be improved, its AI functions are top-notch and this camera is a steal at its current retail price. All that plus superb image quality makes it easy to recommend.

Read the original:

SimCam 1S security camera review: Superb AI features are the highlight of this affordable camera - TechHive

Riiid raises $41.8 million to expand its AI test prep apps – VentureBeat

Riiid, a Seoul, South Korea-based startup developing AI test prep solutions, today closed a $41.8 million pre-series D financing round, bringing its total venture capital raised to date to $70.2 million. CEO YJ Jang says the funding will be used to advance Riiid's technology, which offers personalized study solutions based on big data analysis, and to bolster the company's expansion across the U.S., South America, and the Middle East as it establishes an R&D lab, Riiid Labs, in Silicon Valley.

The pandemic has forced the shutdown of schools in countries around the world; cramped indoor classrooms are seen as a major threat vector. Despite inequities with regard to internet access and the widening achievement gap, educators believe the health pros outweigh the cons. Riiid, which offers its services exclusively online, has benefited from the shift. The company claims sales have grown more than 200% since 2017 as over a million users joined its community.

Riiid's platform Santa is a mobile study aid for the Test of English for International Communication (TOEIC) English proficiency exam. (Unlike the better-known TOEFL or IELTS tests, which are used by Western universities and colleges as part of their admissions process, TOEIC is primarily used by employers to assess the English proficiency of prospective hires.) Leveraging AI and machine learning algorithms, Santa analyzes responses to predict scores and recommend personalized review plans. A meta-learning log with over 100 million pools of information supplies insights to support Santa, as well as Riiid's other systems.
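To make the "analyze responses, predict a score, recommend a review plan" loop concrete, here is a deliberately simplified sketch of that kind of pipeline. The features, data, and model are invented for illustration; Riiid's actual algorithms are not described in this article and are certainly more sophisticated.

```python
# Toy sketch of predicting an exam score from practice responses and
# recommending the weakest topic (hypothetical data; not Riiid's algorithm).
import numpy as np
from sklearn.linear_model import LinearRegression

topics = ["listening", "grammar", "reading"]

# Each row: one student's practice accuracy per topic; target: their exam score.
practice_accuracy = np.array([
    [0.90, 0.80, 0.85],
    [0.60, 0.50, 0.55],
    [0.75, 0.70, 0.80],
    [0.95, 0.90, 0.92],
])
exam_scores = np.array([860, 540, 700, 930])   # out of a possible 990

model = LinearRegression().fit(practice_accuracy, exam_scores)

# Predict a new student's score and point them at their weakest topic.
new_student = np.array([[0.85, 0.55, 0.70]])
predicted = model.predict(new_student)[0]
weakest = topics[int(np.argmin(new_student))]
print(f"Predicted score: {predicted:.0f} / 990; recommend reviewing {weakest}")
```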

Santa primarily lives on the web, but it's also available as a chatbot for smart speakers from Kakao. In Japan, Riiid teamed up with game developer KLab Langoo to design a mobile-optimized version of Santa.

According to Jang, the goal is to help achieve learning objectives through continuous evaluation and feedback rather than specific prep. "The engineers called the app Santa because it collects data on student performance in the way that Santa Claus famously keeps track of children's good deeds and bad," he told VentureBeat via email. "We launched Santa in the Korean market, focusing on preparing students for the TOEIC exam, because it was an easy target that would validate or invalidate our research findings. More than a million students in Korea and Japan have now used the Santa app, and I can proudly report that it works. We raise scores by an average of 129 points out of a possible 990 on the TOEIC exam at a fraction of the time and cost it takes with traditional test-prep courses or personal tutors."

After launching Santa in Korea, Japan, and Vietnam, Riiid plans to pivot to backend curriculum solutions for companies, school districts, and education ministries. Earlier this year, the company published EdNet, a data set of all student-system interactions collected over two years by Santa. Riiid is also expanding its platform to standardized tests like the SAT and ACT, and it says it has signed memorandums of understanding with customers in the Middle East and U.S. (for example, private education center company Point Avenue) to develop programs for specific courses of study.

Riiids latest funding round included an investment from the state-run Korea Development Bank (KDB), NVESTOR, and Intervest, as well as from existing investor IMM Investment. According to LinkedIn data, the startup employs about 80 people.

Here is the original post:

Riiid raises $41.8 million to expand its AI test prep apps - VentureBeat

AI in Agriculture market Comprehensive Analysis On Size (Value & Volume), Application | The Climate Corporation Agribotix LLC, Tule Technologies,…

This market report has been structured with the careful use of established and advanced tools such as SWOT analysis and Porter's Five Forces analysis. The meticulous work of skilled forecasters, well-versed analysts and knowledgeable researchers produces a premium AI in Agriculture market research report of this kind. The report helps to unearth general market conditions and existing trends and tendencies in the industry. The market study conducted in this report analyzes market status, market share, growth rate, future trends, market drivers, opportunities and challenges, risks and entry barriers, sales channels, and distributors in the industry.

This AI in Agriculture market report studies the market and the industry thoroughly by considering several aspects. According to this report, the global market is anticipated to observe a moderately higher growth rate during the forecast period. This growth can be attributed to the moves of key players or brands, including developments, product launches, joint ventures, mergers and acquisitions, that in turn change the global face of the industry. With the actionable market insights included in this report, businesses can craft sustainable and cost-effective strategies. The report also provides an all-inclusive study of production capacity, consumption, and imports and exports for all the major regions across the world.

According to the report published by Data Bridge Market Research, the AI in Agriculture market size is expected to reach USD XX billion in the forecast period. The report provides an in-depth study of the AI in Agriculture market using SWOT analysis, i.e., strengths, weaknesses, opportunities and threats to the organization.

Get Exclusive FREE Sample Report with All Related Graphs & Charts @https://www.databridgemarketresearch.com/request-a-sample/?dbmr=global-ai-agriculture-market

Key questions answered in the report:

What is the growth potential of this market?

Which product segment will grab a lion's share?

Which regional market will emerge as a frontrunner in the coming years?

Which application segment will grow at a robust rate?

What are the growth opportunities that may emerge in this industry in the years to come?

What are the key challenges that the global market may face in the future?

Which are the leading companies in the global market?

Which are the key trends positively impacting the market growth?

Which are the growth strategies considered by the players to sustain hold in the global market?

Report includes Competitors' Landscape:

Major trends and growth projections by region and country

Key winning strategies followed by the competitors

Who are the key competitors in this industry?

What shall be the potential of this industry over the forecast tenure?

What are the factors propelling the demand for this Industry?

What are the opportunities that shall aid in significant proliferation of the market growth?

What are the regional and country wise regulations that shall either hamper or boost the demand for this Industry?

How has the covid-19 impacted the growth of the market?

Has the supply chain disruption caused changes in the entire value chain?

Market segmentation

By Offering (Hardware, Software, Service, AI-as-a-Service), By Technology (Predictive Analytics, Machine Learning, Computer Vision), By Application (Livestock Monitoring, Precision Farming, Agriculture Robots, Drone Analytics), By Geography (North America, Europe, Asia-Pacific, South America, Middle East and Africa)

NOTE: Our report highlights the major issues and hazards that companies might come across due to the unprecedented outbreak of COVID-19.

The assessment provides a 360-degree view and insights, outlining the key outcomes of the industry; the current scenario witnesses a slowdown, and the study examines the unique strategies followed by key players. These insights also help business decision-makers formulate better business plans and make informed decisions for improved profitability. In addition, the study helps venture or private players understand the companies more precisely to make better-informed decisions. Some of the key players in the Global AI in Agriculture market are M, Microsoft Corporation, Descartes Labs, Deere & Company, Granular, aWhere, The Climate Corporation, Agribotix LLC, Tule Technologies, Prospera, Mavrx Inc., Cropx, Harvest Croo, Farmbot, Trace Genomics, Spensa Technologies Inc., Resson, Vision Robotics and Autonomous Tractor Corporation, among others.

We can add or profile a new company as per client need in the report; final confirmation will be provided by the research team depending upon the difficulty of the survey.

Why COVID-19 AI in Agriculture Research Insights is Interesting?

This report covers the current slowdown due to the coronavirus and the growth prospects of AI in Agriculture for the period. The study is a professional and in-depth one, with around n number of tables and figures, which provides key statistics on the state of the industry and is a valuable source of guidance and direction for companies and individuals interested in the domain who want to better understand how players are preparing against COVID-19.

Regional Analysis:

This segment of the report covers the analysis of AI in Agriculture consumption, import, export, market value, revenue, market share and growth rate, market status and SWOT analysis, price and gross margin analysis by regions. It includes data about several parameters related to the regional contribution. From the available data, we will identify which area has the largest share of the market. At the same time, we will compare this data to other regions, to understand the demand in other countries. Market analysis by regions:North America (United States, Canada and Mexico), Europe (Germany, France, UK, Russia and Italy), Asia-Pacific (China, Japan, Korea, India and Southeast Asia), South America (Brazil, Argentina, etc.), Middle East & Africa (Saudi Arabia, Egypt, Nigeria and South Africa)

Answers That The Report Acknowledges:

To know more about this research, you can click @https://www.databridgemarketresearch.com/reports/global-ai-agriculture-market

Research objectives:

To study and analyze the global AI in Agriculture market size by key regions/countries, product type and application, history data, and forecast period

To understand the structure of AI in Agriculture market by identifying its various sub segments.

Focuses on the key global AI in Agriculture players, to define, describe and analyze the value, market share, market competition landscape, SWOT analysis and development plans in next few years

To analyze the AI in Agriculture with respect to individual growth trends, future prospects, and their contribution to the total market

To share detailed information about the key factors influencing the growth of the market (growth potential, opportunities, drivers, industry-specific challenges and risks)

To project the size of AI in Agriculture submarkets, with respect to key regions (along with their respective key countries)

To analyse competitive developments such as expansions, agreements, new product launches and acquisitions in the market

To strategically profile the key players and comprehensively analyze their growth strategies

Key Pointers Covered within the Global AI in Agriculture Market Industry Trends and Forecast

And more... Get the detailed free TOC @ https://www.databridgemarketresearch.com/toc/?dbmr=global-ai-agriculture-market

Data Bridge has set itself forth as an unconventional and neoteric market research and consulting firm with an unparalleled level of resilience and integrated approaches. We are determined to unearth the best market opportunities and foster efficient information for your business to thrive in the market.

Contact:

Tel: +1-888-387-2818

Email: Corporatesales@databridgemarketresearch.com

More:

AI in Agriculture market Comprehensive Analysis On Size (Value & Volume), Application | The Climate Corporation Agribotix LLC, Tule Technologies,...

Tech companies are using AI to mine our digital traces – STAT

Imagine sending a text message to a friend. As your fingers tap the keypad, words and the occasional emoji appear on the screen. Perhaps you write, "I feel blessed to have such good friends 🙂" Every character conveys your intended meaning and emotion.

But other information is hiding among your words, and companies eavesdropping on your conversations are eager to collect it. Every day, they use artificial intelligence to extract hidden meaning from your messages, such as whether you are depressed or diabetic.

Companies routinely collect the digital traces we leave behind as we go about our daily lives. Whether we're buying books on Amazon (AMZN), watching clips on YouTube, or communicating with friends online, evidence of nearly everything we do is compiled by technologies that surround us: messaging apps record our conversations, smartphones track our movements, social media monitors our interests, and surveillance cameras scrutinize our faces.

What happens with all that data? Tech companies feed our digital traces into machine learning algorithms and, like modern-day alchemists turning lead into gold, transform seemingly mundane information into sensitive and valuable health data. Because the links between digital traces and our health emerge unexpectedly, I call this practice mining for emergent medical data.

A landmark study published in June, which used AI to analyze nearly 1 million Facebook posts containing more than 20 million words, shows how invasive this practice can be. According to the authors, Facebook status updates can predict many health conditions such as diabetes, hypertension, and gastrointestinal disorders.

Many of the words and phrases analyzed were not related to health. For instance, the presence of diabetes was predicted by religious language, including words related to God, family, and prayer. Those words often appeared in non-medical phrases such as, "I am blessed to spend all day with my daughter."
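The underlying technique is, at its core, text classification: a model learns which words in everyday posts are statistically associated with a label such as a diagnosis. The toy sketch below illustrates that mechanism with invented posts and labels; it is not the study's actual model, data, or code.

```python
# Toy sketch of how word patterns in ordinary posts can be associated with a label
# (invented data purely for illustration; not the study's model or dataset).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = [
    "I am blessed to spend all day with my daughter",
    "Prayers for my family, God is good",
    "Great run this morning before work",
    "Trying a new salad recipe tonight",
]
# Hypothetical labels (1 = condition present, 0 = absent).
labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, labels)

new_post = ["Feeling blessed, spent the evening in prayer with family"]
# Probability the classifier assigns to each class for the new post.
print(model.predict_proba(new_post))
```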

Throughout history, medical information flowed directly from people with health conditions to those who cared for them: their family members, physicians, and spiritual advisers. Mining for emergent medical data circumvents centuries of well-established social norms and creates new opportunities for discrimination and oppression.

Facebook analyzes even the most mundane user-generated content to determine when people feel suicidal. Google is patenting a smart home that collects digital traces from household members to identify individuals with undiagnosed Alzheimer's disease, influenza, and substance use disorders. Similar developments may be underway at Amazon, which recently announced a partnership with the United Kingdom's National Health Service.

Medical information that is revealed to health care providers is protected by privacy laws such as the Health Insurance Portability and Accountability Act (HIPAA). In contrast, emergent medical data receives virtually no legal protection. By mining it, companies sidestep privacy and antidiscrimination laws to obtain information most people would rather not disclose.

Why do companies go to such lengths? Facebook says it performs a public service by mining digital traces to identify people at risk for suicide. Google says its smart home can detect when people are getting sick. Though these companies may have good intentions, their explanations also serve as smoke screens that conceal their true motivation: profit.

In theory, emergent medical data could be used for good. People with undiagnosed Alzheimer's disease could be referred to physicians for evaluation; those with substance use disorders could be served ads for evidence-based recovery centers. But doing so without explicit consent violates individuals' privacy and is overly paternalistic.

Emergent medical data is extremely valuable because it can be used to manipulate consumers. People with chronic pain or substance use disorders can be targeted with ads for illicit opioids; those with eating disorders can be served ads for stimulants and laxatives; and those with gambling disorders can be tempted with coupons for casino vacations.

Informing and influencing consumers with traditional advertising is an accepted part of commerce. However, manipulating and exploiting them through behavioral ads that leverage their medical conditions and related susceptibilities is unethical and dangerous. It can trap people in unhealthy cycles of behavior and worsen their health. Targeted individuals and society suffer while corporations and their advertising partners prosper.

Emergent medical data can also promote algorithmic discrimination, in which automated decision-making exploits vulnerable populations such as children, seniors, people with disabilities, immigrants, and low-income individuals. Machine learning algorithms use digital traces to sort members of these and other groups into health-related categories called market segments, which are assigned positive or negative weights. For instance, an algorithm designed to attract new job candidates might negatively weight people who use wheelchairs or are visually impaired. Based on their negative ratings, the algorithm might deny them access to the job postings and applications. In this way, automated decision-making screens people in negatively weighted categories out of life opportunities without considering their desires or qualifications.

Last year, in a high-profile case of algorithmic discrimination, the Department of Housing and Urban Development (HUD) accused Facebook of disability discrimination when it allowed advertisers to exclude people from receiving housing-related ads based on their disabilities. But in a surprising turn, HUD recently proposed a rule that would make it more difficult to prove algorithmic discrimination under the Fair Housing Act.

Because emergent medical data are mined secretly and fed into black-box algorithms that increasingly make important decisions, they can be used to discriminate against consumers in ways that are difficult to detect. On the basis of emergent medical data, people might be denied access to housing, jobs, insurance, and other important resources without even knowing it. HUDs new rule will make that easier to do.

One section of the rule allows landlords to defeat claims of algorithmic discrimination by identifying the inputs used in the model and showing that these inputs are not "substitutes for a protected characteristic." This section gives landlords a green light to mine emergent medical data because its inputs, our digital traces, have little or no apparent connection to health conditions or disabilities. Few would consider the use of religious language on Facebook a substitute for having diabetes, which is a protected characteristic under the Fair Housing Act. But machine learning is revealing surprising connections between digital traces and our health.

To close gaps in health privacy regulation, Sens. Amy Klobuchar (D-Minn.) and Lisa Murkowski (R-Alaska) introduced the Protecting Personal Health Data Act in June. The bill aims to protect health information collected by fitness trackers, wellness apps, social media sites, and direct-to-consumer DNA testing companies.

Though the bill has some merit, it would put consumers at risk by creating an exception for emergent medical data: one section excludes products on which personal health data is "derived solely from other information that is not personal health data, such as Global Positioning System [GPS] data." If passed, the bill would allow companies to continue mining emergent medical data to spy on people's health with impunity.

Consumers can do little to protect themselves other than staying off the grid. Shopping online, corresponding with friends, and even walking along public streets can expose any of us to technologies that collect digital traces. Unless we do something soon to counteract this trend, we risk permanently discarding centuries of health privacy norms. Instead of healing people, emergent medical data will be used to control and exploit them.

Just as we prohibit tech companies from spying on our health records, we must prevent them from mining our emergent medical data. HUDs new rule and the Klobuchar-Murkowski bill are steps in the wrong direction.

Mason Marks, M.D. is an assistant professor of law at Gonzaga University and an affiliate fellow at Yale Law Schools Information Society Project.

Continued here:

Tech companies are using AI to mine our digital traces - STAT

Google’s Deep Mind Explained! – Self Learning A.I.

Hi, welcome to ColdFusion (formerly known as ColdfusTion). Experience the cutting edge of the world around us in a fun, relaxed atmosphere.

Sources:

Why AlphaGo is NOT an "Expert System": https://googleblog.blogspot.com.au/20...

Inside DeepMind Nature video:https://www.youtube.com/watch?v=xN1d3...

AlphaGo and the future of Artificial Intelligence BBC Newsnight: https://www.youtube.com/watch?v=53YLZ...

http://www.nature.com/nature/journal/...

http://www.ft.com/cms/s/2/063c1176-d2...

http://www.nature.com/nature/journal/...

https://www.technologyreview.com/s/53...

https://medium.com/the-physics-arxiv-...

https://www.deepmind.com/

http://www.forbes.com/sites/privacynotice/2014/02/03/inside-googles-mysterious-ethics-board/#5dc388ee4674

https://medium.com/the-physics-arxiv-...

http://www.theverge.com/2016/3/10/111...

https://en.wikipedia.org/wiki/Demis_H...

https://en.wikipedia.org/wiki/Google_...

Go here to see the original:

Google's Deep Mind Explained! - Self Learning A.I.

AI won’t kill you, but ignoring it might kill your business, experts say … – Chicago Tribune

Relax. Artificial intelligence is making our lives easier, but won't be a threat to human existence, according to a panel of practitioners in the space.

"One of the biggest misconceptions today about autonomous robots is how capable they are," said Brenna Argall, faculty research scientist at the Rehabilitation Institute of Chicago, during a Chicago Innovation Awards eventWednesday.

"We see a lot of videos online showing robots doing amazing things. What isn't shown is the hours of footage where they did the wrong thing," she said. "The reality is that robots spend most of their time not doing what they're supposed to be doing."

The event at Studio Xfinity drew about 200 people, who mingled among tech exhibits before contemplating killer robot overlords.

Stephen Pratt, a former IBM employee who was then responsible for the global implementation of Watson, also was quick to swat down the notion that machines are poised to run the world.

The tech instead gives better ways to improve services, products and business, he said, besting humans in applications dealing with demand predictions, pricing, inventory, retail promotion, logistics and preventive maintenance.

"Amplifying human intelligence, and overcoming human cognitive biases I think that's where it fits," said Pratt, founder and CEO of business consultancy Noodle.ai. "Humans are really bad probabilistic thinkers and statisticians. That's where cognitive bias creeps in and, therefore, inefficiencies and lost profit."

But machines won't replace humans when it comes to big-picture decisions, he said.

"Those algorithms are not going to set the strategy for the company. It'll help you make the decision once I come up with the idea," Pratt said. "But any executive that doesn't have a supercomputer in the mix now on their side and they're stuck in the spreadsheet era your jobs are going to be in jeopardy in a few years."

It'll be up to machines to decipher those spreadsheets anyway, as so much data is being collected it would be overwhelming for humans to understand, said Kris Hammond, co-founder of Chicago AI company Narrative Science.

"We're no longer looking at a world with a spreadsheet with 20 columns and 50 rows. We're now looking at spreadsheets of thousands of columns and millions of rows," said Hammond, founder of the University of Chicago's Artificial Intelligence Laboratory. "The only way we can actually understand what's going on in the world is to have systems that look at that data, understand what they mean and then turn it into something we can understand."

Mike Shelton, technical director for Microsoft's Azure Data Services, said it's also a time saver.

"What I see every day is it's giving time back," he said. "Through an AI interface, I can ask a question in speech or text and get a response through that without having to go search for a web page or hunt for information."

Julie Friedman Steele, CEO of the World Future Society, said her organization is focusing on the advances that could be made using AI in education, where teachers in crowded classrooms can't give much attention to students individually.

"As a human, can you actually learn all the knowledge that you might have a student interested in learning?" said Steele, who's also CEO and founder of The 3D Printer Experience. "I'm not talking about there not being a human in the room and it's all robots. I'm just saying that there's an opportunity in education with artificial intelligence so that if a teacher doesn't know something, it's OK."

Cheryl V. Jackson is a freelance writer. Twitter: @cherylvjackson

Read more here:

AI won't kill you, but ignoring it might kill your business, experts say ... - Chicago Tribune

Total partners with Google to deploy AI-powered solar energy tool – The Hindu

French energy company Total has developed a tool to determine the solar energy potential of rooftops. Built in partnership with Google Cloud, the tool will help popularise the deployment of solar panels in households.

The tool, Solar Mapper, uses an artificial intelligence (AI) algorithm to extract data from satellite images. AI facilitates sharper and quicker estimation of solar energy potential than present tools, the company said in an official statement.
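In broad strokes, a tool like this identifies which pixels of a satellite image belong to a rooftop and converts that area into an energy estimate. The sketch below shows a back-of-the-envelope version of that last step; the mask, ground resolution, and yield figures are invented assumptions, and this is not Total's or Google Cloud's actual model.

```python
# Back-of-the-envelope sketch: segmented rooftop pixels -> annual solar estimate
# (all figures are hypothetical; not Solar Mapper's algorithm).
import numpy as np

# Binary mask from an image-segmentation model: 1 = rooftop pixel, 0 = background.
roof_mask = np.zeros((100, 100), dtype=np.uint8)
roof_mask[20:60, 30:80] = 1                      # a hypothetical rooftop region

pixel_size_m = 0.5                               # assumed ground resolution (metres/pixel)
roof_area_m2 = roof_mask.sum() * pixel_size_m ** 2

usable_fraction = 0.6                            # assumed share of roof suitable for panels
panel_yield_kwh_per_m2_per_year = 150            # rough assumed figure for mid-latitude Europe

annual_kwh = roof_area_m2 * usable_fraction * panel_yield_kwh_per_m2_per_year
print(f"Estimated rooftop area: {roof_area_m2:.0f} m^2")
print(f"Estimated solar potential: {annual_kwh:.0f} kWh/year")
```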

The tool will also guide households to understand what technology would need to be installed depending on solar energy requirements.

Researchers from Total and Google Cloud took 6 months to devise the programme. At present, Solar Mapper is said to provide nearly 90% coverage in France.

The tool will soon be available across Europe and the rest of the world, the team said. Solar Mapper will also expand its application to industrial and commercial buildings.

Total said this will help further its goal of reaching net-zero emissions by 2050.

In September, Google's CEO Sundar Pichai said in a video message that the company had ended its "carbon legacy," making it the first major company to do so.

Here is the original post:

Total partners with Google to deploy AI-powered solar energy tool - The Hindu