Crypto market report: Bitcoin, Ethereum, Litecoin & Co.: How crypto prices developed on Saturday – The Times Hub

The Bitcoin price fell on Saturday. At noon, Bitcoin fell to $46,887.78 after trading at $47,586.24 the day before.

The Bitcoin Cash price fell to $558.30 after trading at $579.65 the previous day.


Ethereum is in the red at $1,797.30.

The previous evening, the digital currency stood at $1,846.07.

The price of the digital currency Litecoin rose to $198.73 on Saturday.

The day before, it was quoted at $197.45.

Ripple is worth $0.5817 on Saturday.

The Ripple price fell compared to the previous day, when it stood at $0.6138.

The Cardano price has fallen compared to the previous day. One Cardano is currently worth $0.8810.

Yesterday the price stood at $0.9254.

The Monero price is up today at $214.78.

The previous day the price was $201.19.

The IOTA price is stronger than the day before. One IOTA is currently worth $1.266. Yesterday the price stood at $1.232.

The Verge price is moving sideways at $0.0255, in line with the previous day's level.

The Stellar price rose to $0.5423 compared to the previous day.

The day before it stood at $0.5258.

The NEM price trades lower at $0.3870.

The previous day the price was $0.3986.

The Dash price rose to $207.98.

The day before, the cryptocurrency was worth $168.16.

The price of NEO fell to $36.86 today, after trading at $37.99 the previous day.

Finanzen.net editorial team

Image sources: Wit Olszewski / Shutterstock.com

Natasha Kumar has been a reporter on the news desk since 2018. Before that she wrote about young adolescence and family dynamics for Styles and was the legal affairs correspondent for the Metro desk. Before joining The Times Hub, Natasha Kumar worked as a staff writer at the Village Voice and a freelancer for Newsday, The Wall Street Journal, GQ and Mirabella. To get in touch, contact me at [emailprotected] or 1-800-268-7116.


AI will have a huge impact on your healthcare. But there are still big obstacles to overcome – ZDNet

Healthcare has been one of the most promising testing grounds for artificial intelligence, thanks largely to the vast amounts of data, in the forms of medical records and scans, that these smart systems can analyse. But while there are plenty of AI projects underway, there are still barriers to rolling out the benefits further.

Moorfields Eye Hospital in London has been working with DeepMind and Google Health to develop an algorithm that interprets scans of the back of the eye, known as optical coherence tomography scans.

The impact of this AI-led innovation is potentially revolutionary, says Peter Thomas, director of digital innovation at Moorfields Eye Hospital. The algorithm supports automated interpretation of patient scans and gives hospital staff access to excellent diagnostic information.


Yet despite all this promise, the impact of AI isn't as wide as it could be, at least not yet.

If you deploy AI in a hospital, you're using the technology in a place where you already have a department full of clinical experts. Yes, they'll be able to use the interpretation the AI produces, but they'd probably have come up with a similar diagnostic decision themselves.

Thomas, who spoke at the recent virtual HETT Reset event, says that AI will have a bigger impact when you can apply it to a situation where the level of expertise is different, like in the optometry practice on your high street.

However, that's a big challenge because, at present, the technical infrastructures to support the use of those algorithms in opticians do not exist.

Infrastructure issues aren't the only barrier to the development of more effective healthcare treatment through AI. Another key challenge is finding ways to bring together data from multiple clinical sources.

Right now, AI is usually applied to single decisions. Thomas gives the example of diabetic retinopathy screening in his own hospital, where every patient with diabetes gets an annual eye scan that determines the level of follow-up care. "We know that AI can deal with that single workflow pretty well," says Thomas.

Things get more complicated when hospital staff and their AI-based assistants need to go beyond a single source of data. That's a big issue, as effective healthcare for most patients relies on more than a single data source and usually involves a complex range of information.

If we fast-forward a few more years, says Thomas, and we anticipate a point at which there are multiple autonomous decision-making systems that might be involved in a single patient's healthcare journey, then there's going to be a lot of complexity around how staff are going to implement that information in hospitals and how they're going to monitor that data effectively.

"Each algorithm will need to be monitored for bias and performance as it changes. And there's the potential for complex interaction patterns when you have multiple algorithms involved in a single patient's care," says Thomas, who says the result is clear: the impact of AI in healthcare could be revolutionary, but we're not there yet.

"We're still a distance from being at a point where we can start deploying automated clinical management that goes beyond a single decision or a single interpretation. There's a lot of work to do in terms of getting the right workforce, expertise and structures within the hospital to support that."

Other experts agree. James Teo, clinical director of AI and data science, and consultant neurologist at Guy's and St Thomas' NHS Foundation Trust, joined Thomas at the HETT event and says one of the things his team has discovered through its research work is that "big data is really, really big".

Automated analysis by AI not only feeds the big-data beast but also sends it off in a new direction.

As people become more aware of automation, their expectations are raised. That hope creates more demand for AI systems, which might be implemented before the key use case around improving patient outcomes is actually identified.

"One fear I have is that the process of operating AI and data-driven technologies is that we'll create an even greater hunger for data, and we'll end up spending all our time clicking on menus and checkboxes. And that, I think, is the wrong way to travel. I think we need systems that allow us to capture data in a more human-friendly way," says Teo.

Moorfields' Thomas agrees, suggesting the main accelerator for AI in healthcare must be clinical usefulness. He says there's a tendency for healthcare providers to create AI-based point solutions. Startup companies target particular healthcare problems, but those aren't necessarily the key issues patients face and, as a result, the tech fails to create benefits.

Teo says the result of this badly thought-through deployment process is too many point solutions that need to be managed and maintained, which is unfeasible for healthcare organisations, especially when you add in the risk that the startups that create these point solutions might disappear with their products a few years from now.

The answer, suggests Teo, is to create common platforms, or at least common standards, for handling these point solutions. Vendors need to sign up to these standards and the aim for hospital administrators and tech suppliers alike must be to avoid reinventing the wheel.

Indra Joshi, AI director at digital transformation unit NHSX, says her organisation has plans in this direction. It set up the NHS AI Lab in 2019, a £250m programme that aims to accelerate the safe and ethical development and deployment of AI in the health and care system.


One of the Lab's key programmes of work is about creating projects that take a problem-focused approach to the healthcare challenges that organisations face, rather than simply focusing on the AI products that currently exist.

"We've flipped the traditional approach on its head. We ask, 'what problems are you facing and how can we take some of those problems and develop a solution?' And if we fail, that's OK, because AI might not be the solution to every problem," says Joshi.

The AI Lab recently worked with Kettering General Hospital to develop a process-automation tool to help staff produce complex situational reports that have to be filled out during the coronavirus pandemic. The system automatically reduces complexity, collecting information from a variety of sources, such as frontline capacity records and patient data, and frees up staff to focus on patient care rather than reporting.

This kind of data-enabled automation goes to show how the technology can boost staff productivity and patient healthcare. While AI can have a huge impact on diagnostics and decision-making processes, the biggest impact for now is likely to be around operational processes and that's something to celebrate, too.

"People often get excited about the clinical aspects of what AI can do; people always love to talk about how AI can really help in diagnosis. But actually, there's quite a lot of great work happening in the back-end processes," says Joshi.


When bad actors have AI tools: Rethinking security tactics – The Enterprisers Project

Cloud-stored data is suddenly encrypted, followed by a note demanding ransom or threatening public embarrassment. Corporate email addresses become conduits for malware and malicious links. An organization's core business platform abruptly goes offline, disrupting vital communications and services for hours.

We've learned to recognize the familiar signs of a cyberattack, thanks to the growing array of well-publicized incidents in which threat actors from nation-states or criminal enterprises breach our digital networks. Artificial intelligence is changing this picture.


With AI, organizations can program machines to perform tasks that would normally require human intelligence. Examples include self-driving trucks, computer programs that develop drug therapies, and software that writes news articles and composes music. Machine learning (ML) is an application of AI that uses algorithms to teach computers to learn and adapt to new data.
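The learn-and-adapt loop that ML relies on can be illustrated with a deliberately tiny sketch. Everything below (the data, the learning rate, the update rule) is invented for illustration, not taken from any production system:

```python
# A model "learns" its parameter from examples instead of being
# explicitly programmed. Hypothetical data where target = 2 * input.
data = [(1, 2), (2, 4), (3, 6)]

w = 0.0                      # model parameter, to be learned
for _ in range(200):         # repeated passes over the data
    for x, y in data:
        pred = w * x
        w += 0.05 * (y - pred) * x   # nudge w to reduce the error

# After training, w has adapted to the pattern in the data (w ≈ 2).
```

The same loop, scaled up to millions of parameters and examples, is what lets the self-driving and drug-discovery systems mentioned above adapt to new data.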

AI and ML represent a revolutionary new way of harnessing technology and an unprecedented opportunity for threat actors to sow even more disruption.

What do these emerging adversarial AI/ML threats look like? How can we take the appropriate measures to protect ourselves, our data, and society as a whole?

Step one in cybersecurity is to think like the enemy. What could you do as a threat actor with adversarial AI/ML? The possibilities are many, with the potential impact extending beyond cyberspace:


You could manipulate what a device is trained to see: for instance, corrupting training imagery so that a driving robot interprets a stop sign as a 55 mph speed-limit sign. Because intelligent machines lack the ability to understand context, the driving robot in this case would just keep driving over obstacles or into a brick wall if these things stood in its way. Closer to home, an adversarial AI/ML attack can fool your computer's anti-virus software into allowing malware to run.
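As a toy illustration of the idea (not a real vision model or a documented attack), consider a classifier that thresholds a single statistic; a perturbation far too small for a human to notice is enough to flip its decision:

```python
# Toy adversarial-perturbation sketch: the "classifier" labels an
# image by thresholding its average brightness. Values are made up.
def classify(pixels, threshold=0.5):
    """Label 'stop' if average brightness is below the threshold."""
    avg = sum(pixels) / len(pixels)
    return "stop" if avg < threshold else "speed-limit"

image = [0.48] * 100                 # legitimately a 'stop' sign
assert classify(image) == "stop"

# A +0.03 nudge per pixel, imperceptible to a human eye...
adversarial = [p + 0.03 for p in image]
# ...pushes the average over the threshold and flips the label.
```

Real attacks work the same way in spirit: tiny, carefully chosen changes exploit the fact that the model keys on statistics rather than context.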

You could manipulate what humans see, like a phone number that looks like it's from your area code. Deepfakes are a sophisticated and frightening example of this. Manufactured videos of politicians and celebrities, nearly indistinguishable from the real thing, have been shared over social media among millions of people before being identified as fake.

Furthermore, you can manipulate what an AI application does, like Twitter users did with Microsoft's AI chatbot Tay. In less than a day, they trained the chatbot to spew misogynistic and racist remarks.

Once a machine learning application is live, you can tamper with its algorithms: for instance, directing an application for automated email responses to instead spit out sensitive information like credit card numbers. If you're with a cybercriminal organization, this is valuable data ripe for exploitation.

You could even alter the course of geopolitical events. Retaliation for cyberattacks has already been moving into the physical world, as we saw with the 2016 hacking of Ukraine's power grid. Adversarial AI ups the ante.


Fortunately, as adversarial AI/ML tactics evolve, so do the cybersecurity measures against them. One tactic is training an algorithm to think more like a human. AI research and deployment company OpenAI suggests explicitly training algorithms against adversarial attacks, training multiple defense models, and training AI models to output probabilities rather than hard decisions, which makes it more difficult for an adversary to exploit the model.
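The "probabilities rather than hard decisions" suggestion can be sketched in a few lines; the scores and the 0.1 margin threshold below are made-up values for illustration only:

```python
import math

def softmax(scores):
    """Turn raw model scores into probabilities that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

scores = [1.1, 1.0, 0.2]      # hypothetical raw scores for 3 classes
probs = softmax(scores)

# A hard decision hides how close the top two classes are:
hard = max(range(len(scores)), key=scores.__getitem__)   # class 0

# With probabilities, a defence can refuse to act on low-margin calls
# instead of handing the adversary a confident-looking wrong answer:
top_two = sorted(probs, reverse=True)
uncertain = (top_two[0] - top_two[1]) < 0.1
```

Exposing the margin is what makes the model harder to exploit: inputs that an attacker has nudged toward a decision boundary show up as low-confidence calls that can be routed to a human.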

Training can also be used in threat detection: for example, training computers to detect deepfake videos by feeding them examples of deepfakes alongside real videos.

IT teams can also achieve an ounce of prevention by baking security into their AI/ML applications from the beginning. When building models, keep in mind how adversaries may try to cause damage. A variety of resources, like IBM's Adversarial Robustness Toolbox, have emerged to help IT teams evaluate ML models and create more robust and secure AI applications.

Where should organizations start their efforts? Identify the easiest attack vector and bake defenses against it directly into your AI/ML pipeline. By tackling concrete problems with bespoke solutions, you can mitigate threats in the short term while building the understanding and depth needed to tackle long-term solutions.


Attackers armed with AI pose a formidable threat. Bad actors are constantly looking at loopholes and ways to exploit them, and with the right AI system, they can manipulate systems in new, insidious ways and easily perform functions at a scale unachievable by humans. Fortunately, AI is part of the cybersecurity solution as well, powering complex models for detecting malicious behavior, sophisticated threats, and evolving trends and conducting this analysis far faster than any team of humans could.



What CIOs need to know about adding AI to their processes – TechRepublic

AI can help many types of businesses get more from their data. In 2021, one expert believes adoption of AI will take leaps forward.

TechRepublic's Karen Roby spoke with Ira Cohen, chief data scientist with Anodot, about the tools CIOs need to implement artificial intelligence (AI) at their companies. The following is an edited transcript of their conversation.

Karen Roby: As we're heading into 2021, CIOs need to have a checklist of some things to keep in mind when making decisions for this coming year, whether that be about hiring or projects to consider. Let's start with the talent that's needed at companies now, to pull off some of these AI projects. What do you think CIOs need to keep in mind?


Ira Cohen: As you said, 2020 was really special in all this disruption to so many businesses. And AI, actually, is now becoming even more important. The projects that maybe people talked about before have been accelerated now because the speed of movement to new paradigms, that has to be much faster. If you're talking about, for example, commerce, supply chains, need to move much faster. A lot of different projects that maybe before were slowly moving towards more e-commerce, and more shipment. I mean, you're getting your Amazon, but now, so many companies are sending what they're selling out, that you have to have a lot more automation and be a lot more mindful of the data, and be a lot more reactive to how things change constantly. Things are changing much faster, and AI is the perfect thing to manage all of that, if we talk about AI in a very, very global sense, because it has a capability of processing data very fast, giving you insights very fast of very high volumes of data, which is what's happening now, but that's what's needed.

What do you need to actually have in your company in order to actually be able to achieve these goals of these projects? The first order of business, and this is something that people and companies have been doing in the last few years is, put all your data together. Create these data lakes. Data lakes have been very popular, and growing at companies like Snowflake, and other types of companies that have grown tremendously in the last few years, because that's what they offer. But, now, to leverage those data lakes, you need data engineers that know how to pull data quickly out of them, and serve them to the data science team that can actually transform them with algorithms into meaningful insights.

Data engineers is something that is going to be required a lot more in the next year or so, because without those data pipelines, laying of data pipelines that will feed all these AI algorithms and projects, there is nothing. The AI doesn't work without data, at least the AI that we have today. And then comes the machine learning engineers. Today, data science has been something that has grown in the last few years. The data scientists are the ones that are developing the AI required for all these projects. But data scientists, a lot of what was hired was basically people that do analysis. They do kind of one-off projects.

And, now, because these things are starting to be more and more automated, you don't need just a data scientist who knows how to do a project well, and prototype something, but you need the engineers that will make it into products, even if they're internal products. It's not a project anymore, it's internal products that have to constantly work for the company to deliver the rate that they need to deliver. These two areas, the data engineers, and the machine learning engineers, and not just the scientist, these are probably the areas where we need to ... I believe, CIOs need to invest most in their companies.


Karen Roby: When you consider the talent pool, Ira, how much are we talking about here, as far as supply and demand, when it comes to these more specialized areas with AI and machine learning? I mean, do we have the talent to fill the positions that we're going to need?

Ira Cohen: No. I think there's still a big gap, but what's happening in the market, in general, is that the whole field of AI or machine learning is being democratized by all sorts of tools that are either being wrapped into loose products, or open source completely, either from Google, or from Facebook, from companies that are actually invested a lot in developing the, let's say, the foundations that you would need. And then, the talent pool that needs to use it, they don't have to know as deeply, they don't have to have the knowledge as deeply as the people who developed all these tools. So, there is hope of getting a lot more talent into the area without the need for them to get Ph.Ds, in order to be able to do this. And that is happening in parallel.

With good education, with good courses, you can actually get junior machine learning engineers that can start bringing value. Where the gap is, is in the more senior ones, the ones that do have experience, because you can't hire just junior people. They won't have a clue what to do. You do need some sense of the field. The gap is in the machine learning engineers that are kind of, I would say, the mid-tier, and the experts, of course, that will always be a gap. But, the mid-tier that can teach the juniors how to work, that's where most of the gap is today, I believe.

Karen Roby: There's no question that AI has been fast-tracked for many companies that may not have even been considering moving in that direction yet, until their digital transformation plans were really put on fast-forward as well, from March 2020. Is there any particular industry you're really seeing where it's being embraced even more?


Ira Cohen: We're seeing it in all sorts of commerce, where even if it was half brick-and-mortar, half online, now, this has pushed them quite significantly. Supply chains and deliveries are definitely a big push in those types of companies. And, telcos, we've also seen in telcos that very big push towards AI, and it's driven by two things that happen now in parallel. One is the virus, right? The whole pandemic, which actually put a lot more pressure on networks, and made them even more important, and actually brought some of the telcos to ... Basically, that provide all the foundations for our communications, brought them to the front, and center.

5G, is the second one that's happening in parallel. So, 5G, changing, coming to play, creating a lot more data, a lot more complexities in the networks, is also pushing them to implement AI, to actually being able to manage all that complex infrastructure, which is becoming even more complex, and even more critical.

Karen Roby: When you look to say, nine months to a year from now, how do you see AI playing a role, even versus now? And, again, how is that going to change things overall for businesses, from small businesses to huge enterprise companies?


Ira Cohen: I think small businesses will leverage AI for particular tasks, small tasks, and probably, the adoption there will be less, because AI, at the end of the day, is fueled by data. And if you don't have a lot of data, you can make your own decisions fairly quickly anyway. But, for larger companies, the ones that do not embrace it, and do not start using it heavily to make better decisions, to forecast the future, they'll be left behind, because they're not going to benefit from either improved margins, by being more efficient, or an improved ability to sell more, because of what those tools will give them. They will start losing out.

There's definitely a race for them to actually do this, to embrace these tools quickly. For the small businesses, I think it will be slower to embrace, unless it's for very particular tasks that before, they could not do, because they could not hire the people to do it. But, now, they'll get a tool that already does it for a small fraction of the price it would be if they had to develop it themselves, and then they can run away with it.

I mean, even looking at just simple e-commerce sites, right? You're trying to sell something, and you want to have a recommendation engine, like Amazon has a recommendation engine on its website, which does improve how much you're selling. Today, a small website, or as a small seller, cannot develop it themselves. It's too expensive. But with it becoming available as a service from companies, they can actually start using it for a fraction of the price, and get the benefit of it even for themselves. For recruiting tools, it will give them a benefit. They'll probably want to buy it rather than trying to develop it themselves.


Image: Mackenzie Burke


AI and deer – AG Information Network of the West

A friend of mine told me the other day that his good friend, Tee Green, was using artificial intelligence (AI) to help him hunt white-tailed deer. I asked him to explain the intersection between artificial intelligence and white-tailed deer hunting.

"Well, the genesis really is a derivative of a business partner of mine and a couple of us that have been building software companies together for the last 20 years, that are just passionate bow hunters. So we've had leases around the country and we own some of our own property as well. When the game cameras came out we said, 'This is the greatest thing in the world.' And we had a property that had five game cameras and we had a property with 60 game cameras. And then you start realizing that each camera is taking about eight hundred pictures a week and there's only a handful of pictures in there that are worth anything. So we were spending enormous amounts of time scrolling through and cataloging pictures in Microsoft Excel and Word documents, trying to cross-reference them from year to year, and the amount of time it takes just to go through was just incredible. So over the last several years we started with the artificial intelligence and machine learning; in a way, we understand software and building software companies. We've been able to take the basic background of sorting images, using data and machine learning capability. So we load whether it's SD cards, whether it's a cell camera, it doesn't really matter to us where the images come from."
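The image-triage workflow described above can be sketched roughly as follows; `deer_score` is a stand-in for a real trained model, and the filenames and scores are invented:

```python
# Sketch of ML-based triage for game-camera images: score each frame
# and keep only the handful worth a hunter's attention.
def deer_score(image):
    # Hypothetical model output in [0, 1]; keyed off stored metadata
    # here purely for illustration.
    return image.get("score", 0.0)

def triage(images, keep_above=0.8):
    """Return only the images likely to contain a deer."""
    return [img for img in images if deer_score(img) >= keep_above]

week = [{"file": "a.jpg", "score": 0.05},
        {"file": "b.jpg", "score": 0.92},
        {"file": "c.jpg", "score": 0.11}]
keepers = triage(week)   # only the high-scoring frame survives
```

This is the step that replaces scrolling through hundreds of near-empty frames per camera per week, whatever the images' source.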


Cerner AI expert discusses important ‘misconceptions’ about the technology – Healthcare IT News

Dr. Tanuj Gupta, vice president at Cerner Intelligence, is an expert in healthcare artificial intelligence and machine learning. Part of his job is explaining, from his expert point of view, what he considers misconceptions with AI, especially misconceptions in healthcare.

In this interview with Healthcare IT News, Gupta discusses what he says are popular misconceptions with gender and racial bias in algorithms, AI replacing clinicians, and the regulation of AI in healthcare.

Q. In general terms, why do you think there are misconceptions about AI in healthcare, and why do they persist?

A. I've given more than 100 presentations on AI and ML in the past year. There's no doubt these technologies are hot topics in healthcare that usher in great hope for the advancement of our industry.

While they have the potential to transform patient care, quality and outcomes, there also are concerns about the negative impact this technology could have on human interaction, as well as the burden they could place on clinicians and health systems.

Q. Should we be concerned about gender and racial bias in ML algorithms?

A. Traditionally, healthcare providers consider a patient's unique situation when making decisions, along with information sources, such as their clinical training and experiences, as well as published medical research.

Now, with ML, we can be more efficient and improve our ability to examine large amounts of data, flag potential problems and suggest next steps for treatment. While this technology is promising, there are some risks. Although AI and ML are just tools, they have many points of entry that are vulnerable to bias, from inception to end use.

As ML learns and adapts, it's vulnerable to potentially biased input and patterns. Existing prejudices, especially if they're unknown, and data that reflects societal or historical inequities can result in bias being baked into the data that's used to train an algorithm or ML model to predict outcomes. If not identified and mitigated, clinical decision-making based on bias could negatively impact patient care and outcomes. When bias is introduced into an algorithm, certain groups can be targeted unintentionally.

Gender and racial biases have been identified in commercial facial-recognition systems, which are known to falsely identify Black and Asian faces 10 to 100 times more than Caucasian faces, and have more difficulty identifying women than men. Bias is also seen in natural language processing that identifies topic, opinion and emotion.

If the systems in which our AI and ML tools are developed or implemented are biased, then their resulting health outcomes can be biased, which can perpetuate health disparities. While breaking down systemic bias can be challenging, it's important that we do all we can to identify and correct it in all its manifestations. This is the only way we can optimize AI and ML in healthcare and ensure the highest quality of patient experience.

Q. Could AI replace clinicians?

A. The short answer is no. AI and ML will not replace clinician judgement. Providers will always have to be involved in the decision-making process, because we hold them accountable for patient care and outcomes.

We already have some successful guardrails in other areas of healthcare that we'll likely evolve to for AI and ML. For example, one parallel is verbal orders. If a doctor gives a nurse a verbal order for a medication, the nurse repeats it back to them before entering it in the chart, and the doctor must sign off on it. If that medication ends up causing harm to the patient, the doctor can't say the nurse is at fault.

Additionally, any standing protocol orders that a hospital wants to institute must be approved by a committee of physicians who then have a regular review period to ensure the protocols are still safe and effective. That way, if the nurse executes a protocol order and there's a patient-safety issue, that medical committee is responsible and accountable, not the nurse.

The same thing is going to be there with AI and ML algorithms. There won't be an algorithm that arbitrarily runs on a tool or machine, treating a patient without doctor oversight.

If we throw a bunch of algorithms into the electronic health record that say, "treat the patient this way" or "diagnose him with this," we'll have to hold the clinician, and possibly the algorithm maker if it becomes regulated by the U.S. Food and Drug Administration, accountable for the outcomes. I can't imagine a situation where that would change.

Clinicians can use, and are using, AI and ML to improve care and maybe make healthcare even more human than it is today. AI and ML could also allow physicians to enhance the quality of time spent with patients.

Bottom line, I think we as the healthcare industry should embrace AI and ML technology. It won't replace us; it will just become a new and effective toolset to use with our patients. And using this technology responsibly means always staying on top of any potential patient safety risks.

Q. What should we know about the regulation of AI in healthcare?

A. AI introduces some important concerns around data ownership, safety and security. Without a standard for how to handle these issues, there's the potential to cause harm, either to the healthcare system or to the individual patient.

For these reasons, important regulations should be expected. The pharmaceutical, clinical treatment and medical device industries provide a precedent for how to protect data rights, privacy, and security, and drive innovation in an AI-empowered healthcare system.

Let's start with data rights. When people use an at-home DNA testing kit, they likely give broad consent for their data to be used for research purposes, as defined by the U.S. Department of Health and Human Services in a 2017 guidance document.

While that guidance establishes rules for giving consent, it also creates the process for withdrawing consent. Handling consent in an AI-empowered healthcare system may be a challenge, but there's precedent for thinking through this issue to both protect rights and drive innovation.

With regard to patient safety concerns, the Food and Drug Administration has published two documents to address the issue: Draft Guidance on Clinical Decision Support Software and Draft Guidance on Software as a Medical Device. The first guidance sets a framework for determining if an ML algorithm is a medical device.

Once you've determined your ML algorithm is in fact a device, the second guidance provides "good machine learning practices." Similar FDA regulations on diagnostics and therapeutics have kept us safe from harm without getting in the way of innovation. We should expect the same outcome for AI and ML in healthcare.

Finally, let's look at data security and privacy. The industry wants to protect data privacy while unlocking more value in healthcare. For example, HHS has long relied on the Health Insurance Portability and Accountability Act, which was signed into law in 1996.

While HIPAA is designed to safeguard protected health information, growing innovation in healthcare, particularly regarding privacy, led to HHS' recently issued proposed rule to prevent information blocking and encourage healthcare innovation.

It's safe to conclude that AI and ML in healthcare will be regulated. But that doesn't mean these tools won't be useful. In fact, we should expect the continued growth of AI applications for healthcare as more uses and benefits of the technology surface.

Twitter: @SiwickiHealthIT | Email the writer: bsiwicki@himss.org | Healthcare IT News is a HIMSS Media publication.

Read the rest here:

Cerner AI expert discusses important 'misconceptions' about the technology - Healthcare IT News

Patra and expert.ai Announce Strategic Partnership to Improve Policy Review and Bring Advanced AI-based Natural Language Understanding Solutions to…

Patra and expert.ai are bringing a proven leader in AI to the insurance industry to solve real-world challenges.

Ensuring accurate language understanding at speed and scale, expert.ai enables global organizations to leverage its mature and proven AI-based natural language (NL) platform to automate the reading, understanding, and extraction of meaningful data from structured and unstructured text to augment and expand insights for every process that involves language. By integrating expert.ai's cutting-edge AI capabilities, Patra improves quality, reduces friction, and drives out inefficiencies in the process of manually reviewing and cross-validating dozens to hundreds of pages of text for any given policy. These capabilities facilitate a deeper understanding of data, enabling insights previously out of reach due to the vast and complex nature of language semantics.

In working together, both companies are satisfying the growing demands in the insurance industry of leveraging advanced natural language and ML capabilities to address challenges in policy checking and risk exposure. With close to 80% of the information within the insurance industry being unstructured data, intelligent automation based on human-like understanding is a critical factor for competitive advantage, as it increases capacity while reducing inefficiencies and high-risk vulnerabilities. By applying the power of artificial intelligence to policy checking, Patra is providing agencies, wholesalers, MGAs, and carriers a better understanding of their book of business and helping them understand pricing behaviors and coverage dynamics by risk appetite. These capabilities will unleash a new generation of opportunities, including proactive notifications versus reactive discoveries.

"With expert.ai, Patra is unlocking the ability for clients to be alerted of policy inaccuracies, reduce E&O exposures, drive cost savings, create additional value for our services, and push the limits of today's technology," said John Simpson, CEO and Founder of Patra. "Policy Checking has been one of the insurance industry's biggest challenges for decades. Now, with expert.ai and the formation of the InsureConneXtions Alliance, Patra has brought to market a proven leader in artificial intelligence, in addition to partnering with innovators in insurance industry to solve challenges that apply to every policy issued. Policy Checking is just the first of many services we are addressing."

"We're honored to join forces with Patra, an innovation leader in insurance services, in delivering the next generation of AI technology for policy checking and review.And we see this as just the first step in working together to power language understanding in any application or process across the insurance value chain," said Walt Mayo, CEO of expert.ai."The combination of expert.ai's long history of industry-best AI natural language understanding, and Patra's deep process expertise and customer focus creates an incredibly strong foundation for addressing real-world challenges in the insurance industry."

About Patra
Patra is a leading provider of technology-enabled services to the insurance industry. Patra's global team of experts allows brokers, MGAs, wholesalers, and carriers to capture the Patra Advantage: profitable growth and organizational value. Patra powers insurance processes by optimizing the application of people and technology, supporting insurance organizations as they sell, deliver, and manage policies and customers. Patra is also a founding member of the InsurConneXtions Alliance, representing leaders across insurance technology, brokerage, wholesale, and specialty insurance, representing over $50 billion in insurance premiums.

For more information, visit patracorp.com or follow us @Patracorp on Twitter and LinkedIn.

About expert.ai
Expert.ai is the premier artificial intelligence platform for language understanding. Its unique hybrid approach to NL combines symbolic, human-like comprehension and machine learning to transform language-intensive processes into practical knowledge, providing the insight required to improve decision making throughout organizations. By offering a full range of on-premise, private, and public cloud offerings, expert.ai augments business operations, accelerates and scales data science capabilities, and simplifies AI adoption across a vast range of industries, including Insurance, Banking & Finance, Publishing & Media, Defense & Intelligence, Life Science & Pharma, Oil, Gas & Energy, and more. The expert.ai brand is owned by Expert System (EXSY:MIL), which has cemented itself at the forefront of natural language solutions and serves global businesses such as AXA XL, Zurich Insurance Group, Generali, Bloomberg INDG, BNP Paribas, Rabobank, Dow Jones, Gannett, and EBSCO.

For more information, visit www.expert.ai and follow us on Twitter and LinkedIn.

SOURCE Patra Corporation; expert.ai

http://www.patracorp.com


Ai-Da, the First Robot Artist To Exhibit Herself – Entrepreneur

As a critic of modern life and technology, Ai-Da can draw thanks to artificial intelligence.


February 15, 2021 | 2 min read

Ai-Da, a humanoid artificial intelligence robot, will exhibit a series of self-portraits that she created by "looking" into a mirror with the cameras in her eyes. Sounds strange? A little bit. We'll tell you how it works and why the idea came up.

The robot was named Ai-Da after the 19th century mathematician Ada Lovelace. According to its creators, it is capable of drawing real people using its camera eye and a pencil in hand.

She 'looks' into the mirror integrated with her camera eyes, and algorithms transform what she sees into coordinates. The robot's artistic hand then calculates a virtual route and interprets the coordinates to create the artwork.

The idea for Ai-Da came from Aidan Meller, the owner of the Oxford art gallery, and the art curator Lucy Seal.

Seal commented that the self-portraits are meant to be a critique of our current reliance on data-driven technology.

In an interview with The Sunday Times, she said that we live in a culture of selfies but are giving our data to the tech giants, who use it to predict our behavior. Through technology, we outsource our own decisions.

"The work invites us to think about artificial intelligence, technological uses and abuses in today's world."

Her work will be exhibited at the Design Museum in London between May and June, if sanitary conditions permit. This will be her second exhibition: in 2019 the robot was presented and explored the limits between artificial intelligence, technology and organic life in drawing, painting, sculpture and video art.


How to prevent AI from taking over the world – New Statesman

Right now AI diagnoses cancer, decides whether you'll get your next job, approves your mortgage, sentences felons, trades on the stock market, populates your news feed, protects you from fraud, and keeps you company when you're feeling down.

Soon it will drive you to town, deciding along the way whether to swerve to avoid hitting a wayward fox. It will also tell you how to schedule your day, which career best suits your personality, and even how many children to have.

In the further future, it could cure cancer, eliminate poverty and disease, wipe out crime, halt global warming and help colonise space. Fei-Fei Li, a leading technologist at Stanford, paints a rosy picture: "I imagine a world in which AI is going to make us work more productively, live longer and have cleaner energy." General optimism about AI is shared by Barack Obama, Mark Zuckerberg and Jack Ma, among others.

And yet from the beginning AI has been dogged by huge concerns.

What if AI develops an intelligence far beyond our own? Stephen Hawking warned that AI could develop a will of its own, a will that is in conflict with ours and which could destroy us. We are all familiar with the typical plotline of dystopian sci-fi movies: an alien comes to Earth, we try to control it, and it all ends very badly. AI may be the alien intelligence already in our midst.

A new algorithm-driven world could also entrench and propagate injustice while we are none the wiser. This is because the algorithms we trust are often black boxes whose operation we don't, and sometimes can't, understand. Amazon's now infamous facial recognition software, Rekognition, seemed like a mostly innocuous tool for landlords and employers to run low-cost background checks. But it was seriously biased against people of colour, matching 28 members of the US Congress, disproportionately minorities, with profiles stored in a database of criminals. AI could perpetuate our worst prejudices.

Finally, there is the problem of what happens if AI is too good at what it does. Its beguiling efficiency could seduce us into allowing it to make more and more of our decisions, until we forget how to make good decisions on our own, in much the way we rely on our smartphones to remember phone numbers and calculate tips. AI could lead us to abdicate what makes us human in the first place: our ability to take charge of, and tell, the stories of our own lives.


It's too early to say how our technological future will unfold. But technology heavyweights such as Elon Musk and Bill Gates agree that we need to do something to control the development and spread of AI, and that we need to do it now.

Obvious hacks won't do. You might think that we can control AI by pulling its plug. But experts warn that a super-intelligent AI could easily predict our feeble attempts to shackle it and undertake measures to protect itself by, say, storing up energy reserves and infiltrating power sources. Nor will encoding a master command, "Don't harm humans," save us, because it's unclear what "harm" means or constitutes. When your self-driving vehicle swerves to avoid hitting a fox, it exposes you to a slight risk of death: does it thereby harm you? What about when it swerves into a small group of people to avoid colliding with a larger crowd?

***

The best and most direct way to control AI is to ensure that its values are our values. By building human values into AI, we ensure that everything an AI does meets with our approval. But this is not simple. The so-called Value Alignment Problem, how to get AI to respect and conform to human values, is arguably the most important, if vexing, problem faced by AI developers today.

So far, this problem has been seen as one of uncertainty: if only we understood our values better, we could program AI to promote these values. Stuart Russell, a leading AI scientist at Berkeley, offers an intriguing solution: let's design AI so that its goals are unclear. We then allow it to fill in the gaps by observing human behaviour. By learning its values from humans, the AI's goals will be our goals.

This is an ingenious hack. But the problem of value alignment isn't an issue of technological design to be solved by computer scientists and engineers. It's a problem of human understanding to be solved by philosophers and axiologists.

The difficulty isn't that we don't know enough about our values, though, of course, we don't. It's that even if we had full knowledge of our values, these values might not be computationally amenable. If our values can't be captured by algorithmic architecture, even approximately, then even an omniscient God couldn't build AI that is faithful to our values. The basic problem of value alignment, then, is what looks to be a fundamental mismatch between human values and the tools currently used to design AI.

Paradigmatic AI treats values as if they were quantities like length or weight: things that can be represented by cardinal units such as inches, grams, dollars. But the pleasure you get from playing with your puppy can't be put on the same scale of cardinal units as the joy you get from holding your newborn. There is no meterstick of human values. Aristotle was among the first to notice that human values are incommensurable. You can't, he argued, measure the true (as opposed to market) value of beds and shoes on the same scale of value. AI supposes otherwise.

AI also assumes that in a decision there are only two possibilities: one option is better than the other, in which case you should choose it, or they're equally good, in which case you can just flip a coin. Hard choices suggest otherwise. When you are agonising between two careers, neither is better than the other, but they aren't equally good either; they are simply different. The values that govern hard choices allow for more possibilities: options might be on a par. Many of our choices between jobs, people to marry, and even government policies are on a par. AI architecture currently makes no room for such hard choices.

Finally, AI presumes that the values in a choice are out there to be found. But sometimes we create values through the very process of making a decision. In choosing between careers, how much does financial security matter as opposed to work satisfaction? You may be willing to forgo fancy meals in order to make full use of your artistic talents. But I want a big house with a big garden, and I'm willing to spend my days in drudgery to get it.

Our value commitments are up to us, and we create them through the process of choice. Since our commitments are internal manifestations of our will, observing our behaviour won't uncover their specificity. AI, as it is currently built, supposes values can be programmed as part of a reward function that the AI is meant to maximise. Human values are more complex than this.

***

So where does that leave us? There are three possible paths forward.

Ideally, we would try to develop AI architecture that respects the incommensurable, parity-tolerant and self-created features of human values. This would require serious collaboration between computer scientists and philosophers. If we succeed, we could safely outsource many of our decisions to machines, knowing that AI will mimic human decision-making at its best. We could prevent AI from taking over the world while still allowing it to transform human life for the better.

If we can't get AI to respect human values, the next best thing is to accept that AI should be of limited use to us. It can still help us crunch numbers and discern patterns in data, operating as an enhanced calculator or smartphone, but it shouldn't be allowed to make any of our decisions. This is because when an AI makes a decision, say, to swerve your car to avoid hitting a fox, at some risk to your life, it's not a decision made on the basis of human values but of alien, AI values. We might reasonably decide that we don't want to live in a world where decisions are made on the basis of values that are not our own. AI would not take over the world, but nor would it fundamentally transform human life as we know it.

The most perilous path, and the one towards which we are heading, is to hope in a vague way that we can strike the right balance between the risks and benefits of AI. If the mismatch between AI architecture and human values is beyond repair, we might ask ourselves: how much risk of annihilation are we willing to tolerate in exchange for the benefits of allowing AI to make decisions for us, while, at the same time, recognising that those decisions will necessarily be made based on values that are not our own?

That decision, at least, would be one made by us on the basis of our human values. The overwhelming likelihood, however, is that we get the trade-off wrong. We are, after all, only human. If we take this path, AI could take over the world. And it would be cold comfort that it was our human values that allowed it to do so.

Ruth Chang is the Chair and Professor of Jurisprudence at the University of Oxford and a Professorial Fellow at University College, Oxford. She is the author of Hard Choices and a TED talk on decision-making.

This article is part of the Agora series, a collaboration between the New Statesman and Aaron James Wendland, Senior Research Fellow in Philosophy at Massey College, Toronto. He tweets @aj_wendland.


IBM's Arin Bhowmick explains why AI trust is hard to achieve in the enterprise – VentureBeat

While appreciation of the potential impact AI can have on business processes has been building for some time, progress has been nowhere near as quick as initial forecasts led many organizations to expect.

Arin Bhowmick, chief design officer for IBM, explained to VentureBeat what needs to be done to achieve the level of AI explainability that will be required to take AI to the next level in the enterprise.

This interview has been edited for clarity and brevity.

VentureBeat: It seems a lot of organizations are still not trustful of AI. Do you think that's improving?

Arin Bhowmick: I do think it's improved or is getting better. But we still have a long way to go. We haven't historically been able to bake in trust and fairness and explainable AI into the products and experiences. From an IBM standpoint, we are trying to create reliable technology that can augment [but] not really replace human decision-making. We feel that trust is essential to adoption. It allows organizations to understand and explain recommendations and outcomes.

What we are essentially trying to do is akin to a nutritional label. We're looking to have a similar kind of transparency in AI systems. There is still some hesitation in the adoption of AI because of a lack of trust. Roughly 80-85% of the professionals from different organizations who took part in an IBM survey said their organization has been negatively impacted by problems such as bias, especially in the data. I would say 80% or more agree that consumers are more likely to choose services from a company that offers transparency and an ethical framework for how its AI models are built.

VentureBeat: As an AI model runs, it can generate different results as the algorithms learn more about the data. How much does that lack of consistency impact trust?

Bhowmick: The AI model used to do the prediction is only as good as the data. It's not just models. It's about what it does and the insight it provides at that point in time that develops trust. Does it tell the user why the recommendation is made or is significant, how it came up with the recommendations, and how confident it is? AI tends to be a black box. The trick to developing trust is to unravel the black box.

VentureBeat: How do we achieve that level of AI explainability?

Bhowmick: It's hard. Sometimes it's hard to even judge the root cause of a prediction and insight. It depends on how the model was constructed. Explainability is also hard because, when it is provided to the end user, it's full of technical mumbo jumbo. It's not in the voice and tone that the user actually understands.

Sometimes explainability is also a little bit about the why, rather than the what. Giving an example of explainability in the context of the tasks that the user is doing is really, really hard. Unless the developers who are creating these AI-based [and] infused systems actually follow the business process, the context is not going to be there.
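
To make the idea concrete, here is a minimal, model-agnostic sketch of one common way a black-box score can be unpacked for an end user: leave-one-feature-out attribution. This is only an illustration of the general technique, not IBM's tooling; the feature names and weights below are invented.

```python
def attributions(model, features, baseline=0.0):
    """Score each feature by how much the model's output drops when that
    feature is replaced with a neutral baseline value."""
    base_score = model(features)
    return {name: round(base_score - model({**features, name: baseline}), 9)
            for name in features}

# Toy "risk model": a transparent weighted sum standing in for a black box.
model = lambda f: 0.8 * f["late_payments"] + 0.1 * f["account_age"]
print(attributions(model, {"late_payments": 1.0, "account_age": 2.0}))
# {'late_payments': 0.8, 'account_age': 0.2}
```

Presenting attributions as "late payments contributed most to this score" is one way to replace the technical mumbo jumbo with the user's own vocabulary.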

VentureBeat: How do we even measure this?

Bhowmick: There is a fairness score and a bias score. There is a concept of model accuracy. Most tools that are available do not provide a realistic score of the element of bias. Obviously, the higher the bias, the worse your model is. Its pretty clear to us that a lot of the source of the bias happens to be in the data and the assumptions that are used to create the model.

What we tried to do is we baked in a little bit of bias detection and explainability into the tooling itself. It will look at the profile of the data and match it against other items and other AI models. Well be able to tell you that what youre trying to produce already has built-in bias, and heres what you can do to fix it.
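
As an illustration of what a simple bias or fairness score can look like (this is a generic metric, not IBM's proprietary scoring), the sketch below computes the demographic parity difference: the gap in favourable-outcome rates between a privileged group and the others. The group labels and outcomes are made up.

```python
def demographic_parity_difference(outcomes, groups, privileged):
    """outcomes: 0/1 model decisions; groups: group label per decision.
    Returns the largest gap in favourable-outcome rates between the
    privileged group and any other group (0 means perfect parity)."""
    fav, tot = {}, {}
    for y, g in zip(outcomes, groups):
        fav[g] = fav.get(g, 0) + y
        tot[g] = tot.get(g, 0) + 1
    rates = {g: fav[g] / tot[g] for g in tot}
    return max(abs(rates[privileged] - rates[g])
               for g in rates if g != privileged)

# Group "a" receives favourable outcomes 75% of the time, group "b" 25%.
outcomes = [1, 1, 1, 0, 1, 0, 0, 0]
groups   = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_difference(outcomes, groups, privileged="a"))  # 0.5
```

A gap near 0 suggests parity on this metric; a large gap is the kind of signal that tooling can surface before a model ships.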

VentureBeat: That then becomes part of the user experience?

Bhowmick: Yes, and that's very, very important. Whatever bias feeds into the system has huge ramifications. We are creating ethical design practices across the company. We have developed specific design thinking exercises and workshops. We run workshops to make sure that we are considering ethics at the very beginning of our business process planning and design cycle. We're also using AI to improve AI. If we can build in bias and explainable-AI checkpoints along the way, inherently we will scale better. That's sort of the game plan here.

VentureBeat: Will every application going forward have an AI model embedded within it?

Bhowmick: It's not about the application; it's about whether there are things within that application that AI can help with. If the answer is yes, most applications will have infused AI in them. It will be unlikely that applications will not have AI.

VentureBeat: Will most organizations embed AI engines in their applications or simply involve external AI capabilities via an application programming interface (API)?

Bhowmick: Both will be true. I think the API would be good for people who are getting started. But as the level of AI maturity increases, there will be more information that is specific to a problem statement and to an audience. For that, they will likely have to build custom AI models. They might leverage APIs and other tooling, but to have an application that really understands the user and really gets at the crux of the problem, I think it's important that it's built in-house.

VentureBeat: Overall, whats your best AI advice to organizations?

Bhowmick: I still find that our level of awareness of what AI is, what it can do, and how it can help us is not high. When we talk to customers, all of them want to go into AI. But when you ask them what their use cases are, they sometimes are not able to articulate that.

I think adoption is somewhat lagging because of people's understanding and acceptance of AI. But there's enough information on AI principles to read up on. As you develop an understanding, then look into tooling. It really comes down to awareness.

I think we're in the hype cycle. Some industries are ahead, but if I could give one piece of advice to everyone, it would be: don't force-fit AI. Make sure you design AI in your system in a way that makes sense for the problem you're trying to solve.


This robotic glove uses AI to help people with hand weakness regain muscle grip – The Next Web

A Scottish biotech startup has invented an AI-powered robotic glove that helps people recover muscle grip in their hands.

BioLiberty designed the glove for people who suffer from hand weakness due to age or illnesses such as motor neurone disease and carpal tunnel syndrome.

The system detects their intention to grip by using electromyography (EMG) to measure the electrical activity generated by a nerve's stimulation of the muscle.

An algorithm then converts the intent into force to help the wearer strengthen their grip on an object.
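
As a rough illustration of how such an intent-to-force mapping might work (a hypothetical sketch, not BioLiberty's actual algorithm), one simple approach thresholds the RMS amplitude of an EMG sample window and maps anything above the resting level to a proportional assistive force. All constants here are invented:

```python
import math

# Hypothetical tuning constants; a real device would calibrate per user.
REST_THRESHOLD = 0.05   # RMS level below which we assume no grip intent
MAX_FORCE_N = 40.0      # cap on assistive force, in newtons
GAIN = 200.0            # scaling from RMS amplitude to force

def rms(window):
    """Root-mean-square amplitude of one window of EMG samples."""
    return math.sqrt(sum(x * x for x in window) / len(window))

def assist_force(window):
    """Map one EMG window to an assistive force in newtons."""
    amplitude = rms(window)
    if amplitude < REST_THRESHOLD:
        return 0.0  # muscle at rest: no assistance
    return min(GAIN * (amplitude - REST_THRESHOLD), MAX_FORCE_N)

print(assist_force([0.01, -0.02, 0.01, -0.01]))  # quiet window -> 0.0
print(assist_force([0.5, -0.6, 0.55, -0.5]))     # strong contraction -> 40.0
```

In practice a learned model would replace the fixed threshold and gain, but the pipeline (sense, detect intent, apply force) is the same.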

The glove could help users with a wide range of daily tasks, from driving to opening jars.


BioLiberty cofounder Ross Hanlon said he got the idea when an aunt with multiple sclerosis started struggling with simple tasks like drinking water:

Being an engineer, I decided to use technology to tackle these challenges head-on with the aim of helping people like my aunt to retain their autonomy. As well as those affected by illness, the population continues to age and this places increasing pressure on care services. We wanted to support independent living and healthy aging by enabling individuals to live more comfortably in their own homes for longer.

Hanlon's aunt is one of around 2.5 million UK citizens who suffer from hand weakness. An aging population means this number will only increase.

BioLiberty's robotic glove and digital therapy platform could help them regain their strength.

The company has already developed a working prototype of the glove. The team now plans to use support from Edinburgh Business School's Incubator to bring the glove into homes.

Ultimately, they want their tech to help people suffering from reduced mobility to regain their independence.

Published February 16, 2021 16:17 UTC


How to Kickstart an AI Venture Without Proprietary Data – Medium

AI startups have a chicken-and-egg problem. Here's how to solve it.

A few years ago, I learned about the billions of dollars banks lose to credit card fraud on an annual basis. Better detection or prediction of fraud would be incredibly valuable. And so I considered the possibility of convincing a bank to share their transactional data in the hope of building a better fraud detection algorithm. The catch, unsurprisingly, was that no major bank was willing to share such data. They felt they were better off hiring a team of data scientists to work on the problem internally. My startup idea died a quick death.

Despite the tremendous innovation and entrepreneurial opportunities around AI, breaking into AI can be a daunting task for entrepreneurs as they face a chicken-and-egg problem before they even begin, something existing companies are less likely to contend with. I believe specific strategies can help entrepreneurs overcome this challenge and create successful AI-driven ventures.

Today's AI systems need to be trained on large datasets, which can pose a challenge for entrepreneurs. Established companies with a sizable customer base already have a stream of data from which they can train AI systems, build new products and enhance existing ones, generate additional data, and rinse and repeat (for example, Google Maps has over 1B monthly active users and over 20 petabytes of data). But for entrepreneurs, the need for data poses a chicken-and-egg problem: because their company hasn't yet been built, they don't have data, which means they can't create an AI product as easily.

Additionally, data is not only necessary to get started with AI, it is actually key to AI performance. Research has shown that while algorithms matter, data matters more. Among modern machine learning methods, the differences in performance between various algorithms are relatively small when compared to the performance differences between the same algorithms with more or less data (Banko and Brill 2001).
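
The Banko and Brill point can be illustrated with a toy, fully synthetic experiment (my own sketch, not their setup): two quite different classifiers can disagree when trained on a handful of examples, yet both reach near-perfect accuracy once training data is plentiful, shrinking the gap that algorithm choice makes.

```python
import random

def make_data(n, seed):
    """1-D points in [0, 1); the true label is whether x > 0.5."""
    rng = random.Random(seed)
    return [(x, int(x > 0.5)) for x in (rng.random() for _ in range(n))]

def nearest_neighbor(train, x):
    """Algorithm 1: predict the label of the closest training point."""
    return min(train, key=lambda p: abs(p[0] - x))[1]

def mean_threshold(train, x):
    """Algorithm 2: predict by splitting at the mean of the training inputs."""
    mean = sum(p[0] for p in train) / len(train)
    return int(x > mean)

def accuracy(classifier, train, test_set):
    return sum(classifier(train, x) == y for x, y in test_set) / len(test_set)

test = make_data(500, seed=1)
for n in (5, 2000):
    accs = [round(accuracy(c, make_data(n, seed=2), test), 2)
            for c in (nearest_neighbor, mean_threshold)]
    print(f"train size {n}: {accs}")
```

On real tasks the effect is noisier, but the qualitative pattern (more data narrows the gap between algorithms) is what the research reports.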

There are several strategies that can help entrepreneurs navigate this chicken-and-egg problem and access the data they need to break into the AI space.

Research has shown that while algorithms matter, data matters more.

1. Start with a non-AI service that generates data

While data does need to come before an AI product, data does not need to come before all products. Entrepreneurs can begin by creating a service that is not AI-based, but that solves customer problems and generates data in the process. This data can later be used to train an AI system that enhances the existing service or creates a related one.

For example, Facebook didn't use AI in its early days, but it still provided a social networking platform that customers wanted to join. In the process, Facebook generated a large amount of data, which was in turn used to train AI systems that helped personalize the newsfeed and made it possible to run extremely targeted ads. Despite not being an AI-driven service at the outset, Facebook has become a heavy user of AI.

Similarly, the InsurTech startup Lemonade didn't have the data to build sophisticated AI capabilities on day one. Over time, however, Lemonade has built AI tools to create quotes, process claims, and detect fraud. Today, its AI system handles the first notice of loss for 96% of claims and manages the full claim resolution without any human involvement in a third of cases. These AI capabilities were built using the data generated over many years of operations.

2. Partner with a non-tech company that has a proprietary dataset

Entrepreneurs can partner with a company or organization that has a proprietary dataset but lacks in-house AI expertise. This approach is particularly useful in contexts where it would be very difficult to create a product that in turn generates the kind of data your AI application needs, such as medical data about patient tests and diagnoses. In this case, you could partner with a hospital or insurance company in order to obtain anonymized data.

A related point is that training data for your AI product can come from a potential customer. While this is harder in regulated industries like healthcare and finance, customers in other industries like manufacturing may be more open to it. All you might need to offer in return is exclusive access to the AI product for a few months or early access to future product features.

A pitfall of this approach is that potential partners may prefer working with established companies rather than smaller players who may be less known and trusted (especially in a post-GDPR and Cambridge Analytica world). So business development will be tricky, but this strategy is nonetheless feasible, especially when well-known tech companies are not already chasing after your desired partner.

Entrepreneurs who are part of a family business may already have access to a potentially large amount of data from their existing business. That's a great option too.

3. Crowdsource the (labeled) data you need

Depending on the kind of data needed, entrepreneurs can obtain data through crowdsourcing. When data is available but is not well labeled (e.g. images on the Internet), crowdsourcing can be a particularly well-suited method for obtaining this data, as labeling is a task that lends itself well to being completed quickly by a large number of individuals on crowdsourcing platforms. Platforms such as Amazon Mechanical Turk and Scale.ai are frequently used to help generate labeled training data.
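Downstream of such platforms, labels from multiple workers are typically noisy and must be reconciled before training. A minimal sketch of the most common reconciliation step, majority voting, is below; the item names and worker labels are invented for illustration:

```python
from collections import Counter

def aggregate_labels(worker_labels):
    """Majority-vote aggregation of crowdsourced labels.

    worker_labels: dict mapping item id -> list of labels from workers.
    Returns dict mapping item id -> (winning label, agreement ratio).
    """
    aggregated = {}
    for item, labels in worker_labels.items():
        counts = Counter(labels)
        label, votes = counts.most_common(1)[0]
        aggregated[item] = (label, votes / len(labels))
    return aggregated

# Three hypothetical workers label two images; "img2" has one disagreement.
raw = {
    "img1": ["cat", "cat", "cat"],
    "img2": ["dog", "dog", "cat"],
}
print(aggregate_labels(raw))
```

The agreement ratio is useful in practice: low-agreement items can be routed to additional workers or dropped from the training set.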

For example, consider Google's use of CAPTCHAs. While they serve an important security purpose, Google simultaneously uses them as a crowdsourced image-labeling system. Every day, millions of users effectively act as Google's data pre-processing team, validating machine learning algorithms for free.

Some products have workflows which allow customers to help label new data in the course of using the product. In fact, the entire subfield of Active Learning is focused on how to interactively query users to better label new data points. For example, consider a cybersecurity product that generates alerts about risks and a workflow in which an Ops engineer resolves those alerts thereby generating new labeled data. Similarly, product recommendation services like Pandora use upvotes and downvotes to validate recommendation accuracy. In both these cases, you can start with an MVP that continually improves over time as customers provide feedback.
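The core of Active Learning is deciding which unlabeled items to put in front of a human first. A minimal sketch of one standard strategy, uncertainty sampling, is below; the alert names and scores are invented for illustration:

```python
def pick_queries(predictions, k=2):
    """Uncertainty sampling: choose the items whose predicted
    probability is closest to 0.5, i.e. where the model is least sure."""
    ranked = sorted(predictions.items(), key=lambda kv: abs(kv[1] - 0.5))
    return [item for item, _ in ranked[:k]]

# Hypothetical alert scores from a cybersecurity model: P(real threat).
scores = {"alert-1": 0.97, "alert-2": 0.52, "alert-3": 0.11, "alert-4": 0.45}

# The Ops engineer is asked to resolve the most ambiguous alerts first;
# each resolution becomes a fresh labeled training example.
queries = pick_queries(scores)
print(queries)  # alert-2 and alert-4 are closest to 0.5
```

Confident predictions (0.97, 0.11) are left alone; human effort is spent only where a label changes the model most.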

4. Make use of public data

Before you conclude that the data you need is not available, look harder. There is more publicly available data than you might imagine. There are even data marketplaces emerging. While publicly available data (and therefore the resulting product) might be less defensible, you can build defensibility through other service/product innovations such as creating an exceptional user experience or combining offline and digital data at scale as Zillow does (the company uses offline public municipal data at scale as part of their innovative online real estate application). One could also combine publicly available data with some proprietary data, which could be generated over time or obtained through partnerships, crowdsourcing, etc.

The Canadian company BlueDot uses a variety of data sources, including publicly available data, in order to detect outbreaks of emerging diseases before they are officially reported as well as predict where an outbreak will spread to next. BlueDot uses statements from official public health organizations, digital media, global airline ticketing data, livestock health reports, and population demographics, among other data sources. The company detected the COVID-19 outbreak on December 30th, 2019, nine days before the WHO reported on it.


5. Rethink the need for data

It is true that most of the practical AI in the business world is based on Machine Learning. And most of that ML is supervised ML (which requires large labeled training datasets). But many problems can be solved with other AI techniques that are not reliant on data, such as reinforcement learning or expert systems.

Reinforcement learning is an ML approach in which algorithms learn by testing various actions or strategies and observing the rewards from these actions. Essentially, reinforcement learning uses experimentation to compensate for a lack of labeled training data. The original iteration of Google's Go-playing software, AlphaGo, was trained on a large dataset of human games, but the next iteration, AlphaZero, was based on reinforcement learning and used no training data at all, learning purely through self-play. Yet AlphaZero beat AlphaGo (which itself beat world Go champion Lee Sedol).

Reinforcement learning is widely used in online personalization. Online companies frequently test and evaluate multiple website designs, product descriptions, product images, and pricing. Reinforcement learning algorithms explore new design and marketing choices and rapidly learn how to personalize user experience based on their responses.
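A toy sketch of how this exploration works is an epsilon-greedy bandit choosing between two page designs. Everything here, the designs, conversion rates, and class, is invented for illustration:

```python
import random

class EpsilonGreedyBandit:
    """Minimal epsilon-greedy bandit for choosing among page designs.
    With probability eps explore a random design; otherwise exploit
    the design with the best observed conversion rate so far."""
    def __init__(self, arms, eps=0.1, seed=0):
        self.eps = eps
        self.rng = random.Random(seed)
        self.counts = {a: 0 for a in arms}
        self.rewards = {a: 0.0 for a in arms}

    def choose(self):
        if self.rng.random() < self.eps:
            return self.rng.choice(sorted(self.counts))
        # Average observed reward per arm (0.0 for untried arms).
        return max(self.counts, key=lambda a:
                   self.rewards[a] / self.counts[a] if self.counts[a] else 0.0)

    def update(self, arm, reward):
        self.counts[arm] += 1
        self.rewards[arm] += reward

# Simulate users: design "B" converts twice as often as "A".
true_rate = {"A": 0.05, "B": 0.10}
bandit = EpsilonGreedyBandit(["A", "B"], seed=42)
for _ in range(5000):
    arm = bandit.choose()
    bandit.update(arm, 1.0 if bandit.rng.random() < true_rate[arm] else 0.0)
print(bandit.counts)  # "B" should end up chosen far more often
```

The bandit discovers the better design from user responses alone, with no labeled training set; real systems use more sophisticated variants (Thompson sampling, contextual bandits) of the same idea.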

Another approach is to use expert systems, which are simple rule-based systems that often codify rules that experts use routinely. While expert systems rarely beat well-trained ML systems for complex tasks such as medical diagnosis or image recognition, they can help break the chicken-and-egg problem and help you get started. For example, the virtual healthcare company Curai used knowledge from expert systems to create clinical vignettes, and then used these vignettes as training data for ML models (alongside data from electronic health records and other sources).
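A minimal sketch of the pattern follows, with invented symptom rules; it illustrates the general idea of rules both answering queries and generating labeled vignettes for later ML training, not Curai's actual system:

```python
# Each rule maps a set of required findings to a diagnosis suggestion.
# These rules are made up for illustration, not medical advice.
RULES = [
    ({"fever", "cough", "shortness of breath"}, "possible pneumonia"),
    ({"sneezing", "runny nose"}, "possible common cold"),
]

def diagnose(findings):
    """Return the first diagnosis whose rule is fully matched."""
    for required, diagnosis in RULES:
        if required <= findings:  # all required findings present
            return diagnosis
    return "no rule matched; refer to clinician"

def make_vignette(findings):
    """Turn a rule firing into a labeled training example,
    mirroring how rule outputs can bootstrap an ML training set."""
    return {"symptoms": sorted(findings), "label": diagnose(findings)}

print(make_vignette({"fever", "cough", "shortness of breath"}))
```

Each vignette is a (symptoms, label) pair, exactly the shape of example a supervised model needs, so the rule base can seed a dataset before any real usage data exists.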

To be clear, not every intelligence problem can be cast as a reinforcement learning problem or tackled through an expert systems approach. But these are worth considering when the lack of training data has halted the development of an interesting ML product.

Entrepreneurs are most likely to develop a consistent stream of proprietary data if they start by offering a service that has value without AI and that generates data, and then use this to train an AI system. However, this strategy does require time and may not be the best fit for all situations. Depending on the nature of the startup and the kind of data that is needed, it may work better to partner with a non-tech company that has a proprietary dataset, crowdsource (labeled) data, or make use of public data. Alternatively, entrepreneurs can rethink the need for data entirely and consider taking a reinforcement learning or expert systems approach.

Continued here:

How to Kickstart an AI Venture Without Proprietary Data - Medium

MIT researchers use AI to find drugs that could be repurposed for COVID-19 – Healthcare IT News

The Massachusetts Institute of Technology announced this week that researchers had used machine learning to identify medications that may be repurposed to fight COVID-19.

"Making new drugs takes forever," Caroline Uhler, a computational biologist in MIT's Department of Electrical Engineering and Computer Science and the Institute for Data, Systems and Society, said in a press statement. "Really, the only expedient option is to repurpose existing drugs."

The research from Uhler's team, which appears in the journal Nature Communications, notes that the novel coronavirus tends to have much more severe effects in older patients.

"Since the mechanical properties of the lung tissue change with aging, this led us to hypothesize an interplay between viral infection/replication and tissue aging," wrote the researchers.

WHY IT MATTERS

The researchers pointed out that lung tissue becomes stiffer as a person ages, and that it shows different patterns of gene expression than younger tissue in response to the same signal.

"We need to look at aging together with SARS-CoV-2 what are the genes at the intersection of these two pathways?" said Uhler.

As the study explains, the team generated a list of possible drugs using an autoencoder before mapping the network of genes and proteins involved in aging and novel coronavirus infection. They then pinpointed genes causing cascading effects throughout the network using statistical algorithms.
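As a loose illustration of the ranking idea (not the team's actual pipeline), candidate drugs can be scored by how closely the expression change each one induces, represented as a vector in some latent space such as an autoencoder's, aligns with the desired disease-reversal signature. All embeddings below are invented:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Hypothetical 3-d latent embeddings of the expression change each drug
# induces; the target is the reversal of the infection/aging signature.
target = [1.0, -0.5, 0.2]
drug_embeddings = {
    "drug-A": [0.9, -0.4, 0.3],
    "drug-B": [-1.0, 0.5, -0.1],
    "drug-C": [0.1, 0.9, 0.8],
}

ranked = sorted(drug_embeddings,
                key=lambda d: cosine(drug_embeddings[d], target),
                reverse=True)
print(ranked)  # drug-A aligns best with the desired signature
```

The study's actual method adds the crucial causal-analysis step on top of such a shortlist, which is what singled out RIPK1.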

"Among the various protein kinases ... identified by our drug repurposing pipeline, RIPK1 was singled out by our causal analysis as being upstream of the largest number of genes that were differentially expressed by SARS-CoV-2 infection and aging," wrote the researchers in the study.In other words, drugs that act on RIPK1 may have the potential to treat COVID-19.

"Given the distinct pathways elicited by RIPK1, there is a need to develop appropriate cell culture models that can differentiate between young and aging tissues to validate our findings experimentally and allow for highly specific and targeted drug discovery programs," read the study.

THE LARGER TREND

Machine learning and artificial intelligence have been instrumental for many facets of COVID-19 research, with scientists using them to predict the length of hospitalization and probable outcomes among patients, as well as to detect the disease in lung scans and improve treatment options.

Cris Ross, CIO at the Mayo Clinic, said in December that AI has been key to understanding COVID-19.

Around the world, Ross said, algorithms are being used to "find powerful things that help us diagnose, manage and treat this disease, to watch its spread, to understand where it's coming next, to understand the characteristics around the disease and to develop new therapies."

ON THE RECORD

"While our work identified particular drugs and drug targets in the context of COVID-19, our computational platform is applicable well beyond SARS-CoV-2, and we believe that the integration of transcriptional, proteomic, and structural data with network models into a causal framework is an important addition to current drug discovery pipelines," wrote the MIT research team.

Kat Jercich is senior editor of Healthcare IT News. Twitter: @kjercich. Email: kjercich@himss.org. Healthcare IT News is a HIMSS Media publication.

Read more:

MIT researchers use AI to find drugs that could be repurposed for COVID-19 - Healthcare IT News

AI-based Drug Discovery Markets, 2030 – ResearchAndMarkets.com – Business Wire

DUBLIN--(BUSINESS WIRE)--The "AI-based Drug Discovery Market: Focus on Deep Learning and Machine Learning, 2020-2030" report has been added to ResearchAndMarkets.com's offering.

The "AI-based Drug Discovery Market: Focus on Machine Learning and Deep Learning, 2020-2030" report features an extensive study of the current market landscape and future potential of the players engaged in offering AI-based services, platforms and tools for the discovery of novel drug candidates. The study presents an in-depth analysis, highlighting the capabilities of various stakeholders engaged in this domain.

One of the key objectives of this report was to estimate the existing market size and the future growth potential within the AI-based drug discovery market. We have developed informed estimates on the financial evolution of the market, over the period 2020-2030.

Amongst other elements, the report features:

The report also provides details on the likely distribution of the current and forecasted opportunity across:

Key Questions Answered

Key Topics Covered:

1. Preface

2. Executive Summary

3. Introduction

4. Market Landscape

5. Company Profiles

6. AI-Based Healthcare Initiatives of Technology Giants

7. Partnerships And Collaborations

8. Funding And Investment Analysis

9. Company Valuation Analysis

10. Cost Saving Analysis

11. Market Forecast

12. Conclusion

13. Executive Insights

For more information about this report visit https://www.researchandmarkets.com/r/hgwin9

Read more:

AI-based Drug Discovery Markets, 2030 - ResearchAndMarkets.com - Business Wire

Facebook says AI helped reduce hate speech on its platform last quarter – The Hindu


Facebook said nearly 97% of the hate speech and harassment content taken down in the final three months of last year was detected by automated systems before any human flagged it. In the July-to-September quarter, AI helped detect 94% of hate content, and in late 2019 the figure was 80%.

The social network, in its Community Standards Enforcement Report, noted that in the fourth quarter ending December 2020, hate speech prevalence dropped to about 0.08% of total content from nearly 0.11%.

This means there were about seven to eight views of hate speech for every 10,000 views of content in Q4, Facebook said in a statement.
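The arithmetic behind that figure is a straightforward conversion from a prevalence percentage to views per 10,000:

```python
def views_per_10k(prevalence_pct):
    """Convert a prevalence percentage into views per 10,000 content views."""
    return prevalence_pct / 100 * 10_000

print(views_per_10k(0.08))  # Q4 prevalence: about 8 per 10,000 views
print(views_per_10k(0.11))  # prior quarter: about 11 per 10,000 views
```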


The California-based technology company introduced several artificial intelligence-powered systems last year to help detect misinformation. It started using AI technologies to identify hateful online content in 2016, and has since added several updates to its systems, which now extend to images and other forms of media.

The company said its multilingual systems helped moderate content in several languages, including Arabic and Spanish, targeting nearly 27 million pieces of hateful content last quarter.


Facebook has faced criticism previously for its inability to curb hate speech on the platform. Most recently, the social network said it would reduce the distribution of all content and profiles run by Myanmar's military after it seized power and detained civilian leaders in a coup earlier in February.

Facebook also said last year that it will undertake an independent third-party audit of its content moderation systems to validate the numbers it publishes.


More here:

Facebook says AI helped reduce hate speech on its platform last quarter - The Hindu

GeoSpark Rebrands to Roam.ai to Reflect Vastly Improved Location Tracking and AI Technology – WFMZ Allentown

AMSTERDAM, Feb. 17, 2021 /PRNewswire-PRWeb/ -- Roam.ai, an Amsterdam-based startup that provides highly accurate and battery-efficient location tracking for mobile apps, has announced its rebranding from GeoSpark. The transformation takes the company into the next phase of its existence with a strengthening of its core location product, a new pricing model and a strong growth of its customer base.

"Since launching GeoSpark, we've made continuous improvements to our location offering based on customer feedback and drive to build the best location SDK possible," commented CEO Manoj Adithya. "The rebranding to Roam.ai allows us to showcase our market-leading core location technology, express our focus on AI and reaffirm our commitment to developers."

Roam.ai's product offering includes a fully customizable solution to high battery drain that can reduce the SDK's added battery consumption to nearly zero. For high-quality and precise location data, the company's AI-driven "Accuracy Engine" combines data filters and IMU sensor fusion to achieve an accuracy of up to 5 meters. The company's publish/subscribe architecture gives developers more flexibility and ease when integrating real-time location experiences into their apps.
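A publish/subscribe design of this kind can be sketched in a few lines; the broker class, topic names, and payload below are hypothetical illustrations, not Roam.ai's API:

```python
from collections import defaultdict

class LocationBroker:
    """Tiny publish/subscribe sketch: subscribers register a callback
    per topic (e.g. a tracked user's id), and each published location
    update fans out to everyone subscribed to that topic."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, location):
        for callback in self.subscribers[topic]:
            callback(location)

broker = LocationBroker()
received = []
broker.subscribe("courier-42", received.append)
# A location update for the courier fans out to all subscribers.
broker.publish("courier-42", {"lat": 52.37, "lng": 4.90})
print(received)
```

The appeal of this pattern for location apps is decoupling: the SDK emitting fixes does not need to know which screens, geofence checks, or analytics jobs are listening.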

The company's strengthened core tracking technology combined with approximately 30 APIs allows developers to customize any location-aware app with minimal code required.

The new name encapsulates the company's technology of tracking users as they travel, such as a courier delivering a package, an employee going to work or a taxi driver finding their customer. The refreshed minimalist visual identity, including a new website and logo, reflects the company's no-frills focus on its technology and the simple integration that makes it accessible to any developer all from one platform.

Roam.ai has also adapted its pricing to reflect the value of the updated core product offering. "After reviewing how our pricing could make our customers' lives easier in line with our technology, we simplified our pricing strategy and made it easier to scale," said Adithya. "With our brand new tiered usage-based pricing model, developers can choose the plan that fits their use case or budget."

For more information or to sign up for free, visit Roam.ai's new website: https://www.roam.ai/.

About Roam.ai

Roam.ai (formerly GeoSpark) is an accurate and battery-efficient location service platform that enables the simple integration of location technology into any mobile application at a low cost. Roam.ai helps developers and businesses worldwide integrate precise real-time location tracking, save engineering time and inform data-driven business decisions. The company was founded in 2018 and is headquartered in Amsterdam with an office in Bengaluru. To learn more visit https://www.roam.ai/ or LinkedIn.

Media Contact

Florence Rodgerson, Roam B.V., +31 655616000, florence@roam.ai

SOURCE Roam.ai

See more here:

GeoSpark Rebrands to Roam.ai to Reflect Vastly Improved Location Tracking and AI Technology - WFMZ Allentown

Selfies by the worlds first humanoid AI artist will go on display – Dazed

In 2019, Ai-Da became the world's first AI humanoid to pick up a pencil and create art without any human input. Armed with a microchip in her eye, a robotic hand, and a groundbreaking algorithm, the robot can draw and paint from sight, and has since staged her first solo exhibition (raking in a fair amount of cash in the process).

Now, Ai-Da has been taught to look in a mirror and create self-portraits, or "selfies", in her distinctive style. The hyperreal robot artist is set to exhibit a series of these self-portraits in a new show at London's Design Museum.

Named after the 19th-century mathematician Ada Lovelace, Ai-Da was created by gallery director Aidan Meller and curator Lucy Seal, in collaboration with Oxford University and the British robotics company Engineered Arts. In an interview with The Times, the creators explain that the new exhibition is meant to serve as a warning about our reliance on tech giants in a world driven by data.

"We live in a culture of selfies," says Seal, "but we are giving our data to the tech giants, who use it to predict our behavior. Through technology, we outsource our own decisions. The work invites us to think about artificial intelligence, technological uses and abuses in today's world."

Ai-Da previously discussed humans' relationship with technology in a conversation with futurist Geraldine Wharry for Dazed, saying: "I would imagine that humans really need to be more conscious of their own nature when using technology and machines. One way we can learn about human nature and its shortcomings is to look at history and watch out for those repeating patterns that might give us early warning signs when our use of technology is heading for damage, exploitation and abuse."

Ai-Da's self-portraits will be exhibited at the Design Museum from May, subject to coronavirus restrictions. A self-created font will also be featured, while the humanoid herself is set to make guest appearances.

Follow this link:

Selfies by the worlds first humanoid AI artist will go on display - Dazed

The case for, and against, the still-unseen Planet 9 Astronomy Now – Astronomy Now Online

A plot showing the relationship between the clustered orbits of several Trans-Neptunian Objects, or TNOs, in the extreme outer solar system as a result of gravitational interactions with an unseen world dubbed Planet 9. Image: Caltech/R. Hurt (IPAC)

For the past several years, astronomers have been searching for an unseen planet beyond the orbit of Pluto, a presumed world with 10 times the mass of Earth that could be responsible for the seemingly clustered orbits of small Trans-Neptunian Objects, or TNOs, in the extreme outer solar system. So far, Planet 9 has eluded detection.

Dealing a possible blow to the theorised planet, a team of researchers led by Kevin Napier of the University of Michigan suggests selection bias may have played a role in the original justification for Planet 9.

TNOs are so distant and dim that they can only be detected, if at all, when their orbits carry them relatively close to the inner solar system. Napier's team analysed 14 other extreme TNOs discovered in three surveys and concluded that their detection was determined by where they happened to be at the time and by the ability of the telescopes in question to detect them.

In other words, the clustering seen in the orbits of the original TNOs cited in support of Planet 9 may have been the result of where the bodies happened to be when they were observed. TNOs may well be uniformly distributed across the outer solar system without any need for the gravitational influence of an unseen planet.
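The selection-bias argument can be illustrated with a tiny Monte Carlo sketch: draw orbital longitudes uniformly, "detect" only those inside a survey's sky window, and the detected sample appears clustered even though the underlying population is not. The window and sample size below are arbitrary:

```python
import random

def simulate_detected_longitudes(n_objects=100_000, window=(0, 90), seed=1):
    """Selection-bias sketch: the population of orbital longitudes is
    uniform over 0-360 degrees, but only objects inside the surveyed
    window are 'detected'. The detected sample then looks clustered."""
    rng = random.Random(seed)
    population = [rng.uniform(0, 360) for _ in range(n_objects)]
    lo, hi = window
    detected = [lon for lon in population if lo <= lon <= hi]
    return population, detected

population, detected = simulate_detected_longitudes()
# The full population spans 0-360 degrees; every detected object
# falls inside the 90-degree survey window.
print(min(detected), max(detected))
```

This is the null hypothesis Napier's team tested: given where the surveys looked, a uniform population reproduces the observed "clustering" without any unseen planet.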

"It is important to note that our work does not explicitly rule out Planet X/Planet 9; its dynamical effects are not yet well enough defined to falsify its existence with current data," the researchers write in a paper posted on arXiv. "Instead, we have shown that given the current set of ETNOs (extreme TNOs) from well-characterised surveys, there is no evidence to rule out the null hypothesis."

Mike Brown and Konstantin Batygin at the California Institute of Technology, the original proponents of Planet Nine, beg to differ.

"Can their analysis distinguish between a clustered and a uniform distribution? The answer appears to be no," Batygin said in an update posted by the journal Science.

Brown took to Twitter on 16 February to voice his thoughts, showing diagrams of the TNOs that he says support the original case for Planet 9. And in a blog post, he provided a detailed rebuttal, concluding that "in the end, the previously measured clustering from our 2019 paper is still valid and the conclusions of that paper remain."

"The clustering of distant Kuiper belt objects is highly significant," Brown writes. "It's hard to imagine a process other than Planet Nine that could make these patterns." The search continues.

Visit link:

The case for, and against, the still-unseen Planet 9 Astronomy Now - Astronomy Now Online

‘Farfarout’ confirmed to be really, seriously far out Astronomy Now – Astronomy Now Online

A graphic representation of the scale of the solar system shows Earths position, 93 million miles from the Sun, or one astronomical unit, at the extreme left. A body nicknamed Farfarout is at the far right end of the scale, currently 132 times farther from Sun than Earth. Pluto is just to the left of the 40 AU marker. Image: Roberto Molar Candanosa, Scott S. Sheppard from Carnegie Institution for Science, and Brooks Bays from University of Hawaii.

Extended tracking has allowed astronomers to pin down the orbit of a presumed dwarf planet in the extreme outer solar system that takes a thousand years to complete one trip around the Sun. Nicknamed "Farfarout", the frigid body is the most distant solar system object yet detected, eclipsing the previous record holder, "Farout".

Listed as 2018 AG37 by the Minor Planet Center, Farfarout is currently 132 times farther from the Sun than Earth (132 astronomical units, or AU) and nearly four times more distant than Pluto. Its highly elongated trajectory carries it inside the orbit of Neptune and as far as 175 AU from the Sun. Analysis indicates the object is about 250 miles across, putting it on the low end of the dwarf planet scale (assuming it is an icy body).

"A single orbit of Farfarout around the Sun takes a millennium," said University of Hawaii researcher David Tholen, a member of the team that discovered the body in 2018. "Because of this long orbital period, it moves very slowly across the sky, requiring several years of observations to precisely determine its trajectory."
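That millennium-long period follows directly from Kepler's third law, taking the quoted aphelion of 175 AU together with an assumed perihelion of about 27 AU (just inside Neptune's orbit; the exact figure is not given in this article):

```python
def orbital_period_years(perihelion_au, aphelion_au):
    """Kepler's third law for a body orbiting the Sun:
    P [years] = a^(3/2), with the semi-major axis a in AU."""
    a = (perihelion_au + aphelion_au) / 2  # semi-major axis
    return a ** 1.5

# Aphelion of 175 AU (quoted) and an assumed ~27 AU perihelion give a
# semi-major axis of about 101 AU and a period of roughly 1,000 years.
print(round(orbital_period_years(27, 175)))
```

As a sanity check, the same formula gives Earth (perihelion and aphelion both 1 AU) a period of exactly one year.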

Tholen, Scott Sheppard of the Carnegie Institution for Science and Chad Trujillo of Northern Arizona University lead an ongoing survey to map the outer solar system beyond Pluto. They discovered the previous record holder, Farout, which is 120 AU from the Sun.

"The discovery of the even more distant Farfarout demonstrates our increasing ability to map the outer solar system and observe farther and farther towards the fringes of our solar system," said Sheppard. "Only with the advancements in the last few years of large digital cameras on very large telescopes has it been possible to efficiently discover very distant objects like Farfarout."

Farfarout will be given an official name after its orbit is known with greater precision. In the meantime, Sheppard described Farfarout as "just the tip of the iceberg of objects in the very distant solar system."

See the article here:

'Farfarout' confirmed to be really, seriously far out Astronomy Now - Astronomy Now Online

What the Heavens Declared to a Young Astronomer – ChristianityToday.com

I grew up a Jewish boy in a South African gold-mining town known as Krugersdorp. I remember sitting in shul (synagogue), enthralled as our learned rabbi expounded how God was a personal God: he would speak to Moses, to Abraham, Isaac, and Jacob, and to many others. Growing up, I often pondered how I fit into all this.

By the time I entered the University of Witwatersrand, Johannesburg, I was deeply concerned that I had no assurance that God was indeed a personal God. I was confident that he was a historical God who had delivered our people from the hands of Pharaoh. But he seemed so far removed from the particulars of my life in Krugersdorp. Where was the personality and the vibrancy of a God who truly could speak to me?

As a student, I began working toward a degree in applied mathematics and computer science. Over the course of my studies, I became friendly with Lewis Hurst, then a professor of psychiatry and genetics. He had a great interest in astronomy, and we would discuss the complexities of the cosmos for hours at a time. Whenever we met, I would delight in explaining basic features of astronomy, such as black holes and quasars.

Intellectually, these were greatly satisfying years. Over time, I became fascinated with the elegance of the mathematical formulation of general relativity, and at age 19 I submitted my first research paper on that theme to the Royal Astronomical Society of London. When it was published one year later, I started receiving requests from observatories and universities for reprints or printed copies (on the mistaken belief that I was already a senior academic!).

But spiritually, this period was rather dry. I remember attending a meeting of the Royal Astronomical Society graced ...


Read the original post:

What the Heavens Declared to a Young Astronomer - ChristianityToday.com