Monthly Archives: March 2022

Fashion Show Las Vegas (FSLV) offers surprises around every corner – Las Vegas Magazine

Posted: March 17, 2022 at 3:09 am

Spring is sparkling at Fashion Show Las Vegas (FSLV). From dazzling shoes at Nordstrom to designer dresses at Neiman Marcus, sequins, spangles and glitter seem to glimmer everywhere as the masses return to mask-optional shopping. Ask a Nordstrom team member and they'll likely tell you sparkles are always in style, but the superlative shopping complex is prepared for a glamour revival as lockdowns are left behind.

Free valet is available outside Forever 21, Saks Fifth Avenue and Nordstrom, with free electric charging stations adjacent to the latter's parking garage elevators. Look out for the Tonal fitness area at Nordstrom and register for a demo to get a 30-minute workout before heading to Dick's Sporting Goods for some putting on their second-floor practice green. Dick's has an enormous inventory of equipment, gear and games (cornhole boards for under $200) upstairs and clothing downstairs, while Shoe Palace has the most comprehensive selection of footwear for discerning sneakerheads.

Look for Macy's to add a Toys R Us section before the holidays this year. Already open is Galpão Gaucho, a Brazilian steakhouse on the plaza next to RA Sushi, where meats are sliced tableside in a gorgeous dining room with a tall wine rack and fine, leather-sheathed cutlery for sale in a glass case behind the hostess.

Sweet-tooth satisfaction can be found at Lolli & Pops, which has everything from gourmet popcorn to packs of Pocky. It also offers one of the best finds in all of FSLV: Pez dispenser sets featuring cast members of Game of Thrones and The Office. They are great gift ideas, but FSLV has a special giveaway this season for a sparkling bride-to-be: a Christian Siriano wedding dress and $500 gift card. Register at FSLV.com by May 22 to be eligible.

Las Vegas Boulevard South and Spring Mountain Road, 702.369.8382

Click here for your free subscription to the weekly digital edition of Las Vegas Magazine, your guide to everything to do, hear, see and experience in Southern Nevada. In addition to the latest edition emailed to you every week, you'll find plenty of great, money-saving offers from some of the most exciting attractions, restaurants, properties and more! And Las Vegas Magazine is full of informative content such as restaurants to visit, cocktails to sip and attractions to enjoy.


Six startups join Digital Catapult to lead the AI net zero era – The Manufacturer

Posted: at 3:08 am

The latest group of innovative startups has joined Digital Catapult's Machine Intelligence Garage programme, an artificial intelligence and machine learning accelerator that has supported more than 100 startups to raise a total of £52m in investment.

The most recent cohort of leading-edge startups is focused on solving urgent challenges in the manufacturing, engineering and agriculture sectors: from reducing manufacturing material wastage and cutting emissions, to providing powerful image recognition that helps farmers identify crop disease and use land and resources more sustainably.

From dozens of applications, six early-stage companies developing net zero solutions using Artificial Intelligence (AI) and Machine Learning (ML) were selected to join the programme, working alongside experts at Digital Catapult to address UK industry's biggest sustainability challenges.

Launched in 2017, Machine Intelligence Garage helps early-stage businesses access the computational power and expertise they need to develop and build AI/ML solutions, something that's often inaccessible to startups. The programme has gone from strength to strength, supporting rising stars in the AI/ML ecosystem by removing key barriers to innovation.

The next cohort of cutting-edge startups are:

Robert Smith, Director of Artificial Intelligence at Digital Catapult, commented: "With Machine Intelligence Garage now well established in the market, we're continuing to see more and more high-growth, high-potential startups applying for the programme. Increasingly, these companies are using powerful artificial intelligence and machine learning technologies to develop solutions to help the UK reach its ambitious net zero goals."

"I believe the speed and scale of AI can play a critical role in addressing global challenges like the climate emergency, and the six startups that make up this latest cohort are just some of the brightest and most interesting innovators leading the charge for planet Earth."

Each member of the latest cohort will pitch their solutions to investors and industry as part of Digital Catapult's FutureScope Showcase: Net Zero, a free-to-attend online event that showcases the trailblazing companies of Digital Catapult's programmes.


CIOs dish on AI and automation strategies that work – Healthcare IT News

Posted: at 3:08 am

Artificial intelligence and machine learning are already making some intriguing and potentially transformative impacts on the way healthcare is delivered, from the exam room to the diagnosis to ongoing care management and beyond.

But it's important, too, to keep the promise and limitations of automation and augmented intelligence in mind. At HIMSS22, three clinical and IT leaders from major health systems offered some insights into how they're deploying AI and ML, from predictive analytics to EHR automation to value-based care and population health management.

Each emphasized that, despite the huge potential, it's still early days.

Importantly, it's key not to get too excited about the technology itself, wishcasting about the wide array of challenges it can solve, but to focus instead on smaller, discrete, achievable use cases, said Jason Joseph, chief digital and information officer at Michigan-based BHSH System.

"I think we've got to look at this area of deep analytics more holistically, with AI being a piece of it but really focusing instead on what problems we're trying to solve, not necessarily the AI," he said.

Dustin Hufford, CIO at New Jersey's Cooper University Health Care, is also taking a cautiously optimistic view, and his organization is moving slowly and deliberately in its AI implementations.

"It's certainly something that we really need to think about in terms of the safety around AI and the equity part of it: Are we building our own biases into the software that we're building?"

But there's no denying that "this is really gearing up right now," he said. At Cooper University, "we focus on governance around digital, which includes a lot of our AI technologies that we're going to implement in the next couple of years."

Ensuring C-suite buy-in is also key, he said. "How do we engage the highest levels of the organization in the planning and understanding of what we're looking at here? We spent a lot of time last year understanding how the mechanics were going to work, the transparency, and now we're getting into the nitty gritty of it."

Step two, said Hufford, "is to really define what are those exact things when it comes to something like AI? What's the exact thing that we need to measure to make sure we're hitting the mark on this?"

Dr. Nick Patel, chief digital officer at Prisma Health in South Carolina, sees a lot to like when it comes to small-AI use cases like workflow automation.

"We as providers are constantly doing repetitive activities that can be automated over and over again," he said. "I didn't go to medical school to click my way through taking care of patients.

"Medical school is all about gathering information, learning about anatomy, physiology, disease states, and then applying that to humankind in order to get them to their goals and keep them well," he explained. "But when you throw a layer of EHR in there, you lose a lot of that because you're having to snip into how do you get all this information so you can make a good clinical decision?"

At Prisma Health, which is also undergoing a larger digital transformation (one Patel will discuss Thursday at HIMSS22), the question is "how do we start to automate those processes, so we're actually using our neurons to actually take care of patients, not trying to figure out the system?"

Bigger-picture, Patel is more excited about AI-enabled analytics for population health.

"A typical physician, their panel is about 1,500 or 2,000 patients. You can't really make a huge impact in the national narrative when it comes to population health, when you're seeing 2,000 patients per year. So what we have to start to think about is we use bigger data."

But it has to be "cleaner data," he added. "You can't layer in machine learning and AI, all these advanced tools, until you make sure that the actual data is actually aligned and clear. Because if you do, then you're going to get insights based on false data and that is extremely dangerous."

The main thing here, from a governance standpoint, "is to really understand, what are all your style of data pieces, what are the discrete and non-discrete data platforms, and how does that all converge?" said Patel.

"When you think about diabetes and hypertension, what parts can you automate? I would venture to say as high as 80%. Using the right data in order to get the right insights to the providers so they can be empowered to take care of those patients better."

Joseph says he's similarly optimistic about the prospect of AI-empowered value-based care.

"I'm going to differentiate maybe machine learning and predictive analytics from true AI," he said. "There is a huge opportunity there to really start to understand what are the drivers for the population.

"For example, diabetes and hypertension: some of those are either under-diagnosed or, if they are diagnosed, there are no interventions. What you need is the ability to surface that stuff, based on the data that's sitting there at rest, surface it and push it forward.

"And that all has to be run through some analytics," he said. "You can do it based on rules you know, if you've got all those rules. And in other cases, you can look at historical patterns, which is where I think you could start to introduce some AI that's just looking at the trends that exist out there using the data you have, which is better than nothing."

"But in all of those situations, what we're really talking about is using augmented intelligence," said Joseph. "What we're talking about is very imprecise AI. At this point, it's going to give you maybe an, 'Eh, start here, start here.' But as we get more advanced with our clean, cleanliness of data, we start capturing more data, we start to get more and more precise to the point where it could become fully automated."

At the moment, he said, "I'd rather know a 60% chance than not know anything. And if I can look at an image and say this thing is 82% likely to have this diagnosis, well, that ought to help these radiologists make a diagnosis. Over time, you could probably get that to 90%, 93%, 95%, 98%."

But soon, the ethical challenges may increase as the technology evolves.

"At some point we'll have an ethical decision about when the computer makes the diagnosis. Then the next step will be for the computers to prescribe medication, or order the procedure."

But "that's going to take us years," said Joseph. "What we need to be doing along the way is making our systems better, making our processes better, making sure the data is cleaner, and introducing these things along the way so that [the models] can learn and be more accurate over time."

Twitter: @MikeMiliardHITN
Email the writer: mike.miliard@himssmedia.com
Healthcare IT News is a HIMSS publication.


AI Used to Fill in Missing Words in Ancient Writings – VOA Learning English

Posted: at 3:08 am

Researchers have developed an artificial intelligence (AI) system to help fill in missing words in ancient writings.

The system is designed to help historians restore the writings and identify when and where they were written.

Many ancient populations used writings, also known as inscriptions, to document different parts of their lives. The inscriptions have been found on materials such as rock, ceramic and metal. The writings often contained valuable information about how ancient people lived and how they structured their societies.

But in many cases, the objects containing such inscriptions have been damaged over the centuries. This left major parts of the inscriptions missing and difficult to identify and understand.

In addition, many of the inscribed objects were moved from areas where they were first created. This makes it difficult for scientists to discover when and where the writings were made.

The new AI-based method serves as a technological tool to help researchers repair missing inscriptions and estimate the true origins of the records.

The researchers, led by Alphabet's AI company DeepMind, call their tool Ithaca. In a statement, the researchers said the system is the first deep neural network that can restore the missing text of damaged inscriptions. A neural network is a machine learning computer system built to act like the human brain.

The findings were recently reported in a study in the publication Nature. Researchers from other organizations, including the University of Oxford, Ca' Foscari University of Venice and Athens University of Economics and Business, also took part in the study.

The team said it trained Ithaca on the largest available collection of data containing Greek inscriptions, from the non-profit Packard Humanities Institute in California. Feeding this data into the system helps the tool use past writings to predict missing letters and words in damaged inscriptions.

The researchers reported that in experiments with damaged writings, Ithaca was able to correctly predict missing inscription elements 62 percent of the time. In addition, the tool was 71 percent correct in identifying where the inscriptions first came from. And, the system was able to effectively date writings to within 30 years, the team said.
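Ithaca itself is a deep neural network, but the basic idea of restoration, predicting a missing character from the text that survives around it, can be sketched with a deliberately tiny stand-in. The trigram statistics, toy corpus and gap marker below are illustrative assumptions only, not DeepMind's method:

```python
from collections import Counter, defaultdict

def train_trigrams(corpus):
    """Count which character appears between each pair of neighbors."""
    stats = defaultdict(Counter)
    for text in corpus:
        for i in range(1, len(text) - 1):
            stats[(text[i - 1], text[i + 1])][text[i]] += 1
    return stats

def restore(damaged, stats, gap="_"):
    """Fill each gap with the most common character seen in that context."""
    out = list(damaged)
    for i, ch in enumerate(out):
        if ch == gap and 0 < i < len(out) - 1:
            context = (out[i - 1], out[i + 1])
            if stats[context]:
                out[i] = stats[context].most_common(1)[0][0]
    return "".join(out)

# tiny "training" corpus of intact text (hypothetical)
stats = train_trigrams(["the people of athens", "the people decided"])
print(restore("th_ peopl_ of a_hens", stats))  # prints "the people of athens"
```

A real system conditions on far more context and returns ranked hypotheses with confidence scores rather than a single guess, but the principle, learning from intact inscriptions to constrain the damaged ones, is the same.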

Yannis Assael is a research scientist with DeepMind who helped lead the study. He said in a statement that Ithaca was designed to "support historians to expand and deepen our understanding of ancient history."

When historians work on their own, the success rate for restoring damaged inscriptions is about 25 percent. But when humans teamed up with Ithaca to assist in their work, the success rate jumped to 72 percent, Assael said.

Thea Sommerschield was another lead researcher on the project. She is a Marie Curie Fellow at Ca' Foscari University of Venice. Sommerschield said she hopes systems like Ithaca can unlock the cooperative potential between AI and humans in future restoration work involving important ancient inscriptions.

She said the system had already provided new information to help researchers reexamine important periods in Greek history.

In one case, Ithaca confirmed new evidence presented by historians about the dating of a series of important Greek decrees. The decrees were first thought to have been written before 446/445 BCE. But the new evidence suggested a date in the 420s BCE. Ithaca predicted a date of 421 BCE.

Sommerschield said that the date change may seem small. "But it has significant implications for our understanding of the political history of Classical Athens," she added.

The team is currently working on other versions of Ithaca trained on other ancient languages. DeepMind has launched a free, interactive tool based on the system for use by researchers, educators, museum workers and the public.

I'm Bryan Lynn.

Bryan Lynn wrote this story for VOA Learning English, based on reports from DeepMind, the University of Oxford, the University of Venice and Nature.

We want to hear from you. Write to us in the Comments section, and visit our Facebook page.

_______________________________________________________

artificial intelligence (AI) – n. the development of computer systems with the ability to perform work that normally requires human intelligence

restore – v. to make something good exist again

ceramic – n. objects made by shaping and heating clay

society – n. a large group of people who live in the same country and share the same laws, traditions, etc.

origins – n. the cause of something or where something comes from

potential – n. a possibility when the necessary conditions exist

decree – n. an official order for something

significant – adj. important or noticeable

implication – n. a result or effect


There’s more to AI Bias than biased data, NIST report highlights – YubaNet

Posted: at 3:08 am

As a step toward improving our ability to identify and manage the harmful effects of bias in artificial intelligence (AI) systems, researchers at the National Institute of Standards and Technology (NIST) recommend widening the scope of where we look for the source of these biases: beyond the machine learning processes and data used to train AI software, to the broader societal factors that influence how the technology is developed.

The recommendation is a core message of a revised NIST publication, Towards a Standard for Identifying and Managing Bias in Artificial Intelligence (NIST Special Publication 1270), which reflects public comments the agency received on its draft version released last summer. As part of a larger effort to support the development of trustworthy and responsible AI, the document offers guidance connected to the AI Risk Management Framework that NIST is developing.

According to NIST's Reva Schwartz, the main distinction between the draft and final versions of the publication is the new emphasis on how bias manifests itself not only in AI algorithms and the data used to train them, but also in the societal context in which AI systems are used.

"Context is everything," said Schwartz, principal investigator for AI bias and one of the report's authors. "AI systems do not operate in isolation. They help people make decisions that directly affect other people's lives. If we are to develop trustworthy AI systems, we need to consider all the factors that can chip away at the public's trust in AI. Many of these factors go beyond the technology itself to the impacts of the technology, and the comments we received from a wide range of people and organizations emphasized this point."

Bias in AI can harm humans. AI can make decisions that affect whether a person is admitted into a school, authorized for a bank loan or accepted as a rental applicant. It is relatively common knowledge that AI systems can exhibit biases that stem from their programming and data sources; for example, machine learning software could be trained on a dataset that underrepresents a particular gender or ethnic group. The revised NIST publication acknowledges that while these computational and statistical sources of bias remain highly important, they do not represent the full picture.

A more complete understanding of bias must take into account human and systemic biases, which figure significantly in the new version. Systemic biases result from institutions operating in ways that disadvantage certain social groups, such as discriminating against individuals based on their race. Human biases can relate to how people use data to fill in missing information, such as a person's neighborhood of residence influencing how likely authorities would consider the person to be a crime suspect. When human, systemic and computational biases combine, they can form a pernicious mixture, especially when explicit guidance is lacking for addressing the risks associated with using AI systems.


To address these issues, the NIST authors make the case for a socio-technical approach to mitigating bias in AI. This approach involves a recognition that AI operates in a larger social context and that purely technically based efforts to solve the problem of bias will come up short.

"Organizations often default to overly technical solutions for AI bias issues," Schwartz said. "But these approaches do not adequately capture the societal impact of AI systems. The expansion of AI into many aspects of public life requires extending our view to consider AI within the larger social system in which it operates."

Socio-technical approaches in AI are an emerging area, Schwartz said, and identifying measurement techniques to take these factors into consideration will require a broad set of disciplines and stakeholders.

"It's important to bring in experts from various fields, not just engineering, and to listen to other organizations and communities about the impact of AI," she said.

NIST is planning a series of public workshops over the next few months aimed at drafting a technical report for addressing AI bias and connecting the report with the AI Risk Management Framework. For more information and to register, visit the AI RMF workshop page.


Run:ai Seeks to Grow AI Virtualization with $75M Round – Datanami

Posted: at 3:08 am

Run:ai, a provider of an AI virtualization layer that helps optimize GPU instances, yesterday announced a Series C round worth $75 million. The funding figures to help the fast-growing company expand its sales reach and further develop the platform.

GPUs are the beating heart of deep learning today, but the limited nature of the computing resource means AI teams are constantly battling to squeeze the most work out of them. That's where Run:ai steps in with its flagship product, dubbed Atlas, which provides a way for AI teams to get more bang for their GPU buck.

"We do for AI hardware what VMware and virtualization did for traditional computing: more efficiency, simpler management, greater user productivity," Ronen Dar, Run:ai's CTO and co-founder, says in a press release. "Traditional CPU computing has a rich software stack with many development tools for running applications at scale. AI, however, runs on dedicated hardware accelerators such as GPUs, which have few tools to help with their implementation and scaling."

Atlas abstracts AI workloads away from GPUs by creating virtual pools where GPU resources can be automatically and dynamically allocated, thereby gaining more efficiency from GPU investments, the company says.

The platform also brings queuing and prioritization methods to deep learning workloads running on GPUs, with fairness algorithms to ensure users have an equal chance at getting access to the hardware. The company's software enables clusters of GPUs to be managed as a single unit, and allows a single GPU to be broken up into fractional GPUs for better allocation.
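As a rough sketch of what a fairness algorithm over a pooled resource might look like (a hypothetical illustration, not Run:ai's actual scheduler), GPUs can be granted one at a time to whichever team currently has the smallest fraction of its demand satisfied:

```python
# Minimal fair-share sketch (hypothetical, not Run:ai's algorithm): repeatedly
# grant one GPU to the team holding the smallest share of its demand, until
# the pool is exhausted or all demands are met.
def fair_share(demands, total_gpus):
    alloc = {team: 0 for team in demands}
    for _ in range(total_gpus):
        # teams that still want more GPUs
        pending = [t for t in demands if alloc[t] < demands[t]]
        if not pending:
            break
        # grant to the team with the lowest fraction of its demand satisfied
        neediest = min(pending, key=lambda t: alloc[t] / demands[t])
        alloc[neediest] += 1
    return alloc

print(fair_share({"vision": 6, "nlp": 2, "speech": 4}, 8))
# prints {'vision': 4, 'nlp': 2, 'speech': 2}
```

With demands of 6, 2 and 4 GPUs against a pool of 8, the sketch satisfies the smallest request in full and splits the remainder, which is the qualitative behavior fair-share schedulers aim for.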

Atlas functions as a plug-in to Kubernetes, the open source container orchestration system. Data scientists can get access to Atlas via integration to IDE tools like Jupyter Notebook and PyCharm, the company says.

The abstraction brings greater efficiency to data science teams who are experimenting with different techniques and trying to find what works. According to a December 2020 Run:ai whitepaper, one customer was able to reduce its AI training time from 46 days to about 36 hours, which represents a roughly 3,000% improvement, the company says.
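That figure is straightforward to verify: 46 days is 1,104 hours, so finishing in 36 hours is roughly a 31x speedup, or just under a 3,000% improvement:

```python
# checking the whitepaper's claim: 46 days of training cut to about 36 hours
before_hours = 46 * 24      # 1,104 hours
after_hours = 36
speedup = before_hours / after_hours                                # ~30.7x
improvement_pct = (before_hours - after_hours) / after_hours * 100
print(f"{speedup:.1f}x faster, a {improvement_pct:.0f}% improvement")
# prints "30.7x faster, a 2967% improvement"
```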

"With Run:ai Atlas, we've built a cloud-native software layer that abstracts AI hardware away from data scientists and ML engineers, letting Ops and IT simplify the delivery of compute resources for any AI workload and any AI project," Dar continues.

The Tel Aviv company, which was founded in 2018, has experienced a 9x increase in annual recurring revenue (ARR) over the past 12 months, during which time its employee count has tripled. The company has also quadrupled its customer base over the past two years. The Series C round, which brings the company's total funding to $118 million, will be used to grow sales as well as to enhance the core platform.

"When we founded Run:ai, our vision was to build the de facto foundational layer for running any AI workload," says Omri Geller, Run:ai CEO and co-founder, in the press release. "Our growth has been phenomenal, and this investment is a vote of confidence in our path. Run:ai is enabling organizations to orchestrate all stages of their AI work at scale, so companies can begin their AI journey and innovate faster."

Run:ai's platform and growth caught the eye of Tiger Global Management, which co-led the Series C round with Insight Partners, the firm that led the Series B round. Other firms participating in the current round included existing investors TLV Partners and S Capital VC.

"Run:ai is well positioned to help companies reimagine themselves using AI," says Insight Partners Managing Director Lonne Jaffe, whom you might remember as the CEO of Syncsort (now Precisely) nearly a decade ago.

"As the Forrester Wave AI Infrastructure report recently highlighted, Run:ai creates extraordinary value by bringing advanced virtualization and orchestration capabilities to AI chipsets, making training and inference systems run both much faster and more cost-effectively," Jaffe says in the press release.

In addition to AI workloads, Run:ai can also be used to optimize HPC workloads.



When it comes to AI, can we ditch the datasets? – MIT News

Posted: at 3:08 am

Huge amounts of data are needed to train machine-learning models to perform image classification tasks, such as identifying damage in satellite photos following a natural disaster. However, these data are not always easy to come by. Datasets may cost millions of dollars to generate, if usable data exist in the first place, and even the best datasets often contain biases that negatively impact a model's performance.

To circumvent some of the problems presented by datasets, MIT researchers developed a method for training a machine learning model that, rather than using a dataset, uses a special type of machine-learning model to generate extremely realistic synthetic data that can train another model for downstream vision tasks.

Their results show that a contrastive representation learning model trained using only these synthetic data is able to learn visual representations that rival or even outperform those learned from real data.

This special machine-learning model, known as a generative model, requires far less memory to store or share than a dataset. Using synthetic data also has the potential to sidestep some concerns around privacy and usage rights that limit how some real data can be distributed. A generative model could also be edited to remove certain attributes, like race or gender, which could address some biases that exist in traditional datasets.

"We knew that this method should eventually work; we just needed to wait for these generative models to get better and better. But we were especially pleased when we showed that this method sometimes does even better than the real thing," says Ali Jahanian, a research scientist in the Computer Science and Artificial Intelligence Laboratory (CSAIL) and lead author of the paper.

Jahanian wrote the paper with CSAIL grad students Xavier Puig and Yonglong Tian, and senior author Phillip Isola, an assistant professor in the Department of Electrical Engineering and Computer Science. The research will be presented at the International Conference on Learning Representations.

Generating synthetic data

Once a generative model has been trained on real data, it can generate synthetic data that are so realistic they are nearly indistinguishable from the real thing. The training process involves showing the generative model millions of images that contain objects in a particular class (like cars or cats), and then it learns what a car or cat looks like so it can generate similar objects.

Essentially by flipping a switch, researchers can use a pretrained generative model to output a steady stream of unique, realistic images that are based on those in the models training dataset, Jahanian says.

But generative models are even more useful because they learn how to transform the underlying data on which they are trained, he says. If the model is trained on images of cars, it can "imagine" how a car would look in different situations, situations it did not see during training, and then output images that show the car in unique poses, colors, or sizes.

Having multiple views of the same image is important for a technique called contrastive learning, where a machine-learning model is shown many unlabeled images to learn which pairs are similar or different.

The researchers connected a pretrained generative model to a contrastive learning model in a way that allowed the two models to work together automatically. The contrastive learner could tell the generative model to produce different views of an object, and then learn to identify that object from multiple angles, Jahanian explains.

"This was like connecting two building blocks. Because the generative model can give us different views of the same thing, it can help the contrastive method to learn better representations," he says.
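The pairing can be made concrete with a toy sketch. Here a trivial stand-in for the generative model produces "views" by jittering a latent vector, and a cosine-similarity check plays the role of the contrastive objective; the latents, classes and jitter size are invented for illustration and are not the paper's setup:

```python
import math
import random

random.seed(0)

def generate_view(latent, jitter=0.1):
    # stand-in generative model: a new "view" = latent + small random variation
    return [x + random.uniform(-jitter, jitter) for x in latent]

def cosine(a, b):
    # cosine similarity between two vectors
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

cat_latent = [1.0, 0.0, 0.5]   # hypothetical latent for one object class
car_latent = [0.0, 1.0, -0.5]  # hypothetical latent for another

cat_view1, cat_view2 = generate_view(cat_latent), generate_view(cat_latent)
car_view = generate_view(car_latent)

# the contrastive objective pushes positive pairs (two views of the same
# object) to score higher than negative pairs (views of different objects)
assert cosine(cat_view1, cat_view2) > cosine(cat_view1, car_view)
```

A real contrastive learner turns this inequality into a trainable loss over an encoder network, but the role of the generative model, supplying unlimited extra views of the same underlying thing, is the same.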

Even better than the real thing

The researchers compared their method to several other image classification models that were trained using real data and found that their method performed as well, and sometimes better, than the other models.

One advantage of using a generative model is that it can, in theory, create an infinite number of samples. So, the researchers also studied how the number of samples influenced the models performance. They found that, in some instances, generating larger numbers of unique samples led to additional improvements.

"The cool thing about these generative models is that someone else trained them for you. You can find them in online repositories, so everyone can use them. And you don't need to intervene in the model to get good representations," Jahanian says.

But he cautions that there are some limitations to using generative models. In some cases, these models can reveal source data, which can pose privacy risks, and they could amplify biases in the datasets they are trained on if they aren't properly audited.

He and his collaborators plan to address those limitations in future work. Another area they want to explore is using this technique to generate corner cases that could improve machine learning models. Corner cases often can't be learned from real data. For instance, if researchers are training a computer vision model for a self-driving car, real data wouldn't contain examples of a dog and his owner running down a highway, so the model would never learn what to do in this situation. Generating that corner-case data synthetically could improve the performance of machine learning models in some high-stakes situations.

The researchers also want to continue improving generative models so they can compose images that are even more sophisticated, he says.

This research was supported, in part, by the MIT-IBM Watson AI Lab, the United States Air Force Research Laboratory, and the United States Air Force Artificial Intelligence Accelerator.

Read the rest here:

When it comes to AI, can we ditch the datasets? - MIT News


Training an AI system is time consuming, but this startup says it has a solution – Morning Brew

Posted: at 3:08 am

In northeast England, halfway between Norfolk and Yorkshire, an AI-powered robot spends its days looking at strawberries. It's not as easy as it sounds.

A human farmer can gauge a strawberry's ripeness level by sight and weight, but the process involves putting each strawberry on a scale, which can be destructive and time-consuming. The robot can do the same job for up to 4 million strawberries a day by performing a simple scan of the fruit, undisturbed.

FruitCast, the agricultural AI startup behind the robots, taught its bots how to do their jobs with data from V7 Labs, a London-based startup that helps AI companies automate the training-data process for models. Training can be one of the most labor-intensive parts of getting an AI system off the ground, since it often calls for not only time and resources, but also vetted and relevant data.

"The robots are kind of stupid until you put the intelligence on them," Raymond Tunstill, CTO of FruitCast, which was spun off from the University of Lincoln's food-tech institute, told Emerging Tech Brew. He added, "It's all about taking examples from the real world (is it a ripe strawberry, or is it unripe?) and showing that to our neural networks so that the neural networks can, essentially, learn. And without V7, we never would've been able to classify [them]."

Since its 2018 debut, V7 has used its computer vision platform to train AI models to identify everything from lame cows to grapevine bunches, depending on the clients needs. In 2020, V7 raised a $10 million total seed round, and so far, its clients include more than 300 AI companies, as well as academic institutions like Stanford, MIT, and Harvard.

"The secret behind V7 is this system that we call AutoAnnotate," the startup's CEO Alberto Rizzoli told us. He and his cofounder, Simon Edwardsson, thought it up based on obstacles encountered in their previous business venture: Aipoly, a computer-vision startup that allowed blind users to identify objects using their phone cameras. Though the software worked decently well, Rizzoli recalled, training data was the really difficult part to create.

So they created AutoAnnotate, a general-purpose AI model for computer vision. When a client comes to V7 with training data (images or videos they'd like an AI model to learn from), V7 detects the boundaries of each object in every frame (strawberries, for instance) and then uses AutoAnnotate to label them. According to the company's internal measurements, labeling a high-quality piece of training data could take a human up to 2 minutes, said Rizzoli, compared with about 2.5 seconds for AutoAnnotate.
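Taking the article's figures at face value (these are V7's own internal measurements, not independently verified), a quick back-of-the-envelope calculation shows what that time difference means per label:

```python
# Quoted labeling times: ~2 minutes per item for a human annotator versus
# ~2.5 seconds for AutoAnnotate. The figures are the article's, not measured.
human_seconds = 2 * 60      # 120 s per label by hand
auto_seconds = 2.5          # 2.5 s per label via AutoAnnotate
speedup = human_seconds / auto_seconds
print(f"~{speedup:.0f}x faster per label")
```

That works out to roughly a 48x per-item speedup, which is why a human-in-the-loop review step (rather than labeling from scratch) dominates the workflow described next.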


To create that training data, V7's model starts off with a continual-learning approach. That could begin with subject-matter experts in, say, horticulture drawing boxes around images of fruit and classifying them by ripeness level (e.g., a level-3 strawberry). The experts then either accept or correct each of the model's attempts to do the same.

After about 100 human-guided examples, a model is able to make relatively confident classifications, so it transitions into what Rizzoli calls a "co-pilot" approach: for any given choice, the AI provides its confidence score and the human makes corrections.
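The co-pilot loop described above amounts to confidence-based routing. A minimal sketch, with an invented threshold and invented image names (V7's actual thresholds and data formats are not public in this article):

```python
# Route model proposals: high-confidence labels are auto-accepted into the
# training set; low-confidence ones are queued for a human to correct.
CONFIDENCE_THRESHOLD = 0.9  # illustrative value, not V7's

def route_predictions(predictions, threshold=CONFIDENCE_THRESHOLD):
    """Split (item, label, confidence) proposals into accepted labels
    and a human review queue."""
    accepted, review_queue = [], []
    for item, label, confidence in predictions:
        if confidence >= threshold:
            accepted.append((item, label))
        else:
            review_queue.append((item, label, confidence))
    return accepted, review_queue

proposals = [
    ("img_001", "ripeness-3", 0.97),
    ("img_002", "ripeness-1", 0.62),   # low confidence: a human fixes this one
    ("img_003", "ripeness-2", 0.94),
]
accepted, review_queue = route_predictions(proposals)
```

As the model improves, more items clear the threshold and the human's job shrinks from labeling everything to spot-checking the uncertain tail.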

"Because it's training data, we always have a human verify it, but it becomes a faster process," Rizzoli said. Later, he added, "When they find something that is low-confidence, they fix it; otherwise it can go into the knowledge of the model, of the training set."

The company finds human experts via a network of business process outsourcing companies, agencies, and consultants, which Rizzoli claims can find a group of labelers on most topics within 48 hours.

Think of it like sending your pup to dog training camp and still having responsibilities upon its return. When a customer develops their fully trained model through V7, they'll still need to keep an eye on it and correct any glaring mistakes, but it should, in theory, be much more capable than before. For example, a newly trained model may be well-equipped to detect strawberry ripeness levels, but if it's somehow presented with a photo of a strawberry keychain, it won't know how to proceed.

Even if a model does become an expert in its domain, it's risky to use it for tasks besides those it's specifically trained for, since the results could be unpredictable.

"If you have a car that is trained on data from the United States, it's able to have certain weather conditions, it's not able to do certain road signs, and to figure out whether it can actually drive on snow or desert, you need to test it: you need to run it on a data set of desert-driving footage and check the accuracy," Rizzoli said. "Believe it or not, this sounds pretty straightforward, but there are almost no tools for doing this. And very few people are actually doing benchmarking on training data, because it's a new thing."
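The benchmarking step Rizzoli describes is conceptually simple: run the model over a held-out, domain-specific test set and report accuracy. A minimal sketch, where the "model" and the desert test set are trivial stand-ins invented for illustration:

```python
# Evaluate a trained model on out-of-domain data, as in Rizzoli's
# desert-driving example. The model and dataset here are toy stand-ins.
def accuracy(model, dataset):
    """Fraction of (features, label) examples the model labels correctly."""
    correct = sum(1 for features, label in dataset if model(features) == label)
    return correct / len(dataset)

def always_road(features):
    """Hypothetical degenerate model that predicts 'road' for everything."""
    return "road"

desert_set = [
    ("frame1", "road"),
    ("frame2", "sand"),
    ("frame3", "road"),
    ("frame4", "sand"),
]
acc = accuracy(always_road, desert_set)
```

A model that looked fine on its home domain can score poorly here, which is exactly the signal this kind of benchmark exists to surface before deployment.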

Continue reading here:

Training an AI system is time consuming, but this startup says it has a solution - Morning Brew


Chipotle Is Testing An AI-Driven Robot To Make Its Tortilla Chips – Forbes

Posted: at 3:08 am

Chipotle is testing Miso Robotics' technology, called Chippy, to make tortilla chips

Chipotle shared its tortilla chip recipe on TikTok in 2020, opening up the opportunity for fans to duplicate that recipe at home.

Now the company is exploring whether it can hand off that duty in restaurants to a robot named Chippy.

The company today announced a test with Miso Robotics that brings the artificial intelligence-driven Chippy to its Chipotle Cultivate Center in Irvine, California. Chippy is programmed to replicate Chipotle's exact recipe, with corn masa flour, water, sunflower oil, salt and lime juice.

The plan is to eventually integrate Chippy technology into a Chipotle restaurant in Southern California later this year. From there, the company will lean on employee and guest feedback before developing a broader rollout strategy.

According to a press release, Chippy is the first and only robot that uses artificial intelligence to make tortilla chips.

That said, it's certainly not the only robotics technology catching the interest of restaurant operators as they work to automate tasks and alleviate labor pressures.

The demand for back-of-house automation is evidenced by headlines throughout the past two years in particular. Perhaps the biggest headline is White Castle's recent deployment of the burger-flipping robot Flippy at more than 100 of its locations. The burger chain is also leveraging Miso's technology for the test and began testing Flippy in 2020.

Miso's Flippy Wings is also in test at Inspire Brands' Buffalo Wild Wings.

Further, Saladworks has been working with Chowbotics to deploy a salad-making robot called Sally, while Jamba has partnered with autonomous food platform Blendid to automate smoothies.

In fact, the cooking-robotics space is expected to grow by over 16% a year, reaching an estimated worth of $322 million by 2028.

During a recent interview, Chief Technology Officer Curt Garner said the company is leveraging everything from the internet of things to machine learning in an effort to run restaurants more efficiently.

"When you see us leaning into this space, it will be a question of: are there better tools to help our crews versus removing a task? Those are the kind of things we're looking at," he said.

Garner added that the company's goal is to enable crew members to focus on other tasks in the restaurant.

It's worth noting that autonomous technology isn't just being deployed in the kitchen, but across restaurant operations. Chipotle, for instance, is also testing autonomous delivery through its partnership with Nuro, while delivery-bot firm Starship has raised $100 million since January. Earlier this week, Bear Robotics secured $81 million to expand its robotics solution, Servi, which busses tables and delivers food and drinks.

Operators were testing the autonomous-robot waters before Covid-19, but the pandemic, like all things tech-related, accelerated the space as operators scrambled to find efficiencies. Simultaneously, customers were growing more used to such technologies and came to expect contactless solutions.

Perhaps the biggest draw, however, is the labor-saving component. Autonomous delivery, for instance, eliminates the need for a driver, which could help cut some of the steep costs that have hindered the delivery model.

Along those labor lines, Chipotle makes a lot of tortilla chips, and Chippy can ease that tedium in the kitchen.

According to the National Restaurant Association's 2022 State of the Industry report, operators expect labor shortages to continue this year and most (including 78% of quick-service operators) plan to leverage automation to help fill those gaps. Two-thirds of restaurant operators say technology and automation will become more common this year.

Chippy could undoubtedly drive the technology closer to a tipping point. If the technology clears Chipotle's stage-gate testing process, it has the potential to roll out to the chain's footprint of more than 3,000 restaurants.

Here is the original post:

Chipotle Is Testing An AI-Driven Robot To Make Its Tortilla Chips - Forbes


NHS rolls out AI tool which detects heart disease in 20 seconds – Healthcare IT News

Posted: at 3:08 am

The NHS has rolled out a new artificial intelligence (AI) tool which can detect heart disease in just 20 seconds while patients are in an MRI scanner.

A British Heart Foundation (BHF) funded study published in the Journal of Cardiovascular Magnetic Resonance concluded the machine analysis had superior precision to that of three clinicians. It would usually take a doctor 13 minutes or more to manually analyse the images after an MRI scan has been performed.

The technology is being used on more than 140 patients a week at University College London (UCL) Hospital, Barts Heart Centre at St Bartholomew's Hospital, and the Royal Free Hospital. Later this year it will be introduced to a further 40 locations across the UK and globally.

WHY IT MATTERS

Around 120,000 heart MRI scans are performed annually in the UK. Researchers say the AI will help with the backlog in vital heart care by saving around 3,000 clinician days a year, enabling healthcare professionals to see more waiting list patients. It can also give patients and doctors more confidence in results and assist decision-making about possible treatment and surgeries.
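The "3,000 clinician days" figure is roughly consistent with the numbers above. A quick check, assuming an 8-hour clinician working day (the working-day length is our assumption, not the article's):

```python
# Article figures: ~120,000 heart MRI scans a year, ~13 minutes of manual
# analysis each, versus ~20 seconds with the AI tool.
scans_per_year = 120_000
manual_minutes = 13.0
ai_minutes = 20 / 60            # 20 seconds, in minutes

minutes_saved = scans_per_year * (manual_minutes - ai_minutes)
days_saved = minutes_saved / (8 * 60)   # assumed 8-hour working day
print(f"~{days_saved:,.0f} clinician days saved per year")
```

Under that assumption the saving comes out just above 3,000 days a year, in line with the researchers' estimate.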

THE LARGER CONTEXT

There has been increasing interest in the role of AI to support disease diagnosis. The NHS AI LAB recently announced it has created a blueprint for testing the robustness of AI models, after running a proof-of-concept validation process on five AI models / algorithms using data from the National COVID-19 Chest Imaging Database (NCCID).

ON THE RECORD

Dr Sonya Babu-Narayan, BHF associate medical director, said: "This is a huge advance for doctors and patients, which is revolutionising the way we can analyse a person's heart MRI images to determine if they have heart disease at greater speed."

"The pandemic has resulted in a backlog of hundreds of thousands of people waiting for vital heart scans, treatment and care. Despite the delay in cardiac care, whilst people remain on waiting lists they risk avoidable disability and death. That's why it's heartening to see innovations like this, which together could help fast-track heart diagnoses and ease workload so that in future we can give more NHS heart patients the best possible care much sooner."

Dr Rhodri Davies, BHF-funded researcher at UCL and Barts Heart Centre, said: "Our new AI reads complex heart scans in record speed, analysing the structure and function of a patient's heart with more precision than ever before. The beauty of the technology is that it replaces the need for a doctor to spend countless hours analysing the scans by hand."

"We are continually pushing the technology to ensure it's the best it can be, so that it can work for any patient with any heart disease. After this initial roll-out on the NHS, we'll collect the data, and further train and refine the AI so it can be accessible to more heart patients in the UK and across the world."

Excerpt from:

NHS rolls out AI tool which detects heart disease in 20 seconds - Healthcare IT News
