
Category Archives: Ai

Podcast: The story of AI, as told by the people who invented it – MIT Technology Review

Posted: October 15, 2021 at 9:05 pm

Welcome to I Was There When, a new oral history project from the In Machines We Trust podcast. It features stories of how breakthroughs in artificial intelligence and computing happened, as told by the people who witnessed them. In this first episode, we meet Joseph Atick who helped create the first commercially viable face recognition system.

This episode was produced by Jennifer Strong, Anthony Green and Emma Cillekens with help from Lindsay Muscato. It's edited by Michael Reilly and Mat Honan. It's mixed by Garret Lang, with sound design and music by Jacob Gorski.

[TR ID]

Jennifer: I'm Jennifer Strong, host of In Machines We Trust.

I want to tell you about something we've been working on for a little while behind the scenes here.

It's called I Was There When.

Its an oral history project featuring the stories of how breakthroughs in artificial intelligence and computing happened as told by the people who witnessed them.

Joseph Atick: And as I entered the room, it spotted my face, extracted it from the background and it pronounced: "I see Joseph," and that was the moment where the hair on the back... I felt like something had happened. We were a witness.

Jennifer: We're kicking things off with a man who helped create the first facial recognition system that was commercially viable... back in the '90s.

[IMWT ID]

I am Joseph Atick. Today, I'm the executive chairman of ID for Africa, a humanitarian organization that focuses on giving people in Africa a digital identity so they can access services and exercise their rights. But I have not always been in the humanitarian field. After I received my PhD in mathematics, together with my collaborators I made some fundamental breakthroughs, which led to the first commercially viable face recognition. That's why people refer to me as a founding father of face recognition and the biometric industry. The algorithm for how a human brain would recognize familiar faces became clear while we were doing research, mathematical research, while I was at the Institute for Advanced Study in Princeton. But it was far from having an idea of how you would implement such a thing.

It was a long period of months of programming and failure and programming and failure. And one night, early morning, actually, we had just finalized a version of the algorithm. We submitted the source code for compilation in order to get a runcode. And we stepped out, I stepped out to go to the washroom. And then when I stepped back into the room, the source code had been compiled by the machine and had returned. And usually after you compile it runs it automatically, and as I entered the room, it spotted a human moving into the room and it spotted my face, extracted it from the background and it pronounced: "I see Joseph," and that was the moment where the hair on the back... I felt like something had happened. We were a witness. And I started to call on the other people who were still in the lab and each one of them would come into the room.

And it would say, "I see Norman." It would see Paul, it would see Joseph. And we would sort of take turns running around the room just to see how many it could spot in the room. It was, it was a moment of truth where I would say several years of work finally led to a breakthrough, even though theoretically, there wasn't any additional breakthrough required. Just the fact that we figured out how to implement it and finally saw that capability in action was very, very rewarding and satisfying. We had developed a team which was more of a development team, not a research team, which was focused on putting all of those capabilities into a PC platform. And that was the birth, really the birth of commercial face recognition; I would put it at 1994.

My concern started very quickly. I saw a future where there was no place to hide with the proliferation of cameras everywhere and the commoditization of computers and the processing abilities of computers becoming better and better. And so in 1998, I lobbied the industry and I said, we need to put together principles for responsible use. And I felt good for a while, because I felt we have gotten it right. I felt we've put in place a responsible use code to be followed by whatever is the implementation. However, that code did not live the test of time. And the reason behind it is we did not anticipate the emergence of social media. Basically, at the time when we established the code in 1998, we said the most important element in a face recognition system was the tagged database of known people. We said, if I'm not in the database, the system will be blind.

And it was difficult to build the database. At most we could build a thousand, 10,000, 15,000, 20,000, because each image had to be scanned and had to be entered by hand. The world that we live in today, we are now in a regime where we have allowed the beast out of the bag by feeding it billions of faces and helping it by tagging ourselves. Um, we are now in a world where any hope of controlling and requiring everybody to be responsible in their use of face recognition is difficult. And at the same time, there is no shortage of known faces on the internet, because you can just scrape, as has happened recently by some companies. And so I began to panic in 2011, and I wrote an op-ed article saying it is time to press the panic button, because the world is heading in a direction where face recognition is going to be omnipresent and faces are going to be everywhere available in databases.

And at the time people said I was an alarmist, but today they're realizing that it's exactly what's happening today. And so where do we go from here? I've been lobbying for legislation. I've been lobbying for legal frameworks that make it a liability for you to use somebody's face without their consent. And so it's no longer a technological issue. We cannot contain this powerful technology through technological means. There has to be some sort of legal frameworks. We cannot allow the technology to go too much ahead of us. Ahead of our values, ahead of what we think is acceptable.

The issue of consent continues to be one of the most difficult and challenging matters when it comes to technology. Just giving somebody notice does not mean that it's enough. To me, consent has to be informed. They have to understand the consequences of what it means. And not just to say, well, we put a sign up and this was enough. We told people, and if they did not want to, they could have gone anywhere.

And I also find that there is... it is so easy to get seduced by flashy technological features that might give us a short-term advantage in our lives. And then down the line, we recognize that we've given up something that was too precious. And by that point in time, we have desensitized the population and we get to a point where we cannot pull back. That's what I'm worried about. I'm worried about the fact of face recognition, through the work of Facebook and Apple and others. I'm not saying all of it is illegitimate. A lot of it is legitimate.

We've arrived at a point where the general public may have become blas and may become desensitized because they see it everywhere. And maybe in 20 years, you step out of your house. You will no longer have the expectation that you wouldn't be not. It will not be recognized by dozens of people you cross along the way. I think at that point in time that the public will be very alarmed because the media will start reporting on cases where people were stalked. People were targeted, people were even selected based on their net worth in the street and kidnapped. I think that's a lot of responsibility on our hands.

And so I think the question of consent will continue to haunt the industry. And until that question is resolved... maybe it won't be resolved. I think we need to establish limitations on what can be done with this technology.

My career also has taught me that being too far ahead is not a good thing, because face recognition, as we know it today, was actually invented in 1994. But most people think that it was invented by Facebook and the machine learning algorithms which are now proliferating all over the world. I basically, at some point in time, had to step down as a public CEO because I was curtailing the use of technology that my company was going to be promoting, because of the fear of negative consequences to humanity. So I feel scientists need to have the courage to project into the future and see the consequences of their work. I'm not saying they should stop making breakthroughs. No, you should go full force, make more breakthroughs, but we should also be honest with ourselves and basically alert the world and the policymakers that this breakthrough has pluses and minuses. And therefore, in using this technology, we need some sort of guidance and frameworks to make sure it's channeled for positive applications and not negative ones.

Jennifer: I Was There When... is an oral history project featuring the stories of people who have witnessed or created breakthroughs in artificial intelligence and computing.

Do you have a story to tell? Know someone who does? Drop us an email at podcasts@technologyreview.com.

[MIDROLL]

[CREDITS]

Jennifer: This episode was taped in New York City in December of 2020 and produced by me with help from Anthony Green and Emma Cillekens. We're edited by Michael Reilly and Mat Honan. Our mix engineer is Garret Lang with sound design and music by Jacob Gorski.

Thanks for listening, I'm Jennifer Strong.

[TR ID]


Smart Farming With Drones and AI – sUAS News

Posted: at 9:05 pm

FlyNex and Pheno-Inspect combine their solutions to digitize and automate work processes in agriculture. AI-based image capture with drones and analysis opens up new opportunities for sustainable, digital farming with a simultaneous increase in yields.

Hamburg: Within the next months, as the soil gets cooler and the air gets humid, plant diseases and pest plants will be a hassle for many farming organizations. Growers need to observe the spread of diseases to take countermeasures, and plant breeders need to develop new, more tolerant varieties resistant to diseases that affect yields. Using high-resolution drone imaging combined with Artificial Intelligence, the detection of, e.g., leaf spot disease symptoms can proceed faster and even be automated.

Smart Farming, meaning the optimization of work processes with digital solutions and data, is growing considerably. Field robots for sowing seeds, drones for surveying, or automated feed distribution for cows are no longer unusual. However, the potential of drones in this sector has not yet been fully exploited. Because of that, the tech startups FlyNex and Pheno-Inspect have worked on a solution for identifying plant diseases as early as possible, mapping different plant species in the field, scoring plant conditions, and more.

FROM DRONE TO DASHBOARD

The main goal of bringing tech to agriculture in this case: drones take high-resolution images of field areas with a special camera. The image data is then analyzed by the AI-powered software of Pheno-Inspect to detect, e.g., plant diseases, weeds, or the number of grains in a short time. The farm receives a live map of the condition of the respective areas.

Compared to the typical process, where workers perform this task by actually walking through the field and identifying or counting by hand, farmers can benefit from time and cost savings, and especially from higher yields. Early detection of plant diseases or pest plants by drones allows faster and earlier countermeasures. Thus, the loss of output is significantly reduced.

AUTOMATED AND INTEGRATED DATA MEASUREMENT

The deployment of drones has a key long-term advantage. Once a flight mission is planned on the FlyNex Enterprise Suite and the flight pattern is recorded, the specific routes can be repeated as often as needed. The devices fly automatically and create the desired imagery for the AI engine of Pheno-Inspect. By automating the scan of large areas with drone images in high resolution, or even with multispectral sensor technology, precise data can be collected and analyzed in just a few hours. This is crucial, for example, for active ingredient management and ultimately for continuous yield increases. For the first time, drones make it possible to deploy automated image AI on a large scale, providing agriculture with essential information quickly and cost-effectively.

The widespread use of drones and AI technology in agriculture is no longer a vision of the future. Not least, this is due to increasing challenges such as climate change and fast-growing demand. Without the support of technology, agriculture cannot keep up. Secondly, the tangible benefits of cost and time savings are obvious, said Andreas Dunsch, CEO and co-founder of FlyNex.

With the leading commercial drone project management software in Europe, FlyNex enables companies and organizations worldwide to use unmanned aerial systems for effective data collection. Every step of the data collection process can be managed on the platform, from initial planning through flying to evaluating the captured image and measurement data with third-party software like that from Pheno-Inspect.

The German start-up Pheno-Inspect provides state-of-the-art image-processing software, optimized for the agricultural sector, for fully automatic evaluation using Artificial Intelligence. The algorithm can be adapted to each use case individually and made available to any farmer worldwide.

More information and a free demo of the solution can be found at https://www.flynex.io/agriculture/.


Former Intel AI boss Naveen Rao is now counting the cost of machine learning, literally – The Register

Posted: at 9:05 pm

A former head of artificial intelligence products at Intel has started a company to help companies cut overhead costs on AI systems.

Naveen Rao, CEO and co-founder of MosaicML, previously led Nervana Systems, which was acquired by Intel for $350m. But like many Intel acquisitions, the marriage didn't pan out, and Intel killed the Nervana AI chip last year, after which Rao left the company.

MosaicML's open source tools focus on implementing AI systems based on cost, training time, or speed-to-results. They do so by analyzing an AI problem relative to the neural net settings and hardware, then paving an efficient path to optimal settings while reducing electricity costs.

One component is Composer, which provides the building blocks on which AI applications can be efficiently trained. MosaicML developed these methods after months of researching common settings in computer vision models that include ResNets and natural language processing models like Transformer and GPT.

But developers will ultimately need to choose the best approach. That is where the second component, Explorer, steps in. The tool has a visual interface that provides fine-grained details on trade-offs such as result quality, training time, and cost, and users can filter results by hardware type, cloud, and technique.
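Conceptually, the filtering Explorer offers reduces to constrained selection over candidate training configurations. The following is a hypothetical stand-in for that kind of logic, with invented names and numbers (this is not MosaicML's actual API or data):

```python
# Illustrative only: pick the cheapest training recipe that meets a
# time budget, optionally restricted to a hardware type. Candidate
# names, hardware labels, hours, and dollar costs are all made up.

candidates = [
    # (name, hardware, training_hours, dollars)
    ("baseline",       "gpu-a100", 30.0, 900.0),
    ("blurpool+mixup", "gpu-a100", 22.0, 660.0),
    ("all-methods",    "gpu-v100", 26.0, 500.0),
]

def cheapest_within(candidates, max_hours, hardware=None):
    """Return the lowest-cost candidate meeting the time budget, or None."""
    ok = [c for c in candidates
          if c[2] <= max_hours and (hardware is None or c[1] == hardware)]
    return min(ok, key=lambda c: c[3], default=None)

print(cheapest_within(candidates, max_hours=28))
print(cheapest_within(candidates, max_hours=28, hardware="gpu-a100"))
```

With a 28-hour budget, the cheaper V100 recipe wins overall, while restricting to A100 hardware selects the faster-methods recipe instead.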

"We change the learning algorithms themselves to make them use less compute to arrive at the result," Rao told The Register.

AI systems can be inefficient and costly, and more thought needs to be put into economizing machine learning, Rao said. "We find Nvidia GPUs give us the fastest and easiest way to get going. We plan on adding support for other chips in the future," he explained.

The library works within PyTorch right now, and support for Tensorflow will be added later, Rao said.

AI isn't a one-size-fits-all approach, and inefficiencies in both software and hardware are considerations when accounting for total cost of ownership, said Dan Hutcheson, an analyst at VLSI Research.

"The amount of computation required to train the largest models is estimated to be growing >5x every year, yet hardware performance per dollar is growing at only a fraction of that rate," MosaicML said in a blog, citing a 2018 study by OpenAI.

In a study last year, OpenAI said algorithmic progress has delivered greater AI speedups than improvements in hardware efficiency.

Many systems use racks of power-hungry Nvidia GPUs for machine learning. Rao is instead a proponent of a distributed approach, with AI processing split over a network of cheaper chips and components that include low-cost DDR memory and PCI-Express interconnects.

In a tweetstorm last week, he took a jab at monolithic AI chips such as the WSE-2 produced by Cerebras Systems Inc., calling them inefficient relative to performance-per-dollar.

The distributed approach reflects a fundamental flaw in understanding the cost of doing AI at a chip level and scaling performance, Cerebras CEO Andrew Feldman told The Register.

"The real waste - it's got nothing to do with the individual chip level. To get 10x the performance you're going to spend 100 or 1,000 times the power," Feldman said.

Feldman invoked Moore's Law, saying "What Intel showed over decades is that you can build a great business if you can keep your prices flat and doubling performance every three to four years."

One angel investor in MosaicML, Steve Jurvetson of Future Ventures, floated in a tweet the idea of a "Mosaic's Law" corollary for measuring advances in algorithms per dollar spent.

Venture capitalists have poured $37m into MosaicML, with investors including Lux Capital, DCVC, Playground Global, AME, Correlation and E14.


Aicadium and SambaNova Partner to Bring AI Hardware Solution to Singapore – HPCwire

Posted: at 9:05 pm

PALO ALTO, Calif. and Singapore, Oct. 15, 2021. Aicadium, a global technology company founded by Temasek dedicated to creating and scaling AI solutions, and SambaNova, the company building the industry's most advanced software, hardware, and services to run AI applications, today announced a partnership to bring SambaNova's advanced AI hardware to Singapore. In conjunction with Aicadium's AI platform, the solution is available for companies in Singapore seeking to deploy machine learning applications such as natural language processing, computer vision, recommendation, and more.

"Partnering with Aicadium to deliver advanced AI solutions to enterprises and organizations in Singapore pushes SambaNova closer to our goal of enabling global access to AI," said Marshall Choy, Vice President of Product for SambaNova. "Businesses around the world are under pressure to adopt and accelerate AI initiatives. With Aicadium's expertise and our best-in-class AI computation, we are accelerating machine learning and AI adoption throughout the region together."

SambaNova's Dataflow-as-a-Service utilizing DataScale is a completely integrated software and hardware platform delivering unrivaled performance, accuracy, scale, and ease of use, built on SambaNova's Reconfigurable Dataflow Architecture (RDA). SambaNova DataScale's software-defined-hardware approach is optimized from algorithms to silicon, delivers efficiency, and is built with a highly flexible and scalable architecture. DataScale can scale seamlessly from one to hundreds of systems to meet the demands of modern AI computing.

"Until now, the reality for small companies has been that they have to use transfer learning to benefit from very complex deep learning algorithms," said Dr. Rainer Burkhardt, Senior Vice President of Engineering at Aicadium. "With SambaNova's capabilities for training deep neural networks from scratch, combined with Aicadium's AI platform and access to large datasets, we can really make a difference solving common issues such as high-resolution images for visual analytics problems and accelerated processing speeds."

Launched earlier this year, Aicadium is Temasek's global AI center of excellence, empowering companies to achieve better business outcomes through the adoption and delivery of AI technologies and solutions. It aims to partner with enterprises seeking to develop and scale end-to-end AI solutions to improve business outcomes. The firm leverages its global team of data scientists and engineers, a repeatable and scalable process, and a platform of AI algorithms, models, and tools to help clients achieve operational AI within their organizations.

"This exciting partnership with SambaNova exemplifies the collective strength and capabilities of the Temasek ecosystem, connecting firms within our portfolio and beyond to deliver better business outcomes," said Dr. Michael Zeller, Head of AI Strategy & Solutions at Temasek and Board member of Aicadium.

The partnership enables end-users in industry, government, and higher education to benefit from complete AI solutions to solve their most urgent problems by utilizing extensible AI platform services, locally hosted in Singapore and available throughout the ASEAN region and beyond.

About Aicadium

Aicadium is a global technology company dedicated to creating and scaling AI solutions by leveraging deep expertise and a common machine learning platform. As Temasek's global center of excellence in Artificial Intelligence, we partner with companies to build and operationalize impactful AI solutions across a wide variety of industries and use cases. Our team includes expert data scientists, software engineers, and AI business leaders in Singapore and San Diego, CA. As a team, we place the highest priority on the responsible adoption, development, and delivery of AI technologies and solutions, with the goal of delivering improved business outcomes that usher in a more resilient and inclusive world.

For more information, visit the company website: https://aicadium.ai.

About SambaNova Systems

SambaNova Systems is an AI innovation company that empowers organizations to deploy best-in-class solutions for natural language processing, computer vision, recommendation systems, and AI for science with confidence. SambaNova's flagship offering, Dataflow-as-a-Service, helps organizations rapidly deploy AI in days, unlocking new revenue and boosting operational efficiency. SambaNova's DataScale is an integrated software and hardware system using Reconfigurable Dataflow Architecture (RDA), along with open standards and user interfaces. Headquartered in Palo Alto, California, SambaNova Systems was founded in 2017 by industry luminaries, hardware, and software design experts from Sun/Oracle and Stanford University. Investors include SoftBank Vision Fund 2, funds and accounts managed by BlackRock, Intel Capital, GV, Walden International, Temasek, GIC, Redline Capital, Atlantic Bridge Ventures, Celeste, and several others. For more information, please visit us at sambanova.ai or contact us at info@sambanova.ai.

Source: Aicadium


Q&A: Enroute AI Wants to Build Delivery’s ‘Fingerprint of the City’ – Yahoo Finance

Posted: at 9:05 pm

Neil Fernandes is founder of Enroute AI. Fernandes studied operations research during a graduate program in industrial engineering at the University of Michigan at Ann Arbor. During that time, he became interested in developing algorithms for transportation, which eventually led him to founding San Francisco, California-based Enroute AI. The company, which employs 10 people, powers approximately 50,000 deliveries every month and is releasing new features nearly every week.

Fernandes agreed to answer questions from Modern Shipper on Enroute AI and the last-mile routing segment. (Answers have been edited for style and brevity.)

MODERN SHIPPER. Tell me a little about Enroute AI.

FERNANDES. Enroute AI is a SaaS solution for simplifying last-mile logistics. At the core of our offering is a dynamic route optimization engine that uses AI to plan delivery routes that get even better with time. We provide our clients with a cloud-based dashboard for routing, real-time tracking, and dispatching. Enroute AI also includes intuitive iOS and Android apps for delivery drivers, proactive delivery notifications to end customers, and a robust API service for various integrations.

MODERN SHIPPER. How did the idea for Enroute AI come about?

FERNANDES. The idea for Enroute came about when I was working as an analyst at a supply chain consulting firm run by an MIT professor. From the various projects that I had done, I realized that most companies were struggling with their last-mile operations, including the Fortune 500s who had access to state-of-the-art routing software. The software would plan routes that were not achieving the desired results in the real world. Almost none of the deliveries ended up being on time. The routes were being planned without considering traffic conditions, or the actual time it takes to make a delivery.

While developing Enroute AI, I knew it had to be super easy to use and intuitive. I didn't want our customers to have to go through long implementation cycles, or significant training. In developing the product, I sat with dispatchers and rode on delivery trucks for months to see what exactly was causing delays and how we could build something that a driver and dispatcher could easily use.


As I started working with small and medium businesses, I realized that their struggles with last-mile deliveries were even worse than the bigger companies.

A look at one of the screens in Enroute AI's system. (Photo: Enroute AI)

MODERN SHIPPER. Can you tell me a little about Enroute AI's approach to last-mile logistics?

FERNANDES. Our philosophy while planning delivery routes is to make them very close to reality. Our ultimate goal is to be able to plan routes such that our predicted ETAs for stops on the route are within a couple of minutes of the actual delivery. This means that we have to predict traffic better, understand infrastructure limitations like one-way streets and road closures, and allow room for last-minute surprises. We also have to be really good at estimating how long it actually takes to complete the delivery once the driver gets to the drop-off location. For example, a delivery to the front porch of a house in the suburbs might take a minute or two. However, a delivery to the 43rd floor at 500 Wall Street is going to take much longer.

We strongly believe that software should be easy and intuitive to use. Most last-mile software has a long implementation time and a steep learning curve. It's not built for drivers and dispatchers who are usually not technically savvy. Most last-mile software is built by software engineers with little to no real-world logistics background. However, we understand the nuances of the delivery workflow. We designed Enroute AI to be intuitive, easy to use, and to automate as many tasks for the drivers as possible.

Finally, we built Enroute AI to be dynamic and flexible. Exceptions happen every day in the last-mile world. We make it easy for dispatchers and drivers to respond to last-minute customer requests or disruptions in the schedule. This dynamic route optimization capability is a key differentiator for us.
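The ETA goal Fernandes describes above boils down to arithmetic: a stop's predicted arrival is the accumulated travel time plus the learned service time spent at each prior stop. A minimal illustrative sketch (the names and numbers are hypothetical, not Enroute AI's actual model):

```python
# Toy ETA model: each stop contributes drive time plus a learned
# per-location service time (a suburban porch takes minutes; the
# 43rd floor of an office tower takes much longer). Numbers invented.
from dataclasses import dataclass

@dataclass
class Stop:
    name: str
    travel_minutes: float    # drive time from the previous stop
    service_minutes: float   # learned time to complete the hand-off

def predict_etas(route, start_minute=0.0):
    """Return (stop name, ETA in minutes from route start) per stop."""
    etas = []
    clock = start_minute
    for stop in route:
        clock += stop.travel_minutes    # arrive at the drop-off
        etas.append((stop.name, clock))
        clock += stop.service_minutes   # complete the delivery
    return etas

route = [
    Stop("suburban porch", travel_minutes=12, service_minutes=2),
    Stop("500 Wall St, 43rd floor", travel_minutes=9, service_minutes=18),
]
print(predict_etas(route))
```

Note how the second stop's ETA absorbs the first stop's service time: ignoring service time is exactly how static planners drift from reality as the route progresses.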

MODERN SHIPPER. You've spoken with end customers, delivery drivers and supply chain professionals, what are their biggest pain points with current technology?

FERNANDES. Each of these parties has specific pain points with current routing technology. Customers don't like it when they have to take time off work to receive a delivery and they are unsure of when it will actually arrive. Almost everyone we talked to prefers a late delivery with good visibility to an uninformed delivery where they are kept in the dark.

I did not fully appreciate how hard a delivery driver's job is until I started riding along with them on delivery routes. They have to worry about a ton of things simultaneously: traffic, parking, finding the exact entrance to the building, knowing where to leave the package once they are inside the building, etc. Their biggest pain point was that their existing routing technology was static and not responsive to real-world conditions, leading them into areas of high traffic, down roads that are closed to traffic, or asking them to make unsafe turns on busy roads. Other frustrating things for them were that it would route them to the leasing office of an apartment complex instead of the actual apartment, or the reception of an office building instead of the loading dock, which in rush-hour downtown traffic could cost the driver half an hour. Also, most routing apps only work when a driver is connected to the internet. Their routing app stops working when they are downtown, inside high rises, basements, or in remote locations.

Shippers who operate their own fleet have trouble understanding the true cost of a delivery. Their routing software is not able to give them an estimate of how much a particular stop on the route costs them, which results in them losing money on some stops. One of our customers uses our software to decide which stops should be offloaded to a third-party carrier since we provide visibility into per stop costs.

Another pain point for our clients is visibility and billing issues when dealing with third-party carriers. Many of our clients use a mix of internal and external fleets for their deliveries. This combination helps them scale and expand to new geographies. However, their route planning software does not give them visibility into external carriers' deliveries. This lack of visibility costs them valuable customer support time when customers call support asking for ETAs. It also results in them overpaying their carriers for all the missed and late deliveries. We integrate with many carriers to solve the visibility issue.

MODERN SHIPPER. Technology providers are always trying to solve for all possible conditions, but even the most robust and intelligent solutions run into real-world challenges. How has the Enroute AI technology been adapted to meet these ever-changing needs?

FERNANDES. We realized pretty quickly that each customer's way of running their business is different. We did not want to be stuck with a platform that needs to be re-architected for each new customer or vertical market. Also, from a software perspective, things can get messy when maintaining multiple codebases for different customers. What we settled on is building a core that is common to all customers. Things like validating and geocoding addresses, optimizing the sequence of stops, notifications etc. Everything else is built as plugins (adaptors). This enables us to cater to each customer's unique needs.

We also know that software and AI can only get you 90% of the way there; they can't account for every single eventuality that may happen when making pickups and deliveries. We try to mitigate some of that by making it very easy to modify routes by dragging and dropping markers on a map, to move stops between vehicles, automating notifications to the customers, etc. All along, our AI is learning and planning better routes with each delivery.
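The "common core plus plugins" design described above is a standard pattern, and can be sketched generically. This is a hypothetical illustration of the pattern, not Enroute AI's code; the hook names and the sample plugin are invented:

```python
# Shared core pipeline with customer-specific adaptors registered as
# plugins, so new verticals don't require re-architecting the core.
# All step names and the example plugin are illustrative.

CORE_STEPS = ["validate_address", "geocode", "optimize_sequence", "notify"]
PLUGINS = {}

def plugin(customer):
    """Decorator: register a customer-specific adaptor for the pipeline."""
    def register(fn):
        PLUGINS.setdefault(customer, []).append(fn)
        return fn
    return register

def plan_route(customer, stops):
    # Core pipeline shared by every customer...
    result = {"customer": customer, "stops": list(stops),
              "steps": list(CORE_STEPS)}
    # ...then any customer-specific adaptors run on top of it.
    for adaptor in PLUGINS.get(customer, []):
        result = adaptor(result)
    return result

@plugin("grocery_chain")
def add_cold_chain_check(result):
    # Invented vertical-specific step, runs only for this customer.
    result["steps"].append("verify_refrigeration")
    return result

print(plan_route("grocery_chain", ["stop A"])["steps"])
print(plan_route("other_customer", ["stop A"])["steps"])
```

The point of the design is the second call: a customer with no registered adaptors gets exactly the core pipeline, with one codebase serving both.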

MODERN SHIPPER. You've mentioned that Enroute AI's artificial intelligence is able to capture small details such as which driveway is best to use when entering a location. How are you able to collect this type of granular data?

FERNANDES. Yes, we gather and process a lot of data that captures the nuances of making a delivery. We call that our "fingerprint of the city." We noticed that a driver who is familiar with a delivery location can complete a delivery in a fraction of the time it takes a new driver. This is because the experienced driver knows the best place to park, which building entrance to take, and where specifically the package needs to be dropped off. None of the routing software I'm aware of captures this information. Making location insights available to drivers is a key factor in ensuring deliveries happen on time. For example, we work with a company making deliveries in Manhattan. They were using a routing solution that sent the driver to the building's Google address, which happens to be the front desk, instead of the loading dock. With one-way streets and rush-hour traffic, this can cause delays of up to half an hour.

We use multiple providers for location data; however, our biggest source is our mobile app for drivers. We constantly collect GPS pings from the app and use machine learning to recognize patterns in that information. We are able to learn where the driver usually parks, what the traffic patterns look like in the neighborhood, where the exact entrance of the building is, and so on. We are also experimenting with collecting WiFi and Bluetooth network information to improve location accuracy within buildings and dense urban environments, where relying on GPS alone can sometimes show your location as being a block away.
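As an illustration of the kind of pattern recognition described here, the sketch below infers a "usual parking spot" by averaging low-speed pings. The speed threshold and data shapes are assumptions for the example, not Enroute AI's model.

```python
# Illustrative only: estimate a driver's usual parking spot from GPS pings
# by averaging the points where the vehicle is effectively stationary.

def learn_parking_spot(pings, speed_threshold_mps=0.5):
    """pings: list of (lat, lon, speed_mps) tuples; returns the centroid
    of low-speed (dwell) points, or None if the driver never stopped."""
    dwell = [(lat, lon) for lat, lon, speed in pings if speed < speed_threshold_mps]
    if not dwell:
        return None
    lat = sum(p[0] for p in dwell) / len(dwell)
    lon = sum(p[1] for p in dwell) / len(dwell)
    return (lat, lon)

pings = [
    (40.7128, -74.0060, 8.0),  # driving past
    (40.7130, -74.0055, 0.1),  # parked
    (40.7131, -74.0056, 0.0),  # parked
]
spot = learn_parking_spot(pings)
print(spot)  # centroid of the two parked pings
```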

Data is only half of the story; our secret sauce lies in processing this information to make sense of it. If you only look at raw GPS pings downtown, for example, you might think that your driver is driving through buildings and flying across blocks. We need to filter out those outlier data points and rely only on points that make sense. When you deal with AI algorithms, garbage in equals garbage out. We spend a lot of effort making sure we feed our algorithms the right data.
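A simple version of that outlier filter might drop any ping whose implied speed from the previous accepted point is physically implausible. The distance approximation and speed cap below are assumptions for illustration:

```python
import math

# Illustrative outlier filter: reject GPS pings that imply impossible speeds,
# e.g. a driver "driving through buildings and flying across blocks."

def filter_pings(pings, max_speed_mps=40.0):
    """pings: time-ordered list of (t_seconds, lat, lon). Keeps plausible points."""
    def meters(a, b):
        # Rough equirectangular distance; adequate at city scale.
        dlat = (b[1] - a[1]) * 111_000
        dlon = (b[2] - a[2]) * 111_000 * math.cos(math.radians(a[1]))
        return math.hypot(dlat, dlon)

    kept = [pings[0]]
    for ping in pings[1:]:
        prev = kept[-1]
        dt = ping[0] - prev[0]
        if dt > 0 and meters(prev, ping) / dt <= max_speed_mps:
            kept.append(ping)
        # else: implied speed is too high, so treat the point as noise
    return kept

raw = [(0, 40.0000, -74.0), (1, 40.0001, -74.0), (2, 40.0100, -74.0)]
clean = filter_pings(raw)
print(len(clean))  # the third ping implies roughly 1,100 m/s and is dropped
```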

MODERN SHIPPER. Last-mile drivers face any number of obstacles each day: traffic congestion, missed delivery windows, weather, etc. What approach does Enroute AI take with its solution to minimize disruptions from real-world, often unforeseen situations, ensure deliveries are made on time, and keep drivers and vehicles efficient?

FERNANDES. You're right, last-mile delivery is a hard job. As a delivery driver you deal with multiple unforeseen obstacles every single day. I still regularly ride along with delivery drivers during their daily routes. Every time they find something hard to do, I start thinking of ways that we can simplify the job for them.

Here are a few things we do to minimize disruptions and make it easier on delivery drivers:

Drivers hate being stuck in traffic, so we help them by taking predictive traffic into account when planning routes.

For unpredictable events such as accidents, we continuously monitor real-time traffic patterns and suggest a better sequence well ahead of time. For instance, Enroute AI has customers in the Boston area. One of the key things about Boston is that its infrastructure is quite old. You have to match the characteristics of your fleet equipment to the infrastructure constraints of the city. For example, several times a year, a driver will try to take a 12-foot-high truck under a 10-foot-high bridge on Storrow Drive.
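A feasibility check like the Storrow Drive example can be sketched as a simple clearance filter. The clearance values and road identifiers below are made up for the example:

```python
# Illustrative sketch: match vehicle height against posted bridge clearances
# so a 12-foot truck is never routed under a 10-foot bridge.

BRIDGE_CLEARANCE_FT = {
    "storrow_drive": 10.0,  # assumed value for the example
    "i90": 16.5,
}

def passable_roads(vehicle_height_ft, roads, margin_ft=0.5):
    """Return only roads whose clearance covers the vehicle plus a safety margin."""
    return [
        road for road in roads
        if BRIDGE_CLEARANCE_FT.get(road, float("inf")) >= vehicle_height_ft + margin_ft
    ]

print(passable_roads(12.0, ["storrow_drive", "i90"]))  # prints ['i90']
```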

A missed delivery window is a tricky situation that is sometimes unavoidable. We take a proactive approach, unlike the major carriers, who inform you that your delivery is late after the time window has already passed. Most of the time, we know well ahead of time when a driver isn't going to make it within the window. In those situations, we proactively notify customers and managers that the stop is going to be late.
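The proactive approach described here boils down to comparing the projected ETA against the promised window before the window closes, not after. A minimal sketch, with an invented message format:

```python
from datetime import datetime, timedelta

# Minimal sketch: flag a stop as soon as its projected ETA slips past the
# promised delivery window, instead of reporting lateness after the fact.

def check_window(projected_eta, window_end):
    if projected_eta > window_end:
        late_minutes = int((projected_eta - window_end).total_seconds() // 60)
        return f"notify: projected late by {late_minutes} minutes"
    return "on track"

window_end = datetime(2021, 10, 15, 14, 0)
projected_eta = datetime(2021, 10, 15, 13, 0) + timedelta(minutes=90)  # traffic delay
print(check_window(projected_eta, window_end))  # flagged well before 14:00
```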

Dynamic routing is a key differentiator for Enroute AI. Existing solutions are great for planning out a route. When the day starts and the drivers have started their routes, changes occur. We can handle real-time changes to traffic patterns.

Our client's customers have a tracking screen to see where their package is in real time along with the latest ETA. With other current solutions, the customer calls customer support who in turn has to call the driver, who looks up the map and estimates the ETA, disrupting the usual course of making deliveries. We avoid that situation through tracking.

We make it easy to rebalance routes in the middle of the day, such as moving a pickup stop from one driver to another.

If the customer isn't at home, you end up paying twice as much for the delivery, in addition to all the customer support overhead that goes with rescheduling. Our automated notifications confirm that the customer will be ready when the driver arrives.

MODERN SHIPPER. Last-mile delivery efficiency starts long before a package is loaded into a vehicle and requires coordination across a number of different departments and businesses. What obstacles exist to the integration of these disparate systems?

FERNANDES. Yes, from the outside it's not apparent how many things have to come together to make an on-time delivery possible. It looks and sounds simple until you start to understand all of the systems (ordering, billing, etc.) involved. It's a very daunting and complex problem. There needs to be integration with many systems to ensure efficient delivery. For example, we had to build an integration with a furniture company's e-commerce ordering system to make sure they were not promising customers delivery on days when they had no capacity.

This problem also extends downstream into the billing systems as well. Making sure that payments are made according to the contract is pretty daunting: fuel surcharges, waiting times, the premium for white-glove services, etc. Billing was such a big pain point for our clients that we started integrating our software into their billing system to process rules on the rate card automatically (time, distance, number of packages, fuel surcharge, etc.).
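As an illustration of automating a rate card, the sketch below applies the kinds of rules mentioned (distance, packages, waiting time, white-glove premium, fuel surcharge). Every rate and field name here is invented:

```python
# Illustrative rate-card engine; all rates and rule names are invented.

def invoice_line(delivery, rate_card):
    subtotal = (
        delivery["miles"] * rate_card["per_mile"]
        + delivery["packages"] * rate_card["per_package"]
        + delivery["wait_minutes"] * rate_card["per_wait_minute"]
    )
    if delivery.get("white_glove"):
        subtotal += rate_card["white_glove_premium"]
    # Fuel surcharge applied as a percentage on top of the subtotal.
    return round(subtotal * (1 + rate_card["fuel_surcharge_pct"]), 2)

rate_card = {
    "per_mile": 2.0,
    "per_package": 1.5,
    "per_wait_minute": 0.5,
    "white_glove_premium": 25.0,
    "fuel_surcharge_pct": 0.10,
}
delivery = {"miles": 10, "packages": 4, "wait_minutes": 12, "white_glove": True}
print(invoice_line(delivery, rate_card))  # prints 62.7
```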

The biggest problem we face is that many carriers and shippers run on legacy platforms that lack an integration layer (they don't support API integration). Getting data in and out of these systems involves a convoluted process of exporting data from their system and importing it into ours. This process often requires manual intervention and can be error-prone. We also encounter clients who use TMSs and order management systems that support integrations, but these systems lock down that functionality to thwart competition.

What makes integration even more confusing is that carriers and shippers use different terminology for the same process, depending on their industry. Even within the same industry, carriers and shippers use different terminology. For example, some shippers refer to white-glove service as premium service, whereas some carriers refer to it as special deliveries. As a result, we can't build a one-size-fits-all integration. We need to customize the integration for each shipper and carrier combination, which can be pretty daunting.
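One way to handle that terminology drift is a per-partner synonym map that normalizes incoming labels to a single internal vocabulary. The vocabulary below is illustrative:

```python
# Illustrative synonym map: different partners' names for the same service
# level are normalized to one internal term.

SERVICE_SYNONYMS = {
    "white glove": "white_glove",
    "premium service": "white_glove",
    "special delivery": "white_glove",
    "special deliveries": "white_glove",
    "standard": "standard",
}

def normalize_service(raw_term):
    """Map a partner's label to the internal term, or 'unknown' if unmapped."""
    return SERVICE_SYNONYMS.get(raw_term.strip().lower(), "unknown")

print(normalize_service("Premium Service"))     # prints white_glove
print(normalize_service("Special Deliveries"))  # prints white_glove
```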

Our approach to dealing with this integration complexity is to architect our software to be flexible and extensible. As mentioned earlier, we have core functionality that is common to all our clients. All the customizations and one-off integrations are built as plugins (adaptors). When we notice that a plugin is used by enough clients, we fold it into our core functionality.



© 2021 Benzinga.com. Benzinga does not provide investment advice. All rights reserved.

Continue reading here:

Q&A: Enroute AI Wants to Build Delivery's 'Fingerprint of the City' - Yahoo Finance

Posted in Ai | Comments Off on Q&A: Enroute AI Wants to Build Delivery’s ‘Fingerprint of the City’ – Yahoo Finance

Sorry former Pentagon expert, but China is nowhere near winning the AI race – The Next Web

Posted: at 9:05 pm

Nicolas Chaillan, the Pentagon's former Chief Software Officer, is on a whirlwind press tour to drum up as much fervor as possible for his radical assertion that the US has already lost the AI race against China.

Speaking to the Financial Times in his first interview after leaving his post at the Pentagon, Chaillan said:

"We have no competing fighting chance against China in 15 to 20 years. Right now, it's already a done deal."

Chaillan's departure from the Pentagon was preceded by a blistering letter in which he signaled he was quitting out of frustration over the government's inability to properly implement cybersecurity and artificial intelligence technologies.

And, now, he's telling anyone who will listen that the US has already lost a war to China that hasn't even happened yet. He's essentially saying that the US is a sitting duck whose safety and sanctity are predicated on the fact that China is choosing not to attack and destroy us.

And, let's be clear, Chaillan's not talking about a hot war. Per the FT article, he said whether it takes some kind of war or not is anecdotal.

This is what you call propaganda.

Here's why:

The score: It doesn't matter how you measure things; the US is not losing the AI race to China.

Among China's top AI companies you'll find Baidu, a business with about a $55B market cap.

Let's put that into perspective. Google is worth over a trillion dollars. That's 18 times more than Baidu. And that's just Google. Amazon, Apple, and Microsoft are also worth a trillion, and they're all AI companies as well.

There is no measure, including talent draw and laboratory size, by which you could say China is even in the same class when it comes to AI companies.

And when it comes to AI research coming out of universities, the US again leads the world by a grand margin.

Not only does the US attract students from all over the world, but it also houses some of the world's most advanced AI programs at the university level. Between the cognitive research done at places like NYU and Harvard and the machine learning applications for engineering being invented at MIT, Carnegie Mellon, and their ilk, it's incredibly difficult to make an argument that China's academic research outclasses the US's.

That's not to denigrate the amazing work being done by researchers in China, but there's certainly no reason to believe China's going to overtake the West in a matter of time by sheer virtue of its academia.

And that just leaves public-sector and military AI. What's interesting about China is that, nationally, its government gives far more support for AI research than any other nation.

Many experts feel that China's massive investments in public-sector research, combined with its authoritarian approach to controlling what the public sector and academia do, could lead to a situation where China leapfrogs the US.

This, however, is conjecture. The reality is that US companies don't need government investments. Unlike the US government, Amazon isn't in massive debt to its shareholders. Amazon is one of the most profitable enterprises in the history of humanity.

And there's no law saying Amazon must work with the US government. It's free to continue making money hand over fist and pushing the philosophical limits on what wealth is or how economies work, whether it chooses to play ball with the Pentagon or not.

The point is this: In China, all research is military research.

The FT article makes it apparent that Chaillan's real problem is with democracy:

He also blamed the reluctance of Google to work with the US defence department on AI, and extensive debates over AI ethics for slowing the US down. By contrast, he said Chinese companies are obliged to work with Beijing, and were making massive investment into AI without regard to ethics.

In some weird "give up your freedoms for the greater good" way, his words might make sense. Except for one thing: the one major player in the global AI game that we haven't spoken about yet is the Defense Advanced Research Projects Agency, more commonly referred to by its acronym, DARPA.

DARPA is the US government's version of the laboratory Q leads in the James Bond universe. It's always looking for technologies (literally any technologies, no matter how strange or unlikely) to exploit for military use.

But there's nothing fictional about DARPA or its work. Either DARPA or a DARPA-adjacent agency of similar form is at the financial heart of thousands upon thousands of university studies and technology projects in the US every year.

For perspective: DARPA literally invented the internet, GPS, and the graphical user interface.

I mention all of this to point out that there is no domain in which you can say the US is not leading the world in AI. I'm not saying that as a patriot (disclosure: I'm a US citizen who lives abroad and a US Navy veteran). I'm saying it because it's demonstrably true.

In fairness, Chaillan has clarified his words since the FT article. On LinkedIn he wrote:

For those who saw this article, I want to clarify one thing. I never said we lost. I said as it stands and if we don't wake up NOW we have no fighting chance to win against China in 15 years. I also said that they're leading in AI and Cyber NOW. Not in 10 years as some reports mention.

Of course 750-page government-funded reports always tell us we have more time than we have so no one is held accountable for missing the already past-due target.

Those are just common-sense facts. We are competing against 1.5B folks here. Either we are smarter and more agile or we lose. Period.

Never let the truth get in the way of a good story, eh? Per the FT article:

"We have no competing fighting chance against China in 15 to 20 years. Right now, it's already a done deal; it is already over in my opinion," he said, adding there was good reason to be angry.

The bottom line is that Chaillan's spreading propaganda. He's employing a centuries-old racist trope called the China Bogeyman. The US has used it for decades to justify its bloated defense budget to the public.

The idea is that US citizens should be scared of China not because of its academic, economic, or military technologies, but because of the sheer fact that there are 1.5 billion people in that country who aren't Americans.

Chaillan's using the China Bogeyman, and his former positions as an IT boss for the Air Force and the Pentagon, as a political tool. Whether his goal is to run for office or to get a lofty consulting position at a conservative-leaning organization, it's clear what the purpose of Chaillan's outlandish statements is: to pressure the public into believing their safety relies on doing whatever it takes to ward off the imminent threat posed by the mere existence of 1.5 billion people in China.

It's a baseless argument against the development of ethical AI and policies restricting the US from creating and using harmful AI technologies.

Read this article:

Sorry former Pentagon expert, but China is nowhere near winning the AI race - The Next Web


Writing helper Copy.ai closes on its second funding round this year – TechCrunch

Posted: at 9:05 pm

On the one-year anniversary of Copy.ai's launch on Twitter, the company, a GPT-3 AI-powered platform that provides copywriting tools for business customers, secured another round of funding.

This time, the company brought in an $11 million Series A round, led by Wing Venture Capital, with participation from existing investors Craft Ventures and Sequoia, and new investors including Tiger Global and Elad Gil. This follows a $2.9 million seed round announced in March and brings the company's total funding to $13.9 million.

Copy.ai's software costs $35 a month and can, for example, write a blog post outline based on a few sentences, create link descriptions for Facebook ads, and even generate a company motto.

A year after CEO Paul Yacoubian and Chris Lu co-founded the company, it is not yet profitable, but it did go from zero to $2.4 million in annual recurring revenue. It also grew from three employees to 13, Yacoubian told TechCrunch.

Though it raised funding earlier this year, he and Lu felt the time was right to go after a Series A to expand the team and hire more engineers to provide capacity for new product features. One recent feature is Editor, which enables users to organize thoughts, save ideas and edit notes directly in the app. Copy.ai is also developing products for long-form content creation.

"AI is good at pattern-matching, and when you feed it more information about a business, it can assume the identity of the business, so we are also building a teams product so as the AI learns more, you can invite other business users to sign up, too," Yacoubian added.

The company will be investing the new capital into hiring. It is a fully remote team with employees all over the country. Copy.ai already has over 300,000 marketers using its tools, at companies like eBay, Nestlé and Ogilvy. Over 250,000 have signed up for a free trial since the seed round, and it has more than 5,000 premium customers.

Copy.ai is early into AI natural language generation, something Yacoubian said the company is just scratching the surface of, so it will continue to improve the core app experience and the quality of the text that is generated.

The founders also hit it off with Wing Venture Capital partner Zach DeWitt, who Yacoubian said understands the company's vision and how well artificial intelligence can help marketers.

"While looking at the creative capacity of AI, we hear a lot about automation taking away jobs, but not a lot of narrative about creating value for yourself or your company," Yacoubian added. "If AI progresses, it will be a source of empowerment and another tool that has uncapped potential. It is interesting in how it can unlock human capital and, for a smaller company that can't afford full-time agencies, provide a quick, simple tool that solves their problem."

DeWitt said customers are fully moving online as digital penetration increases, and companies have to meet customers where they are, whether that be newsletters, blogs, social media or email.

In speaking with small business customers of Copy.ai, he got a sense that the amount of written content required is overwhelming to some, and that using AI is the best way to enable marketers or founders to write a great piece of copy.

DeWitt, himself, used the product to generate his initial email to the company. He also writes a weekly blog and is active on Twitter, so Copy.ai's products have come in handy as he thinks about blog post ideas and content formats, he said.

He added that the company is one of the fastest-growing that Wing has come across for a company this young. The founders are also leveraging social media to make their metrics public, which in turn generates loyalty and provides a way for them to learn in public as well, something that initially attracted Wing to the company.

"This round was massively oversubscribed, so you can get a sense of the interest in the company, the quality of the team and their traction," DeWitt added. "Chris and Paul had the luxury of being selective in the investors they chose to set them up for future success."

Go here to read the rest:

Writing helper Copy.ai closes on its second funding round this year - TechCrunch


10 Reasons Why AI is Essential for Your Competitive Intelligence Program – Analytics Insight

Posted: at 9:05 pm

Artificial intelligence is a powerful asset to your team. Businesses see it as essential to remaining competitive in ever-changing markets. AI helps fuel knowledge-gathering efforts, visualize data, deliver information in a timely fashion, and dive into thousands of sources that the average person could never find. AI has become an essential tool for business teams, and in this blog, we outline why AI is essential for your competitive intelligence (CI) program.

Gathering information requires more time and energy than most people or companies can afford to spend. Tasking a single person, a team, or even an entire department with manually sifting through tons of information could quite literally take an eternity. Artificial intelligence helps you gather information from hundreds of thousands of sources and insights. There are far too many sources of intel for any human to track on their own. Without AI, they would spend all their time doing research to understand what's happening, which makes it difficult to track real-time updates and changes. AI can also gather insights from news sources, evolving external markets, social media, website tracking, job review boards, thought leadership, events, and many other sources.

Filtering helps you focus on relevant information; each team can also have a different definition of what is relevant, and AI accommodates that. What is relevant to a sales team differs from what is relevant to a product team. AI can help filter this information for delivery so that these teams can focus on strategizing around the information pertinent to them. Without the filtering and accessibility capabilities that AI provides, it is incredibly hard to sort through and make sense of the vast amounts of intelligence available.

AI can act as your watchdog across hundreds of thousands of sources, ensuring you don't miss minute changes or updates. It can monitor changes to websites, updates to articles, a refresh of downloadable content, and new thought leadership trends, all of which would consume considerable bandwidth for a team of people. AI in your CI program will ensure that you don't miss a step that your competitors, the market, or industry trends take. It will keep track of the information you set as important and relevant, the sources you want to keep an eye on, and the competitors you believe are the biggest threats to your market share.

Visualizing data and trends is a vital component of making business decisions. AI can help pull out trends and visualize huge amounts of data at a pace that people could never achieve.

This makes the data immediately actionable, since teams can quickly see what's happening and what the impact is. For example, product teams could see trends in hiring data for UX to know what's coming in a competitor's roadmap; marketing could see content and thought leadership trends; and executives could see territory experimentation and market interest.

Even when competitors change their strategies spontaneously, companies have a good chance of figuring out what they'll do and when, with the help of competitor intelligence. Cisco's former CEO and Chairman, John Chambers, claimed that based on competitor intelligence, he could anticipate competitors' moves one or even two steps in advance. This may sound exaggerated, but it turned out to be true when several CEOs were interviewed later. Take, for example, McDonald's and Burger King. They responded differently to the negative publicity about obesity and fast food. McDonald's was quick to respond to the market and overtake the competition by introducing a variety of healthier options. Burger King, which had already anticipated the move, instead grabbed the opportunity to cherry-pick less health-conscious customers, offering high-fat, high-calorie sandwiches and counter-advertising against healthy choices. Burger King had anticipated that McDonald's would not respond to this attack.

How often do we pay attention to competitors? We often discount them, thinking they are harmless. But they can drastically change the marketplace with new ways of using technology and new strategies. SKY Airline identified a new opportunity to compete against its rivals in the Chilean market. It introduced a low-cost model, the first of its kind in Chile. SKY took new measures such as eliminating complimentary food and beverages for all passengers during flights, thereby lowering its ticket prices. This helped the company increase its share of carried passengers from 10% to 20%, according to Euromonitor International.

When companies are striving to find out as much as possible about the marketplace, competitive intelligence becomes increasingly relevant to staying ahead of the competition. Regardless of how good your product is, you need to react quickly to evolving markets in order to stay first in the competition. Tesla was the first to give people what they'd been asking for for years: a reliable electric car. None of the other big automotive manufacturers were making electric cars until Tesla made it happen in 2008, leading the pack with a bombastic announcement of the first luxury electric car: the Tesla Roadster.

Companies should present themselves to consumers with a unique brand identity that sets them apart from their key competitors. Investigating your competition is one of the most important aspects of developing your brand positioning, as you need to know where you fit into the overall market. Dove's iconic Be Beautiful campaign was so unique that it knocked out the competition. In this campaign, the Real Beauty Sketches explored the gap between how women see themselves and how others perceive them: two portraits of each woman were drawn by FBI-trained forensic artist Gil Zamora, with surprising results. This not only positioned Dove as a brand capable of understanding women but also set it apart from its competitors.

Competitive intelligence sparks new ideas and lets fresh thinking flow into the company. With competitive intelligence, companies can evaluate how their competition is selling and positioning its products and take advantage of market gaps to enhance profit margins. By learning these market dynamics, companies can make more effective operational decisions. Take the airline industry, for instance. Fuel is its largest expense, accounting for nearly 20-40% of an airline's operating budget, so airlines are constantly looking for ways to cut fuel costs and for alternative fuels. United Airlines, for example, partnered with a Des Plaines, Illinois-based developer of technology for the petroleum refining and gas processing industries to use its Green Jet Fuel to power flights from Los Angeles to San Francisco. The technology converts non-edible animal fats and oils into jet fuel, allowing United Airlines to replace up to 30% of its petroleum-based fuel on the L.A. flights.

This stands as one of the most crucial reasons why competitive intelligence is used in companies. Innovation is needed to sustain a competitive advantage that allows companies to leapfrog their competitors, not just mimic and follow them. And certainly, this means profit. Dyson Appliances Limited (DAL) is a classic example of innovation driven by competitive intelligence. DAL's founder, James Dyson, was himself the source of many innovations; he is known as the inventor of the first bagless vacuum, which took the vacuum cleaner market by storm. Pursuing innovation, Dyson churned out many other innovative vacuum cleaner models that helped the company gain a market leadership position. DAL's deep-set culture of innovation gave it an edge over its competitors.

The rest is here:

10 Reasons Why AI is Essential for Your Competitive Intelligence Program - Analytics Insight


AI quickly identifies genetic causes of disease in newborns | @theU – @theU

Posted: at 9:05 pm

An artificial intelligence-based technology rapidly diagnoses rare disorders in critically ill children with high accuracy, according to a report by scientists from University of Utah Health and Fabric Genomics, collaborators on a study led by Rady Children's Hospital in San Diego. The benchmark finding, published in Genomic Medicine, foreshadows the next phase of medicine, where technology helps clinicians quickly determine the root cause of disease so they can give patients the right treatment sooner.

"This study is an exciting milestone demonstrating how rapid insights from AI-powered decision support technologies have the potential to significantly improve patient care," says Mark Yandell, co-corresponding author on the paper. Yandell is a professor of human genetics and Edna Benning Presidential Endowed Chair at U of U Health, and a founding scientific advisor to Fabric Genomics.

Worldwide, about 7 million infants are born with serious genetic disorders each year. For these children, life usually begins in intensive care. A handful of NICUs in the U.S., including at U of U Health, are now searching for genetic causes of disease by reading, or sequencing, the 3 billion DNA letters that make up the human genome. While it takes hours to sequence the whole genome, it can take days or weeks of computational and manual analysis to diagnose the illness.

"For some infants, that is not fast enough," Yandell says. Understanding the cause of the newborn's illness is critical for effective treatment. Arriving at a diagnosis within the first 24 to 48 hours after birth gives these patients the best chance to improve their condition. Knowing that speed and accuracy are essential, Yandell's group worked with Fabric to develop the new Fabric GEM algorithm, which incorporates AI to find DNA errors that lead to disease.

In this study, the scientists tested GEM by analyzing whole genomes from 179 previously diagnosed pediatric cases from Rady Children's Hospital and five other medical centers from across the world. GEM identified the causative gene as one of its top two candidates 92% of the time. Doing so outperformed existing tools, which accomplished the same task less than 60% of the time.

"Dr. Yandell and the Utah team are at the forefront of applying AI research in genomics," says Martin Reese, CEO of Fabric Genomics and a co-author on the paper. "Our collaboration has helped Fabric achieve an unprecedented level of accuracy, opening the door for broad use of AI-powered whole-genome sequencing in the NICU."

GEM leverages AI to learn from a vast and ever-growing body of knowledge that has become challenging for clinicians and scientists to keep up with. GEM cross-references large databases of genomic sequences from diverse populations, clinical disease information, and other repositories of medical and scientific data, combining all this with the patient's genome sequence and medical records. To assist with the medical record search, GEM can be coupled with a natural language processing tool, Clinithink's CLiX focus, which scans reams of doctors' notes for the clinical presentations of the patient's disease.


"Critically ill children rapidly accumulate many pages of clinical notes," Yandell says. "The need for physicians to manually review and summarize note contents as part of the diagnostic process is a massive time sink. The ability of Clinithink's tool to automatically convert the contents of these notes in seconds for consumption by GEM is critical for speed and scalability."

Existing technologies mainly identify small genomic variants, including single DNA letter changes or insertions and deletions of a small string of DNA letters. By contrast, GEM can also find structural variants as causes of disease. These changes are larger and often more complex. It's estimated that structural variants are behind 10% to 20% of genetic diseases.

"To be able to diagnose with more certainty opens a new frontier," says Luca Brunelli, a neonatologist and professor of pediatrics at U of U Health, who leads a team using GEM and other genome analysis technologies to diagnose patients in the NICU. His goal is to provide answers to families who would have had to live with uncertainty before the development of these tools. He says these advances now provide an explanation for why a child is sick, enable doctors to improve disease management, and, at times, lead to recovery.

"This is a major innovation, one made possible through AI," Yandell says. "GEM makes genome sequencing more cost-effective and scalable for NICU applications. It took an international team of clinicians, scientists, and software engineers to make this happen. Seeing GEM at work for such a critical application is gratifying."

Fabric and Yandell's team at the Utah Center for Genetic Discovery have had their collaborative research supported by several national agencies, including the National Institutes of Health and the American Heart Association, and by the U's Center for Genomic Medicine. Yandell will continue to advise the Fabric team to further optimize GEM's accuracy and interface for use in the clinic.

The research was published online on Oct. 14, 2021, as "Artificial intelligence enables comprehensive genome interpretation and nomination of candidate diagnoses for rare genetic diseases."

Additional centers that participated in the study include Boston Children's Hospital, Christian-Albrechts University of Kiel & University Hospital Schleswig-Holstein, the HudsonAlpha Institute of Biotechnology, Tartu University Hospital, and the Translational Genomics Research Institute (TGen).

Competing interests: Yandell has received stock options and consulting fees from Fabric Genomics, Inc. Reese is an employee of Fabric Genomics, Inc.


AI fake-face generators can be rewound to reveal the real faces they trained on – MIT Technology Review

Posted: at 9:05 pm

"Yet this assumes that you can get hold of that training data," says Kautz. He and his colleagues at Nvidia have come up with a different way to expose private data, including images of faces and other objects, medical data, and more, that does not require access to training data at all.

Instead, they developed an algorithm that can re-create the data that a trained model has been exposed to by reversing the steps that the model goes through when processing that data. Take a trained image-recognition network: to identify what's in an image, the network passes it through a series of layers of artificial neurons. Each layer extracts different levels of information, from edges to shapes to more recognizable features.

Kautz's team found that they could interrupt a model in the middle of these steps and reverse its direction, re-creating the input image from the internal data of the model. They tested the technique on a variety of common image-recognition models and GANs. In one test, they showed that they could accurately re-create images from ImageNet, one of the best-known image-recognition data sets.
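The reverse-the-steps idea can be illustrated with a deliberately simple case: if the intercepted layer is linear, reconstructing the input from its internal activations reduces to a least-squares solve. This is a sketch of the principle only, not the team's actual algorithm, which works on deep nonlinear networks.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 4))      # weights of the intercepted (linear) layer
x_private = rng.normal(size=4)   # the private input we pretend not to know
activations = W @ x_private      # internal data observed mid-inference

# "Reverse the layer": find the input that best explains the activations.
x_recovered, *_ = np.linalg.lstsq(W, activations, rcond=None)

print(np.allclose(x_recovered, x_private))  # the input is reconstructed
```

Because the layer has more outputs than inputs, the activations pin down the input exactly; deeper, nonlinear models require iterative optimization instead.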


As in Webster's work, the re-created images closely resemble the real ones. "We were surprised by the final quality," says Kautz.

The researchers argue that this kind of attack is not simply hypothetical. Smartphones and other small devices are starting to use more AI. Because of battery and memory constraints, models are sometimes only half-processed on the device itself, with the intermediate results sent to the cloud for the final computing crunch, an approach known as split computing. "Most researchers assume that split computing won't reveal any private data from a person's phone because only the model is shared," says Kautz. But his attack shows that this isn't the case.
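Split computing as described can be sketched in a few lines: the device runs the first layers and ships only an intermediate activation to the cloud, which finishes the forward pass. All shapes and names below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
W_device = rng.normal(size=(16, 8))  # layers kept on the phone
W_cloud = rng.normal(size=(3, 16))   # layers run in the cloud

def run_on_device(x):
    # Half-processing on the device; this activation is what leaves the phone.
    return np.maximum(W_device @ x, 0)

def run_in_cloud(activation):
    # The cloud never sees x itself -- only the intermediate activation,
    # which an inversion attack can still use to reconstruct the input.
    return W_cloud @ activation

x = rng.normal(size=8)  # private input that stays on the device
logits = run_in_cloud(run_on_device(x))
print(logits.shape)  # (3,)
```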

Kautz and his colleagues are now working to come up with ways to prevent models from leaking private data. "We wanted to understand the risks so we can minimize vulnerabilities," he says.

Even though they use very different techniques, he thinks that his work and Webster's complement each other well. Webster's team showed that private data could be found in the output of a model; Kautz's team showed that private data could be revealed by going in reverse, re-creating the input. "Exploring both directions is important to come up with a better understanding of how to prevent attacks," says Kautz.

