
Category Archives: Ai

How to Get Past the Stalemate Between AI and the Medical Industry – Machine Design

Posted: June 11, 2021 at 11:53 am

If there are ways AI can expedite diagnoses while protecting sensitive data, would medical networks use it? The short answer is yes, but the healthcare industry still faces challenges when it comes to encrypting and sharing data. While the benefits are apparent, medical networks can't shake the burden of protecting their patients, nor do they want to.

There are various use cases in which implementing AI in medical imaging would benefit patient care and assist clinicians in diagnosing:

AI-assisted imaging can decrease the time it takes to diagnose patients with chronic diseases by flagging areas of concern and even clarifying images, making them a little easier for clinicians to examine.

During A3's Vision Week, Stacey Shulman, VP of the Internet of Things group and general manager of Health, Life Sciences and Emerging Technologies at Intel, discussed the potential of AI-powered medical imaging, as well as the challenges the industry faces.

"We, as consumers, have digital access to nearly every part of our life," she began. "I can get my DNA information digitally, but I still can't get a copy of my X-rays."

With the exception of entities like the Mayo Clinic and the VA Health Care System, which are already using some pretty advanced technologies, many healthcare systems are behind the transformation curve.

Shulman explained results from an Intel and Concentrix survey on how the medical industry is viewing AI. Before the pandemic (in 2018), 54% of respondents expected widespread AI adoption. During the pandemic (2020), that percentage jumped to 84%.

This means the medical community is only now starting to have the digital transformation conversation. That's not the medical industry's fault, though. It's limited by the data it can share and the needs of the developer community. Training AI models requires data.

Shulman pointed out that an obvious solution would be to loosen data protections, but noted that this is unlikely to happen.

Data isolation is another approach: keeping data secure and close to its point of origination while still letting organizations leverage its insights. Size is a further issue. The amount of data generated is enormous, and moving it would create transmission problems.

This is where federated learning can lend a hand. The idea is to train models on distributed, private datasets without moving them, and then to create a network of models based on region.

"The concept of 'don't move the data, move the algorithm to the data,'" Shulman explained. "That area of disaggregation is something we're seeing quite a bit."
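To make the pattern concrete, here is a minimal sketch of federated averaging in Python. Everything in it (the toy datasets, the linear model, the single gradient step) is an invented illustration of the general idea, not code from any system Shulman described.

```python
import numpy as np

def local_update(weights, local_data, lr=0.01):
    """One training step on a single hospital's private data (a stand-in for real SGD)."""
    X, y = local_data
    grad = X.T @ (X @ weights - y) / len(y)  # gradient of mean squared error
    return weights - lr * grad

def federated_round(global_weights, hospital_datasets):
    """One round of federated averaging: the algorithm travels, the data stays put."""
    local_models = [local_update(global_weights.copy(), d) for d in hospital_datasets]
    sizes = [len(d[1]) for d in hospital_datasets]
    # Average the locally trained weights, weighted by each site's dataset size.
    return np.average(local_models, axis=0, weights=sizes)

# Hypothetical example: three regional hospitals with private five-feature datasets.
rng = np.random.default_rng(0)
datasets = [(rng.normal(size=(n, 5)), rng.normal(size=n)) for n in (120, 80, 200)]
weights = np.zeros(5)
for _ in range(10):
    weights = federated_round(weights, datasets)
```

Only the model weights cross site boundaries in this loop; the patient records never leave each hospital, which is exactly the property federated learning is after.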

Most hospital networks don't have in-house AI capabilities, so getting the developer community involved is another open issue. Developers need data to train models, and hospitals can't give it up easily.

"The way I like to look at artificial intelligence in the medical industry is it really needs to become the new operating system," she said. "We need to take artificial intelligence and move it into the device itself, and then we need to take that patient information and that connected information, build robust models so that we understand regional health, so we can detect faster."

She explained that these robust regional models should be combined into national and even global models so we can understand disease spread and trends.

"I feel pretty hopeful about where the industry is going," Shulman said. "For the healthcare industry, we have to start making baby steps in the industry to get artificial intelligence providing real results."


Changing the Legal Landscape Through AI – The National Law Review

Posted: at 11:53 am

Artificial intelligence (AI) is one of the fastest-growing technological industries today, but what effects will it have on legal practices? In addition to the growing number of legal questions that arise as the explosive growth of AI creeps into our everyday lives, artificial intelligence is already enabling some software to carry out legal functions. Let's discuss the future of AI in law.

Artificial intelligence, simply put, is teaching computers to think the way humans would, given input data and a desired output. Many different types of systems utilize AI, from advertising and marketing to shopping and scheduling. These AI systems, also referred to as narrow AI, are not what many people picture when they imagine artificial intelligence. What typically comes to mind are human-like computers, such as those featured in movies and television, capable of complex thought and emotion.

In reality, AI is most often used to carry out specific tasks that require a very concentrated skill set; it lets a programmer teach a computer to perform these specialized tasks extremely proficiently.

AI is often used for things such as navigation, analyzing large datasets, organizing and ordering inventory, and other tasks that are time-consuming and tedious. AI can also be used to translate spoken or written language, and help make decisions by determining likely outcomes. AI is used by a huge variety of industries, as well as by individuals.

AI shines when it comes to increasing efficiency. This can be seen in scheduling, planning, data management, and other tasks that typically require the dedication of large amounts of time. Within the legal industry, huge amounts of data often need to be analyzed or searched for keywords, making AI a powerful tool.

Artificial intelligence is great at recognizing patterns, which is the primary way computers are able to learn. This means that AI is perfect for analyzing large amounts of documents and materials, which can help with a huge number of tasks relevant to legal professions.

AI is not yet able to compose legal documents, advise clients, or replicate the services of a lawyer inside a courtroom. Because AI depends on human directives and inputted data, and is incapable of certain types of critical thinking, it is limited by the way it is built. As AI becomes more capable of independent decision-making, new challenges continue to pop up, along with legal, moral and ethical dilemmas.

AI is already being used to review legal documents, as well as assist in research prior to or during a case. Particularly when cases have large amounts of paperwork that would take a significant amount of time to sift through, AI is helpful in making sure documents with certain keywords get pushed to the top of the stack.
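As an illustration of that keyword triage, here is a minimal sketch that ranks documents against a query by TF-IDF cosine similarity. It assumes scikit-learn is available, and the documents and query are invented for the example; real e-discovery tools layer far more on top of this.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Employment agreement with non-compete and arbitration clauses.",
    "Lease renewal terms for a commercial property, effective 2021.",
    "Settlement memo referencing arbitration and confidentiality.",
]
query = "arbitration confidentiality"

vectorizer = TfidfVectorizer(stop_words="english")
doc_matrix = vectorizer.fit_transform(documents)  # one TF-IDF vector per document
query_vec = vectorizer.transform([query])

# Rank documents by similarity to the query, highest first.
scores = cosine_similarity(query_vec, doc_matrix).ravel()
for idx in scores.argsort()[::-1]:
    print(f"{scores[idx]:.2f}  {documents[idx]}")
```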

Another way legal professionals are using AI is to understand risk and determine the best way to advise their clients. AI can analyze contracts, both in bulk and individually, much more efficiently than a human could. This makes it much easier to add comments, allows firms to move quickly through contracts, and reduces the number of mistakes and overlooked details. AI is incredibly efficient, which means it can perform legal research much faster, so lawyers are able to build better cases.

AI is even being used to predict how likely it is that a legal team will win a case, based on all of the relevant data available from similar cases and proceedings. This lets lawyers know the probability of profit and loss when taking on a new case, as well as the best way to proceed, so they have the highest chance of success.

Lawyers are reluctant to incorporate any technology into their business that may compromise the privacy of their clients, and they also don't want to minimize their billable hours. On top of that, the learning curve required to integrate AI successfully into a practice can be steep. This makes legal professionals even less likely to consider AI as a tool to improve their business, since they typically do not have time to learn a new type of technology.

Laws regarding AI and other new technologies used by legal professionals are also being passed, and they vary from state to state. This makes it difficult to know whether new practices may leave a firm vulnerable to legal action.

Artificial intelligence is already changing the legal landscape, and many legal professionals have expressed concern over how AI will directly impact their jobs. Although the legal industry is notoriously slow to adapt to new technologies and practices, the incorporation of artificial intelligence is inevitable. It is better to be at the forefront of this new frontier than to play catch-up once AI has increased the speed and efficiency of competing law firms.


Human rights and AI: interesting insights from Australia’s commission – Global Government Forum

Posted: at 11:53 am

The report looks at how technological advances in areas like facial recognition and AI can be balanced with protecting human rights. Credit: PhotoMIX Company/Pexels

The conundrum is one that many governments face: how do you make the most of technological advances in areas such as artificial intelligence (AI) while protecting people's rights? This applies to government as both a user of the tech and a regulator with a mandate to protect the public.

Australia's Human Rights Commission recently undertook an exercise to consider this very question. Its final report, Human Rights and Technology, was published recently and includes some 38 recommendations, from establishing an AI Safety Commissioner to introducing legislation requiring that a person be notified when a company uses AI in a decision that affects them.

We have rounded up some of the report's recommendations for governments on how to ensure that greater use of AI-informed decision-making does not result in a human rights disaster.

A range of recommendations in the report relate to improving the regulatory landscape around AI technology.

The report particularly singles out facial recognition and other biometric technology. It recommends legislation, developed in consultation with experts, to explicitly regulate the use of such technology in contexts like policing and law enforcement, where there is a high risk to human rights.

More generally, the report calls for the establishment of an independent, statutory office of an AI Safety Commissioner. This body would work with regulators to build their technical capacity regarding the development and use of AI.

The AI Safety Commissioner would also monitor and investigate developments and trends in the use of AI, especially in areas of particular human rights risk, give independent advice to policy-makers and issue guidance on compliance.

Alongside this, the report notes that the AI Safety Commissioner should advise government on ways to "incentivise good practice [in the private sector] through the use of voluntary standards, certification schemes and government procurement rules."

Several of the reports recommendations focus on people who might be affected by AI. It calls for more public involvement in decisions about how AI should be used, and more transparency in indicating when a member of the public is affected by an AI-assisted decision.

For example, the report suggests legislation be introduced that would require any department or agency to complete a human rights impact assessment (HRIA) before an AI-informed decision-making system is used to make any administrative decision. Part of this HRIA should be a public consultation focusing on those most likely to be affected, the report says.

The report also notes that governments should encourage companies and other organisations to complete an HRIA before developing any AI-informed decision-making tools. As part of the recommendations, the authors suggest that the government appoint a body, such as the AI Safety Commissioner, to build a tool that helps those in the private sector complete the assessments.

In addition, the report recommends legislation to require that any affected individual is notified where artificial intelligence is materially used in making an administrative decision in government. There should also be equivalent laws binding private sector users of AI to do the same.

The report also says: "The Australian Government should not make administrative decisions, including through the use of automation or artificial intelligence, if the decision maker cannot generate reasons or a technical explanation for an affected person."

Other recommendations also suggest that the Australian government improve its capacity for working ethically with AI-informed decision-making tools.

"The government should convene a multi-disciplinary taskforce on AI-informed decision making, led by an independent body, such as the AI Safety Commissioner," the report says. Responsibilities should include promoting the use of "human rights by design" in AI.

In keeping with the theme of transparency, the report also recommends that centres of expertise, such as the Australian Research Council Centre of Excellence for Automated Decision-Making and Society, should prioritise research on the explainability of AI-informed decision making.


University of Illinois and IBM Researching AI, Quantum Tech – Government Technology

Posted: at 11:53 am

The University of Illinois Urbana-Champaign Grainger College of Engineering is partnering with tech giant IBM to bolster the college's research and workforce development efforts in quantum information technology, artificial intelligence and environmental sustainability.

According to a news release from the university, the 10-year, $200 million partnership will fund the future construction of a new Discovery Accelerator Institute, where university and IBM researchers will collaborate on solving global challenges with emerging technologies such as AI.

Areas of study will include AI's potential to address sustainable energy, new materials for CO2 capture and conversion, and cloud computing and security. Researchers will also explore ways to improve quantum information systems and quantum computing, which applies the rules of quantum mechanics to perform computations much faster than most computers in use today.

Bashir said the partnership will allow IBM and university researchers to work toward developing the technology of tomorrow, with sustainability in mind.

"We're looking for a new way to really bridge that gap [between academia and the tech industry] in a much more intimate way and expand our collective research and educational impact," he said. "In higher ed and industry, we need to come together to solve grand challenges to keep a sustainable planet, to provide high-quality jobs and develop a new economy."

"We had already been working with them in the AI space," Welser said. "We realized we could take what we're doing here with AI, expand it to do some of the work in the hybrid cloud space, and think about what we do with that by advancing these base technologies."

"It's also using this as a test bed for what we call discovery acceleration, which is using technologies to discover new materials and new science that can help with societal problems," he continued. "In the case of this, we're focusing on carbon capture, carbon accounting and climate change."

As part of the initiative, Bashir said the company and faculty will team up to develop nondegree tech certification programs and professional development courses in IT-related fields. He said the goal will be to feed IT talent into the workforce, given the national shortage of tech professionals in artificial intelligence, data science and quantum computing.

"Working with IBM, they're interested in hiring the workforce of tomorrow. Building that talent from early in the pipeline and diversifying the STEM talent pipeline is something we want to work on together," he said, adding that the partnership also aims to diversify the IT talent pool by bringing students of color and women into emerging fields like quantum computing.

Welser said the Discovery Accelerator Institute will complement a related company initiative: the IBM Skills Academy, a training certification program that provides over 330 courses relating to artificial intelligence, cloud computing, blockchain, data science and quantum computing.

"We have courses that help train professors in specific areas of these skills, and they can use those materials in their coursework and create their own accredited courses," he said. "We've realized there really is a need for having these kinds of courses that don't necessarily go into a full university [degree] but could be more certifications for students, people who want to learn about an area and get a certain level of certification."

In addition to research and course development efforts, Bashir noted that the institute will give students close access to one of the world's largest tech employers.

"We believe we can work together to prepare more talent through our educational pipeline, which IBM can have firsthand access to," he said. "If they are working together with us, then they get to know those students."

The Illinois initiative comes two months after the tech company announced a partnership with Cleveland Clinic to study hybrid cloud, AI and quantum computing technologies to accelerate advancements in health care and life sciences. As part of that partnership, IBM plans to install its first private-sector, on-premises quantum computing system in the U.S.


Daily Crunch: A crowded market for exits and acquisitions forecasts a hot AI summer – TechCrunch

Posted: at 11:53 am

To get a roundup of TechCrunch's biggest and most important stories delivered to your inbox every day at 3 p.m. PDT, subscribe here.

Hello and welcome to Daily Crunch for June 9, 2021. Today was TC Sessions: Mobility, a rollicking good time and one that we hope you enjoyed. Looking ahead, we're starting to announce some speakers for Disrupt, including Accel's Arun Mathew. Mark your calendars; Disrupt is going to be epic this year. Alex

To round out our startup news today, two things: The first is that Superhuman CEO Rahul Vohra and his buddy Todd Goldberg, the founder of Eventjoy, have formalized their investing partnership in a new fund called Todd and Rahul's Angel Fund. That name has big Bill and Ted's Excellent Adventure vibes, albeit with a larger, $24 million budget.

And fresh on the heels of the Equity podcast diving into hormonal health and the huge startup opportunity that it presents, there's a new startup working on PCOS on the market. Check out our look at its early form.

SEO expert and consultant Eli Schwartz will join Managing Editor Danny Crichton tomorrow to share his advice for everyone who gets nervous each time Google updates its algorithm.

To set a foundation for tomorrow's chat on Twitter Spaces, Eli shared a guest post that should deflate some myths. For starters: A drop in search traffic isn't necessarily hurting you.

Instead of chasing the algorithm, he advises companies that rely on organic search results to focus on the user experience: "If you are helpful to the user, you have nothing to fear."

Just like you release product updates based on feedback and analytics, Google's improving its products to offer a better user experience.

"If you see a drop, in many cases, your site might not have even lost real traffic," says Eli. "Often, the losses represent only lost impressions already not converting into clicks."

Tomorrows discussion is the latest in a series of chats with top Extra Crunch guest contributors. If youve worked with a talented growth marketer, please share a brief recommendation.

(Extra Crunch is our membership program, which helps founders and startup teams get ahead. You can sign up here.)

TechCrunch is back with our next category for our Experts project: We're reaching out to startup founders to tell us who they turn to when they want the most up-to-date growth marketing practices.

Fill out the survey here.

We're excited to share the results we collect in the form of a database. The more responses we receive from our readers, the more robust our editorial coverage will be moving forward. To learn more, visit techcrunch.com/experts.

Join us for a conversation tomorrow at 12:30 p.m. PDT / 3:30 p.m. EDT on Twitter Spaces. Our own Danny Crichton will be discussing growth marketer Eli Schwartz's guest column, "Don't panic: Algorithm updates aren't the end of the world for SEO managers." Bring your questions and comments!


Flying high with AI: Alaska Airlines uses artificial intelligence to save time, fuel and money – TechRepublic

Posted: at 11:53 am

How Alaska Airlines executed the perfect artificial intelligence use case. The company has saved 480,000 gallons of fuel in six months and reduced 4,600 tons of carbon emissions, all from using AI.

Image: Alaska Air

Given the near-85% failure rate of corporate artificial intelligence projects, it was a pleasure to visit with Alaska Airlines, which launched a highly successful AI system that is helping flight dispatchers. I visited with Alaska to find out the "secret sauce" that made its AI project a success. Here are some tips to help your company execute AI as well as Alaska Airlines has.


Initially, the idea of overhauling flight operations control existed in concept only. "Since the idea was highly conceptual, we didn't want to oversell it to management," said Pasha Saleh, flight operations strategy and innovation director for Alaska Airlines. "Instead, we got Airspace Intelligence, our AI vendor, to visit our network centers so they could observe the problems and build that into their development process. This was well before the trial period, about 2.5 years ago."

Saleh said it was only after several trials of the AI system that his team felt ready to present a concrete business use case to management. "During that presentation, the opportunity immediately clicked," Saleh said. "They could tell this was an industry-changing platform."

Alaska cut its teeth on having to innovate flight plans and operations in harsh arctic conditions, so it was almost a natural step for Alaska to become an innovator in advancing flight operations with artificial intelligence.


"I could see a host of opportunities to improve the legacy system across the airline industry that could propel the industry into the future," Saleh said. "The first is dynamic mapping. Our Flyways system was built to offer a fully dynamic, real-time '4D' map with relevant information in one, easy-to-understand screen. The information presented includes FAA data feeds, turbulence reports and weather reports, which are all visible on a single, highly detailed map. This allows decision-makers to quickly assess the airspace. The fourth dimension is time, with the novel ability to scroll forward eight-plus hours into the future, helping to identify potential issues with weather or congestion."

"We saved 480,000 gallons of fuel in six months and reduced 4,600 tons of carbon emissions." Pasha Saleh, flight operations strategy and innovation director for Alaska Airlines

The Alaska Flyways system also has built-in monitoring and predictive abilities. The system looks at all scheduled and active flights across the U.S., scanning air traffic systemically rather than focusing on a single flight. It continuously and autonomously evaluates the operational safety, air-traffic-control compliance and efficiency of an airline's planned and active flights. The predictive modeling is what allows Flyways to "look into the future," helping inform how the U.S. airspace will evolve in terms of weather, traffic constraints, airspace closures and more.


"Finally the system presents recommendations," Saleh said. "When it finds a better route around an issue like weather or turbulence, or simply a more efficient route, Flyways provides actionable recommendations to flight dispatchers. These alerts pop up onto the computer screen, and the dispatcher decides whether to accept and implement the recommended solution. In sum: The operations personnel always make the final call. Flyways is constantly learning from this."

Saleh recalled the early days when autopilot was first introduced. "There was fear it would replace pilots," he said. "Obviously, that wasn't the case, and autopilot has allowed pilots to focus on more things of value. It was our hope that Flyways would likewise empower our dispatchers to do the same."


One step Alaska took was to immediately engage its dispatchers in the design and operation of the Flyways system. Dispatchers tested the platform for a six-month trial period and provided feedback for enhancing it. This was followed by on-site, one-on-one training and learning sessions with the Airspace Intelligence team. "The platform also has a chat feature, so our dispatchers could share their suggestions with the Airspace Intelligence team in real time," Saleh said. "Dispatchers could have an idea, and within days, the feature would be live. And because Flyways uses AI, it also learned from our dispatchers, and got better because of it."

While Flyways can speed decisions on route planning and other flight operations issues, humans will always have a role in route planning and will always be the final decision-makers. "This is a tool that enhances, rather than replaces, our operations," Saleh said. Because flight dispatchers were so integrally involved with the project's development and testing, they understood its fit as a tool and how it could enhance their work.

"With the end result, I would say satisfaction is an understatement," Saleh said. "We're all blown away by the efficiency and predictability of the platform. But what's more, is that we're seeing an incredible look into the future of more sustainable air travel.

"One of the coolest features to us is that this tool embeds efficiency and sustainability into our operation, which will go a long way in helping us meet our goal of net zero carbon emissions by 2040. We saved 480,000 gallons of fuel in six months and reduced 4,600 tons of carbon emissions. This was at a time when travel was down because of the pandemic. We anticipate Flyways will soon become the de facto system for all airlines. But it sure has been cool being the first airline in the world to do this!"



Latest Xilinx Versal product takes AI to the edge – FierceElectronics

Posted: at 11:53 am

Xilinx is expanding its support for AI applications from the cloud to the edge with the latest product in its Versal AI adaptive compute acceleration platform (ACAP) portfolio.

The company is launching the Versal AI Edge series with particular focus on striking a balance between high performance per watt, low latency, and the low power and low thermal requirements specific to end point devices in sectors like the automotive and robotics industries.

AI use cases were not extending that far out from network cores and clouds when Xilinx unveiled its first Versal ACAPs in 2018, but an edge product was always on the Versal roadmap, according to Rehan Tahir, senior product line manager at Xilinx. Plus, one of the benefits of ACAP technology is that changes can be made at both the hardware and software levels to better support different kinds of applications and workloads.

"The concept of edge AI is becoming more mainstream now," Tahir told Fierce Electronics. "And there are some similarities to cloud AI, as well as some differences." AI in both cloud and edge applications requires high performance, but at the edge there are additional concerns: low latency and low power--which means a need for high performance per watt--and tight thermal constraints (because edge devices may lack the cooling capabilities common in data centers).

"A lot of edge use cases need massive AI compute, but they are power-constrained and thermally constrained," Tahir said. "They also have lots of end points requiring sub-five-millisecond latency, which the cloud cannot provide. The edge also has unique safety, security and data privacy issues because in a car, for example, the AI is very close to humans."

Tahir said Versal AI Edge meets those requirements by leveraging the production-proven 7nm Versal architecture and miniaturizing it to support AI compute at low latency, power draw as low as six watts, and the safety and security measures required in edge applications. At the same time, Versal AI Edge hit its goal on performance per watt, achieving four times the performance per watt of published benchmarks for Nvidia's Jetson AGX Xavier GPUs, along with 10 times greater compute density versus previous-generation adaptive system-on-chip products, Tahir said.

The Edge series is rolling out to some early access customers, and will be ready to ship in the first half of 2022. There is also a roadmap to support automotive and defense-grade devices, the company said.

Dan Mandell, senior analyst for IoT and embedded technology, agreed with Tahir's assessment that the time is right to bring better support for AI out to the edge. He told Fierce Electronics via email, "This is a natural and rightly-timed evolution for Xilinx to make with the emergence of edge computing and AI. The edge and intelligent end points are rapidly evolving to support increasingly sophisticated (AI) workloads that demand greater processing performance and hardware/software flexibility. The FPGA fabric features a number of unique advantages over alternative hardware accelerator architectures with regard to programmability and lifecycle support that Xilinx can take advantage of for the edge."

He also said that the Versal AI Edge series could prove even more significant to AMD as that company looks to complete its proposed acquisition of Xilinx. "The Versal family is a critical aspect of that deal and in particular advancing (and diversifying) AMD's play in embedded and edge AI, which to this point has not been as aggressive as others in the market," Mandell stated. "With Xilinx, AMD gains a significant advantage supporting a much wider range of emerging high-performance embedded and edge (AI) computing opportunities."



How Artificial Intelligence Is Changing the Future of Air Transportation – GW Today

Posted: at 11:53 am

By Kristen Mitchell

A George Washington University School of Engineering and Applied Science professor is working on an interdisciplinary research project funded by NASA that aims to design and develop a safety management system for electric autonomous aircraft.

Peng Wei, an assistant professor in the Department of Mechanical and Aerospace Engineering, researches control, optimization, machine learning and artificial intelligence (AI) in air transportation and aviation. His lab builds flight deck and ground-based automation and decision support tools to improve and ensure safety for emerging aircraft types and flight operations.

While a lot of the innovation in AI and machine learning applications has been focused on revolutionizing the internet and digital connectivity, Dr. Wei is part of a group of researchers focused on expanding those benefits into transforming air transportation for physical connectivity and future mobility.

Dr. Wei is the principal investigator of a new three-year, $2.5 million NASA System-Wide Safety grant project. Alongside collaborators from Vanderbilt University, University of Texas at Austin and MIT Lincoln Lab, the research team will study system design to minimize risks for electric vertical take-off and landing (eVTOL) aircraft and their advanced air mobility missions in urban environments.

The team's proposed system design aims to minimize the layered risks for autonomous aircraft. Adverse weather conditions like wind, which is what Dr. Wei's lab will focus on, affect an electric aircraft's ability to fly and land safely. Additional risks include electric propulsion component faults or degradations, and threats from other non-cooperative aircraft due to GPS spoofing or software hijacking while in flight. The NASA project seeks to address these three diverse areas of concern: mission-level risk, aircraft-level risk and airspace-level risk.

"Once an autonomous aircraft becomes noncooperative, whether it's being hijacked, or an autonomy fault, or a motor/battery problem, or due to winds, that aircraft starts to drift away from its track," Dr. Wei said. "So how do we detect that, and how do other aircraft avoid those potential collisions or conflicts?"
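One simple way to frame drift detection is as a cross-track-error check against the filed route. The sketch below is a toy illustration with invented coordinates and an invented alert threshold, not the project's actual method.

```python
import numpy as np

def cross_track_error(position, leg_start, leg_end):
    """Distance (in the input units) from a position to the nearest point on a route leg."""
    leg = np.asarray(leg_end, float) - np.asarray(leg_start, float)
    rel = np.asarray(position, float) - np.asarray(leg_start, float)
    t = np.clip(rel @ leg / (leg @ leg), 0.0, 1.0)  # project onto the leg
    return float(np.linalg.norm(rel - t * leg))

# Hypothetical planned leg in a nautical-mile grid, with an invented threshold.
leg_start, leg_end = (0.0, 0.0), (40.0, 0.0)
THRESHOLD_NM = 2.0

for position in [(10.0, 0.3), (20.0, 1.1), (30.0, 3.7)]:
    err = cross_track_error(position, leg_start, leg_end)
    if err > THRESHOLD_NM:
        print(f"ALERT: {position} is {err:.1f} nm off track; treat as non-cooperative")
```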

Widespread adoption of safe driverless cars remains years off. The same can be said about autonomous aircraft, Dr. Wei said. Pilotless air travel would likely begin with transporting small packages or lunch delivery from local restaurants. If those applications are proven safe and successful, larger cargo flights and autonomous passenger air transportation could be introduced, potentially improving traffic congestion and enabling people to live farther from their places of work.

"If a machine learning algorithm makes a mistake in Facebook, TikTok, Netflix, that doesn't matter too much because I was just recommended a video or movie I don't like," he said. "But if a machine learning algorithm mistake happens in a safety-critical application, such as aviation or in autonomous driving, people may have accidents. There may be fatal results."

In aviation applications, safety always comes first, Dr. Wei said. New aircraft types, electrification in aviation, and AI and machine learning based autonomy functions are bringing great challenges and opportunities for aviation safety research, he said.

"Our team is very excited to work with NASA to address these challenges," he said.

Additional Projects

Dr. Wei was also recently awarded three additional grants. He and his collaborators from West Virginia University and Honeywell Aerospace received a two-year grant from the Federal Aviation Administration to focus on the design and implementation of a safety verification framework for learning-based aviation systems.

"We want to explore how to verify or certify these AI and machine learning based avionic functions," Dr. Wei said. "We plan to develop some tools for both offline and online verification to guarantee safety."

He also received a six-month NASA SBIR Phase I award to work with Intelligent Automation, Inc. on a project to support the emerging large volume of urban air mobility traffic by mitigating the potential congestion in airspace. The team will focus on how to enable high arrival and departure rates at vertiports, the major bottlenecks for eVTOL air traffic.

Unmanned electric airplanes are vulnerable to air traffic congestion because battery power is limited compared to traditional fuel. Electric airplanes can burn significant resources if they are unable to land on schedule.

"They cannot afford to sit in traffic in the air," he said. "If they hover or hold in the sky, they will consume their batteries."
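The battery constraint is easy to see with rough numbers. The back-of-the-envelope sketch below uses invented figures purely for illustration:

```python
# All numbers are illustrative assumptions, not data about any real eVTOL.
battery_kwh = 150.0        # usable pack energy
reserve_fraction = 0.20    # energy that must still be on board at landing
hover_power_kw = 120.0     # hovering costs far more than cruising
cruise_power_kw = 60.0

usable_kwh = battery_kwh * (1 - reserve_fraction)
print(f"Max hover time:  {usable_kwh / hover_power_kw * 60:.0f} min")   # ~60 min
print(f"Max cruise time: {usable_kwh / cruise_power_kw * 60:.0f} min")  # ~120 min
# With these assumptions, every minute of holding costs twice the energy of a
# minute of cruise, which is why vertiport arrival/departure rates matter so much.
```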

The third project is a one-year collaboration with the University of Virginia and George Mason University. The research team received a grant from the Virginia Commonwealth Cyber Initiative (CCI) to address threats from autonomous vehicles as they become victims of emerging cyber attacks. The Smart City project integrates two novel mechanisms: city-scale video intelligence for detecting attacks and multi-agent reinforcement planning for reacting to attacks and non-cooperative vehicles.

They plan to use cameras to identify potentially abnormal car movements, ranging from aggressive or intoxicated driving to a hacked autonomous vehicle. Researchers ultimately aim to detect and predict this type of behavior to mitigate risk on the road. Dr. Wei's laboratory experience with collision avoidance and conflict resolution is key to this effort.

Preparing for Tomorrow

AI and machine learning will be foundational to the future of technological innovation, and there is significant room for expansion in air transportation and aviation, Dr. Wei said. As a SEAS faculty member, Dr. Wei is focused on his lab's research and on training the next generation of technology leaders.

"At GW, with so many opportunities around us for our students, our goal is to train our undergraduate students and graduate students so they can become top qualified [with a] multidisciplinary background, and also they can fit better in their future jobs and careers," he said.

Over the next few decades, there will be a growing need for an aviation industry workforce rigorously focused on safety that can apply and develop AI and machine learning technology. Elected officials and their staff, policymakers and Federal Aviation Administration regulators will also have to have sufficient knowledge to evaluate changing technology.

"When somebody developed those advanced technologies, how can we examine them? How can we check them or verify them or approve them?" Dr. Wei said. "We need a lot of talent on this side as well."


Appgate’s Tina Gravel Appointed As Founding Member of AI Micro Think Tank – Business Wire

Posted: at 11:53 am

MIAMI--(BUSINESS WIRE)--Appgate, the secure access company, announces today that Tina Gravel, Senior Vice President of Channel and Alliances for Appgate, has been selected to join as a founding member of the Micro Think Tank, a new video conference series created by the Cyber Hero Network in partnership with the IT-ISAC and the A.I. Association. The series aims to bridge the gap between cybersecurity experts and leadership by bringing together global thought leaders from the cybersecurity and IT sectors to collaborate and share practical advice meant to assist business leaders in attaining a sustainable security posture.

"Today's ever-shifting security landscape demands that enterprises stay one step ahead of cyber criminals. Doing that requires not only communication, but understanding, between IT and C-suite executives. I'm thrilled to be able to help bring meaningful and impactful dialogue between the two groups," said Gravel.

The monthly, interactive webinars will feature a series of micro presentations, after which attendees will have the opportunity to engage in small-group networking discussions on the topic of the hour. These sessions will culminate in a white paper based on participants' contributions.

The series, which is aligned with CISA's Critical Infrastructure Sectors, will kick off on June 17, 2021, with a session entitled "Artificial Intelligence and Cybersecurity." Special keynote speakers include Michael Garris, Senior Advisor in AI and Autonomy at MITRE; Peder Jungck, VP/GM Intelligence Solutions at BAE Systems; Dr. Chase Cunningham, Chief Strategy Officer at Ericom Software; and Herbert Roitblat, Principal Data Scientist at Mimecast. Participants in this inaugural session will receive a complimentary copy of NIST's Cyber Security Professional (NCSP) Certification video training course ($440 value), courtesy of sponsor itSM Solutions. (Participants must be present to win.)

Seating is limited to the first 100 registrants. Learn more about the event and reserve your seat today.

About Appgate

Appgate is the secure access company that provides cybersecurity solutions for people, devices and systems based on the principles of Zero Trust security. Appgate updates IT systems to combat the cyber threats of today and tomorrow. Through a set of differentiated cloud and hybrid security products, Appgate enables enterprises to easily and effectively shield against cyber threats. Appgate protects more than 650 organizations across government and business. Learn more at appgate.com.


Study shows AI-generated fake reports fool experts – The Conversation US

Posted: at 11:53 am

Takeaways

AIs can generate fake reports that are convincing enough to trick cybersecurity experts.

If widely used, these AIs could hinder efforts to defend against cyberattacks.

These systems could set off an AI arms race between misinformation generators and detectors.

If you use such social media websites as Facebook and Twitter, you may have come across posts flagged with warnings about misinformation. So far, most misinformation, flagged and unflagged, has been aimed at the general public. Imagine the possibility of misinformation (information that is false or misleading) in scientific and technical fields like cybersecurity, public safety and medicine.

There is growing concern about misinformation spreading in these critical fields as a result of common biases and practices in publishing scientific literature, even in peer-reviewed research papers. As a graduate student and as faculty members doing research in cybersecurity, we studied a new avenue of misinformation in the scientific community. We found that it's possible for artificial intelligence systems to generate false information in critical fields like medicine and defense that is convincing enough to fool experts.

General misinformation often aims to tarnish the reputation of companies or public figures. Misinformation within communities of expertise has the potential for scary outcomes such as delivering incorrect medical advice to doctors and patients. This could put lives at risk.

To test this threat, we studied the impacts of spreading misinformation in the cybersecurity and medical communities. We used artificial intelligence models dubbed transformers to generate false cybersecurity news and COVID-19 medical studies and presented the cybersecurity misinformation to cybersecurity experts for testing. We found that transformer-generated misinformation was able to fool cybersecurity experts.

Much of the technology used to identify and manage misinformation is powered by artificial intelligence. AI allows computer scientists to fact-check large amounts of misinformation quickly, given that there's too much for people to detect without the help of technology. Although AI helps people detect misinformation, it has ironically also been used to produce misinformation in recent years.

Transformers, like BERT from Google and GPT from OpenAI, use natural language processing to understand text and produce translations, summaries and interpretations. They have been used in such tasks as storytelling and answering questions, pushing the boundaries of machines displaying humanlike capabilities in generating text.

Transformers have aided Google and other technology companies by improving their search engines, and have helped the general public with such common problems as writer's block.

Transformers can also be used for malevolent purposes. Social networks like Facebook and Twitter have already faced the challenges of AI-generated fake news across platforms.

Our research shows that transformers also pose a misinformation threat in medicine and cybersecurity. To illustrate how serious this is, we fine-tuned the GPT-2 transformer model on open online sources discussing cybersecurity vulnerabilities and attack information. A cybersecurity vulnerability is the weakness of a computer system, and a cybersecurity attack is an act that exploits a vulnerability. For example, if a vulnerability is a weak Facebook password, an attack exploiting it would be a hacker figuring out your password and breaking into your account.

We then seeded the model with the sentence or phrase of an actual cyberthreat intelligence sample and had it generate the rest of the threat description. We presented this generated description to cyberthreat hunters, who sift through lots of information about cybersecurity threats. These professionals read the threat descriptions to identify potential attacks and adjust the defenses of their systems.
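The seed-and-continue procedure the authors describe maps onto standard library calls. Here is a minimal sketch using the off-the-shelf GPT-2 via Hugging Face's transformers library; the study fine-tuned the model on cybersecurity sources first, and the seed sentence below is invented.

```python
from transformers import pipeline

# Off-the-shelf GPT-2; the study used a version fine-tuned on cyberthreat text.
generator = pipeline("text-generation", model="gpt2")

seed = "A new vulnerability in the flight-data gateway allows remote attackers to"
result = generator(seed, max_length=60, num_return_sequences=1, do_sample=True)
print(result[0]["generated_text"])  # the seed plus a model-written continuation
```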

We were surprised by the results. The cybersecurity misinformation examples we generated were able to fool cyberthreat hunters, who are knowledgeable about all kinds of cybersecurity attacks and vulnerabilities. Imagine this scenario with a crucial piece of cyberthreat intelligence that involves the airline industry, which we generated in our study.

This misleading piece of information contains incorrect information concerning cyberattacks on airlines with sensitive real-time flight data. This false information could keep cyber analysts from addressing legitimate vulnerabilities in their systems by shifting their attention to fake software bugs. If a cyber analyst acts on the fake information in a real-world scenario, the airline in question could have faced a serious attack that exploits a real, unaddressed vulnerability.

A similar transformer-based model can generate information in the medical domain and potentially fool medical experts. During the COVID-19 pandemic, preprints of research papers that have not yet undergone a rigorous review are constantly being uploaded to sites such as medRxiv. They are not only being described in the press but are being used to make public health decisions. Consider the following, which is not real but was generated by our model after minimal fine-tuning of the default GPT-2 on some COVID-19-related papers.

The model was able to generate complete sentences and form an abstract allegedly describing the side effects of COVID-19 vaccinations and the experiments that were conducted. This is troubling both for medical researchers, who consistently rely on accurate information to make informed decisions, and for members of the general public, who often rely on public news to learn about critical health information. If accepted as accurate, this kind of misinformation could put lives at risk by misdirecting the efforts of scientists conducting biomedical research.


Although examples like these from our study can be fact-checked, transformer-generated misinformation hinders such industries as health care and cybersecurity in adopting AI to help with information overload. For example, automated systems are being developed to extract data from cyberthreat intelligence that is then used to inform and train automated systems to recognize possible attacks. If these automated systems process such false cybersecurity text, they will be less effective at detecting true threats.

We believe the result could be an arms race as people spreading misinformation develop better ways to create false information in response to effective ways to recognize it.

Cybersecurity researchers continuously study ways to detect misinformation in different domains. Understanding how to automatically generate misinformation helps in understanding how to recognize it. For example, automatically generated information often has subtle grammatical mistakes that systems can be trained to detect. Systems can also cross-correlate information from multiple sources and identify claims lacking substantial support from other sources.
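One concrete, trainable signal is statistical predictability: text that a language model finds unusually easy to predict is often machine-generated. Below is a hedged sketch that scores text by GPT-2 perplexity; the threshold interpretation is an assumption, and production detectors combine many such signals.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def perplexity(text: str) -> float:
    """Lower perplexity = more predictable; generated text often scores low."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean cross-entropy over the tokens
    return float(torch.exp(loss))

score = perplexity("The vulnerability allows attackers to execute arbitrary code.")
print(f"perplexity = {score:.1f}")  # compare against a corpus-derived threshold
```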

Ultimately, everyone should be more vigilant about what information is trustworthy and be aware that hackers exploit people's credulity, especially if the information is not from reputable news sources or published scientific work.

