
Category Archives: Ai

Now operational DoD chief digital and AI office will be an example of innovation, officials say – Federal News Network

Posted: June 11, 2022 at 2:13 am


The Defense Department's newest office focusing on putting artificial intelligence at the forefront of much of what the military does is going to need to work within the established bureaucracy as it tries to move quickly and bring new companies to the Pentagon.

DoD's chief digital and AI office (CDAO) has only been fully operational for several days, but its chief, Craig Martell, says he plans to hit the ground running by working with DoD's famous bureaucratic processes to add 21st century value to the military.

Talking at the virtual DoD Digital and AI symposium on Wednesday, Martell said he's already noticing the administrative slowdown: he has yet to receive his access card to the Pentagon, and still needs to go through the visitors' entrance. However, Martell said the point of his office is to use DoD's power, even the frustrating parts, to expand AI capabilities.

"We're not going to change the bureaucracy of the whole," Martell said. "That's not a challenge I want to put before the team. We need to find the right gaps, the right places where we can leverage value. That value is going to drive a virtuous cycle of change. There's a lot of things about the DoD that can't be more like industry. We shouldn't try to force that square peg in a round hole, right? We need to find out how to keep it the DoD but also make it more efficient and work better."

Deputy CDAO Marie Palmieri said the office will scale a different operating model for delivering digital technologies. The point of the office is to build end-to-end cohesion on everything from data collection and curation to advanced analytics that will give the agency an advantage in decision-making and operations.

"It really is a collective ecosystem. We had the parts of it, but [we're] putting it together in a way we haven't before to deliver that decision advantage that our leaders need," DoD Chief Information Officer John Sherman said in February. CDAO brings together DoD's Joint Artificial Intelligence Office, the Defense Digital Service, the chief data officer and the Advancing Analytics platform Advana.

Despite trying to veer away from making DoD like industry, Martell has extensive Silicon Valley experience. It's no surprise DoD chose the former head of machine learning at Lyft to head the CDAO. DoD often refers to its future military command and control plans by using ride-sharing apps as an analogy.

"DoD uses ride-sharing service Uber as an analogy to describe its desired end state for JADC2. Uber combines two different apps, one for riders and a second for drivers," the Congressional Research Service report on Joint All Domain Command and Control (JADC2) states. "Uber relies on cellular and Wi-Fi networks to transmit data to match riders and provide driving instructions. JADC2 envisions providing a cloud-like environment for the joint force to share intelligence, surveillance, and reconnaissance data, transmitting across many communications networks, to enable faster decision making."

The CDAO will be heavily involved in JADC2 as DoD plans to inject the program with AI components to quicken that decision making.

Martell said part of his job will be easing the way for industry to bring in their technologies, especially ones that can be applied right off the shelf. That ease will be extended down to even the smallest companies.

"One of the things that I want us to spend a lot of time thinking about is how do we not just go to the big players?" Martell said. "How do we make it easy for other businesses too? How do we create a marketplace for startups, for medium-size, for small businesses? Because particularly in the AI space, and I'm sure in many other spaces as well, there's a lot of innovation happening in two-person shops or five-person shops. You know, a good brain with a good idea, we want to be able to leverage all of that."

Deputy Defense Secretary Kathleen Hicks noted at the same conference that there is a massive innovation ecosystem focused on software in the United States and she wants the CDAO to tap into that.

"There's a real... a part of the impetus I had in the CDAO is there's a power in bringing a vanguard organization with a direct reporting relationship to me and to the secretary at the four-star level that can push us in these areas," she said. "They can build on work that's been underway. We have a number of procurement vehicles already available. I think there's five that are really focused on expanding our access in DoD to nontraditional companies."

Hicks said she hopes the office can also bring in a talented workforce that wants to work for and stay with DoD. The CDAO will be an exemplar for people who want to innovate within government.

Read more:

Now operational DoD chief digital and AI office will be an example of innovation, officials say - Federal News Network


Shield AI Raises $165M Series E to Accelerate Building of the World's Best AI Pilot – sUAS News

Posted: at 2:13 am

Shield AI, a fast-growing defense technology company building AI pilots for aircraft, today announced it has raised $90 million in equity and $75 million in debt as part of a Series E fundraising round, increasing the Company's valuation to $2.3 billion. With this deal, Shield AI joins SpaceX, Palantir, and Anduril as the only multi-billion-dollar defense-tech startups of the past 20 years.

"The future of defense aviation is autonomy. AI pilots are the most disruptive defense technology of our generation, and Shield AI is committed to putting the world's best AI pilots in the hands of the United States and our allies. No company has assembled more or recruits better AI engineering talent for aviation autonomy and intelligent swarming than Shield AI," said Shield AI's co-founder and CEO, Ryan Tseng.

The round was led by Snowpoint Ventures' Doug Philippone, who has also served as Palantir's Global Defense Lead since 2008, with participation from multiple top-tier venture funds including Riot Ventures; Disruptive, which led Shield AI's Series D; and Homebrew, which led Shield AI's seed round. Previous lead investors include Point72, Andreessen Horowitz, Breyer Capital, and SVB Capital.

"Investors are flocking to quality. This round is a reflection of Shield AI's success in creating great products, building a business with strong fundamentals, and dominant technological leadership with an AI pilot proven to be the world's best in numerous military evaluations. We love that they are leveraging an AI and software backbone across a variety of aircraft to deliver truly game-changing value to our warfighters. The work they are doing today is just the tip of the iceberg," said Doug Philippone, co-founder of Snowpoint Ventures.

Shield AI's Hivemind software is an AI pilot for military and commercial aircraft that enables intelligent teams of aircraft to perform missions ranging from room clearance to penetrating air defense systems to dogfighting F-16s. Hivemind employs state-of-the-art algorithms for planning, mapping, and state estimation to enable aircraft to execute dynamic flight maneuvers, and uses reinforcement learning for discovery, learning, and execution of winning tactics and strategies. On aircraft, Hivemind enables full autonomy and is designed to run fully on the edge, disconnected from the cloud, in high-threat, GPS- and communication-degraded environments.

Shield AI's hardware products are its small unmanned aircraft system (sUAS), Nova, and its medium-size vertical take-off and landing (VTOL) UAS, V-BAT. Hivemind is integrated onboard Nova and has been deployed in combat since 2018; it will soon be integrated onboard the V-BAT to further enhance its class-leading capabilities.

"Russia and China are jamming GPS and communications. U.S. and allied forces need swarms of resilient systems flown by AI pilots to operate in these denied environments. We call it low-cost, distributed strategic deterrence. If we had put up a bunch of AI-piloted swarms on the border of Ukraine, the Russians may have thought twice about invading. Distributed swarms are also more survivable than traditional strategic assets like an aircraft carrier (which is a high-cost, centralized strategic deterrent). Every ally is modernizing their military, and they're looking at how AI-piloted aircraft can give them a strategic, tactical, and cost advantage," said Brandon Tseng, Shield AI's co-founder, President, and a former Navy SEAL.

"At the end of the day, this round and Shield AI's work will positively contribute to global security and stability, which are foundational to human progress. Advancements in technology, medicine, education, and the overall human condition are made when security and stability are strong. This requires the United States and our allies, forces for good, to have the best capabilities at their disposal, including AI pilots that protect people and deter conflict," said Shield AI's co-founder and CEO, Ryan Tseng.

Read the original:

Shield AI Raises $165M Series E to Accelerate Building of the World's Best AI Pilot - sUAS News


Can AI Help Differentiate Between Tumor Recurrence and Pseudoprogression on MRI in Patients with Glioblastoma? – Diagnostic Imaging

Posted: at 2:13 am

For patients with glioblastomas, timely decision-making and treatment are critical, as the median prognosis ranges from 16 to 20 months. A key diagnostic challenge in this patient population is differentiating between true tumor progression and temporary pseudoprogression caused by adjunctive use of temozolomide after surgical resection.

While conventional magnetic resonance imaging (MRI) is limited in this regard, the emergence of a deep learning model may offer promise for radiologists and physicians treating these patients, according to new research presented at the Society for Imaging Informatics in Medicine (SIIM) conference.

In a poster abstract presentation, the researchers described assessing patients with glioblastoma who had a second resection due to suspected recurrence based on imaging changes, and reviewed T2 and contrast-enhanced T1 MRI scans taken after the first resection. Using these scans to help develop a deep learning model, they performed five-fold cross-validation with 56 patients (29 patients with true tumor progression and 27 patients with pseudoprogression).

The cross-validation testing revealed a mean area under the curve (AUC) of 0.86, an average sensitivity of 92.7 percent, and an average specificity of 79 percent.
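For readers unfamiliar with this evaluation protocol, the sketch below shows how five-fold cross-validation with AUC, sensitivity, and specificity is typically computed. It is a generic illustration using scikit-learn with synthetic data and a stand-in classifier, not the study's deep learning model or its MRI dataset.

```python
# Illustrative five-fold cross-validation with AUC, sensitivity, and specificity.
# Generic sketch (synthetic data, stand-in classifier), not the study's pipeline.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score, confusion_matrix
from sklearn.model_selection import StratifiedKFold

# Stand-in for 56 patients with a binary label (1 = true progression, 0 = pseudoprogression).
X, y = make_classification(n_samples=56, n_features=20, random_state=0)

aucs, sensitivities, specificities = [], [], []
for train_idx, test_idx in StratifiedKFold(n_splits=5, shuffle=True, random_state=0).split(X, y):
    clf = RandomForestClassifier(random_state=0).fit(X[train_idx], y[train_idx])
    probs = clf.predict_proba(X[test_idx])[:, 1]
    preds = (probs >= 0.5).astype(int)
    tn, fp, fn, tp = confusion_matrix(y[test_idx], preds).ravel()
    aucs.append(roc_auc_score(y[test_idx], probs))
    sensitivities.append(tp / (tp + fn))   # recall on the positive class
    specificities.append(tn / (tn + fp))   # recall on the negative class

print(f"mean AUC: {np.mean(aucs):.2f}, "
      f"mean sensitivity: {np.mean(sensitivities):.2f}, "
      f"mean specificity: {np.mean(specificities):.2f}")
```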

While acknowledging the need for larger studies and external validation of the deep learning model, the researchers said the initial findings are promising for the care and treatment of patients with glioblastomas.

"Conventional MRI reading techniques cannot distinguish between (true tumor progression and pseudoprogression). Therefore, providing a reliable and consistent technique to distinguish chemoradiation-induced (pseudoprogression) from tumor recurrence would be highly beneficial," wrote Mana Moassefi, MD, a research fellow affiliated with the Radiology Informatic Lab at the Mayo Clinic in Rochester, Minn., and colleagues.

Follow this link:

Can AI Help Differentiate Between Tumor Recurrence and Pseudoprogression on MRI in Patients with Glioblastoma? - Diagnostic Imaging


Do scientists need an AI Hippocratic oath? Maybe. Maybe not. – Bulletin of the Atomic Scientists

Posted: at 2:13 am


When a sentient Hanson Robotics robot named Sophia[1] was asked whether she would destroy humans, it replied, "Okay, I will destroy humans." Philip K. Dick, another humanoid robot, has promised to keep humans "warm and safe in my people zoo." And Bina48, another lifelike robot, has expressed that it wants to "take over all the nukes."

All of these robots were powered by artificial intelligence (AI): algorithms that learn from data, make decisions, and perform tasks without human input or even, in some cases, human understanding. And while none of these AIs have followed through with their nefarious plots, some scientists, including the late physicist Stephen Hawking, have warned that super-intelligent, AI-powered computers could harbor and achieve goals that conflict with human life.

"You're probably not an evil ant-hater who steps on ants out of malice, but if you're in charge of a hydroelectric green-energy project, and there's an anthill in the region to be flooded, too bad for the ants," Hawking once said. "Let's not place humanity in the position of those ants."

Thinking machines powered by AI have contributed incalculable benefits to humankind, including help with developing the COVID-19 vaccine at record speed. But scientists recognize the possibility for a dystopic outcome in which computers one day overtake humans by, for example, targeting them with autonomous or lethal weapons, using all available energy, or accelerating climate change. For this reason, some see a need for an AI Hippocratic oath that might provide scientists with ethical guidance as they explore promising, if sometimes fraught, artificial intelligence research. At the same time, others dub that prospect too simplistic to be useful.

The original Hippocratic oath. The Hippocratic oath, named for the Greek physician Hippocrates, is a medical text that offers doctors a code of principles for fulfilling their duties honestly and ethically. Some use the shorthand "first do no harm" to describe it, though the oath does not contain those exact words. It does, however, capture that sentiment, along with other ideas such as respect for one's teachers, a willingness to share knowledge, and more.

To be sure, the Hippocratic oath is not a panacea for avoiding medical harm. During World War II, Nazi doctors performed unethical medical experiments on concentration camp prisoners that led to torture and death. In 1932, the US Public Health Service and Tuskegee Institute conducted a study on syphilis in which they neither obtained informed consent nor offered available treatment to the Black male participants.

That said, the Hippocratic oath continues to offer guiding principles in medicine, even though most medical schools today do not require graduates to recite it.

As with medical research and practice, AI research and practice have great potential to help, and to harm. For this reason, some researchers have called for an AI Hippocratic oath.

The gap between ethical AI principles and practice. Even those who support ethical AI recognize the current gap between principles and practice. Scientists who opt for an ethical approach to AI research likely need to do additional work and incur additional costs that may conflict with short-term commercial incentives, according to a study published in Science and Engineering Ethics. Some suggest that AI research funders might assume some responsibility for trustworthy, safe AI systems. For example, funders might require researchers to sign a trustworthy-AI statement or might conduct their own review that essentially says, "if you want the money, then build trustworthy AI," according to an AI Ethics study. Some recommendations for responsible AI, such as engaging in a stakeholder dialogue suggested in an AI & Society paper, may be common sense in theory but difficult to implement in practice. For example, when the stakeholder is humanity, who should serve as representatives?

Still, many professional societies and nonprofit organizations offer an assortment of professional conduct expectations, either for research in general or AI in particular. The Association for Computing Machinery's Code of Ethics and Professional Conduct, for example, notes that computing professionals should contribute to society and to human well-being, avoid harm, and be honest and trustworthy, along with other expectations. The Future of Life Institute, a nonprofit that advocates within the United Nations, the US government, and the European Union to reduce existential threats to humanity from advanced AI, has garnered signatures from 274 technology companies and organizations and 3,806 leaders, policymakers, and other individuals on its Lethal Autonomous Weapons Pledge. The pledge calls on governments to create a future in which "the decision to take a human life should never be delegated to a machine."

Many private corporations have also attempted to establish ethical codes for AI scientists, but some of these efforts have been criticized as performative. In 2019, for example, Google cancelled the AI ethics board it had formed after less than two weeks when employees discovered that, among other concerns, one of the board members was the CEO of a drone company that used AI for military applications.

Standards such as those outlined by the Association for Computing Machinery are not oaths, and pledges such as that put forth by the Future of Life are not mandatory. This leaves a lot of wiggle room for behavior that may fall short of espoused or hard-to-define ideals.

What do scholars and tech professionals think? "The imposition of an oath on AI or any aspect of technology feels a bit like more of a feel-good tactic than a practical solution," John Nosta, Google Health Advisory Board member and World Health Organization founding member of the digital-health-expert roster, told the Bulletin. He suggests reflecting on fire, one of humanity's first technologies, which has been an essential and beneficial part of the human story but has also been destructive, controlled, and managed. "We have legislation and even insurance around [fire's] appropriate use," Nosta said. "We could learn a few things about how it has evolved and be inculcated into today's world."

Meanwhile, others see a need for an oath.

"Unlike doctors, AI researchers and practitioners do not need a license to practice and may never meet those most impacted by their work," Valerie Pasquarella, a Boston University environmental professor and visiting researcher at Google, told the Bulletin. "Digital Hippocratic oaths are a step in the right direction in that they offer overarching guidance and formalize community standards and expectations." Even so, Pasquarella acknowledged that such an oath would be challenging to implement but noted that a range of certifications exist for working professionals. "Beyond oaths, how can we bring some of that thinking to the AI community?" she asked.

Like Pasquarella, others in the field acknowledge the murky middle between ethical AI principle and practice.

"It is impossible to define the ultimate digital Hippocratic oath for AI scientists," Spiros Margaris, venture capitalist, frequent keynote speaker, and top-ranked AI influencer, said. "My practical advice is to allow as many definitions to exist as people come up with to advance innovation and serve humankind."

But not everyone is convinced that a variety of oaths is the way to go.

"A single, universal digital Hippocratic oath for AI scientists is much better than a variety of oaths," Nikolas Siafakas, an MD and PhD in the University of Crete computer science department who has written on the topic in AI Magazine, told the Bulletin. "It will strengthen the homogeneity of the ethical values and consequences of such an effort to enhance morality among AI scientists, as did the Hippocratic oath for medical scientists."

Still others are inclined to recognize medicine's longer lead time in sorting through ethical conundrums.

"The field is struggling with its relatively sudden rise," Daniel Roy, a University of Toronto computer science professor and Canadian Institute for Advanced Research AI chair, said. Roy thinks that an analogy between medicine and AI is too impoverished to be of use in guiding AI research. "Luckily, there are many who have made it their careers to ensure AI is developed in a way that is consistent with societal values," he said. "I think they're having tremendous influence. Simplistic solutions won't replace hard work."

Yet Roozbeh Yousefzadeh, who works in AI as a post-doctoral fellow at Yale, called a Hippocratic oath for AI scientists and AI practitioners a necessity. He hopes to engage even those outside of the AI community in the conversation. "The public can play an important role by demanding ethical standards," Yousefzadeh said.

One theme on which most agree, however, is AI's potential for both opportunities and challenges.

"Nobody can deny the power of AI to change human life for the better, or the worse," said Hirak Sarkar, a biomedical informatics research fellow at Harvard Medical School. "We should design a guideline to remain benevolent, to put forward the well-being of humankind before any self-interest."

Attempts to regulate AI ethics. The European Union is currently considering a bill known as the Artificial Intelligence Act, the first of its kind, that would ensure some accountability. The ambitious act has potential to reach a large population, but it is not without challenges. For example, the first draft of the bill requires that data sets be free of errors, an impractical expectation for humans to fulfill, given the size of data sets on which AI relies. It also requires that humans fully understand the capabilities and limitations of the high-risk AI system, a requirement that is in conflict with how AI has worked in practice, as humans generally do not understand how AI works. The bill also proposes that tech companies provide regulators with their source code and algorithms, a practice that many would likely resist, according to MIT Technology Review. At the same time, some advisors to the bill have ties to Big Tech, suggesting possible conflicts of interest in the attempt to regulate, according to the EU Observer.

Defining ethics for AI differs from defining ethics for medicine in (at least) one big way. The collection of medical practitioners is more homogenous than the collection of those working in AI research. The latter may hail from medicine but also from computer science, agriculture, security, education, finance, environmental science, the military, biology, manufacturing, and many other fields. For now, professionals in the field have not yet achieved consensus on whether an AI Hippocratic oath would help mitigate threats. But since AI's potential to benefit humanity goes hand-in-hand with a theoretical possibility to destroy human life, researchers and the public might ask an alternate question: If not an AI Hippocratic oath, then what?

[1] Sophia was so lifelike that Saudi Arabia granted it citizenship.

Originally posted here:

Do scientists need an AI Hippocratic oath? Maybe. Maybe not. - Bulletin of the Atomic Scientists


Satellites and AI Can Help Solve Big Problems - If Given the Chance – WIRED

Posted: at 2:13 am

However, as in the Amazon, identifying problem areas only gets you so far if there aren't enough resources to act on those findings. The Nature Conservancy uses its AI model to inform conversations with land managers about potential threats to wildlife or biodiversity. Conservation enforcement in the Mojave Desert is overseen by the US Bureau of Land Management, which only has about 270 rangers and special agents on duty.

In northern Europe, the company Iceye got its start monitoring ice buildup in the waters near Finland with microsatellites and machine learning. But in the past two years, the company began to predict flood damage using microwave wavelength imagery that can see through clouds at any time of day. The biggest challenge now, says Iceye's VP of analytics, Shay Strong, isn't engineering spacecraft, data processing, or refining machine learning models that have become commonplace. It's dealing with institutions stuck in centuries-old ways of doing things.

"We can more or less understand where things are going to happen, we can acquire imagery, we can produce an analysis. But the piece we have the biggest challenge with now is still working with insurance companies or governments," she says.

"It's that next step of local coordination and implementation that it takes to come up with action," says Hamed Alemohammad, chief data scientist at the nonprofit Radiant Earth Foundation, which uses satellite imagery to tackle sustainable development goals like ending poverty and hunger. "That's where I think the industry needs to put more emphasis and effort. It's not just about a fancy blog post and deep learning model."

It's often not only about getting policymakers on board. In a 2020 analysis, a cross-section of academic, government, and industry researchers highlighted the fact that the African continent has a majority of the world's uncultivated arable land and is expected to account for a large part of global population growth in the coming decades. Satellite imagery and machine learning could reduce reliance on food imports and turn Africa into a breadbasket for the world. But, they said, lasting change will necessitate a buildup of professional talent with technical knowledge and government support so Africans can make technology to meet the continent's needs instead of importing solutions from elsewhere. "The path from satellite images to public policy decisions is not straightforward," they wrote.

Labaly Toure is a coauthor of that paper and head of the geospatial department at an agricultural university in Senegal. In that capacity, and as founder of Geomatica, a company providing automated satellite imagery solutions for farmers in West Africa, he's seen satellite imagery and machine learning help decision-makers recognize how the flow of salt can impact irrigation and influence crop yields. He's also seen it help settle questions of how long a family has been on a farm and assist with land management issues.

Sometimes free satellite images from services like NASA's Landsat or the European Space Agency's Sentinel program suffice, but some projects require high-resolution photos from commercial providers, and cost can present a challenge.

"If decision-makers know [the value] it can be easy, but if they don't know, it's not always easy," Toure said.

Back in Brazil, in the absence of federal support, Imazon is now forging ties with more policymakers at the state level. "Right now, there's no evidence the federal government will lead conservation or deforestation efforts in the Amazon," says Souza. In October 2022, Imazon signed cooperation agreements with public prosecutors gathering evidence of environmental crimes in four Brazilian states on the border of the Amazon rainforest to share information that can help prioritize enforcement resources.

When you prosecute people who deforest protected lands, the damage has already been done. Now Imazon wants to use AI to stop deforestation before it happens, interweaving that road-detection model with one designed to predict which communities bordering the Amazon are at the highest risk of deforestation within the next year.

Deforestation continued at historic rates in early 2022, but Souza is hopeful that through work with nonprofit partners, Imazon can expand its deforestation AI to the other seven South American countries that touch the Amazon rainforest.

And Brazil will hold a presidential election this fall. The current leader in the polls, former president Luiz Inácio Lula da Silva, is expected to strengthen enforcement agencies weakened by Bolsonaro and to reestablish the Amazon Fund for foreign reforestation investments. Lula's environmental plan isn't expected out for a few months, but environmental ministers from his previous term in office predict he will make reforestation a cornerstone of his platform.

Read the original here:

Satellites and AI Can Help Solve Big Problems - If Given the Chance - WIRED


Apex.AI tapped to implement its autonomous tech into swarms of electric farming robots – Electrek.co

Posted: at 2:13 am

Yes, you read that correctly. Agricultural machinery manufacturer AGCO is continuing its technical partnership with Apex.AI in order to use its Apex.OS software development kit to add autonomous capabilities to its farming robot concept. The battery-powered Xaver farming robots were developed by AGCO brand Fendt and can autonomously plant seeds on farms 24 hours a day.

Apex.AI is a scalable software developer based in Palo Alto, California, whose Apex.OS software development kit (SDK) aids OEMs in implementing complex, integrated AI software as well as autonomous mobility applications.

We first covered the company when it partnered with Toyota's Woven Planet in April of 2021, allowing the latter to use Apex.OS for safety-critical automotive applications. In late 2021, Apex.AI announced $56.5 million in Series B funding, led by a number of investors such as Toyota Ventures, Volvo Group Venture Capital, and Jaguar Land Rover's InMotion Ventures.

One of those additional investors was AGCO, which made a 2.53% equity investment at the time. In news announced earlier today, AGCO is expanding its relationship with Apex.AI to develop advanced autonomous capabilities for electric farming robots under its Fendt sub-brand.

As they say in Iowa, America needs farmers. But what farmers may actually need are precise, autonomous farming robots that produce zero emissions and can continue working long after the sun has gone down.

In a press release today, Apex.AI announced that AGCO has expanded its relationship with the software developer in order to utilize its technology to integrate autonomous driving components into its Fendt Xaver farming robot concept. This includes LiDAR object detection, collision checking, and planning.

According to the release, AGCO has already leveraged Apex.OS to develop a specific software stack for Xaver, based on automotive-industry standards, that has helped extend its autonomous functions.

Thanks to Apex.AI's SDK, the cloud-connected swarm of Fendt Xaver farming robots can be controlled through an app while providing real-time data from each unit. This includes data like each robot's location, status, and other diagnostics. AGCO director of engineering Christian Kelber spoke to the technology:

Apex.OS is a foundational software framework and development kit for rapidly developing advanced autonomous capabilities. The technology has helped AGCO shorten R&D timelines of our smart agricultural solutions and for the future of highly automated robots. Coming from the automotive industry, Apex.AI enables us to implement safety-critical applications from autonomous driving that can be deployed across our range of solutions globally.
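As a rough illustration of the kind of per-robot telemetry a fleet app like the one described above might consume, here is a hypothetical record; every field name is invented for illustration and is not taken from AGCO or Apex.AI documentation.

```python
# Hypothetical per-robot telemetry record for a fleet-monitoring app.
# Field names are illustrative only, not AGCO or Apex.AI APIs.
from dataclasses import dataclass

@dataclass
class RobotStatus:
    robot_id: str
    latitude: float      # degrees
    longitude: float     # degrees
    battery_pct: float   # 0-100
    seeds_planted: int
    state: str           # e.g. "planting", "charging", "idle"

fleet = [
    RobotStatus("xaver-01", 48.401, 11.743, 82.5, 10_240, "planting"),
    RobotStatus("xaver-02", 48.402, 11.745, 17.0, 9_870, "charging"),
]
low_battery = [r.robot_id for r in fleet if r.battery_pct < 20]
print("Robots needing charge:", low_battery)
```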

With the help of Apex.OS, the Fendt Xaver autonomous robot farming concept can plant seeds on farms 24/7 with centimeter precision, all while using 90% less energy than conventional machines and with zero emissions. Apex.AI co-founder and CEO Jan Becker spoke to the company's expansion beyond the automotive realm and what it could mean for future autonomous applications:

We are leveraging our success in the automotive and autonomous driving industry and applying it to areas that have similar functional safety needs such as agricultural, industrial, mining and construction. Apex.OS allows the software architecture to be modular, scalable and safe, enabling customers to transition their R&D projects to commercial-ready products in record time.

Fleets of tiny, autonomous, electric farming robots: perhaps this is a glimpse into our agricultural future. We recommend checking the Fendt Xaver website to see how this holistic system of farming robots works. It's pretty cool stuff!


Continue reading here:

Apex.AI tapped to implement its autonomous tech into swarms of electric farming robots - Electrek.co


AI Reveals Unsuspected Connections Hidden in the Complex Math Underlying Search for Exoplanets – SciTechDaily

Posted: May 31, 2022 at 2:44 am

Artist's concept of a sun-like star (left) and a rocky planet about 60% larger than Earth in orbit in the star's habitable zone. Gravitational microlensing has the ability to detect such planetary systems and determine the masses and orbital distances, even though the planet itself is too dim to be seen. Credit: NASA Ames/JPL-Caltech/T. Pyle

Machine learning algorithm points to problems in mathematical theory for interpreting microlenses.

Artificial intelligence (AI) systems trained on real astronomical observations now surpass astronomers in filtering through massive amounts of data to find new exploding stars, identify new types of galaxies, and detect the mergers of massive stars, boosting the rate of new discovery in the world's oldest science.

But a type of AI called machine learning can reveal something deeper, University of California, Berkeley, astronomers found: unsuspected connections hidden in the complex mathematics arising from general relativity, in particular, how that theory is applied to finding new planets around other stars.

In a paper published on May 23, 2022, in the journal Nature Astronomy, the researchers describe how an AI algorithm developed to more quickly detect exoplanets when such planetary systems pass in front of a background star and briefly brighten it, a process known as gravitational microlensing, revealed that the decades-old theories now used to explain these observations are woefully incomplete.

In 1936, Albert Einstein himself used his new theory of general relativity to show how the light from a distant star can be bent by the gravity of a foreground star, not only brightening it as seen from Earth, but often splitting it into several points of light or distorting it into a ring, now called an Einstein ring. This is similar to the way a hand lens can focus and intensify light from the sun.

But when the foreground object is a star with a planet, the brightening over time, the light curve, is more complicated. What's more, there are often multiple planetary orbits that can explain a given light curve equally well, so-called degeneracies. That's where humans simplified the math and missed the bigger picture.

Seen from Earth (left), a planetary system moving in front of a background star (source, right) distorts the light from that star, making it brighten as much as 10 or 100 times. Because both the star and exoplanet in the system bend the light from the background star, the masses and orbital parameters of the system can be ambiguous. An AI algorithm developed by UC Berkeley astronomers got around that problem, but it also pointed out errors in how astronomers have been interpreting the mathematics of gravitational microlensing. Credit: Diagram courtesy of Research Gate

The AI algorithm, however, pointed to a mathematical way to unify the two major kinds of degeneracy in interpreting what telescopes detect during microlensing, showing that the two theories are really special cases of a broader theory that, the researchers admit, is likely still incomplete.

"A machine learning inference algorithm we previously developed led us to discover something new and fundamental about the equations that govern the general relativistic effect of light-bending by two massive bodies," Joshua Bloom wrote in a blog post last year when he uploaded the paper to a preprint server, arXiv. Bloom is a UC Berkeley professor of astronomy and chair of the department.

He compared the discovery by UC Berkeley graduate student Keming Zhang to connections that Google's AI team, DeepMind, recently made between two different areas of mathematics. Taken together, these examples show that AI systems can reveal fundamental associations that humans miss.

"I argue that they constitute one of the first, if not the first time that AI has been used to directly yield new theoretical insight in math and astronomy," Bloom said. "Just as Steve Jobs suggested computers could be the bicycles of the mind, we've been seeking an AI framework to serve as an intellectual rocket ship for scientists."

"This is kind of a milestone in AI and machine learning," emphasized co-author Scott Gaudi, a professor of astronomy at The Ohio State University and one of the pioneers of using gravitational microlensing to discover exoplanets. "Keming's machine learning algorithm uncovered this degeneracy that had been missed by experts in the field toiling with data for decades. This is suggestive of how research is going to go in the future when it is aided by machine learning, which is really exciting."

More than 5,000 exoplanets, or extrasolar planets, have been discovered around stars in the Milky Way, though few have actually been seen through a telescope; they are too dim. Most have been detected because they create a Doppler wobble in the motions of their host stars or because they slightly dim the light from the host star when they cross in front of it, transits that were the focus of NASA's Kepler mission. Little more than 100 have been discovered by a third technique, microlensing.

This infographic explains the light curve astronomers detect when viewing a microlensing event, and the signature of an exoplanet: an additional uptick in brightness when the exoplanet lenses the background star. Credit: NASA, ESA, and K. Sahu (STScI)

One of the main goals of NASA's Nancy Grace Roman Space Telescope, scheduled to launch by 2027, is to discover thousands more exoplanets via microlensing. The technique has an advantage over the Doppler and transit techniques in that it can detect lower-mass planets, including those the size of Earth, that are far from their stars, at a distance equivalent to that of Jupiter or Saturn in our solar system.

Bloom, Zhang and their colleagues set out two years ago to develop an AI algorithm to analyze microlensing data faster to determine the stellar and planetary masses of these planetary systems and the distances the planets are orbiting from their stars. Such an algorithm would speed analysis of the likely hundreds of thousands of events the Roman telescope will detect in order to find the 1% or fewer that are caused by exoplanetary systems.

One problem astronomers encounter, however, is that the observed signal can be ambiguous. When a lone foreground star passes in front of a background star, the brightness of the background star rises smoothly to a peak and then drops symmetrically to its original brightness. It's easy to understand mathematically and observationally.
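For reference, that smooth, symmetric single-lens light curve is conventionally described by the standard point-source, point-lens magnification (often called the Paczynski curve). The notation below is the textbook convention, not taken from the paper itself.

```latex
% Standard point-source, point-lens microlensing magnification (Paczynski curve).
% u(t): source-lens separation in units of the Einstein radius; u_0: impact parameter;
% t_0: time of peak brightness; t_E: Einstein-radius crossing time.
A(u) = \frac{u^{2} + 2}{u \sqrt{u^{2} + 4}},
\qquad
u(t) = \sqrt{u_{0}^{2} + \left(\frac{t - t_{0}}{t_{E}}\right)^{2}}
```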

UC Berkeley doctoral student Keming Zhang. Credit: Photo courtesy of Keming Zhang

But if the foreground star has a planet, the planet creates a separate brightness peak within the peak caused by the star. When trying to reconstruct the orbital configuration of the exoplanet that produced the signal, general relativity often allows two or more so-called degenerate solutions, all of which can explain the observations.

"To date, astronomers have generally dealt with these degeneracies in simplistic and artificially distinct ways," Gaudi said. If the distant starlight passes close to the star, the observations could be interpreted either as a wide or a close orbit for the planet, an ambiguity astronomers can often resolve with other data. A second type of degeneracy occurs when the background starlight passes close to the planet. In this case, however, the two different solutions for the planetary orbit are generally only slightly different.

According to Gaudi, these two simplifications of two-body gravitational microlensing are usually sufficient to determine the true masses and orbital distances. In fact, in a paper published last year, Zhang, Bloom, Gaudi, and two other UC Berkeley co-authors, astronomy professor Jessica Lu and graduate student Casey Lam, described a new AI algorithm that does not rely on knowledge of these interpretations at all. The algorithm greatly accelerates analysis of microlensing observations, providing results in milliseconds, rather than days, and drastically reducing the computer crunching.

Zhang then tested the new AI algorithm on microlensing light curves from hundreds of possible orbital configurations of star and exoplanet and discovered something unusual: There were other ambiguities that the two interpretations did not account for. He concluded that the commonly used interpretations of microlensing were, in fact, just special cases of a broader theory that explains the full variety of ambiguities in microlensing events.

"The two previous theories of degeneracy deal with cases where the background star appears to pass close to the foreground star or the foreground planet," Zhang said. "The AI algorithm showed us hundreds of examples from not only these two cases, but also situations where the star doesn't pass close to either the star or planet and cannot be explained by either previous theory. That was key to us proposing the new unifying theory."

Gaudi was skeptical, at first, but came around after Zhang produced many examples where the previous two theories did not fit observations and the new theory did. Zhang actually looked at the data from two dozen previous papers that reported the discovery of exoplanets through microlensing and found that, in all cases, the new theory fit the data better than the previous theories.

"People were seeing these microlensing events, which actually were exhibiting this new degeneracy but just didn't realize it," Gaudi said. "It was really just the machine learning looking at thousands of events where it became impossible to miss."

Zhang and Gaudi have submitted a new paper that rigorously describes the new mathematics based on general relativity and explores the theory in microlensing situations where more than one exoplanet orbits a star.

The new theory technically makes interpretation of microlensing observations more ambiguous, since there are more degenerate solutions to describe the observations. But the theory also demonstrates clearly that observing the same microlensing event from two perspectives, from Earth and from the orbit of the Roman Space Telescope, for example, will make it easier to settle on the correct orbits and masses. That is what astronomers currently plan to do, Gaudi said.

"The AI suggested a way to look at the lens equation in a new light and uncover something really deep about the mathematics of it," said Bloom. "AI is sort of emerging as not just this kind of blunt tool that's in our toolbox, but as something that's actually quite clever. Alongside an expert like Keming, the two were able to do something pretty fundamental."

Reference: "A ubiquitous unifying degeneracy in two-body microlensing systems" by Keming Zhang, B. Scott Gaudi and Joshua S. Bloom, 23 May 2022, Nature Astronomy. DOI: 10.1038/s41550-022-01671-6

See more here:

AI Reveals Unsuspected Connections Hidden in the Complex Math Underlying Search for Exoplanets - SciTechDaily


What does the future hold for AI in healthcare? – Healthcare IT News

Posted: at 2:44 am

Can you imagine a future in which babies wear smart clothing to track their every move? It may sound like something from science fiction, but a romper suit being piloted in Helsinki, Copenhagen, and Pisa does exactly that.

The motor assessment of infants' jumpsuit (MAIJU) looks like typical baby clothing, but there is a crucial difference: it is full of sensors which assess child development.

"MAIJU offers the first-of-its-kind quantitative assessment of infants' motor abilities through the age from supine lying to fluent walking," explains Professor Sampsa Vanhatalo, project lead at the University of Helsinki. "Such quantitation has not been possible anywhere, not even in hospitals. Here, we are bringing the solution to homes, which provides the only ecologically relevant context for motor assessment."

Vanhatalo describes the path from wishful thinking about a solution to a possible clinical implementation as a "windy road."

"There is no lack of dreams or technology, but we are lacking relevant and sufficient clinical problem statements, ecologically and context relevant datasets, reliable clinical phenotyping of the material, as well as suitable legislation for products that don't follow the traditional forms," he says.

Machine learning allowed the researchers at Helsinki to find latent characteristics in infants' movement signals that could not be identified through conventional heuristic planning.

"At the same time, we need to remember that AI in medical applications can only be as intelligent as we allow it to be," adds Vanhatalo. "Real world situations are much muddier than we hope, and the ambiguity of many clinical situations or diagnoses is significantly limiting our chance to build as accurate AI solutions as we would hope. For instance, it is not possible to train and validate a classifier for the myriad of medical diagnoses which do not have clear-cut boundaries."

Vanhatalo also believes that the medical community needs to recognise sensible targets for AI.

"It is much more fruitful to train clinical decision support systems (CDSS) than to train clinical decision systems," he argues. "The latter is what some people hope and others fear; but the liabilities, including legal ones, from the decisions are so big that I struggle to see any company dare to commercialise such solutions. Indeed, I can already see how the legal risks from such liabilities, even if indirect or illusionary, are creating a bottleneck for commercialisation of many good AI products."

The cutting edge of oncology

One area of medicine in which AI holds great potential to revolutionise care is oncology. Professor Karol Sikora, chief medical officer (CMO) at cancer care vanguard Rutherford House, believes that machine learning can benefit physicians by assisting in complex treatment decisions.

"A range of commercial solutions are available to identify and map nearby organs at risk in apposition to the cancer," explains Sikora. "Precision oncology demands the analysis of large volumes of data in an unprecedented way, and we hope AI will provide patient benefit long term."

Rutherford Health's network of oncology centres uses the latest innovations in cancer technology, such as AI for radiotherapy treatment planning.

According to Sikora, machine learning could also have a huge benefit in enhancing patient choice in the future. "AI could drive patient understanding of the risk-benefit equation associated with any intervention," he says.

Demystifying AI

But for healthcare organisations to fully tap the potential of AI, there is a need to demystify the noise around it, according to Atif Chaughtai, senior director of global healthcare and life sciences business at software firm Red Hat.

"AI applied correctly has huge potential in saving lives and managing the ever-increasing cost of healthcare," says Chaughtai. "In the future, AI will continue to evolve and will be widely used as an assistive technology to perform tasks with more accuracy and efficiency, with humans in the loop to make final decisions."

He adds that for AI capability to be adopted successfully, organisations must introduce change at a manageable pace and work collaboratively to innovate on intelligent business processes.

"Oftentimes, as data scientists or IT professionals, we don't take the time to understand the business process of our customer, resulting in poor change management," he says.

Vanhatalo, Sikora, and Chaughtai will be speaking at the session on "Unlocking the Future of AI" at the HIMSS22 European Health Conference and Exhibition, which is taking place June 14-16, 2022.

Read more:

What does the future hold for AI in healthcare? - Healthcare IT News


AI: The pattern is not in the data, it’s in the machine – ZDNet

Posted: at 2:44 am

A neural network transforms input, the circles on the left, to output, on the right. How that happens is a transformation of weights, center, which we often confuse for patterns in the data itself.

It's a commonplace of artificial intelligence to say that machine learning, which depends on vast amounts of data, functions by finding patterns in data.

The phrase, "finding patterns in data," in fact, has been a staple phrase of things such as data mining and knowledge discovery for years now, and it has been assumed that machine learning, and its deep learning variant especially, are just continuing the tradition of finding such patterns.

AI programs do, indeed, result in patterns, but, just as "The fault, dear Brutus, lies not in our stars but in ourselves," the fact of those patterns is not something in the data, it is what the AI program makes of the data.

Almost all machine learning models function via a learning rule that changes the so-called weights, also known as parameters, of the program as the program is fed examples of data, and, possibly, labels attached to that data. It is the value of the weights that counts as "knowing" or "understanding."

The pattern that is being found is really a pattern of how weights change. The weights are simulating how real neurons are believed to "fire", following the principle formulated by psychologist Donald O. Hebb, which became known as Hebbian learning, the idea that "neurons that fire together, wire together."
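As a rough illustration of the Hebbian idea that co-active units strengthen their connection, here is a minimal weight-update sketch. It is a didactic toy under invented data, not the training rule of any production deep learning system.

```python
# Minimal Hebbian update: a connection strengthens when its input and output
# units are active together ("fire together, wire together").
# A didactic toy, not the training rule of any production deep learning system.
import numpy as np

rng = np.random.default_rng(0)
inputs = rng.integers(0, 2, size=(100, 8)).astype(float)    # 100 binary input patterns
outputs = rng.integers(0, 2, size=(100, 4)).astype(float)   # paired output patterns
w = np.zeros((8, 4))                                         # connection strengths
eta = 0.01                                                   # learning rate

for x, y in zip(inputs, outputs):
    w += eta * np.outer(x, y)   # strengthen weights between co-active units

# The learned "pattern" lives in w: large entries mark input-output pairs that
# tended to be active together across the data.
print(np.round(w, 2))
```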


It is the pattern of weight changes that is the model for learning and understanding in machine learning, something the founders of deep learning emphasized. As expressed almost forty years ago, in one of the foundational texts of deep learning, Parallel Distributed Processing, Volume I, James McClelland, David Rumelhart, and Geoffrey Hinton wrote,

"What is stored is the connection strengths between units that allow these patterns to be created [...] If the knowledge is the strengths of the connections, learning must be a matter of finding the right connection strengths so that the right patterns of activation will be produced under the right circumstances."

McClelland, Rumelhart, and Hinton were writing for a select audience, cognitive psychologists and computer scientists, and they were writing in a very different age, an age when people didn't make easy assumptions that anything a computer did represented "knowledge." They were laboring at a time when AI programs couldn't do much at all, and they were mainly concerned with how to produce a computation, any computation, from a fairly limited arrangement of transistors.

Then, starting with the rise of powerful GPU chips some sixteen years ago, computers really did begin to produce interesting behavior, capped off by the landmark ImageNet performance of Hinton's work with his graduate students in 2012 that marked deep learning's coming of age.

As a consequence of the new computer achievements, the popular mind started to build all kinds of mythology around AI and deep learning. There was a rush of really bad headlines likening the technology to super-human performance.


Today's conception of AI has obscured what McClelland, Rumelhart, and Hinton focused on, namely, the machine, and how it "creates" patterns, as they put it. They were very intimately familiar with the mechanics of weights constructing a pattern as a response to what was, in the input, merely data.

Why does all that matter? If the machine is the creator of patterns, then the conclusions people draw about AI are probably mostly wrong. Most people assume a computer program is perceiving a pattern in the world, which can lead to people deferring judgment to the machine. If it produces results, the thinking goes, the computer must be seeing something humans don't.

Except that a machine that constructs patterns isn't explicitly seeing anything. It's constructing a pattern. That means what is "seen" or "known" is not the same as the colloquial, everyday sense in which humans speak of themselves as knowing things.

Instead of starting from the anthropocentric question, "What does the machine know?" it's best to start from a more precise question: "What is this program representing in the connections of its weights?"

Depending on the task, the answer to that question takes many forms.

Consider computer vision. The convolutional neural network that underlies machine learning programs for image recognition and other visual perception is composed of a collection of weights that measure pixel values in a digital image.

The pixel grid is already an imposition of a 2-D coordinate system on the real world. Provided with the machine-friendly abstraction of the coordinate grid, a neural net's task of representation boils down to matching the strength of collections of pixels to a label that has been imposed, such as "bird" or "blue jay."

In a scene containing a bird, or specifically a blue jay, many things may be happening, including clouds, sunshine, and passers by. But the scene in its entirety is not the thing. What matters to the program is the collection of pixels most likely to produce an appropriate label. The pattern, in other words, is a reductive act of focus and selection inherent in the activation of neural net connections.

You might say, a program of this kind doesn't "see" or "perceive" so much as it filters.
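To make that description concrete, here is a minimal convolutional classifier sketch, assuming PyTorch; the layer sizes, input shape, and two-label output are placeholders for illustration, not the architecture of any specific production vision model.

```python
# Minimal convolutional classifier: pixel grid in, label scores out.
# A sketch assuming PyTorch; layer sizes and the two labels are placeholders.
import torch
import torch.nn as nn

class TinyConvNet(nn.Module):
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 8, kernel_size=3, padding=1),   # learnable filters over pixels
            nn.ReLU(),
            nn.MaxPool2d(2),                             # keep the strongest responses
            nn.Conv2d(8, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(16, n_classes)       # map filter responses to labels

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

model = TinyConvNet()
image = torch.rand(1, 3, 64, 64)     # a stand-in 64x64 RGB image
scores = model(image)                 # one score per label, e.g. "bird" vs "not bird"
print(scores.softmax(dim=1))
```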


The same is true in games, where AI has mastered chess and poker. In the "full-information" game chess, mastered by DeepMind's AlphaZero program, the machine learning task boils down to crafting a probability score at each moment of how much a potential next move will lead ultimately to win, lose or draw.

Because the number of potential future game board configurations cannot be calculated even by the fastest computers, the computer's weights cut short the search for moves by doing what you might call summarizing. The program summarizes the likelihood of a success if one were to pursue several moves in a given direction, and then compares that summary to the summary of potential moves to be taken in another direction.
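That summarizing step can be sketched, very loosely, as averaging the estimated outcomes reached under each candidate move and picking the best average. The toy below only illustrates that comparison; it is not AlphaZero's actual search or value network, and the random evaluator is a stand-in.

```python
# Toy illustration of "summarizing" move quality: average the estimated outcomes
# reached after each candidate move, then pick the best summary.
# Not AlphaZero's actual search; the random evaluator stands in for a value network.
import random

random.seed(0)

def estimate_outcome(position: str) -> float:
    """Stand-in for a learned value estimate in [-1, 1] (loss ... win)."""
    return random.uniform(-1.0, 1.0)

def summarize_move(position: str, move: str, n_samples: int = 50) -> float:
    """Summarize a move as the mean estimated outcome of positions it leads to."""
    return sum(estimate_outcome(f"{position}+{move}+{i}") for i in range(n_samples)) / n_samples

position = "start"
candidate_moves = ["e4", "d4", "c4", "Nf3"]
summaries = {m: summarize_move(position, m) for m in candidate_moves}
best = max(summaries, key=summaries.get)
print(summaries, "->", best)
```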

Whereas the state of the board at any moment (the position of pieces, and which pieces remain) might "mean" something to a human chess grandmaster, it's not clear the term "mean" has any meaning for DeepMind's AlphaZero for such a summarizing task.

A similar summarizing task is achieved for the Pluribus program that in 2019 conquered the hardest form of poker, No-limit Texas hold'em. That game is even more complex in that it has hidden information, the players' face down cards, and additional "stochastic" elements of bluffing. But the representation is, again, a summary of likelihoods by each turn.

Even in programs that handle human language, what's in the weights is different from what the casual observer might suppose. GPT-3, the top language program from OpenAI, can produce strikingly human-like output in sentences and paragraphs.

Does the program "know" language? Its weights hold a representation of the likelihood of how individual words and even whole strings of text are found in sequence with other words and strings.
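A drastically simplified picture of "likelihoods of words in sequence" is a bigram count model, sketched below. GPT-3's actual representation is a large transformer, so this toy only conveys the flavor of sequence statistics stored in weights; the tiny corpus is invented.

```python
# Toy bigram model: the "weights" here are just counts of which word follows which.
# GPT-3 stores far richer sequence statistics in a deep transformer; this toy only
# conveys the flavor of likelihoods over word sequences.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word_probs(word: str) -> dict:
    """Probability of each word appearing immediately after `word` in the corpus."""
    counts = follows[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_probs("the"))   # e.g. {'cat': 0.67, 'mat': 0.33}
```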

You could call that function of a neural net a summary similar to AlphaGo or Pluribus, given that the problem is rather like chess or poker. But the possible states to be represented as connections in the neural net are not just vast, they are infinite given the infinite composability of language.

On the other hand, given that the output of a language program such as GPT-3, a sentence, is a fuzzy answer rather than a discrete score, the "right answer" is somewhat less demanding than the win, lose or draw of chess or poker. You could also call this function of GPT-3 and similar programs an "indexing" or an "inventory" of things in their weights.


Do humans have a similar kind of inventory or index of language? There doesn't seem to be any indication of it so far in neuroscience. Likewise, in the expression "to tell the dancer from the dance," does GPT-3 spot the multiple levels of significance in the phrase, or the associations? It's not clear such a question even has a meaning in the context of a computer program.

In each of these cases (chess board, cards, word strings) the data are what they are: a fashioned substrate divided in various ways, a set of plastic rectangular paper products, a clustering of sounds or shapes. Whether such inventions "mean" anything, collectively, to the computer is only a way of saying that a computer becomes tuned in response, for a purpose.

The things such data prompt in the machine (filters, summarizations, indices, inventories, or however you want to characterize those representations) are never the thing in itself. They are inventions.


But, you may say, people see snowflakes and see their differences, and also catalog those differences, if they have a mind to. True, human activity has always sought to find patterns, via various means. Direct observation is one of the simplest means, and in a sense, what is being done in a neural network is a kind of extension of that.

You could say the neural network reveals what has always been true of human activity: that to speak of patterns is to impose something on the world rather than to find something in the world. In the world, snowflakes have form, but that form is only a pattern to a person who collects them, indexes them and categorizes them. It is a construction, in other words.

The activity of creating patterns will increase dramatically as more and more programs are unleashed on the data of the world, and their weights are tuned to form connections that we hope create useful representations. Such representations may be incredibly useful. They may someday cure cancer. It is useful to remember, however, that the patterns they reveal are not out there in the world; they are in the eye of the perceiver.


Here is the original post:

AI: The pattern is not in the data, it's in the machine - ZDNet

Posted in Ai | Comments Off on AI: The pattern is not in the data, it’s in the machine – ZDNet

AI has a dangerous bias problem – here's how to manage it – The Next Web

Posted: at 2:44 am

AI now guides numerous life-changing decisions, from assessing loan applications to determining prison sentences.

Proponents of the approach argue that it can eliminate human prejudices, but critics warn that algorithms can amplify our biases without even revealing how they reached the decision.

This can result in AI systems causing Black people to be wrongfully arrested, or child services unfairly targeting poor families. The victims are frequently from groups that are already marginalized.


Alejandro Saucedo, Chief Scientist at the Institute for Ethical AI & Machine Learning and Engineering Director at ML startup Seldon, warns organizations to think carefully before deploying algorithms. He shared his tips for mitigating the risks with TNW.

Machine learning systems need to provide transparency. This can be a challenge when using powerful AI models, whose inputs, operations, and outcomes aren't obvious to humans.

Explainability has been touted as a solution for years, but effective approaches remain elusive.

"The machine learning explainability tools can themselves be biased," says Saucedo. "If you're not using the relevant tool, or if you're using a specific tool in a way that's incorrect or not fit for purpose, you are getting incorrect explanations. It's the usual software paradigm of garbage in, garbage out."

While there's no silver bullet, human oversight and monitoring can reduce the risks.

Saucedo recommends identifying the processes and touchpoints that require a human-in-the-loop. This involves interrogating the underlying data, the model that is used, and any biases that emerge during deployment.

The aim is to identify the touchpoints that require human oversight at each stage of the machine learning lifecycle.

Ideally, this will ensure that the chosen system is fit-for-purpose and relevant to the use case.

Domain experts can also use machine learning explainers to assess the prediction of the model, but it's imperative that they first evaluate the appropriateness of the system.
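The article does not name a specific explainer; as one hedged example of the kind of tool a domain expert might reach for, the sketch below uses permutation importance from scikit-learn on a synthetic dataset (the data and model are illustrative assumptions, and, as Saucedo notes, any such tool can mislead if used outside its assumptions):

```python
# Illustrative only: probe which inputs a model leans on by shuffling each
# feature and measuring how much performance drops.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature {i}: {importance:.3f}")
```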

"When I say domain experts, I don't always mean technical data scientists," says Saucedo. "They can be industry experts, policy experts, or other individuals with expertise in the challenge that's being tackled."

The level of human intervention should be proportionate to the risks. An algorithm that recommends songs, for instance, wont require as much oversight as one that dictates bail conditions.

In many cases, an advanced system will only increase the risks. Deep learning models, for example, can add a layer of complexity that causes more problems than it solves.

"If you cannot understand the ambiguities of a tool you're introducing, but you do understand that the risks have high stakes, that's telling you that it's a risk that should not be taken," says Saucedo.

The operators of AI systems must also justify the organizational process around the models they introduce.

This requires an assessment of the entire chain of events that leads to a decision, from procuring data to the final output.

You need a framework of accountability

"There is a need to ensure accountability at each step," says Saucedo. "It's important to make sure that there are best practices on not just the explainability stage, but also on what happens when something goes wrong."

This includes providing a means to analyze the pathway to the outcome, data on which domain experts were involved, and information on the sign-off process.

"You need a framework of accountability through robust infrastructure and a robust process that involves domain experts relevant to the risk involved at every stage of the lifecycle."

When AI systems go wrong, the company that deployed them can also suffer the consequences.

This can be particularly damaging when using sensitive data, which bad actors can steal or manipulate.

"If artifacts are exploited they can be injected with malicious code," says Saucedo. "That means that when they are running in production, they can extract secrets or share environment variables."

The software supply chain adds further dangers.

Organizations that use common data science tools such as TensorFlow and PyTorch introduce extra dependencies, which can heighten the risks.

An upgrade could cause a machine learning system to break, and attackers can inject malware at the supply chain level.

The consequences can exacerbate existing biases and cause catastrophic failures.
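As one small, assumed mitigation in that spirit (a sketch, not a practice the article itself prescribes), a deployment can fail fast when installed dependency versions drift from the ones that were reviewed; the package names and version strings below are placeholders:

```python
# Illustrative guard: refuse to start if dependencies drift from the
# reviewed versions. Packages and versions here are placeholders.
from importlib.metadata import version

EXPECTED = {
    "torch": "2.1.0",
    "numpy": "1.26.0",
}

def check_pinned_versions() -> None:
    for package, expected in EXPECTED.items():
        installed = version(package)  # raises if the package is missing
        if installed != expected:
            raise RuntimeError(
                f"{package} {installed} does not match reviewed version {expected}"
            )

check_pinned_versions()
```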

Saucedo again recommends applying best practices and human intervention to mitigate the risks.

An AI system may promise better results than humans, but without their oversight, the results can be disastrous.


View post:

AI has a dangerous bias problem – here's how to manage it - The Next Web

Posted in Ai | Comments Off on AI has a dangerous bias problem – here's how to manage it – The Next Web
