
Category Archives: Ai

How Moderna, Home Depot, and others are succeeding with AI – MIT Sloan News

Posted: August 22, 2021 at 3:58 pm


When pharmaceutical company Moderna announced the first clinical trial of a COVID-19 vaccine, it was a proud moment but not a surprising one for Dave Johnson, the company's chief data and artificial intelligence officer.

When Johnson joined the company in 2014, he helped put in place automated processes and AI algorithms to scale up production of the small batches of messenger RNA (mRNA) needed to run clinical experiments. This groundwork contributed to Moderna releasing one of the first COVID-19 vaccines (using mRNA) even as the world had only started to understand the virus's threat.

"The whole COVID vaccine development: we're immensely proud of the work that we've done there, and we're immensely proud of the superhuman effort that our people went through to bring it to market so quickly," Johnson said during a bonus episode of the MIT Sloan Management Review podcast Me, Myself, and AI.

"But a lot of it was built on this infrastructure that we had put in place, where we didn't build algorithms specifically for COVID; we just put them through the same pipeline of activity that we've been doing," Johnson said. "We just turned it as fast as we could."

Successfully using AI in business is at the heart of the podcast, which recently finished its second season. The podcast is hosted by Sam Ransbotham, professor of information systems at Boston College, and Shervin Khodabandeh, senior partner with Boston Consulting Group and co-lead of its AI practice in North America. The series features leaders who are achieving big wins with AI.

Here's a look at some of the highlights from this season.

If you're frantically searching the Home Depot website for a way to patch a hole in your wall, chances are you're not thinking of the people who've generated the recommendation for the correct brackets to use with your new wall-mounted mirror or the project guide for the repairs you're doing.

But Huiming Qu, The Home Depot's senior director of data science and machine learning products, marketing, and online, is not only thinking about those data scientists and engineers; she's leading them, and doing it in a way she hopes will leave both her team and customers happy. To do this, Qu's team pulls as much data as it can from customer visits to the site, such as what was in their carts and what their prior searches were.

Qu's team then weaves that information into an "extremely, extremely light" test version of an algorithm to cut down on development time and to figure out whether that change will be possible within Home Depot's digital infrastructure.

"It takes a cross-functional team iteratively to move a lot faster, to break down that bigger problem, bigger goals, to many smaller ones that we can achieve very quickly," Qu said.
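The article doesn't describe what those lightweight prototypes look like under the hood, but a minimal sketch of such a first pass could be a simple co-occurrence recommender built from cart data. Everything here (the item names, the scoring rule) is hypothetical and for illustration only:

from collections import Counter
from itertools import combinations

# Toy cart histories standing in for the site-visit data described above
# (hypothetical item names; the real feature set is not public).
carts = [
    {"wall anchor", "mirror bracket", "level"},
    {"wall anchor", "spackle", "putty knife"},
    {"mirror bracket", "level", "stud finder"},
    {"spackle", "putty knife", "sandpaper"},
]

# Count how often each ordered pair of items shares a cart.
co_counts = Counter()
for cart in carts:
    for a, b in combinations(sorted(cart), 2):
        co_counts[(a, b)] += 1
        co_counts[(b, a)] += 1

def recommend(current_cart, k=3):
    # Score every item outside the cart by co-occurrence with items in it.
    scores = Counter()
    for (a, b), n in co_counts.items():
        if a in current_cart and b not in current_cart:
            scores[b] += n
    return [item for item, _ in scores.most_common(k)]

print(recommend({"wall anchor"}))  # e.g. ['mirror bracket', 'level', ...]

A prototype this crude can still answer the question Qu describes: whether a candidate change is feasible within the existing infrastructure, before any heavier model is built.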

When it comes to AI and machine learning at Google, the tech company applies three principles to innovation: focus on the user, rapidly prototype, and think in 10x.

"We want to make sure we're solving for a problem that also has the scale that will be worth it and really advances whatever we're trying to do, not in a small way, but in a really big way," said Will Grannis, managing director of Google Cloud's Office of the CTO.

But before Google puts too many resources behind these 10x or "moonshot" solutions, engineers are encouraged to take on "roof shot" projects.

Rather than aiming for the sky right out of the gate, engineers only have to get an idea to the roof, Grannis said. A moonshot is often the product of a series of smaller roof shots, he said, and this approach allows him to see who is willing to put in the effort to see something through from start to finish.

"If people don't believe in the end state, the big transformation, they're usually much less likely to journey across those roof shots and to keep going when things get hard," Grannis said. "My job is to create an environment where people feel empowered, encouraged, and excited to try, and [I] try to demotivate them as little as possible, because they'll find their way to the roof shot, and then the next one, and then the next one, and then pretty soon you're three years in, and I couldn't stop a project if I wanted to."

JoAnn Stonier, chief data officer at Mastercard, is using AI and machine learning to prevent and uncover bias, even though most datasets will have some bias in them to begin with.

And that's OK. The 1910 U.S. voter rolls, for example, are a dataset, Stonier said. They could be used to study something like the voting habits of early 20th-century white men. But you would also need to acknowledge that women and people of color are missing from that dataset, so your study wouldn't reflect the entire U.S. population in 1910.

"The problem is, if you don't remember that, or you're not mindful of that, then you have an inquiry that's going to learn off of a dataset that is missing characteristics that [are] going to be important to whatever that other inquiry is," Stonier said. "Those are some of the ways that I think we can actually begin to design a better future, but it means really being very mindful of what's inherent in the dataset: what's there, what's missing, but also what can be imputed."

The complete two seasons of Me, Myself, and AI can be listened to on Apple Podcasts and Spotify. Transcripts of the Me, Myself, and AI podcast are also available.

Excerpt from:

How Moderna, Home Depot, and others are succeeding with AI - MIT Sloan News

Posted in Ai | Comments Off on How Moderna, Home Depot, and others are succeeding with AI – MIT Sloan News

AI for Impact lives up to its name – MIT News

Posted: at 3:58 pm

For entrepreneurial MIT students looking to put their skills to work for a greater good, the Media Arts and Sciences class MAS.664 (AI for Impact) has been a destination point. With the onset of the pandemic, that goal came into even sharper focus. Just weeks before the campus shut down in 2020, a team of students from the class launched a project that would make significant strides toward an open-source platform to identify coronavirus exposures without compromising personal privacy.

Their work was at the heart of Safe Paths, one of the earliest contact tracing apps in the United States. The students joined with volunteers from other universities, medical centers, and companies to publish their code, alongside a well-received white paper describing the privacy-preserving, decentralized protocol, all while working with organizations wishing to launch the app within their communities. The app and related software eventually got spun out into the nonprofit PathCheck Foundation, which today engages with public health entities and is providing exposure notifications in Guam, Cyprus, Hawaii, Minnesota, Alabama, and Louisiana.

The formation of Safe Paths "demonstrates the special sense among MIT researchers that we can launch something that can help people around the world," notes Media Lab Associate Professor Ramesh Raskar, who teaches the class together with Media Lab Professor Alex "Sandy" Pentland and Media Lab Lecturer Joost Bonsen. "To have that kind of passion and ambition, but also the confidence that what you create here can actually be deployed globally, is kind of amazing."

AI for Impact, created by Pentland, began meeting two decades ago under the course name Development Ventures, and has nurtured multiple thriving businesses. Examples of class ventures that Pentland incubated or co-founded include Dimagi, Cogito, Ginger, Prosperia, and Sanergy.

The aim-high challenge posed to each class is to come up with a business plan that touches a billion people, and "it can't all be in one country," Pentland explains. Not every class effort becomes a business, but 20 percent to 30 percent of students start something, "which is great for an entrepreneur class," says Pentland.

Opportunities for Impact

The numbers behind Dimagi, for instance, are striking. Its core product, CommCare, has helped front-line health workers provide care for more than 400 million people in more than 130 countries around the world. When it comes to maternal and child care, Dimagi's platform has registered one in every 110 pregnancies worldwide. This past year, several governments around the world deployed CommCare applications for Covid-19 response, from Sierra Leone and Somalia to New York and Colorado.

Spinoffs like Cogito, Prosperia, and Ginger have likewise grown into highly successful companies. Cogito helps a million people a day gain access to the health care they need; Prosperia helps manage social support payments to 80 million people in Latin America; and Ginger handles mental health services for over 1 million people.

The passion behind these and other class ventures points to a central idea of the class, Pentland notes: MIT students are often looking for ways to build entrepreneurial businesses that enable positive social change.

During the spring 2021 class, for example, promising student projects included tools to help residents of poor communities transition to owning their homes rather than renting, and to take better control of their community health.

"It's clear that the people who are graduating from here want to do something significant with their lives ... they want to have an impact on their world," Pentland says. "This class enables them to meet other people who are interested in doing the same thing, and offers them some help in starting a company to do it."

Many of the students who join the class come in with a broad set of interests. Guest lectures, case studies of other social entrepreneurship projects, and an introduction to a broad ecosystem of expertise and funding then help students refine their general ideas into specific and viable projects.

A path toward confronting a pandemic

Raskar began co-teaching the class in 2019 and brought a "Big AI" focus to the Development Ventures class, inspired by an AI for Impact team he had set up at his former employer, Facebook. "What I realized is that companies like Google or Facebook or Amazon actually have enough data about all of us that they can solve major problems in our society: climate, transportation, health, and so on," he says. "This is something we should think about more seriously: how to use AI and data for positive social impact, while protecting privacy."

Early into the spring 2020 class, as students were beginning to consider their own projects, Raskar approached the class about the emerging coronavirus outbreak. Students like Kristen Vilcans recognized the urgency, and the opportunity. She and 10 other students joined forces to work on a project that would focus on Covid-19.

"Students felt empowered to do something to help tackle the spread of this alarming new virus," Raskar recalls. "They immediately began to develop data- and AI-based solutions to one of the most critical pieces of addressing a pandemic: halting the chain of infections. They created and launched one of the first digital contact tracing and exposure notification solutions in the U.S., developing an early alert system that engaged the public and protected privacy.

Raskar looks back on the moment when a core group of students coalesced into a team. It was very rare for a significant part of the class to just come together saying, 'lets do this, right away.' It became as much a movement as a venture.

Group discussions soon began to center around an open-source, privacy-first digital set of tools for Covid-19 contact tracing. For the next two weeks, right up to the campus shutdown in March 2020, the team took over two adjacent conference rooms in the Media Lab, and started a Slack messaging channel devoted to the project. As the team members reached out to an ever-wider circle of friends, colleagues, and mentors, the number of participants grew to nearly 1,600 people, coming together virtually from all corners of the world.

Kaushal Jain, a Harvard Business School student who had cross-registered for the spring 2020 class to get to know the MIT ecosystem, was also an early participant in Safe Paths. He wrote up an initial plan for the venture and began working with external organizations to figure out how to structure it into a nonprofit company. Jain eventually became the project's lead for funding and partnerships.

Vilcans, a graduate student in system design and management, served as Safe Paths' communications lead through July 2020, while still working a part-time job at Draper Laboratory and taking classes.

"There are these moments when you want to dive in, you want to contribute and you want to work nonstop," she says, adding that the experience was also a wake-up call on how to manage burnout and how to balance what you need as a person while contributing to a high-impact team. "That's important to understand as a leader for the future."

MIT recognized Vilcans' contributions later that year with the 2020 SDM Student Award for Leadership, Innovation, and Systems Thinking.

Jain, too, says the class gave him more than he could have expected.

"I made strong friendships with like-minded people from very different backgrounds," he says. "One key thing that I learned was to be flexible about the kind of work you want to do. Be open and see if there's an opportunity, either through crisis or through something that you believe could really change a lot of things in the world. And then just go for it."

See the rest here:

AI for Impact lives up to its name - MIT News

Posted in Ai | Comments Off on AI for Impact lives up to its name – MIT News

Val Kilmer reclaims his voice through AI technology after throat cancer – The National

Posted: at 3:58 pm

After a two-year battle with throat cancer and a tracheotomy that severely affected his speech, Val Kilmer has reclaimed his voice through AI technology.

UK software company Sonantic used old recordings of the Top Gun actor's voice to recreate a computer-generated version.

The company, known for its voice cloning work, shared a clip of the results on its official YouTube channel earlier in August.

"My voice as I knew it was taken away from me. People around me struggle to understand when I'm talking, Kilmer, 61, is heard saying in the clip through his AI voice. But despite all that I still feel, I'm the exact same person. Still the same creative soul. A soul that dreams ideas and stories confidently, but now I can express myself again, bring these ideas to you, and show you this part of myself once more. A part that was never truly gone. Just hiding away.

Kilmer had an active part in developing the AI voice, online news website The Wrap reports. The Batman Forever star provided the archival footage of his voice, which was then used to create the prototype.


"I'm grateful to the entire team at Sonantic who masterfully restored my voice in a way I've never imagined possible, Kilmer told The Wrap. "As human beings, the ability to communicate is the core of our existence and the side effects from throat cancer have made it difficult for others to understand me. The chance to narrate my story, in a voice that feels authentic and familiar, is an incredibly special gift.

Kilmer's throat cancer diagnosis was not publicly confirmed until 2017, two years after the actor was first hospitalised for the condition. By then, he had undergone chemotherapy and a tracheotomy procedure that reduced his voice to a rasp. In 2020, the actor revealed that he had been cancer-free for four years.

News of Kilmer's new computer-generated voice comes just after the release of his documentary Val on Amazon Prime Video. Co-produced by the actor's daughter and son, the film examines Kilmer's life and career, as well as his cancer recovery.

The AI voice is not featured in Val, but moving forward, Kilmer will be able to use his new voice in both a professional and personal capacity.

Updated: August 21st 2021, 12:32 PM

Go here to see the original:

Val Kilmer reclaims his voice through AI technology after throat cancer - The National

Posted in Ai | Comments Off on Val Kilmer reclaims his voice through AI technology after throat cancer – The National

AI Weekly: The road to ethical adoption of AI – VentureBeat

Posted: August 14, 2021 at 1:30 am


As new principles emerge to guide the development of ethical, safe, and inclusive AI, the industry faces self-inflicted challenges. Increasingly, there are many sets of guidelines (the Organization for Economic Cooperation and Development's AI repository alone hosts more than 100 documents) that are vague and high-level. And while a number of tools are available, most come without actionable guidance on how to use, customize, and troubleshoot them.

This is cause for alarm, because as the coauthors of a recent paper write, AI's impacts are hard to assess, especially when they have second- and third-order effects. Ethics discussions tend to focus on futuristic scenarios that may not come to pass and unrealistic generalizations that make the conversations untenable. In particular, companies run the risk of engaging in "ethics shopping," "ethics washing," or "ethics shirking," in which they ameliorate their position with customers to build trust while minimizing accountability.

The points are salient in light of efforts by the European Commission's High-Level Expert Group on AI (HLEG) and the U.S. National Institute of Standards and Technology, among others, to create standards for building trustworthy AI. In a paper, digital ethics researcher Mark Ryan argues that AI isn't the type of thing that has the capacity to be trustworthy, because the category of trust simply doesn't apply to AI. In fact, AI can't have the capacity to be trusted as long as it can't be held responsible for its actions, he argues.

"Trust is separate from risk analysis that is solely based on predictions based on past behavior," he explains. "While reliability and past experience may be used to develop, confer, or reject trust placed in the trustee, it is not the sole or defining characteristic of trust. Though we may trust people that we rely on, it is not presupposed that we do."

Productizing AI responsibly means different things to different companies. For some, "responsible" implies adopting AI in a manner that's ethical, transparent, and accountable. For others, it means ensuring that their use of AI remains consistent with laws, regulations, norms, customer expectations, and organizational values. In any case, responsible AI promises to guard against the use of biased data or algorithms, providing an assurance that automated decisions are justified and explainable, at least in theory.

Recognizing this, organizations must overcome a misalignment of incentives, disciplinary divides, distributions of responsibilities, and other blockers in responsibly adopting AI. It requires an impact assessment framework that's not only broad, flexible, iterative, possible to operationalize, and guided, but highly participatory as well, according to the paper's coauthors. They emphasize the need to shy away from anticipating impacts that are assumed to be important and to become more deliberate in deployment choices. As a way of normalizing the practice, the coauthors advocate for including these ideas in documentation the same way that topics like privacy and bias are currently covered.

Another paper, this one from researchers at the Data & Society Research Institute and Princeton, posits algorithmic impact assessments as a tool to help AI designers analyze the benefits and potential pitfalls of algorithmic systems. Impact assessments can address the issues of transparency, fairness, and accountability by providing guardrails and accountability forums that can compel developers to make changes to AI systems.

This is easier said than done, of course. Algorithmic impact assessments focus on the effects of AI decision-making, which doesn't necessarily measure harms and may even obscure them; real harms can be difficult to quantify. But if the assessments are implemented with accountability measures, they can perhaps foster technology that respects rather than erodes dignity.

As Montreal AI ethics researcher Abhishek Gupta recently wrote in a column: "Design decisions for AI systems involve value judgements and optimization choices. Some relate to technical considerations like latency and accuracy; others relate to business metrics. But each requires careful consideration, as they have consequences in the final outcome from the system. To be clear, not everything has to translate into a tradeoff. There are often smart reformulations of a problem so that you can meet the needs of your users and customers while also satisfying internal business considerations."

For AI coverage, send news tips to Kyle Wiggers and be sure to subscribe to the AI Weekly newsletter and bookmark our AI channel, The Machine.

Thanks for reading,

Kyle Wiggers

AI Staff Writer

Read the original post:

AI Weekly: The road to ethical adoption of AI - VentureBeat

Posted in Ai | Comments Off on AI Weekly: The road to ethical adoption of AI – VentureBeat

Upstart: Can AI Kill The FICO Score? – Forbes

Posted: at 1:30 am

Photo: a woman holding a mobile phone with a loan application approval, being prompted to press a button to release the funds.

Last December, Upstart launched its IPO and raised about $240 million. On the first day of trading, the shares jumped 47%.

But this was just the beginning of the gains, as the IPO would soon become one of the top performers of the past year. The return? About 800%.

Then again, the company is a high-growth fintech company that has effectively leveraged the power of AI. Its focus is on partnering with banks to provide a much better way to score risks and to automate the tedious processes of issuing and managing consumer loans.

The CEO and cofounder is Dave Girouard, who built the billion-dollar apps business for Google. He had also served as a product manager at Apple and an associate in Booz Allen's Information Technology practice.

As for Upstart, Girouard's main focus is to upend the banking industry's reliance on the FICO score.

"The Upstart system uses AI and machine learning models with 1,600 data points and 15 billion cells of data to improve accuracy in terms of identifying and measuring credit risks," said Phat Le, an associate at Harbor Research. "Some of the variables that Upstart considers are employment history, educational background, banking transactions, cost of living, and loan application interactions."
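Upstart's actual model is proprietary, so none of its details are public. As a purely illustrative sketch, a classifier trained on synthetic stand-ins for the feature categories Le names might look like the following (all features, labels, and coefficients here are invented):

import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000

# Synthetic stand-ins for the feature categories named above; the real
# model reportedly uses some 1,600 variables, none of which are public.
X = np.column_stack([
    rng.integers(0, 30, n),        # years of employment history
    rng.integers(0, 3, n),         # education level (encoded)
    rng.normal(3000, 1200, n),     # average monthly banking inflows
    rng.normal(1800, 600, n),      # estimated cost of living
])

# Invented default labels loosely tied to the features (illustration only).
logits = -2 + 0.00035 * (X[:, 3] - X[:, 2]) - 0.04 * X[:, 0]
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)
print("holdout AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))

The point of the sketch is only the shape of the approach: alternative-data features replacing a single three-digit score, with predictive accuracy measured on held-out borrowers.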

For the most part, Upstart is reducing the inefficiency of the traditional FICO approach. After all, about 80% of Americans never default on their loans, yet only 48% have access to loans at prime rates. The result is that good borrowers often pay premium rates while many other borrowers get loans when they should not.

Granted, when it comes to AI, there can certainly be major issues. There is the potential for bias and discrimination, such as when the data is skewed. Yet Upstart has made great strides in addressing the problems.

"In 2017, the company was the first to receive a No Action Letter from the Consumer Financial Protection Bureau (CFPB), which was renewed in November 2020," said Mike Raines, owner of Raines Insurance Group. According to Upstart, the purpose of such letters is to reduce potential regulatory uncertainty for innovative products that may offer significant consumer benefit.

Keep in mind that one of Upstart's banking partners has recently eliminated any minimum FICO requirement for its borrowers. And this is what Girouard had to say about this on his earnings call: "To us, this demonstrates both a commitment on behalf of this bank to a more inclusive lending program, as well as an increasing confidence in Upstart's AI-powered model. While credit scores can be useful, hard cutoffs based on a three-digit number invented 30 years ago leave far too many creditworthy Americans out in the cold."

The Upstart strategy has certainly resulted in staggering growth. In the latest quarter, revenues soared by 1,308% to $194 million, and transaction volume came to $2.80 billion, up 1,605%. The company was even able to generate a net profit of $37.3 million, up from a loss of $6.2 million in the prior year.

To expand its addressable market, Upstart has acquired Prodigy, which has allowed the company to move into the lucrative auto lending space. Based on the latest earnings report from Upstart, U.S. personal loan originations are about $84 billion, and auto loan originations are $635 billion.

But interestingly enough, Upstart really does not need to look further than these two categories anyway. As Girouard noted on the earnings call: "[W]e just see a lot of opportunity out there. We don't think credit is a solved problem almost anywhere in terms of people getting rates that make sense for them based on their true risk. So you will definitely see us move beyond personal loans and auto, but frankly, we have so much uncharted territory, even in those two categories, we're not in a particular rush to do so."

Tom (@ttaulli) is an advisor/board member to startups and the author of Artificial Intelligence Basics: A Non-Technical Introduction, The Robotic Process Automation Handbook: A Guide to Implementing RPA Systems and Implementing AI Systems: Transform Your Business in 6 Steps. He also has developed various online courses, such as for the COBOL and Python programming languages.

Originally posted here:

Upstart: Can AI Kill The FICO Score? - Forbes

Posted in Ai | Comments Off on Upstart: Can AI Kill The FICO Score? – Forbes

Where AI Is Impacting Content Marketers in 2021 – CMSWire

Posted: at 1:30 am


The use case for analyzing and editing content for grammar, sentiment, tone and style isn't the hottest AI marketing use case. But it's there. It ranked somewhere in the middle of the pack among 49 use cases presented to marketers in the 2021 State of Marketing AI report by Drift and the Marketing Artificial Intelligence Institute.

That use case scored a 3.48. "High value" nets a score of 4.0, and 5.0 would be "transformative." No marketing use case scored above a 4.0.

Last month, we discussed the AI marketing use case of improving email marketing campaigns and analytics. Here, we're discussing improving marketing content using AI.

Tools like Grammarly are becoming well known in the AI marketing bag of tricks, for certain. What about replicating brand tone and keeping things consistent across communication channels and content?

"It's getting there," Paul Roetzer, CEO and founder of the Marketing Artificial Intelligence Institute, told CMSWire's Rich Hein and Dom Nicastro in an episode of the CX Decoded Podcast. "It's made massive leaps forward in the last three years. The ability to understand and replicate tone ... If it's not there, it's coming, and there are a lot of people putting a lot of money behind that sort of thing."

So it's not 100%. Not many marketing AI use cases are, according to a finding in the McKinsey Global Survey's State of AI in 2020 report. Respondents in that report say their AI models have misperformed within the business functions where AI is used most. Where is the No. 1 area of misperformance, you ask? Marketing and sales (32%), followed by product and/or service development (21%) and then service operations (19%).

Related Article: Opening Up the Email Marketing Engine to Artificial Intelligence

Growing pains? Maybe. Still, the use cases are out there, such as editing grammar. Microsoft scored a deal with Nvidia for grammar refinements in Microsoft Word.

Even high school English teachers are experiencing an impactful shift in literacy practices since the advent of digital word processing, according to one researcher. And that includes artificial intelligence literacies, which affect the production of writing with high-accuracy grammar suggestions, according to a report by Jason Toncic of Montclair State University.

"Now, more than ever, customers and prospects are meeting and engaging with companies digitally. And it's content that creates those connections online," said Christopher Willis, chief marketing officer of Acrolinx, an AI content services provider. "That makes it a major asset to an enterprise. Actually, one of its biggest."

Here are three areas where Willis is seeing marketers deploying AI-powered content today.

Demand Gen: Strong content is the fuel of high-performing demand generation campaigns and programs. Demand generation teams can produce content that's well-written, clear and findable, according to Willis.

Content Marketing: AI-powered content strategy can enable brands across content-development teams to deploy consistent grammar, voice and style guidelines.

Brand: A well-defined brand voice makes it easy for a company's values and identity to be heard, according to Willis. AI-powered content can align an enterprise's content to its brand and style standards by providing feedback to writers directly in their various authoring tools, keeping their content on-brand, in the correct tone of voice, inclusive and consistent.

Mike Kaput, chief content officer at Marketing AI Institute, has blogged that AI can help content production in a number of areas.

Related Article: 3 Misconceptions About AI in Marketing

Where do these AI tools live? Willis said typical integrations include platforms like Adobe Experience Manager and Adobe InDesign, the full Google Suite, all Microsoft Office applications, Kapost, Figma, Sketch and WordPress, among others.

While integrations may vary and while machine learning and AI in marketing content may still be nascent, learning about it is always a good idea. "You don't need me to reiterate the importance of content in your digital marketing efforts," SEO strategist Neil Patel blogged. "However, you may need clarification on how machine learning can improve what you write and publish and why using it in your content marketing strategy is essential."

Visit link:

Where AI Is Impacting Content Marketers in 2021 - CMSWire

Posted in Ai | Comments Off on Where AI Is Impacting Content Marketers in 2021 – CMSWire

Artificial Intelligence as the Inventor of Life Sciences Patents? – JD Supra

Posted: at 1:30 am

The question whether an artificial intelligence (AI) system can be named as an inventor in a patent application has obvious implications for the life science community, where AI's presence is now well established and growing. For example, AI is currently used to predict biological targets of prospective drug molecules, identify candidates for drug design, decode the genetic material of viruses in the context of vaccine development, and determine the three-dimensional structures of proteins, including their folded forms, among many other potential therapeutic applications.

In a landmark decision issued on July 30, 2021, an Australian court declared that an AI system called DABUS can be legally recognized as an inventor on a patent application. It came just days after the Intellectual Property Commission of South Africa granted a patent recognizing DABUS as an inventor. These decisions, as well as at least one other pending case in the U.S. concerning similar issues, have generated excitement and debate in the life sciences community about AI-conceived inventions.

The AI system involved in these legal battles across the globe is called the Device for Autonomous Bootstrapping of Unified Sentience, aka DABUS, developed by Missouri physicist Dr. Stephen Thaler ("Thaler"). In 2019, two patent applications naming DABUS as the inventor were filed in more than a dozen countries and the European Union. Both applications listed DABUS as the sole inventor, but Thaler remains the owner of any patent rights stemming from these applications. The first application is directed to a design of a container based on fractal geometry. The second application is directed to a device and method for producing light that flickers rhythmically in a specific pattern mimicking human neural activity. In addition, an international patent application combining the subject matter of both applications was filed under the Patent Cooperation Treaty (PCT).

The South African patent based on the PCT application issued without debate about the invention's nonhuman origin. In contrast, during prosecution of the PCT application in Australia, the Deputy Commissioner of Patents of the Australian Intellectual Property Office took the position that the Australian Patents Act requires the inventor to be human and allowed Thaler's non-compliant application to lapse. Thaler subsequently sought judicial review, asserting that the relevant Australian patent provisions do not preclude an AI system from being treated as an inventor, and that the Deputy Commissioner misconstrued these provisions. The court agreed, finding that the statutes do not expressly exclude an inventor from being an AI system. In its decision, the court describes in detail the many benefits of AI in pharmaceutical research, ranging from identifying molecular targets to the development of vaccines. In view of these contributions, the court cautioned that no narrow view should be taken of the concept of "inventor." To do so would inhibit innovation in all scientific fields that may benefit from the output of an AI system. The court further opined that the concept of "inventor" should be flexible and capable of evolution. In the same vein, the relevant patent statutes should be construed in line with the objective of promoting economic wellbeing through technological innovation. Thus, while stopping short of allowing a non-human to be named a patent applicant or grantee, the Australian court permitted inventorship in the name of an AI system under Australian statutory provisions.

To date, the U.S. has not acknowledged the legality of nonhuman inventorship. In response to the filing of two U.S. patent applications in 2019 identifying DABUS as the sole inventor on each application, the U.S. Patent and Trademark Office (USPTO) issued a Notice to File Missing Parts for each application, requiring Thaler to identify an inventor by his or her legal name. Upon several petitions by Thaler requesting reconsideration of the notice for each application, the USPTO last year rejected the idea that DABUS, or any other AI system, can be an inventor on a patent application. The USPTO found that since the U.S. statutes consistently refer to inventors as natural persons, interpreting "inventor" broadly to encompass machines would contradict the plain reading of the patent statutes. In reaching this decision, the USPTO also cited earlier Federal Circuit decisions which found that state governments and corporations could not be listed as inventors because conception of an invention needs to be a formation in the mind of the inventor and a mental act by a natural person. In response, Thaler sued Andrei Iancu, in his capacity as Under Secretary of Commerce for Intellectual Property and Director of the USPTO, as well as the USPTO itself, in Virginia federal court.

In that pending action, Thaler argued that the USPTO's decisions in both applications effectively prohibit patents on all AI-generated inventions, producing the undesirable outcome of discouraging innovation or encouraging misrepresentations by individuals claiming credit for work they did not perform. In addition, according to Thaler, there is no statute or case in the U.S. holding that an AI cannot be listed as an inventor. Accordingly, he urged the court to undertake a dynamic interpretation of the law. Furthermore, Thaler claimed that a conception requirement should not prevent AI inventorship because the patent system should be indifferent to the means by which invention comes about. For these reasons, Thaler sought reinstatement of both patent applications and a declaration that requiring a natural person to be listed as an inventor as a condition of patentability is contrary to law. While the court has not yet ruled on the issues presented, presiding Judge Leonie Brinkema remarked in a summary judgment hearing held in April of this year that the issue seemed to be best resolved by Congress.

Even if nonhuman inventorship becomes widely recognized, other important questions of AI and patent law will remain. Among these is the issue of ownership. In most jurisdictions, in cases where the applicant is different from the inventor, the applicant needs to show it properly obtained ownership from the inventor. The obvious question that arises is how a machine like DABUS, which cannot hold title to an invention, can pass title to an applicant like Thaler under the current patent system. The likely answer is that legislative changes in the U.S. and around the world are needed to expand the limits of patent inventorship and ownership to accommodate such arrangements. When and if that will happen is unclear, but the decisions from Australia and South Africa have certainly raised the profile of the debate surrounding inventorship and ownership of AI-conceived inventions.

[View source.]

The rest is here:

Artificial Intelligence as the Inventor of Life Sciences Patents? - JD Supra

Posted in Ai | Comments Off on Artificial Intelligence as the Inventor of Life Sciences Patents? – JD Supra

Four Policies that Government Can Pursue to Advance Trustworthy AI – uschamber.com

Posted: at 1:30 am

This past July, DeepMind, an artificial intelligence (AI) lab in London, announced a groundbreaking discovery. Using an AI technology called AlphaFold, DeepMind was able to predict the shapes of more than 350,000 proteins, 250,000 of which were previously unknown, and to help develop entirely new lifesaving drugs and other biological tools, which is particularly helpful in the fight against COVID-19.

Broadly, AI is poised to transform the way Americans work and socialize, along with numerous other facets of our lives. DeepMind is not the only example of AI's benefits. AI has been credited with improving weather forecasting, making access to finance more inclusive, and keeping fraudsters at bay. But like any technology, AI presents some risks too. To fully enable the benefits of AI, it is incumbent on policymakers to advance policies that facilitate trustworthy AI.

A recent report from the U.S. Chamber Technology Engagement Center (C_TEC) and the Deloitte AI Institute highlights the proper role of the federal government in facilitating trustworthy AI and the importance of sound public policies to mitigate the risks posed by AI and accelerate its benefits. Based on a survey of business leaders across economic sectors focused on AI, the report examines perceptions of the risks and benefits of AI and outlines a trustworthy AI policy agenda.

Through the right policies, the federal government can play a critical role in incentivizing the adoption of trustworthy AI applications. Here are four key policy areas the government can pursue:

1. Conduct fundamental research in trustworthy AI: Historically, the federal government has played a significant role in building the foundation of emerging technologies through conducting fundamental research. AI is no different.

2. Improve access to government data and models: High quality data is the lifeblood of developing new AI applications and tools, and poor data quality can heighten risks. Governments at all levels possess a significant amount of data that could be used to both improve the training of AI systems and create novel applications.

3. Increase widespread access to shared computing resources: In addition to high quality data, the development of AI applications requires significant compute capacity. However, many small startups and academic institutions lack sufficient computing resources, which in turn prevents many stakeholders from fully accessing AI's potential.

4. Enable open source tools and frameworks: Ensuring the development of trustworthy AI will require significant collaboration between government, industry, academia, and other relevant stakeholders. One key method to facilitate collaboration is through encouraging the use of open source tools and frameworks to share best practices and approaches on trustworthy AI.

The United States has an enormous opportunity to transform its economy and society in positive ways through leading in AI innovation. As other economies contemplate their approach to trustworthy AI, this report outlines a path forward on how U.S. policymakers can pursue a wide range of options to advance trustworthy AI domestically, and empower the United States to maintain global competitiveness in this critical technology sector.

Here is the original post:

Four Policies that Government Can Pursue to Advance Trustworthy AI - uschamber.com

Posted in Ai | Comments Off on Four Policies that Government Can Pursue to Advance Trustworthy AI – uschamber.com

AI ethics in the real world: FTC commissioner shows a path toward economic justice – ZDNet

Posted: at 1:30 am

The proliferation of artificial intelligence and algorithmic decision-making has helped shape myriad aspects of our society: From facial recognition to deep fake technology to criminal justice and health care, their applications are seemingly endless. Across these contexts, the story of applied algorithmic decision-making is one of both promise and peril. Given the novelty, scale, and opacity involved in many applications of these technologies, the stakes are often incredibly high.

This is the introduction to FTC Commissioner Rebecca Kelly Slaughter's whitepaper, Algorithms and Economic Justice: A Taxonomy of Harms and a Path Forward for the Federal Trade Commission. If you have been keeping up with data-driven and algorithmic decision-making, analytics, machine learning, AI, and their applications, you can tell it's spot on. The 63-page whitepaper does not disappoint.

Slaughter worked on the whitepaper with her FTC colleagues Janice Kopec and Mohamad Batal. Their work was supported by Immuta, and it has just been published as part of the Yale Law School Information Society Project Digital Future Whitepaper Series. The Digital Future Whitepaper Series, launched in 2020, is a venue for leading global thinkers to question the impact of digital technologies on law and society.

The series aims to provide academics, researchers, and practitioners a forum to describe novel challenges of data and regulation, to confront core assumptions about law and technology, and to propose new ways to align legal and ethical frameworks to the problems of the digital world.

Slaughter notes that in recent years, algorithmic decision-making has produced biased, discriminatory, and otherwise problematic outcomes in some of the most important areas of the American economy. Her work provides a baseline taxonomy of algorithmic harms that portend injustice, describing both the harms themselves and the technical mechanisms that drive those harms.

In addition, it describes Slaughter's view of how the FTC's existing tools can and should be aggressively applied to thwart injustice, and explores how new legislation or an FTC rulemaking could help structurally address the harms generated by algorithmic decision-making.

Slaughter identifies three ways in which flaws in algorithm design can produce harmful results: Faulty inputs, faulty conclusions, and failure to adequately test.

The value of a machine learning algorithm is inherently related to the quality of the data used to develop it, and faulty inputs can produce thoroughly problematic outcomes. This broad concept is captured in the familiar phrase: "Garbage in, garbage out."

The data used to develop a machine-learning algorithm might be skewed because individual data points reflect problematic human biases or because the overall dataset is not adequately representative. Often, skewed training data reflect historical and enduring patterns of prejudice or inequality, and when they do, these faulty inputs can create biased algorithms that exacerbate injustice, Slaughter notes.
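One concrete way to catch this class of faulty-inputs problem before training is a simple representation audit, comparing group shares in the training data against a reference population. A minimal sketch follows; the groups, counts, and the 0.8 threshold are all invented for illustration:

from collections import Counter

# Hypothetical group labels in a training set vs. reference population shares.
train_groups = ["A"] * 800 + ["B"] * 150 + ["C"] * 50
population_share = {"A": 0.60, "B": 0.25, "C": 0.15}

counts = Counter(train_groups)
total = sum(counts.values())
for group, expected in population_share.items():
    observed = counts[group] / total
    flag = "UNDER-REPRESENTED" if observed < 0.8 * expected else "ok"
    print(f"{group}: observed {observed:.2f} vs expected {expected:.2f} -> {flag}")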

She cites some high-profile examples of faulty inputs, such as Amazon's failed attempt to develop a hiring algorithm driven by machine learning, and the International Baccalaureate's and UK's A-level exams. In all of those cases, the algorithms introduced to automate decisions kept identifying patterns of bias in the data used to train them and attempted to reproduce them.

A different type of problem involves feeding data into algorithms that generate conclusions that are inaccurate or misleading -- perhaps better phrased as "data in, garbage out." This type of flaw, faulty conclusions, undergirds fears about the rapidly proliferating field of AI-driven "affect recognition" technology and is often fueled by failures in experimental design.

Machine learning often works as a black box, and as applications are becoming more impactful, that can be problematic. Image: Immuta

Slaughter describes situations in which algorithms attempt to find patterns in, and reach conclusions based on, certain types of physical presentations and mannerisms. But, she notes, as one might expect, human character cannot be reduced to a set of objective, observable factors. Slaughter highlights the use of affect recognition technology in hiring as particularly problematic.

Some more so than others, such as a company that purports to profile more than sixty personality traits relevant to job performance -- from "resourceful" to "adventurous" to "cultured" -- all based on an algorithm's analysis of an applicant's 30-second recorded video cover letter.

Despite the veneer of objectivity that comes from throwing around terms such as "AI" and "machine learning," in many contexts, the technology is still deeply imperfect, and many argue that its use is nothing less than pseudo-science.

But even algorithms designed with care and good intentions can still produce biased or harmful outcomes that are unanticipated, Slaughter notes. Too often, algorithms are deployed without adequate testing that could uncover these unwelcome outcomes before they harm people in the real world.

Slaughter mentions bias in search results uncovered when testing with Google's and LinkedIn's search but focuses on the health care field. A recent study found racial bias in a widely used machine learning algorithm intended to improve access to care for high-risk patients with chronic health problems.

The algorithm used health care costs as a proxy for health needs, but for a variety of reasons unrelated to health needs, white patients spend more on health care than their equally sick Black counterparts do. Using health care costs to predict health needs, therefore, caused the algorithm to disproportionately flag white patients for additional care.

Researchers estimated that as a result of this embedded bias, the number of Black patients identified for extra care was reduced by more than half. The researchers who uncovered the flaw in the algorithm were able to do so because they looked beyond the algorithm itself to the outcomes it produced and because they had access to enough data to conduct a meaningful inquiry.

When the researchers identified the flaw, the algorithm's manufacturer worked with them to mitigate its impact, ultimately reducing bias by 84% -- exactly the type of bias reduction and harm mitigation that testing and modification seek to achieve, Slaughter notes.
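The mechanism the study describes can be illustrated with a small synthetic simulation of a cost-as-proxy algorithm. All numbers below are invented; the point is only to show how flagging rates can diverge for equally sick groups when spending stands in for need:

import numpy as np

rng = np.random.default_rng(1)
n = 10000
group = rng.integers(0, 2, n)        # 0 = white, 1 = Black (synthetic labels)
illness = rng.normal(0, 1, n)        # true health need (unobserved by model)

# Synthetic cost: equally sick patients in group 1 generate lower costs,
# mirroring the disparity the study describes.
cost = illness - 0.5 * group + rng.normal(0, 0.5, n)

# The flawed algorithm flags the highest-cost patients for extra care.
flagged = cost > np.quantile(cost, 0.90)

# Outcome audit: among the sickest decile, compare flagging rates by group.
sickest = illness > np.quantile(illness, 0.90)
for g in (0, 1):
    rate = flagged[sickest & (group == g)].mean()
    print(f"group {g}: flagged {rate:.1%} of its sickest patients")

Running an audit like this requires exactly what the researchers had: access to outcomes and enough data to compare groups at equal levels of need, not just the algorithm's own scores.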

Not all harmful consequences of algorithms stem from design flaws. Slaughter also identifies three ways in which sophisticated algorithms can generate systemic harm: by facilitating proxy discrimination, by enabling surveillance capitalism, and by inhibiting competition in markets.

Proxy discrimination is the use of one or more facially neutral variables to stand in for a legally protected trait, often resulting in disparate treatment of or disparate impact on protected classes for certain economic, social, and civic opportunities. In other words, these algorithms identify seemingly neutral characteristics to create groups that closely mirror a protected class, and these "proxies" are used for inclusion or exclusion.

Slaughter mentions some high-profile cases of proxy discrimination: the Department of Housing and Urban Development's allegations against Facebook's tool called "Lookalike Audiences," showings of job openings to various audiences, and FinTech innovations that can enable the continuation of historical bias to deny access to the credit system or to efficiently target high-interest products at those who can least afford them.
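A first-pass screen for such proxies, under the simplifying assumption that a linear correlation is enough to raise a flag, might look like the following sketch (the feature names, data, and 0.3 cutoff are all hypothetical):

import numpy as np

rng = np.random.default_rng(2)
n = 5000
protected = rng.integers(0, 2, n)    # protected class membership (synthetic)

# Candidate model features: one is entangled with the protected class
# (e.g., a geography-based score), the others are not.
features = {
    "zip_score":  protected * 1.5 + rng.normal(0, 1, n),
    "tenure":     rng.normal(5, 2, n),
    "late_count": rng.poisson(1, n).astype(float),
}

# Crude proxy screen: flag features highly correlated with the class.
for name, values in features.items():
    r = np.corrcoef(values, protected)[0, 1]
    flag = "possible proxy" if abs(r) > 0.3 else "ok"
    print(f"{name}: corr with protected class = {r:+.2f} -> {flag}")

Real proxy effects can be nonlinear or spread across combinations of features, so a correlation screen like this is only a starting point, not a clearance.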

An additional way algorithmic decision making can fuel broader social challenges is the role it plays in the system of surveillance capitalism, which Slaughter defines as a business model that systematically erodes privacy, promotes misinformation and disinformation, drives radicalization, undermines consumers' mental health, and reduces or eliminates consumers' choices.

AI ethics has very real ramifications that are becoming increasingly widespread and important.

Through constant, data-driven adjustments, Slaughter notes, algorithms that process consumer data, often in real-time, evolve, and "improve" in a relentless effort to capture and monetize as much attention from as many people as possible. Many surveillance capitalism enterprises are remarkably successful at using algorithms to "optimize" for consumers' attention with little regard for downstream consequences.

Slaughter examines the case of YouTube content addressed at children and how it's been weaponized. The FTC has dealt with this, and Slaughter notes that YouTube announced it will use machine learning to actively search for mis-designated content and automatically apply age restrictions.

While this sounds like the technological backstop Slaughter requested in that case, she notes two major differences: first, it is entirely voluntary, and second, both its application and effectiveness are opaque. That, she argues, brings up a broader set of concerns about surveillance capitalism -- one that extends beyond any single platform.

The pitfalls associated with algorithmic decision-making sound most obviously in the laws the FTC enforces through its consumer protection mission, Slaughter notes. But the FTC is also responsible for promoting competition, and the threats posed by algorithms profoundly affect that mission as well.

Moreover, she goes on to add, these two missions are not actually distinct, and problems -- including those related to algorithms and economic justice -- need to be considered with both competition and consumer protection lenses.

Slaughter examines topics including traditional antitrust fare such as pricing and collusion, as well as more novel questions such as the implications of the use of algorithms by dominant digital firms to entrench market power and to engage in exclusionary practices.

Overall, the whitepaper seems well-researched and shows a good overview of the subject matter. While the paper's sections on using the FTC's current authorities to better protect consumers and proposed new legislative and regulatory solutions refer to legal tools we do not feel qualified to report on, we encourage interested readers to read them.

We would also like to note, however, that while it's important to be aware of AI ethics and the far-reaching consequences of data and algorithms, it's equally important to maintain a constructive and unbiased attitude when it comes to issues that are often subjective and open to interpretation.

Overzealous attitudes in debates that often take place on social media, where context and intent can easily be misinterpreted and misrepresented, may not be the most constructive way to make progress. Case in point: AI figureheads Yann LeCun's and Pedro Domingos' misadventures.

When it comes to AI ethics, we need to go beyond sensationalism and toward a well-informed and, well, data-driven approach. Slaughter's work seems like a step in that direction.

Originally posted here:

AI ethics in the real world: FTC commissioner shows a path toward economic justice - ZDNet

Posted in Ai | Comments Off on AI ethics in the real world: FTC commissioner shows a path toward economic justice – ZDNet

Army Futures Command outlines next five years of AI needs – DefenseNews.com

Posted: at 1:30 am

WASHINGTON -- Army Futures Command has outlined 11 broad areas of artificial intelligence research it's interested in over the next five years, with an emphasis on data analysis, autonomous systems, security and decision-making assistance.

The broad agency announcement from the Austin, Texas-based command comes as the service and the Defense Department work to connect sensors and shooters across the battlefield. Artificial intelligence will be key in that effort by analyzing data and assisting commanders in the decision-making process.

The announcement, released by the command's Artificial Intelligence Integration Center, said the service is particularly interested in AI research on autonomous ground and air platforms, which must operate in open, urban and cluttered environments. The document specifically asks for research into technologies that allow robots or autonomous systems to move in urban, contested environments, as well as technologies that reduce the electromagnetic profile of the systems. It also wants to know more about AI that can sense obscured targets and understand terrain obstacles.

The document identifies several needs pertaining to data analysis over the next five years. The Army is interested in human-machine interfacing research and needs additional research in ways it can predict an adversary's intent and behavior on the battlefield. In the same category, the Army wants to be able to fuse data from disparate sources and have analytical capabilities to exploit classified and unclassified sources to make enhanced intelligence products.

The Army also wants to be able to combine human insight with machine analysis and develop improved ways of efficiently conveying analytics results to humans.

The Army is interested in AI/ML research in areas which can reduce the cognitive burden on humans and improve overall performance through human-machine teaming, the announcement read.

Similarly, the Army needs more research over the next five years into how to better display data to humans. Data must be presented clearly to users, through charts or graphs, for example, so they can understand what the information means.

The Army is interested in research that enables improved situational awareness and the visualization and navigation of large data sets to enhance operational activities and training and readiness, the announcement read. Along that same vein, the service is also seeking novel ways of visualizing sensor data and large data sets with multiple sources.

The service also wants more research into AI for sensing on the battlefield, including detecting people, equipment and weapons, even when obscured. It wants to sense these targets based on physical, behavioral, cyber or other signatures. Additionally, the Army wants AI-enabled sensors and processors that can detect chemical, biological, radiological, nuclear and explosive threats.

Network and communications security is another area in which the Army wants more research. The service is seeking more research into autonomous network defense and AI-based approaches to offensive cyber capabilities. It also wants novel cyber protection technologies and methods.

Additionally, to prepare for potential GPS-denied environments of the future, the Army is interested in research into algorithms and techniques to fuse sources of position, navigation and timing to provide robust capabilities.
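The announcement doesn't prescribe a fusion method, but a textbook baseline for combining independent position estimates is inverse-variance weighting, which yields the minimum-variance combination of independent Gaussian estimates. The sketch below is a deliberately simplified one-dimensional illustration with made-up source accuracies:

import numpy as np

# Independent 1-D position estimates (metres) from three hypothetical
# sources, e.g. GPS, inertial navigation, and vision-based odometry.
estimates = np.array([102.0, 98.5, 100.4])
variances = np.array([4.0, 9.0, 1.0])   # each source's error variance

# Inverse-variance weighting: more accurate sources get more weight.
weights = (1 / variances) / np.sum(1 / variances)
fused = np.sum(weights * estimates)
fused_var = 1 / np.sum(1 / variances)
print(f"fused position: {fused:.2f} m (variance {fused_var:.2f})")

Because the fused variance is smaller than any single source's variance, a platform can keep a usable position fix even when its best source (here, the most accurate one) degrades or drops out, which is the robustness the solicitation asks for.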

The Internet of Things, or the massive network of devices connected to the internet, presents more artificial intelligence needs for the Army. According to the solicitation, the service is interested in AI research into new approaches to enable secure, resilient, and automatically managed IoT networks in highly complex, mixed cooperative/adversarial, information-centric environments.

The Army needs to better integrate a wide range of capabilities and equipment and capitalize on commercial developments in industrial and human IoT, the solicitation said.

Excerpt from:

Army Futures Command outlines next five years of AI needs - DefenseNews.com

Posted in Ai | Comments Off on Army Futures Command outlines next five years of AI needs – DefenseNews.com
