
Category Archives: Ai

AI and gamification being used to help Aussie farmers reduce spray drift impact – ZDNet

Posted: May 4, 2021 at 8:10 pm

Image: Monash University/Screenshot

While pesticide spraying protects crops against pests, weeds, and diseases, it can also be harmful to neighbouring crops and wildlife.

This unwanted movement of pesticides, known as spray drift, however, could potentially be trackable, thanks to a project developed between Monash University's Faculty of Information Technology, Bard AI, PentaQuest, and AgriSci.

The project combines an artificial intelligence model and augmented reality to enable farmers to see a real-time visual representation of the possible spray drift on their phones. The presentation also allows farmers to view the impact the spray drift could have on neighbouring crops if spraying were to occur in poor conditions, such as strong winds.

Farmers are also able to use the system to explore "what if" scenarios, improving their spray plans and their understanding of the potential impact of spray drift.

"Information alone does not change behaviour and the use of advanced technology doesn't ensure the adoption of new platforms by farmers. By incorporating game-like design applications which drive better training and engagement outcomes, together with AI-driven decision support modelling, we're able to deliver continuous adoption and accurate decision support that informs farmers appropriately," Monash University Faculty of IT interim dean Ann Nicholson said.

Bard AI founder Ross Pearson said the solution was initially developed to focus on spray drift for large-acreage cropping, but he believes it could be used for other applications.

"Our solution combines leading-edge thinking and technology in behavioural science and probabilistic modelling to deliver an engaging experience for farmers that supports them through better decision-making," he said.

In March, Australian agtech firm Agerris rolled out its "drone on wheels" robot onto the Victoria-based SuniTAFE Smart Farm to support the farm's operations and train technical staff on-site.

The Digital Farmhand was developed to help farmers improve crop yield, pest and weed detection, as well as reduce the need for pesticides.

Each mobile roving robot runs on solar energy and features navigation sensors, laser sensors, infrared sensors, and cameras. It also has an artificial intelligence system that can create weed heat maps, as well as detect each individual crop and estimate its yield, plant size, and fruit and flower count.

Inquiry says technology could boost the value of Australia's agriculture sector by AU$20b

The House of Representatives Standing Committee on Agriculture and Water Resources also put forward 13 recommendations.

Australia's report on agtech confirms technology can lead to a fertile future

Sensors, robotics, AI, and blockchain are outlined as some of the future technologies that can improve the sector's advancement.

CSIRO using artificial intelligence to map 1.7m Australian grain paddocks

It developed ePaddocks for the agriculture sector to better understand the boundaries of grain paddocks across the country.

The Yield scores AU$11 million in funding from Yamaha Motor Ventures

The company's major shareholder Bosch Group has also converted its existing loan into equity.


Artificial intelligence is learning how to dodge space junk in orbit – Space.com

Posted: at 8:10 pm

An AI-driven space debris-dodging system could soon replace expert teams dealing with growing numbers of orbital collision threats in the increasingly cluttered near-Earth environment.

Every two weeks, spacecraft controllers at the European Space Operations Centre (ESOC) in Darmstadt, Germany, have to conduct avoidance manoeuvres with one of their 20 low Earth orbit satellites, Holger Krag, the Head of Space Safety at the European Space Agency (ESA), said in a news conference organized by ESA during the 8th European Space Debris Conference, held virtually from Darmstadt, Germany, April 20 to 23. There are at least five times as many close encounters that the agency's teams monitor and carefully evaluate, each requiring a multi-disciplinary team to be on call 24/7 for several days.

"Every collision avoidance manoeuvre is a nuisance," Krag said. "Not only because of fuel consumption but also because of the preparation that goes into it. We have to book ground-station passes, which costs money, sometimes we even have to switch off the acquisition of scientific data. We have to have an expert team available round the clock."

The frequency of such situations is only expected to increase. Not all collision alerts are caused by pieces of space debris. Companies such as SpaceX, OneWeb and Amazon are building megaconstellations of thousands of satellites, lofting more spacecraft into orbit in a single month than used to be launched within an entire year only a few years ago. This increased space traffic is causing concerns among space debris experts. In fact, ESA said that nearly half of the conjunction alerts currently monitored by the agency's operators involve small satellites and constellation spacecraft.

ESA, therefore, asked the global Artificial Intelligence community to help develop a system that would take care of space debris dodging autonomously or at least reduce the burden on the expert teams.

"We made a large historic data set of past conjunction warnings available to a global expert community and tasked them to use AI [Artificial Intelligence] to predict the evolution of a collision risk of each alert over the three days following the alert," Rolf Densing, Director of ESA Operations said in the news conference.

"The results are not yet perfect, but in many cases, AI was able to replicate the decision process and correctly identify in which cases we had to conduct the collision avoidance manoeuvre."


The agency will explore newer approaches to AI development, such as deep learning and neural networks, to improve the accuracy of the algorithms, Tim Flohrer, the Head of ESA's Space Debris Office, told Space.com.

"The standard AI algorithms are trained on huge data sets," Flohrer said. "But the cases when we had actually conducted manoeuvres are not so many in AI terms. In the next phase we will look more closely into specialised AI approaches that can work with smaller data sets."

For now, the AI algorithms can aid the ground-based teams as they evaluate and monitor each conjunction alert, the warning that one of their satellites might be on a collision course with another orbiting body. According to Flohrer, such AI assistance will help reduce the number of experts involved and help the agency deal with the increased space traffic expected in the near future. For now, though, the decision whether to conduct an avoidance manoeuvre still has to be taken by a human operator.

"So far, we have automated everything that would require an expert brain to be awake 24/7 to respond to and follow up the collision alerts," said Krag. "Making the ultimate decision whether to conduct the avoidance manoeuvre or not is the most complex part to be automated and we hope to find a solution to this problem within the next few years."

Ultimately, Densing added, the global community should work together to create a collision avoidance system similar to modern air-traffic management, which would work completely autonomously without the humans on the ground having to communicate.

"In air traffic, they are a step further," Densing said. "Collision avoidance manoeuvres between planes are decentralised and take place automatically. We are not there yet, and it will likely take a bit more international coordination and discussions."

Not only are scientific satellites at risk of orbital collisions, but spacecraft like SpaceX's Crew Dragon could be affected as well. Recently, Crew Dragon Endeavour, with four astronauts on board, reportedly came dangerously close to a small piece of debris on Saturday, April 24, during its cruise to the International Space Station. The collision alert forced the spacefarers to interrupt their leisure time, climb back into their space suits and buckle up in their seats to brace for a possible impact.

According to ESA, about 11,370 satellites have been launched since 1957, when the Soviet Union successfully orbited a beeping ball called Sputnik. About 6,900 of these satellites remain in orbit, but only 4,000 are still functioning.



Podcast: AI finds its voice – MIT Technology Review

Posted: at 8:10 pm

Today's voice assistants are still a far cry from the hyper-intelligent thinking machines we've been musing about for decades. And it's because that technology is actually the combination of three different skills: speech recognition, natural language processing, and voice generation.

Each of these skills already presents huge challenges. In order to master just the natural language processing part? You pretty much have to re-create human-level intelligence. Deep learning, the technology driving the current AI boom, can train machines to become masters at all sorts of tasks. But it can only learn one at a time. And because most AI models train their skill set on thousands or millions of existing examples, they end up replicating patterns within historical data, including the many bad decisions people have made, like marginalizing people of color and women.

Still, systems like the board-game champion AlphaZero and the increasingly convincing fake-text generator GPT-3 have stoked the flames of debate regarding when humans will create an artificial general intelligence: machines that can multitask, think, and reason for themselves. In this episode, we explore how machines learn to communicate, and what it means for the humans on the other end of the conversation.

This episode was produced by Jennifer Strong, Emma Cillekens, Anthony Green, Karen Hao, and Charlotte Jee. We're edited by Michael Reilly and Niall Firth.

[TR ID]

Jim: I don't know if it was AI... If they had taken the recording of something he had done... and were able to manipulate it... but I'm telling you, it was my son.

Strong: The day started like any other for a man... we're going to call Jim. He lives outside Boston.

And by the way... he has a family member who works for MIT.

We're not going to use his last name because they have concerns about their safety.

Jim: It was a Tuesday or Wednesday morning, nine o'clock I'm deep in thought working on something,

Strong: That is... until he received this call.

Jim: The phone rings and I pick it up and it's my son. And he is clearly agitated. This, this kid's a really chill guy but when he does get upset, he has a number of vocal mannerisms. And this was like, Oh my God, he's in trouble.

And he basically told me, look, I'm in jail, I'm in Mexico. They took my phone. I only have 30 seconds. Um, they said I was drinking, but I wasn't and people are hurt. And look, I have to get off the phone, call this lawyer and it gives me a phone number and has to hang up.

Strong: His son is in Mexico and there's just no doubt in his mind it's him.

Jim: And I gotta tell you, Jennifer, it, it was him. It was his voice. It was everything. Tone. Just these little mannerisms, the, the pauses, the gulping for air, everything that you could imagine.

Strong: His heart is in his throat...

Jim: My hair standing on edge

Strong: So, he calls that phone number... A man picks up and he offers more details on what's going on.

Jim: Your son is being charged with hitting this car. There was a pregnant woman driving whose arm was broken. Her daughter was in the back seat... is in critical condition and they are, um, they booked him with driving under the influence. We don't think that he has done that. This is we've, we've come across this a number of times before, but the most important thing is to get him out of jail, get him safe, as fast as possible.

Strong: Then the conversation turns to money... he's told bail has been set and he needs to put down ten percent.

Jim: So as soon as he started talking about money, you know, the, the flag kind of went up and I said, excuse me, is there any chance that this is a scam of some sort? And he got really kind of, um, irritated. He's like, Hey, you called me. Look, I find this really offensive that you're accusing me of something. And then my heart goes back in my throat. I'm like, this is the one guy who's between my son and even worse jail. So I backtracked

[Music]

My wife walks in 10 minutes later and says, well, you know, I was texting with him late last night. Like this is around the time probably that he would have been arrested and jailed. So, of course we text him, he's just getting up. He's completely fine.

Strong: He's still not sure how someone captured the essence of his son's voice. But he has some theories...

Jim: They had to have gotten a recording of something when he was upset. That's the only thing that I can say, cause they couldn't have mocked up some of these things that he does. They couldn't guess at that. I don't think, and so they, I think they had certainly some raw material to work with and then what they did with it from there. I don't know.

Strong: And it's not just Jim who's unsure... We have no idea whether AI had anything to do with this.

But, the point is we now live in a world where we also can't be sure that it didn't.

It's incredibly easy to fake someone's voice with even a few minutes of recordings... and teenagers like Jim's son? They share countless recordings through social media posts and messages.

Jim: I was quite impressed with how good it was. Um, like I said, I'm not easily fooled and man, they had it nailed. So, um, just caution.

Strong: I'm Jennifer Strong, and this episode we look at what it takes to make a voice.

[SHOW ID]

Zeyu Jin: You guys have been making weird stuff online.

Strong: Zeyu Jin is a research scientist at Adobe... This is him speaking at a company conference about five years ago, showing how software can rearrange the words in this recording.

Key: I jumped on the bed and I kissed my dogs and my wife... in that order.

Zeyu: So how about we mess with who he actually kissed. // Introducing Project VoCo. Project VoCo allows you to edit speech in text. So let's bring it up. So I just load this audio piece in VoCo. So as you can see we have the audio waveform and we have the text under it. //

So what do we do? Copy, paste. Oh! Yeah, it's done. Let's listen to it.

Key: And I kissed my wife and my dogs.

Zeyu: Wait, there's more. We can actually type something that's not here.

Key: And I kissed Jordan and my dogs.

Strong: Adobe never released this prototype but the underlying technology keeps getting better.

For example, here's a computer-generated fake of podcaster Joe Rogan from 2019... It was produced by Square's AI lab, called Dessa, to raise awareness about the technology.

Rogan: Friends, I've got something new to tell all of you. I've decided to sponsor a hockey team made up entirely of chimps.

Strong: While it sounds like fun and games, experts warn these artificial voices could make some types of scams a whole lot more common. Things like what we heard about earlier.

Mona Sedky: Communication focused crime has historically been lower on the totem pole.

Strong: That's federal prosecutor Mona Sedky, speaking last year at the Federal Trade Commission about voice cloning technologies.

Mona Sedky: But now with the advent of things like deep fake video, now deep fake audio, you, you can basically have anonymizing tools and be anywhere on the internet you want to be... anywhere in the world, and communicate anonymously with people. So as a result there has been an enormous uptick in communication focused crime.

Balasubramaniyan: But imagine if you as a CFO or chief controller gets a phone call that comes from your CEO's phone number.

Strong: And this is Pindrop Security CEO Vijay Balasubramaniyan at a security conference last year.

Balasubramaniyan: It's completely spoofed so it actually uses your address book, and it shows up as your CEO's name... and then on the other end you hear your CEO's voice with a tremendous amount of urgency. And we are starting to see crazy attacks like that. There was an example that a lot of press media covered, which is a $220,000 wire that happened because a CEO of a UK firm thought he was talking to his parent company, so he then sent that money out. But we've seen as high as $17 million go out the door.

Strong: And the very idea of fake voices... can be just as damaging as a fake voice itself... Like when former president Donald Trump tried to blame the technology for some offensive things he said that were caught on tape.

But like any other tech, it's not inherently good or bad... it's just a tool... and I used it in the trailer for season one to show what the technology can do.

Strong: If seeing is believing...

How do we navigate a world where we can't trust our eyes... or ears?

And so you know... what you're listening to... It's not just me speaking. I had some help from an artificial version of my voice, filling in words here and there.

Meet synthetic Jennifer.

Synthetic Jennifer: Hi there, folks!

Strong: I can even click to adjust my mood

Synthetic Jennifer: Hi there.

Strong: Yeah, let's not make it angry...

Strong: In the not-so-distant future this tech will be used in any number of ways... for simple tweaks to pre-recorded presentations... even to bring back the voices of animated characters from a series.

In other words, artificial voices are here to stay. But they haven't always been so easy to make... and I called up an expert whose voice might sound familiar...

Bennett: How does this sound? Um, maybe I could be a little more friendly. How are you?

Hi, I'm Susan C. Bennett, the original voice of Siri.

Well, the day that Siri appeared, which was October 4, 2011, a fellow voice actor emailed me and said, "Hey, we're playing around with this new iPhone app, isn't this you?" And I said, what? I went on the Apple site and listened... and yep. That was my voice. [chuckles]

Strong: You heard that right. The original female voice that millions associate with Apple devices? Had no idea. And she wasn't alone. The human voices behind other early voice assistants were also taken by surprise.

Bennett: Yeah, it's been an interesting thing. It was an adjustment at first as you can imagine, because I wasn't expecting it. It was a little creepy at first, I'll have to say, I never really did a lot of talking to myself as Siri, but gradually I got accepting of it and actually it ended up turning into something really positive so

Strong: To be clear, Apple did not steal Susan Bennett's voice. For decades, she's done voice work for companies like McDonald's and Delta Airlines... and years before Siri came out, she did a strange series of recordings that fueled its development.

Bennett: In 2005, we couldn't have imagined something like Siri or Alexa. And so all of us, I've talked to other people who've had the same experience, who have been a virtual voice. You know we just thought we were doing just generic phone voice messaging. And so when suddenly Siri appeared in 2011, it's like, I'm who, what, what is this? So, it was a genuine surprise, but I like to think of it as we were just on the cutting edge of this new technology. So, you know, I choose to think of it as a very positive thing, even though, we, none of us, were ever paid for the millions and millions of phones that our voices are heard on. So that's, that's a downside.

Strong: Something else that's awkward... she says Apple never acknowledged her as the American voice of Siri... that's despite becoming an accidental celebrity... reaching millions.

Bennett: The only actual acknowledgement that I've ever had is via Siri. If you ask Siri "Who is Susan Bennett?" she'll say, I'm the original voice of Siri. Thanks so much, Siri. Appreciate it.

Strong: But it's not the first time she's given her voice to a machine.

Bennett: In the late 70s when they were introducing ATMs I like to say it was my first experience as a machine, and you know, there were no personal computers or anything at that time and people didn't trust machines. They wouldn't use the ATMs because they didn't trust the machines to give them the right money. They, you know, if they put money in the machine they were afraid they'd never see it again. And so a very enterprising advertising agency in Atlanta at the time called McDonald and Little decided to humanize the machine. So they wrote a jingle and I became the voice of Tilly the all-time teller and then they ultimately put a little face on the machine.

Strong: The human voice helps companies build trust with consumers...

Bennett: There are so many different emotions and meanings that we get across through the sound of our voices rather than just in print. That's why I think emojis came up because you can't get the nuances in there without the voice. And so I think that's why voice has become such an important part of technology.

Strong: And in her own experience, interactions with this synthetic version of her voice have led people to trust and confide in her... to call her a friend, even though they've never met her.

Bennett: Well, I think the oddest thing about being the voice of Siri, to me, is when I first revealed myself, it was astounding to me how many people considered Siri their friend or some sort of entity that they could really relate to. I think they actually in many cases think of her as human.

Strong: It's estimated the global market for voice technologies will reach nearly $185 billion this year... and AI-generated voices are a game changer.

Bennett: You know, after years and years of working on these voices, it's really, really hard to get the actual rhythm of the human voice. And I'm sure they'll probably do it at some point, but you will notice even to this day, you know, you'll listen to Siri or Alexa or one of the others and they'll be talking along and it sounds good until it doesn't. Like, Oh, I'm going to the store. You know, there's some weirdness in the rhythmic sense of it.

Strong: But even once human-like voices become commonplace... she's not entirely sure that will be a good thing.

Bennett: But you know, the advantage for them is they don't really have to get along with Siri. They can just tell Siri what to do if they don't like what she says, they can just turn it off. So it is not like real human relations. It's like maybe what people would like human relations to be. Everybody does what I want. (laughter) Then everybody's happy. Right?

Strong: Of course, voice assistants like Siri and Alexa aren't just voices. Their capabilities come from the AI behind the scenes too.

It's been explored in science fiction films like this one, called Her, about a man who falls in love with his voice assistant.

Theodore: How do you work?

Samantha (AI): Well... Basically I have intuition. I mean.. The DNA of who I am is based on the millions of personalities of all the programmers who wrote me, but what makes me me is my ability to grow through my experiences. So basically in every moment I'm evolving, just like you.

Strong: But today's voice assistants are a far cry from the hyper-intelligent thinking machines we've been musing about for decades.

And it's because that technology... is actually many technologies. It's the combination of three different skills... speech recognition, natural language processing, and voice generation.

Speech recognition is what allows Siri to recognize the sounds you make and transcribe them into words. Natural language processing turns those words into meaning... and figures out what to say in response. And voice generation is the final piece... the human element... that gives Siri the ability to speak.
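To make that three-part pipeline concrete, here is a minimal sketch chaining off-the-shelf Python libraries: the SpeechRecognition package for speech-to-text, a toy keyword matcher standing in for real natural language processing, and pyttsx3 for voice generation. The "understanding" rules are invented for illustration, and nothing here reflects how Siri is actually built.

```python
# Minimal voice-assistant loop: speech recognition -> language understanding
# -> voice generation. Requires a microphone and the PyAudio backend.
from datetime import datetime

import pyttsx3                    # offline text-to-speech
import speech_recognition as sr   # microphone capture + speech-to-text

def understand(text: str) -> str:
    """Toy natural-language step: map recognized words to a reply."""
    if "time" in text.lower():
        return "It is " + datetime.now().strftime("%H:%M")
    if "hello" in text.lower():
        return "Hello there!"
    return "Sorry, I didn't catch that."

recognizer = sr.Recognizer()
with sr.Microphone() as source:             # 1. capture audio
    audio = recognizer.listen(source)
words = recognizer.recognize_google(audio)  # 2. sounds -> words
reply = understand(words)                   # 3. words -> meaning -> response
engine = pyttsx3.init()                     # 4. response -> spoken voice
engine.say(reply)
engine.runAndWait()
```

Each stage can fail independently, which is part of why the illusion of a single "thinking machine" is so hard to sustain.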

Each of these skills is already a huge challenge... In order to master just the natural language processing part? You pretty much have to re-create human-level intelligence.

And we're nowhere near that. But we've seen remarkable progress with the rise of deep learning helping Siri and Alexa be a little more useful.

Metz: What people may not know about Siri is that the original technology was something different.

Strong: Cade Metz is a tech reporter for the New York Times. His new book is called Genius Makers: The Mavericks Who Brought AI to Google, Facebook, and the World.

Metz: The way that Siri was originally built... You had to have a team of engineers, in a room, at their computers and piece by piece, they had to define with computer code how it would recognize your voice.

Strong: Back then... engineers would spend days writing detailed rules meant to show machines how to recognize words and what they mean.

And this was done at the most basic level... often working with just snippets of voice at a time.

Just imagine all the different ways people can say the word hello or all the ways we piece together sentences explaining why time flies or how some verbs can also be nouns.

Metz: You can never piece together everything you need, no matter how many engineers you have, no matter how rich your company is. Defining every little thing that might happen when someone speaks into their iPhone... You just don't have enough person-power to build everything you need to build. It's just too complicated.

Strong: Neural networks made that process a whole lot easier... They simply learn by recognizing patterns in data fed into the system.

Metz: You take that human speech... You give it to the neural network... And the neural network learns the patterns that define human speech. That way it can recreate it without engineers having to define every little piece of it. The neural network literally learns the task on its own. And that's the key change... is that a neural network can learn to recognize what a cat looks like, as opposed to people having to define for the machine what a cat looks like.
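As a toy illustration of that shift (learning a rule from examples instead of writing the rule), the sketch below trains a tiny neural network on synthetic labeled data; the "rule" it recovers is never written into the model:

```python
# A neural network learns a hidden rule purely from labeled examples;
# no engineer writes the rule itself into the code. Data is synthetic.
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(200, 2)                      # 200 two-feature examples
y = (X.sum(dim=1) > 0).float().unsqueeze(1)  # the hidden rule to be learned

net = nn.Sequential(nn.Linear(2, 8), nn.ReLU(), nn.Linear(8, 1))
opt = torch.optim.Adam(net.parameters(), lr=0.05)
loss_fn = nn.BCEWithLogitsLoss()

for _ in range(200):                         # fit the pattern from data alone
    opt.zero_grad()
    loss = loss_fn(net(X), y)
    loss.backward()
    opt.step()

accuracy = ((net(X) > 0).float() == y).float().mean()
print(f"training accuracy: {accuracy:.2%}")
```

Real speech models are vastly larger, but the principle Metz describes is the same: the patterns come from the data, not from hand-written definitions.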

Strong: But even before neural networks... tech companies like Microsoft aimed to build systems that could understand the everyday way people write and talk.

And in 1996, Microsoft hired a linguist, Chris Brockett, to begin work on what they called natural-language AI.


The future of AI is being shaped right now. How should policymakers respond? – Vox.com

Posted: April 9, 2021 at 2:41 am

For a long time, artificial intelligence seemed like one of those inventions that would always be 50 years away. The scientists who developed the first computers in the 1950s speculated about the possibility of machines with greater-than-human capacities. But enthusiasm didn't necessarily translate into a commercially viable product, let alone a superintelligent one.

And for a while, in the '60s, '70s, and '80s, it seemed like such speculation would remain just that. The sluggishness of AI development actually gave rise to a term: "AI winters," periods when investors and researchers got bored with lack of progress in the field and devoted their attention elsewhere.

No one is bored now.

Limited AI systems have taken on an ever-bigger role in our lives, wrangling our news feeds, trading stocks, translating and transcribing text, scanning digital pictures, taking restaurant orders, and writing fake product reviews and news articles. And while there's always the possibility that AI development will hit another wall, there's reason to think it won't: All of the above applications have the potential to be hugely profitable, which means there will be sustained investment from some of the biggest companies in the world. AI capabilities are reasonably likely to keep growing until they're a transformative force.

A new report from the National Security Commission on Artificial Intelligence (NSCAI), a committee Congress established in 2018, grapples with some of the large-scale implications of that trajectory. In 270 pages and hundreds of appendices, the report tries to size up where AI is going, what challenges it presents to national security, and what can be done to set the US on a better path.

It is by far the best writing from the US government on the enormous implications of this emerging technology. But the report isn't without flaws, and its shortcomings underscore how hard it will be for humanity to get a handle on the warp-speed development of a technology that's at once promising and perilous.

As it exists right now, AI poses policy challenges. How do we determine whether an algorithm is fair? How do we stop oppressive governments from using AI surveillance for totalitarianism? Those questions are mostly addressable with the same tools the US has used in other policy challenges over the decades: Lawsuits, regulations, international agreements, and pressure on bad actors, among others, are tried-and-true tactics to control the development of new technologies.

But for more powerful and general AI systems (advanced systems that don't yet exist but may be too powerful to control once they do) such tactics probably won't suffice.

When it comes to AI, the big overarching challenge is making sure that as our systems get more powerful, we design them so their goals are aligned with those of humans; that is, humanity doesn't construct scaled-up superintelligent AI that overwhelms human intentions and leads to catastrophe.

Because the tech is necessarily speculative, the problem is that we don't know as much as we'd like to about how to design those systems. In many ways, we're in a position akin to someone worrying about nuclear proliferation in 1930. It's not that nothing useful could have been done at that early point in the development of nuclear weapons, but at the time it would have been very hard to think through the problem and to marshal the resources (let alone the international coordination) needed to tackle it.

In its new report, the NSCAI wrestles with these problems and (mostly successfully) addresses the scope and key challenges of AI; however, it has limitations. The commission nails some of the key concerns about AI's development, but its US-centric vision may be too myopic to confront a problem as daunting and speculative as an AI that threatens humanity.

AI has seen extraordinary progress over the past decade. AI systems have improved dramatically at tasks including translation, playing games such as chess and Go, answering important research biology questions (such as predicting how proteins fold), and generating images.

These systems also determine what you see in a Google search or in your Facebook News Feed. They compose music and write articles that, at first glance, read as though a human wrote them. They play strategy games. They are being developed to improve drone targeting and detect missiles.

All of those are instances of narrow AI: computer systems designed to solve specific problems, versus those with the sort of generalized problem-solving capabilities humans have.

But narrow AI is getting less narrow, and researchers have gotten better at creating computer systems that generalize learning capabilities. Instead of mathematically describing detailed features of a problem for a computer to solve, today it's often possible to let the computer system learn the problem by itself.

As computers get good enough at performing narrow AI tasks, they start to exhibit more general capabilities. For example, OpenAI's famous GPT series of text generators is, in one sense, the narrowest of narrow AIs: it just predicts what the next word will be, based on previous words it's prompted with and its vast store of human language. And yet, it can now identify questions as reasonable or unreasonable as well as discuss the physical world (for example, answering questions about which objects are larger or which steps in a process must come first).
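That next-word objective is easy to see in action. GPT-3 itself sits behind a paid API, but its openly released predecessor GPT-2 works the same way; here is a minimal sketch using the Hugging Face transformers library:

```python
# Next-word prediction in action: GPT-2 (GPT-3's openly available
# predecessor) extends a prompt one predicted token at a time.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
prompt = "In order to be very good at narrow tasks, some AI systems"
result = generator(prompt, max_length=40, num_return_sequences=1)
print(result[0]["generated_text"])
```

Everything the model appears to "know" about the world emerges from that single, narrow prediction task, which is precisely the point being made here.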

What these developments show us is this: In order to be very good at narrow tasks, some AI systems eventually develop abilities that are not narrow at all.

The NSCAI report acknowledges this eventuality. As AI becomes more capable, computers will be able to learn and perform tasks based on parameters that humans do not explicitly program, creating choices and taking actions at a volume and speed never before possible, the report concludes.

That's the general dilemma the NSCAI is tasked with addressing. A new technology, with both extraordinary potential benefits and extraordinary risks, is being developed. Many of the experts working on it warn that the results could be catastrophic. What concrete policy measures can the government take to get clarity on a challenge such as this one?

The NSCAI report is a significant improvement on much of the existing writing about artificial intelligence in one important respect: It understands the magnitude of the challenge.

For a sense of that magnitude, it's useful to imagine the questions involved in figuring out government policy on nuclear nonproliferation in the 1930s.

By 1930, there was certainly some scientific evidence that nuclear weapons would be possible. But there were no programs anywhere in the world to make them, and there was even some dissent within the research community about whether such weapons could ever be built.

As we all know, nuclear weapons were built within the next decade and a half, and they changed the trajectory of human history.

Given all that, what could the government have done about nuclear proliferation in 1930? Decide on the wisdom of pushing itself to develop such weapons, perhaps, or develop surveillance systems that would alert the country if other nations were building them.

In practice, the government in 1930 did none of these things. When an idea is just beginning to gain a foothold among the academics, engineers, and experts who work on it, it's hard for policymakers to figure out where to start.

"When considering these decisions, our leaders confront the classic dilemma of statecraft identified by Henry Kissinger: When your scope for action is greatest, the knowledge on which you can base this action is always at a minimum. When your knowledge is greatest, the scope for action has often disappeared," Chair Eric Schmidt and Vice Chair Bob Work wrote of this dilemma in the NSCAI report.

As a result, much government writing about AI to date has seemed fundamentally confused, limited by the fact that no one knows exactly what transformative AI will look like or what key technical challenges lie ahead.

In addition, a lot of the writing about AI, both by policymakers and by technical experts, is very small, focused on possibilities such as whether AI will eliminate call centers, rather than the ways general AI, or AGI, will usher in a dramatic technological realignment, if it's built at all.

The NSCAI analysis does not make this mistake.

"First, the rapidly improving ability of computer systems to solve problems and to perform tasks that would otherwise require human intelligence (and in some instances exceed human performance) is world altering. AI technologies are the most powerful tools in generations for expanding knowledge, increasing prosperity, and enriching the human experience," reads the executive summary.

The report also extrapolates from current progress in machine learning to identify some specific areas where AI might enable notable good or notable harm:

Combined with massive computing power and AI, innovations in biotechnology may provide novel solutions for mankind's most vexing challenges, including in health, food production, and environmental sustainability. Like other powerful technologies, however, applications of biotechnology can have a dark side. The COVID-19 pandemic reminded the world of the dangers of a highly contagious pathogen. AI may enable a pathogen to be specifically engineered for lethality or to target a genetic profile: the ultimate range and reach weapon.

One major challenge in communicating about AI is it's much easier to predict the broad effects that unleashing fast, powerful research and decision-making systems on the world will have (speeding up all kinds of research, for both good and ill) than it is to predict the specific inventions those systems will come up with. The NSCAI report outlines some of the ways AI will be transformative, and some of the risks those transformations pose that policymakers should be thinking about how to manage.

Overall, the report seems to grasp why AI is a big deal, what makes it hard to plan for, and why it's necessary to plan for it anyway.

But there's an important way in which the NSCAI report falls short. Recognizing that AI poses enormous risks and that it will be powerful and transformative, the report foregrounds a posture of great-power competition, with both eyes on China, to address the looming problem before humanity.

"We should race together with partners when AI competition is directed at the moonshots that benefit humanity, like discovering vaccines. But we must win the AI competition that is intensifying strategic competition with China," the report concludes.

China is run by a totalitarian regime that poses geopolitical and moral problems for the international community. China's repression in Hong Kong and Tibet, and the genocide of the Uyghur people in Xinjiang, have been technologically aided, and the regime should not have more powerful technological tools with which to violate human rights.

There's no question that China developing AGI would be a bad thing. And the countermeasures the report proposes (especially an increased effort to attract the world's top scientists to America) are a good idea.

More than that, the US and the global community should absolutely devote more attention and energy to addressing China's human rights violations.

But it's where the report proposes beating China to the punch by accelerating AI development in the US, potentially through direct government funding, that I have hesitations. Adopting an arms-race mentality on AI would make involved companies and projects more likely to discourage international collaboration, cut corners, and evade transparency measures.

In 1939, at a conference at George Washington University, Niels Bohr announced that he'd determined that uranium fission had been discovered. Physicist Edward Teller recalled the moment:

For all that the news was amazing, the reaction that followed was remarkably subdued. After a few minutes of general comment, my neighbor said to me, "Perhaps we should not discuss this. Clearly something obvious has been said, and it is equally clear that the consequences will be far from obvious." That seemed to be the tacit consensus, for we promptly returned to low-temperature physics.

Perhaps that consensus would have prevailed, if World War II hadn't started. It took the concerted efforts of many brilliant researchers to bring nuclear bombs to fruition, and at first most of them hesitated to be a part of the effort. Those hesitations were reasonable: inventing the weaponry with which to destroy civilization is no small thing. But once they had reason to fear that the Nazis were building the bomb, those reservations melted away. The question was no longer "Should these be built at all?" but "Should these be built by us, or by the Nazis?"

It turned out, of course, that the Nazis were never close, nor was the atomic bomb needed to defeat them. And the US development of the bomb caused its geopolitical adversary, the USSR, to develop it too, much sooner than it otherwise would have, through espionage. The world then spent decades teetering on the brink of nuclear war.

The specter of a mess like that looms large in everyone's minds when they think of AI.

"I think it's a mistake to think of this as an arms race," Gilman Louie, a commissioner on the NSCAI report, told me, though he immediately added, "We don't want to be second."

An arms race can push scientists toward working on a technology that they have reservations about, or one they don't know how to safely build. It can also mean that policymakers and researchers don't pay enough attention to the AI alignment problem, which is really the looming issue when it comes to the future of AI.

AI alignment is the work of trying to design intelligent systems that are accountable to humans. An AI, even in well-intentioned hands, will not necessarily develop in a way consistent with human priorities. Think of it this way: An AI aiming to increase a company's stock price, or to ensure a robust national defense against enemies, or to make a compelling ad campaign, might take large-scale actions (like disabling safeguards, rerouting resources, or interfering with other AI systems) that we would never have asked for or wanted. Those large-scale actions in turn could have drastic consequences for economies and societies.

It's all speculative, for sure, but that's the point. We're in the year 1930, confronting the potential creation of a world-altering technology that might be here a decade and a half from now or might be five decades away.

Right now, our capacity to build AIs is racing ahead of our capacity to understand and align them. And trying to make sure AI advancements happen in the US first can just make that problem worse, if the US doesn't also invest in the research (which is much more immature, and has less obvious commercial value) to build aligned AIs.

"We ultimately came away with a recognition that if America embraces and invests in AI based on our values, it will transform our country and ensure that the United States and its allies continue to shape the world for the good of all humankind," NSCAI executive director Yll Bajraktari writes in the report. But here's the thing: It's entirely possible for America to embrace and invest in an AI research program based on liberal-democratic values that still fails, simply because the technical problem ahead of us is so hard.

This is an important respect in which AI is not analogous to nuclear weapons, where the most important policy decisions were whether to build them at all and how to build them faster than Nazi Germany.

In other words, with AI, there's not just the risk that someone else will get there first. A misaligned AI built by an altruistic, transparent, careful research team with democratic oversight and a goal to share its profits with all of humanity will still be a misaligned AI, one that pursues its programmed goals even when they're contrary to human interests.

The limited scope of the NSCAI report is a fairly obvious consequence of what the commission is and what it does. The commission was created in 2018 and tasked with recommending policies that would advance the development of artificial intelligence, machine learning, and associated technologies to comprehensively address the national security and defense needs of the United States.

Right now, the part of the US government that takes artificial intelligence risks seriously is the national security and defense community. Thats because AI risk is weird, confusing, and futuristic, and the national security community has more latitude than the rest of the government to spend resources seriously investigating weird, confusing, and futuristic things.

But AI isn't just a defense and security issue; it will affect (indeed, is already affecting) most aspects of society, like education, criminal justice, medicine, and the economy. And to the extent it is a defense issue, that doesn't mean that traditional defense approaches make sense.

If, before the invention of electricity, the only people working on producing electricity had been armies interested in electrical weapons, they'd not just be missing most of the effects of electricity on the world, they'd even be missing most of the effects of electricity on the military, which have to do with lighting, communications, and intelligence, rather than weapons.

The NSCAI, to its credit, takes AI seriously, including the non-defense applications and including the possibility that AI built in America by Americans could still go wrong. "The thing I would say to American researchers is to avoid skipping steps," Louie told me. "We hope that some of our competitor nations, China, Russia, follow a similar path... demonstrate it meets thorough requirements for what we need to do before we use these things."

But the report, overall, looks at AI from the perspective of national defense and international competition. It's not clear that will be conducive to the international cooperation we might need in order to ensure no one anywhere in the world rushes ahead with an AI system that isn't ready.

Some AI work, at least, needs to be happening in a context insulated from arms-race concerns and fears of China. By all means, let's devote greater attention to China's use of tech in perpetrating human rights violations. But we should hesitate to rush ahead with AGI work without a sense of how we'll make it happen safely, and there needs to be more collaborative global work on AI, with a much longer-term lens. The perspectives that work could create room for just might be crucial ones.


Will Artificial Intelligence ever live up to its hype? – The Stute

Posted: at 2:41 am

When I started writing about science decades ago, artificial intelligence was ascendant. IEEE Spectrum, the technology magazine for which I worked, produced a special issue on how AI would transform the world. I edited an article in which computer scientist Frederick Hayes-Roth predicted that AI would soon replace experts in law, medicine, finance and other professions.

That was 1984. That period of exuberance gave way to a slump known as an AI winter, when disillusionment set in and funding declined. In 1998, I tracked Hayes-Roth down to ask how he thought his predictions had held up. He laughed and replied, "You've got a mean streak." AI had not lived up to expectations, he acknowledged. Our minds are hard to replicate, because we are very, very complicated systems that are both evolved and adapted through learning to deal well and differentially with dozens of variables at one time. Algorithms that can perform a specialized task, like playing chess, cannot be easily adapted for other purposes. It is an example of what is called nonrecurrent engineering, Hayes-Roth explained.

Today, according to some measures, AI is booming once again. Programs such as voice and face recognition are embedded in cell phones, televisions, cars and countless other consumer products. Clever algorithms help me choose a Valentine's present for my girlfriend, find my daughter's building in Brooklyn and gather information for columns like this one. Venture-capital investments in AI doubled between 2017 and 2018 to $40 billion, according to WIRED. A Price Waterhouse study estimates that by 2030 AI will boost global economic output by more than $15 trillion, "more than the current output of China and India combined."

Some observers fear that AI is moving too fast. New York Times columnist Farhad Manjoo calls an AI-based reading and writing program, GPT-3, "amazing," "spooky," "humbling" and "more than a little terrifying." Someday, he frets, he might be put out to pasture by a machine. Elon Musk made headlines in 2018 when he warned that superintelligent AI represents "the single biggest existential crisis that we face." (Really? Worse than climate change? Nuclear weapons? Psychopathic politicians? I suspect that Musk, who has invested in AI, is trying to promote the technology with his over-the-top fearmongering.)

Experts are pushing back against the hype, pointing out that many alleged advances in AI are based on flimsy evidence. Last year, for example, a team from Google Health claimed in Nature that their AI program had outperformed humans in diagnosing breast cancer. A group led by Benjamin Haibe-Kains, a computational genomics researcher, criticized the Google Health paper, arguing that the "lack of details of the methods and algorithm code undermines its scientific value."

Haibe-Kains complained to Technology Review that the Google Health report is more "an advertisement for cool technology" than a legitimate, reproducible scientific study. The same is true of other reported advances, he said. Indeed, artificial intelligence, like biomedicine and other fields, has become mired in a replication crisis. Researchers make dramatic claims that cannot be tested, because researchers (especially those in industry) do not disclose their algorithms. One recent review found that only 15 percent of AI studies shared their code.

There are also signs that investments in AI are not paying off. Technology analyst Jeffrey Funk recently examined 40 startup companies developing AI for health care, manufacturing, energy, finance, cybersecurity, transportation and other industries. Many of the startups were "not nearly as valuable to society as all the hype would suggest," Funk reports in IEEE Spectrum. Advances in AI are unlikely to be nearly as disruptive (for companies, for workers, or for the economy as a whole) as many observers have been arguing.

The longstanding goal of general artificial intelligence, possessing the broad knowledge and learning capacity to solve a variety of real-world problems, as humans do, remains elusive. "We have machines that learn in a very narrow way," Yoshua Bengio, a pioneer in the AI approach called deep learning, recently complained in WIRED. "They need much more data to learn a task than human examples of intelligence, and they still make stupid mistakes."

Writing in The Gradient, an online magazine devoted to tech, AI entrepreneur and writer Gary Marcus accuses AI leaders as well as the media of exaggerating the field's progress. AI-based autonomous cars, fake news detectors, diagnostic programs and chatbots have all been oversold, Marcus contends. He warns that if and when the public, governments, and investment community recognize that they have been "sold an unrealistic picture of AI's strengths and weaknesses that doesn't match reality," a new AI winter may commence.

Another AI veteran and writer, Eric Larson, questions the myth that one day AI will inevitably equal or surpass human intelligence. In his new book The Myth of Artificial Intelligence: Why Computers Can't Think the Way We Do, Larson argues that success with narrow applications gets us not one step closer to general intelligence. Larson says the actual science of AI (as opposed to the pseudo-science of Hollywood and science fiction novelists) has uncovered a very large mystery at the heart of intelligence, which no one currently has a clue how to solve. Put bluntly: all evidence suggests that human and machine intelligence are radically different. And yet the myth of inevitability persists.

When I first started writing about science, I believed the myth of AI. One day, surely, researchers would achieve the goal of a flexible, supersmart, all-purpose artificial intelligence, like HAL. Given rapid advances in computer hardware and software, it was only a matter of time. Gradually, I became an AI doubter, as I realized that our minds (in spite of enormous advances in neuroscience, genetics, cognitive science and, yes, artificial intelligence) remain as mysterious as ever. Here's the paradox: machines are becoming undeniably smarter (and humans, it seems lately, more stupid), and yet machines will never equal, let alone surpass, our intelligence. They will always remain mere machines. That's my guess, and my hope.

John Horgan directs the Center for Science Writings at Stevens. This column is adapted from one originally published on ScientificAmerican.com.


The Future of AI: 7 Stages of Evolution You Need to Know About – ReadWrite

Posted: at 2:41 am

According to artificial intelligence statistics, the global AI market is expected to grow to $60 billion by 2025. Global GDP will grow by $15.7 trillion by 2030 due to artificial intelligence, as it will increase business productivity by 40%. Investment in artificial intelligence has grown by 6 times since 2000. In fact, 84% of businesses think that artificial intelligence can give them a competitive advantage.

If you are a fan of science fiction movies, you might have seen AI in action in its full glory. With artificial intelligence leaving impressionable marks on every facet of our personal and professional lives, it is important to understand how it works and how it will evolve in the future. This allows us to prepare for the future in a much better way.

In this article, you will learn about how artificial intelligence will evolve in the future and what stages it will go through.

7 Stages of AI Evolution

1. Rule-based systems

This form of artificial intelligence is everywhere. It surrounds us whether we are at work, at home, or traveling. From business software to smart apps, aircraft to electronic appliances, all follow rule-based systems. Robotic process automation is the next stage of a rule-based system, in which the machine can perform complete processes on its own without requiring any help from humans.

Since it is a basic level of artificial intelligence and also the most ubiquitous, it is cost-effective and fast, which is why mobile app development companies use it. On the flip side, it requires comprehensive knowledge and domain expertise, and still depends on some human involvement. Generating rules for such a system is sophisticated, time-consuming and resource-intensive.

2. Context awareness and retention

This type of algorithm is developed by feeding in information about the particular domain in which it will be implemented. Since these algorithms are trained using the knowledge and experience of experts, and are updated to cope with new emerging situations, they can serve as an alternative to human experts in the same industry. One of the best examples of this type of artificial intelligence is smart chatbots.

Chatbots have already changed the way businesses look at customer support and deliver customer service. They have not only saved businesses from hiring customer service representatives but also helped them automate and streamline customer support. In addition to this, they can help businesses in many other ways.

Another form of this type of artificial intelligence is the robo-advisor. Robo-advisors are already being used in finance, helping people make sensible investment decisions, and we might see their applications grow in other industries in the future. They can automate and optimize passive indexing strategies and follow mean-variance optimization (sketched below).
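Mean-variance optimization is concrete enough to sketch. Under textbook assumptions (and with invented numbers; this is an illustration, not the logic of any actual robo-advisor), the tangency portfolio weights are proportional to the inverse covariance matrix times the expected returns:

```python
# Textbook mean-variance sketch: weights proportional to inv(Sigma) @ mu.
# Expected returns and covariances below are invented for illustration.
import numpy as np

mu = np.array([0.06, 0.04, 0.09])         # expected annual asset returns
sigma = np.array([[0.040, 0.006, 0.010],  # covariance of asset returns
                  [0.006, 0.010, 0.004],
                  [0.010, 0.004, 0.090]])

raw = np.linalg.solve(sigma, mu)          # solve Sigma @ w = mu
weights = raw / raw.sum()                 # normalize so weights sum to 1
print("portfolio weights:", np.round(weights, 3))
```

A production system would add constraints (no shorting, rebalancing bands, tax rules), but the core optimization is this small.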

3. Domain-specific expertise

Unlike context-aware and retention artificial intelligence, domain-specific expertise aims not only to reach a level of human capability but to surpass it. Since it has access to more data, it can make better decisions than its human counterpart. We already see its application in areas such as cancer diagnosis.

Another popular example of this type of AI is Google DeepMind's AlphaGo. Initially, the system was taught the rules and objectives of winning, and later it taught itself how to play Go. The important thing to note here is that it did so with human support, which stopped it from making poor decisions. In March 2016, we finally saw AlphaGo defeat the 18-time world Go champion Lee Sedol by four games to one.

Soon after AlphaGo's success, Google created AlphaGo Zero, which requires no human support to play Go. It learned the rules and analyzed thousands of Go games to create strategies. After three days, it defeated AlphaGo by a huge margin of 100 games to nil. This was a clear indication of the potential of smart machines and what they can do when they acquire human-like intelligence. It was a massive breakthrough in the field of artificial intelligence.

4. Reasoning machines

These reasoning machines are powered by algorithms that have a theory of mind, meaning they can make sense of different mental states. In fact, they also have beliefs, knowledge, and intentions, which are used to create their own logic.

Hence, they have the capacity to reason, negotiate, and interact with humans and other machines. Such algorithms are currently at the development stage, but we can expect to see them in commercial applications in the next few years.

Stage 5: Self-Aware Systems

The ultimate goal of artificial intelligence is to create systems that can match and surpass human intelligence. Even though researchers keep edging closer to that goal, no system has achieved it yet. Experts are divided on this one: some think we can reach that level in less than five years, while others argue that we may never be able to.

Self-aware AI systems would have more perspective and could understand and react to emotional responses. Just like self-aware humans, self-aware machines could also show a degree of self-control and regulate themselves according to the situation.

Stage 6: Artificial Superintelligence

AI researchers have already developed systems that can beat humans at games and do a better job in many other narrow areas. What's next? The real challenge for AI experts will be to create AI-powered systems that can outperform humans in every department. As humans, even visualizing something that is miles ahead of us is a struggle, let alone creating it.

If AI researchers succeed in creating something along these lines, we might see it being used to solve the world's biggest problems, such as poverty, hunger and climate change. We can also expect such systems to make new scientific discoveries and design new economic and governance models. Just as with self-aware systems, experts are on the fence about whether this is possible at all, and even if it is, how long it will take for the dream to see the light of day.

Stage 7: Singularity and Transcendence

At this stage of artificial intelligence, we would be able to connect our brains with one another, paving the way for the future of the internet. This would not only support traditional activities such as sharing ideas but also enable advanced ones, such as the ability to observe dreams. It could even allow humans to communicate with other living beings, such as plants and animals.

How will artificial intelligence evolve in years to come? Share your opinion with us in the comments section below.

Muneeb Qadar Siddiqui works in digital marketing at Branex, a mobile app development company in Dallas, and has earned eight years of experience in the field. Paid marketing, affiliate marketing, search engine marketing and search engine optimization are his strengths. He is also a connoisseur of fine dining in his free time. Do connect with him: Facebook | Twitter | LinkedIn

Read more here:

The Future of AI 7 Stages of Evolution You Need to Know About - ReadWrite

Posted in Ai | Comments Off on The Future of AI 7 Stages of Evolution You Need to Know About – ReadWrite

This Is the Most Powerful Artificial Intelligence Tool in the World – Entrepreneur

Posted: at 2:41 am

April 7, 2021 | 5 min read

Opinions expressed by Entrepreneur contributors are their own.

In June 2020, the Californian company OpenAI announced GPT-3, the successor to GPT-2: a language model based on artificial intelligence and deep learning with cognitive capabilities. It is a technology that has generated great expectations and has been presented as the most important and useful advance in AI in recent years.

OpenAI was co-founded by Elon Musk, the entrepreneur behind Tesla and SpaceX, with the aim of researching and democratizing access to artificial general intelligence. Originally it was a non-profit organization; in 2019 it restructured as a capped-profit company and partnered with Microsoft in order to achieve new advances, both in the field of language with the GPT-3 models and in the fields of robotics and vision.

GPT-3 (Generative Pre-trained Transformer 3) is what is known as an autoregressive language model, which uses deep learning to produce text that simulates human writing.

Unlike most artificial intelligence systems, which are designed for one use case, this API (Application Programming Interface) provides a general-purpose "text input and output" interface, allowing users to test it on practically any task in English. The tool is capable of, among other functions, generating text on any subject proposed to it just as a human would, programming (in HTML code) and generating ideas.

As Nerea Luis, an expert in artificial intelligence and engineer at Sngular, says, "GPT-3 is living confirmation that the natural language processing area is advancing more than ever, by leaps and bounds."

Do you want to know how GPT-3 works? Let me explain.

The user only has to start writing a paragraph, and the system itself takes care of completing the rest of the text in the most coherent way possible. With GPT-3 you can also generate conversations, and the answers provided by the system will be based on the context of the previous questions and answers.
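For readers who want to try this themselves, the snippet below sketches what a completion request looked like with the OpenAI Python client of the time. You would need your own API key, and the engine name and parameters are reasonable assumptions rather than a definitive recipe.

```python
# A hedged sketch of the completion workflow described above, using the
# OpenAI Python client as it existed around the time of this article.
# The engine choice and parameter values are assumptions for illustration.

import openai

openai.api_key = "YOUR_API_KEY"  # granted through OpenAI's waitlisted API program

response = openai.Completion.create(
    engine="davinci",              # the largest GPT-3 engine available then
    prompt="Artificial intelligence will change small businesses because",
    max_tokens=60,                 # how much text to generate
    temperature=0.7,               # higher = more creative, lower = more literal
)

print(response.choices[0].text)    # GPT-3's continuation of the prompt
```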

It should be noted that the tool generates text using algorithms that were trained in advance and have already received all the data they need to carry out their task. In total, they were fed around 570 GB of text collected by crawling the internet (a publicly available dataset known as Common Crawl), along with other text selected by OpenAI, including text from Wikipedia.

"GPT-3 has aroused a lot of interest because it is one of the first systems to show the possibilities of general artificial intelligence: it completes, with surprisingly reasonable results, tasks that until now required a system specially built for that particular task. Furthermore, it does so from just a few examples," says César de Pablo, data scientist at BBVA Data & Analytics.

As for the possible applications this tool may have, these are some of the most notable:

GPT-3 will be able to generate text for websites, social media ads, scripts and more. With a few simple guidelines about your needs, GPT-3 will transform them into precise text. In addition, you can select the type of text you need, from the most generic to the most strategic and creative.

With GPT-3 you will be able to compose emails (among other functions) by simply giving it some guidelines on what you want to say and communicate. For example, magicemail.io, a portal where you can test the tool (there is a waiting list of about 6,000 users), shows how it works: installed as a Google Chrome extension, Magicemail sits inside Gmail.

When an email arrives, we simply click on the tool to receive a one-sentence summary of what the sender wants to tell us.

GPT-3 will develop the code if we just tell it how we want our landing page or website to look. Once it gives us the HTML code, all we need is a copy and paste to have an optimal result. The tool will significantly streamline web development processes.

With this model, chatbots will be much more precise, offering more accurate responses and giving users more personalized and effective attention.

Furthermore, GPT-3 could have huge implications for the way software and applications are developed in the future.

A sample of how this technology works can be seen in the essay "A robot wrote this entire article. Are you scared yet, human?", published by The Guardian, which, as its editor explains, was assembled from the best fragments of eight articles generated with GPT-3 in order to capture the different styles and registers of the artificial intelligence.

Another demo available online is "GPT-3: Build Me A Photo App," which shows the creation of an application that looks and works much like Instagram, using a plugin for Figma, a software tool widely used for application design.

Bear in mind that, at present, use of the GPT-3 model is mainly limited to the research community. However, it is clear that in the near future GPT-3 may create anything that has a language structure: answering questions, writing essays, summarizing texts, translating, taking notes and even creating computer code.

Therefore, GPT-3 is positioned as an artificial intelligence tool with great potential for the future. And surely, when it is opened to the public, its reach will be even more surprising.

Continue reading here:

This Is the Most Powerful Artificial Intelligence Tool in the World - Entrepreneur

Posted in Ai | Comments Off on This Is the Most Powerful Artificial Intelligence Tool in the World – Entrepreneur

In an AI world we need to teach students how to work with robot writers – The Conversation AU

Posted: at 2:41 am

Robots are writing more of what we read on the internet. And artificial intelligence (AI) writing tools are becoming freely available for anyone, including students, to use.

In a period of rapid change, there are enormous ethical implications for post-human authorship in which humans and machines collaborate. The study of AI ethics needs to be central to education as we increasingly use machine-generated content to communicate with others.

AI robot writers, such as GPT-3 (Generative Pre-trained Transformer 3), take seconds to create text that seems like it was written by humans. In September 2020, GPT-3 wrote an entire essay in The Guardian to convince people not to fear artificial intelligence.

AI does this through what's called natural language processing and generation. This involves converting human language to computer commands and vice versa. To do this, machine algorithms study millions of text samples, words, sentences and paragraphs that humans use, to gain a general understanding of the context of human language. The machines then use that knowledge to put text together.
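A rough feel for this generate-from-context loop can be had with freely available models. The sketch below uses the open GPT-2 model from the Hugging Face transformers library as a stand-in, since GPT-3 itself is only reachable through OpenAI's hosted API; the prompt and settings are arbitrary.

```python
# A minimal sketch of text generation with a pre-trained language model,
# using GPT-2 as a freely downloadable stand-in for GPT-3.

from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Robots are writing more of what we read on the internet because"
result = generator(prompt, max_length=60, num_return_sequences=1)

print(result[0]["generated_text"])  # the prompt plus the model's continuation
```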

As recently as 2019, this kind of technology seemed a long way off. But today, it is readily available.

For example, the free Zyro content generator provides text for websites. I can select the categories "Health and Living" and "Personal Trainer" (I happen to be one, as well as an academic). In two seconds, the following text appears:

I am experienced with personal training, athletic performance, and nutrition. This includes working with children, pregnant women, older adults, people with disabilities, athletes, and anyone looking to achieve a competitive edge, improve their overall health, and overall performance. I also have an interest in alternative health, including wellness coaching and nutrition counselling. In my spare time, I enjoy spending time with my family, the outdoors, and writing.

This sums me up (despite the program knowing nothing about me) and would be fine for my website, if I had one. I have the option to generate more responses, and in two seconds, the program offers:

I am experienced in working with adults and children to improve their overall physical and emotional well-being. I have a strong desire to help you reach your goal [] I love helping others achieve their goals. I love working with others to achieve their goals. I am a mother of two healthy children and a wife to two incredible people.

While this is repetitive and Zyro does not notice the apparent error in the last sentence, these issues would be easy to correct. Text, even for niche purposes, can now be generated in a few clicks.

There are other digital tools, such as paraphrasers and rewriters, that can generate up to 1,000 articles from a single seed article, each of them substantially unique. Quillbot and WordAI, for instance, can rapidly rewrite text and make plagiarism difficult to detect. WordAI boasts "unlimited human quality content at your fingertips".

So what does this mean for education, writing, and society?

Of course, there's the issue of cheating on essays and other assignments. School and university leaders need to have difficult conversations about what constitutes authorship and editorship in the post-human age. We are all (already) writing with machines, even if just via spelling and grammar checkers.

Tools such as Turnitin, originally developed for detecting plagiarism, are already using more sophisticated means of determining who wrote a text, by recognising a human author's unique fingerprint. Part of this involves electronically checking a submitted piece of work against a student's previous work.
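Turnitin's actual method is proprietary, but the underlying idea of a stylistic fingerprint can be sketched simply: represent each text as a vector of word weights and compare a new submission against a student's earlier work. The texts below are invented for illustration.

```python
# A hedged sketch of authorship comparison via a word-frequency
# "fingerprint"; this is the general idea only, not Turnitin's algorithm.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

previous_work = [
    "In this essay I argue that renewable energy adoption is accelerating.",
    "My analysis suggests that battery storage costs fall predictably.",
]
new_submission = "This essay argues that renewable adoption keeps accelerating."

vectorizer = TfidfVectorizer()
vectors = vectorizer.fit_transform(previous_work + [new_submission])

# Similarity between the submission and each earlier piece; an unusually
# low score might flag a change of author (or simply a change of topic).
scores = cosine_similarity(vectors[-1], vectors[:-1])
print(scores)
```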

Many student writers are already using AI writing tools. Perhaps, rather than banning machine collaboration or seeking to expose it, we should welcome it as co-creativity. Learning to write with machines is an important aspect of the workplace writing students will be doing in the future.

Read more: OK computer: to prevent students cheating with AI text-generators, we should bring them into the classroom

AI writers work lightning fast. They can write in multiple languages and can provide images, create metadata, headlines, landing pages, Instagram ads, content ideas, expansions of bullet points and search-engine optimised text, all in seconds. Students need to exploit these machine capabilities, as writers for digital platforms and audiences.

Perhaps assessment should focus more on students' capacities to use these tools skilfully instead of, or at least in addition to, pursuing pure human writing.

Yet the question of fairness remains. Students who can access better AI writers (more natural, with more features) will be able to produce and edit better text.

Better AI writers are more expensive, available on monthly plans or for high one-off payments that wealthy families can afford. This will exacerbate inequality in schooling, unless schools themselves provide excellent AI writers to all.

We will need protocols for who gets credit for a piece of writing. We will need to know who gets cited. We need to know who is legally liable for content and the potential harm it may create. We need transparent systems for identifying, verifying and quantifying human content.

Read more: When does getting help on an assignment turn into cheating?

And most importantly of all, we need to ask whether the use of AI writing tools is fair to all students.

For those who are new to the notion of AI writing, it is worthwhile playing and experimenting with the free tools available online, to better understand what creation means in our robot future.

See more here:

In an AI world we need to teach students how to work with robot writers - The Conversation AU

Posted in Ai | Comments Off on In an AI world we need to teach students how to work with robot writers – The Conversation AU

Spurred by the pandemic, AI is driving decentralized clinical trials – Healthcare IT News

Posted: at 2:41 am

With clinical oncology trials put on hold during the COVID-19 pandemic, researchers turned to troves of data to find patients across the country who would qualify for trials, even if they weren't physically there.

Artificial intelligence enabled this process, and may have created a move toward decentralized trials that potentially could last long after the pandemic is over.

Jeff Elton is CEO of ConcertAI, which works with some of the biggest oncology pharmaceutical companies and research organizations. Healthcare IT News interviewed Elton to get his thoughts on this shift and what it means for both treatments and patient outcomes.

Q: With trials on hold, researchers have been working with all of this data to find patients who would qualify for trials, even if they are not physically there. How did artificial intelligence technology enable this?

A: By putting the data in cancer centers to work. We process structured and unstructured data, combing through EHRs as well as other sources of patient information that EHRs might not include. Natural language processors and other tools integral to clinical workflows are critical here.

The clinical settings have mountains of data. When participation in trials plunged, they had to quickly and efficiently leverage all the data at their fingertips to find as many potentially eligible patients as possible. People working manually would have taken too long and might have overlooked something. AI has been able to do it, enhancing the ability to identify patients eligible for clinical studies.

It's a complex process. We need to eliminate false negatives, meaning that if a patient is potentially eligible for a clinical trial, we identify them. We also make sure that we don't have too many false positives; otherwise, we just create work.

We also use AI tools to ensure we are seeing what we expect and need in clinical setting data; exception and anomaly detection and reporting tools are key to identifying and understanding the correct data.

It is critical to understand that if there is no data, there is no AI. Meaningful AI and machine learning capabilities require broad data access, the ability to prepare data for specific AI methods and tools, and reserved data for independent validation. Of course, we also must be vigilant about underlying health and biological trends that call for retraining or re-specification of AI models.
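The balance Elton describes, few false negatives without a flood of false positives, judged on reserved validation data, is essentially a threshold-tuning problem. The sketch below illustrates it on synthetic data; no real patient features or ConcertAI internals are implied.

```python
# A hedged sketch: tune a screening model's decision threshold on a
# held-out validation split so recall is near 1 (few missed eligible
# patients) while keeping false positives tolerable. Data is synthetic.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_recall_curve

X, y = make_classification(n_samples=2000, weights=[0.9], random_state=0)
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.3, random_state=0)   # reserved validation split

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
probs = model.predict_proba(X_val)[:, 1]

precision, recall, thresholds = precision_recall_curve(y_val, probs)
ok = recall[:-1] >= 0.99          # thresholds that still catch ~99% of eligible
best = np.where(ok)[0][-1]        # the highest such threshold maximizes precision
print(f"threshold={thresholds[best]:.3f}, "
      f"precision={precision[best]:.2f}, recall={recall[best]:.2f}")
```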

We can also generate evidence from complementary retrospective data sources for prospective studies, and sometimes from retrospective data alone for label expansions.

Increasingly, the FDA is accepting studies with retrospective data provided in replacement for forward-recruited patients in standard-of-care controls as "external control arms." This shift is in the best interest of patients and allows a more efficient study execution, since patients can be recruited exclusively to the treatment arm with the novel therapeutic.

Q: Has AI sparked a move toward decentralized clinical trials, a move that potentially could stick around long after the pandemic is over?

A: We are not going backwards. Decentralized trials have been emerging over the past several years; COVID-19 was the tipping event, or shock, that accelerated the trend.

Decentralized trials do not require AI at all, incidentally, but can leverage AI given that workflows are all digital and most data is machine readable. We will enter a period where decentralized trials are at scale, coexisting with legacy approaches.

But that will only exist for an interim period; eventually, digital-only trials with deeply embedded AI will be the only approach. I use the term "integrated digital trials" to describe what's ahead.

With integrated digital trials, clinical studies are integral to the care process itself, versus being imposed on it. Trials don't need to place a higher burden on providers and patients than the standard of care.

This point is incredibly important. Reducing the burden that trials put on patients and providers allows us to move clinical trials into the community where 80% of patients receive their care. It is both the democratization and ubiquity of clinical trials.

Q: What does this shift mean for both treatments and patient outcomes?

A: All of this is good. It's good for patients, first and foremost, because they can participate in trials in a broader array of treatment settings. It's good for treatment innovation, because more study alternatives are available in more settings with lower barriers to participation.

Standard-of-care treatment for novel therapeutics versus a separate clinical trial should increase the likelihood of a positive clinical outcome. We want to bring more potentially beneficial options to patients, faster and with greater precision.

Q: Please share an anecdote of your work this past year with pharma companies and research organizations about how AI has improved or enhanced oncology clinical trials.

A: One of our partners had a study that was unable to accrue patients. The trial sponsor wanted our tools, clinical sites and data to solve their problem. We did, but the problem turned out to be a trial design that was inexecutable. Our AI-optimized study design solution found the problem. It was not the insight that was expected, but it was nonetheless valuable.

Of greater significance, we and our sponsor partners in the past year have affirmed our commitment to eliminating the research disparities that sometimes underlie health and other inequities.

We have successfully brought together our combination of rich clinical data and AI optimizations to reconsider clinical trial designs to ensure diversity, avoid unintentional exclusions, and identify sites and investigators that can assure study success and timeliness for completion.

Twitter: @SiwickiHealthIT
Email the writer: bsiwicki@himss.org
Healthcare IT News is a HIMSS Media publication.

Go here to see the original:

Spurred by the pandemic, AI is driving decentralized clinical trials - Healthcare IT News

Posted in Ai | Comments Off on Spurred by the pandemic, AI is driving decentralized clinical trials – Healthcare IT News

Fiddler AI Named to the 2021 CB Insights AI 100 List of Most Innovative Artificial Intelligence Startups – Yahoo Finance

Posted: at 2:41 am

Fiddler AI honored for achievements in ML Model Performance Monitoring and Explainable AI to build trustworthy and reliable AI solutions

PALO ALTO, Calif., April 7, 2021 /PRNewswire/ -- CB Insights today named Fiddler AI to the fifth annual AI 100 ranking, showcasing the 100 most promising private artificial intelligence companies in the world.

"This is the fifth year CB Insights has recognized the most promising private artificial intelligence companies with the AI 100, and this is one of the most global groups we've ever seen. This year's cohort spans 18 industries, and is working on everything from climate risk to accelerating drug R&D," said CB Insights CEO Anand Sanwal. "Last year's AI 100 companies had a remarkable run after being named to the list, with more than 50% going on to raise additional financing (totaling $5.2B), including 16 $100 million+ mega-rounds. Many also went on to exit via M&A, SPAC or IPO. As industry after industry adopts AI, we expect this year's class will see similar levels of interest from investors, acquirers and customers."

"We're honored to be named to the AI 100 list and excited that our mission to build trust in AI is quickly becoming critical in today's world. Algorithms rule our lives - from news consumption to mortgage financing, our lives are driven by algorithms. Most algorithms are AI-based and increasingly black boxes. We cannot allow algorithms to operate with a lack of transparency. We need accountability to build trust between humans and AI. Fiddler's mission is to build trust with AI by continuously monitoring models and unlocking the AI black box with explainability," said CEO & Founder, Krishna Gade.

Through an evidence-based approach, the CB Insights research team selected the AI 100 from a pool of over 6,000 companies based on several factors including patent activity, investor quality, news sentiment analysis, proprietary Mosaic scores, market potential, partnerships, competitive landscape, team strength, and tech novelty. The Mosaic Score, based on CB Insights' algorithm, measures the overall health and growth potential of private companies to help predict a company's momentum.
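CB Insights does not publish the Mosaic algorithm, but the general shape of such a multi-factor score is easy to illustrate: normalize each signal and combine with weights. Everything in the sketch below, factor names and weights included, is invented.

```python
# An illustrative weighted multi-factor score in the spirit of a
# Mosaic-style rating. This is NOT CB Insights' proprietary algorithm;
# all signals and weights are hypothetical.

factors = {                 # hypothetical normalized signals for one company
    "patent_activity": 0.7,
    "investor_quality": 0.9,
    "news_sentiment": 0.6,
    "market_potential": 0.8,
}
weights = {                 # hypothetical importance of each signal
    "patent_activity": 0.2,
    "investor_quality": 0.3,
    "news_sentiment": 0.2,
    "market_potential": 0.3,
}

score = sum(factors[k] * weights[k] for k in factors)
print(round(score * 1000))  # scaled to a 0-1000 style score
```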


Fiddler's Model Performance Monitoring solution enables data science and AI/ML teams to validate, monitor, explain, and analyze their AI solutions to accelerate AI adoption, meet regulatory compliance and build trust with end-users. It provides customers with complete visibility into and understanding of their AI solutions. Fiddler has been recognized for its industry-leading capabilities and innovation: it was named a Technology Pioneer 2020 by the World Economic Forum, one of Forbes' companies to watch on its 2020 AI 50 list, and a 2019 Cool Vendor in Gartner's Enterprise AI Governance and Ethical Response report.

Quick facts on the 2021 AI 100:

Equity funding and deals: Since 2010, the AI 100 2021 cohort has raised over $11.7B in equity funding across 370+ deals from more than 700 investors.

12 unicorns: Companies with $1B+ valuations on the list span applications as varied as data annotation, cybersecurity, sales & CRM platforms, and enterprise search.

Geographic distribution: 64% of the selected companies are headquartered in the US. Eight of the winners are based in the UK, followed by six each in China and Israel, and five in Canada. Other countries represented in this year's list include Japan, Denmark, Czech Republic, France, Poland, Germany, and South Korea.

About CB Insights
CB Insights builds software that enables the world's best companies to discover, understand and make technology decisions with confidence. By marrying data, expert insights and work management tools, clients manage their end-to-end technology decision-making process on CB Insights. To learn more, please visit http://www.cbinsights.com.

Contact: CB Insights, awards@cbinsights.com

About Fiddler AI
Fiddler's mission is to enable businesses of all sizes to unlock the AI black box and deliver transparent AI experiences to end-users. We enable businesses to build, deploy, and maintain trustworthy AI solutions. Fiddler's next-generation ML Model Performance Management solution enables data science and technical teams to monitor, explain, and analyze their AI solutions, providing responsible and reliable experiences to business stakeholders and customers. Fiddler works with pioneering Fortune 500 companies as well as emerging tech companies. For more information please visit http://www.fiddler.ai or follow us on Twitter @fiddlerlabs and LinkedIn.

CONTACT: Fiddler AI, media@fiddler.ai


View original content: http://www.prnewswire.com/news-releases/fiddler-ai-named-to-the-2021-cb-insights-ai-100-list-of-most-innovative-artificial-intelligence-startups-301265389.html

SOURCE Fiddler AI

Go here to see the original:

Fiddler AI Named to the 2021 CB Insights AI 100 List of Most Innovative Artificial Intelligence Startups - Yahoo Finance

Posted in Ai | Comments Off on Fiddler AI Named to the 2021 CB Insights AI 100 List of Most Innovative Artificial Intelligence Startups – Yahoo Finance
