
Category Archives: Ai

Finnish project aims to boost adoption of AI-based solutions – Healthcare IT News

Posted: August 22, 2021 at 3:58 pm

A new project will tap into the potential of Finnish SMEs to grow their businesses through identifying and implementing artificial intelligence (AI) based solutions.

The AI Innovation Ecosystem for Competitiveness of SMEs (AI-TIE) project, coordinated by Haaga-Helia University of Applied Sciences, will focus on the health, social care, cleantech and wellbeing sectors.

It aims to help develop AI competencies and support collaborative networking between solution providers, RDI institutions, expert organisations and other key actors.

The Helsinki-Uusimaa Regional Council has awarded European regional development funding and state funding to create a bundle of services directed at SMEs to facilitate the planning, piloting, and adoption of AI-based solutions.

SMEs will be provided with training materials and web content on the business use of AI to help increase staff competency.

They will also be encouraged to develop digital and web-based solutions, in addition to physical products, to ensure business viability in crises, such as the COVID-19 pandemic.

The main partners in the project are Finland's Artificial Intelligence Accelerator FAIA, part of Technology Industries of Finland, and MyData Global, developer of the internationally renowned MyData model.

Other collaborators include Laurea University of Applied Sciences, the Helsinki Region Chamber of Commerce, West-Uusimaa Chamber of Commerce, East-Uusimaa Development Organisation Posintra, Regional Federation of Finnish Entrepreneurs of Uusimaa, NewCo and Health Capital Helsinki.

WHY IT MATTERS

According to Finland's artificial intelligence accelerator FAIA, Finland needs to focus more on cooperation in the field to strengthen its position at the top of the international AI arena.

The AI-TIE project will help develop a new collaboration arena which enables SMEs, large companies and corporations, higher education institutions, expert organisations and other stakeholders to collaborate and offer their products and services with the objective of increasing sales.

THE LARGER CONTEXT

Earlier this year, the World Health Organisation (WHO) released new guidance on ethics and governance of AI for health, following two years of consultations held by a panel of international experts appointed by WHO.

In the guidance, WHO warns against overestimating the benefits of AI for health at the expense of core investments and strategies to achieve universal health coverage. It also argues that ethics and human rights must be put at the heart of AI's design, deployment, and use if the technology is to improve the delivery of healthcare worldwide.

ON THE RECORD

Dr Anna Nikina-Ruohonen, AI-TIE project manager at Haaga-Helia University of Applied Sciences, said: "Industry-specific AI capabilities are needed, especially in SMEs, and wellbeing, social and health services is one of the main focus areas in AI-TIE."

"Finnish SMEs from this industry are supported in the development of their internal business processes, and product and service innovations, through AI. In the long run this work enables industry-specific AI expertise, sustainability and ecosystem development."

More here:

Finnish project aims to boost adoption of AI-based solutions - Healthcare IT News

Posted in Ai | Comments Off on Finnish project aims to boost adoption of AI-based solutions – Healthcare IT News

China’s Baidu launches second chip and a ‘robocar’ as it sets up future in AI and autonomous driving – CNBC

Posted: at 3:58 pm

Robin Li (R), CEO of Baidu, sits in the Chinese tech giant's new prototype "robocar", an autonomous vehicle, at the company's annual Baidu World conference on Wednesday, August 18, 2021. (Image: Baidu)

GUANGZHOU, China Chinese internet giant Baidu unveiled its second-generation artificial intelligence chip, its first "robocar" and a rebranded driverless taxi app, underscoring how these new areas of technology are key to the company's future growth.

The Beijing-headquartered firm, known as China's biggest search engine player, has focused on diversifying its business beyond advertising in the face of rising competition and a difficult advertising market in the last few years.

Robin Li, CEO of Baidu, has tried to convince investors the company's future lies in AI and related areas such as autonomous driving.

On Wednesday, at its annual Baidu World conference, the company launched Kunlun 2, its second-generation AI chip. The semiconductor is designed to help devices process huge amounts of data and boost computing power. Baidu says the chip can be used in areas such as autonomous driving and that it has entered mass production.

Baidu's first-generation Kunlun chip was launched in 2018. Earlier this year, Baidu raised money for its chip unit valuing it at $2 billion.

Baidu also took the wraps off a "robocar," an autonomous vehicle with doors that open up like wings and a big screen inside for entertainment. It is a prototype and the company gave no word on whether it would be mass-produced.

But the concept car highlights Baidu's ambitions in autonomous driving, which analysts predict could be a multibillion-dollar business for the Chinese tech giant.

Baidu has also been running so-called robotaxi services in some cities including Guangzhou and Beijing where users can hail an autonomous taxi via the company's Apollo Go app in a limited area. On Wednesday, Baidu rebranded that app to "Luobo Kuaipao" as it looks to roll out robotaxis on a mass scale.

Wei Dong, vice president of Baidu's intelligent driving group, told CNBC the company is aiming for mass public commercial availability in some cities within two years.

It's unclear how Baidu will price the robotaxi service.

In June, Baidu announced a partnership with state-owned automaker BAIC Group to build 1,000 driverless cars over the next three years and eventually commercialize a robotaxi service across China.

Baidu also announced four new pieces of hardware, including a smart screen and a TV equipped with Xiaodu, the company's AI voice assistant. Xiaodu is another growth initiative for the company.

Link:

China's Baidu launches second chip and a 'robocar' as it sets up future in AI and autonomous driving - CNBC

Posted in Ai | Comments Off on China’s Baidu launches second chip and a ‘robocar’ as it sets up future in AI and autonomous driving – CNBC

Tesla’s AI Day Event Did A Great Job Convincing Me They’re Wasting Everybody’s Time – Jalopnik

Posted: at 3:58 pm


Tesla's big AI Day event just happened, and I've already told you about the humanoid robot Elon Musk says Tesla will be developing. You'd think that would have been the most eye-roll-inducing thing to come out of the event, but, surprisingly, that's not the case. The part of the presentation that actually made me the most baffled was near the beginning: a straightforward demonstration of Tesla Full Self-Driving. I'll explain.

The part I'm talking about is a repeating loop of a sped-up daytime drive through a city environment using Tesla's FSD, a drive that contains a good amount of complex and varied traffic situations, road markings, maneuvering, pedestrians, other cars, all the good stuff.

The Tesla performs the driving pretty flawlessly. Here, watch for yourself:

Now, technically, there's a lot to be impressed by here: the car is doing an admirable job of navigating the environment. The more I watched it, though, the more I realized one very important point: this is a colossal waste of time.

Well, that's not entirely fair: it's a waste of time, talent, energy, and money.

I know that sounds harsh, and it's not really entirely fair, I know. A lot of this research and development is extremely important for the future of self-driving vehicles, but the current implementation (and, from what I can tell, the plan moving ahead) is still focusing on the wrong things.


Here's the root of the issue, and it's not a technical problem. It's the fundamental flaw of all these Level 2 driver-assist, full-attention-required systems: what problem are they actually solving?

That segment of video was kind of maddening to watch because that's an entirely mundane, unchallenging drive for any remotely decent, sober driver. I watched that car turn the wheel as the person in the driver's seat had their hand right there, wheel spinning through their loose fingers, feet inches from those pedals, while all of this extremely advanced technology was doing something that the driver was not only fully capable of doing on their own, but was in the exact right position and mental state to actually be doing.


What's being solved, here? The demonstration of FSD shown in the video is doing absolutely nothing the human driver couldn't do, and doesn't free the human to do anything else. Nothing's being gained!

It would be like if Tesla designed a humanoid dishwashing robot that worked fundamentally differently than the dishwashing robots many of us have tucked under our kitchen counters.

The Tesla Dishwasher would stand over the sink, like a human, washing dishes with human-like hands, but for safety reasons you would have to stand behind it, your hands lightly holding the robot's hands, like a pair of young lovers in their first apartment.


Normally, the robot does the job just fine, but there's a chance it could get confused and fling a dish at a wall or person, so for safety you need to be watching it, and have your hands on the robot's at all times.

If you dont, it beeps a warning, and then stops, mid-wash.

Would you want a dishwasher like that? You're not really washing the dishes yourself, sure, but you're also not not washing them, either. That's what FSD is.

Every time I saw the Tesla in that video make a gentle turn or come to a slow stop, all I could think is, buddy, just fucking drive your car! You're right there. Just drive!

The effort being expended to make FSD better at doing what it does is fine, but it's misguided. The place that effort needs to be expended for automated driving is in developing systems and procedures that allow the cars to safely get out of the way, without human intervention, when things go wrong.

Level 2 is a dead end. It's useless. Well, maybe not entirely: I suppose on some long highway trips or in stop-and-go, very slow traffic it can be a useful assist, but it would all be better if the weak link, the part that causes problems (demanding that a human be ready to take over at any moment), was eliminated.

Tesla (and everyone else in this space) should be focusing efforts on the two main areas that could actually be made better by these systems: long, boring highway drives, and stop-and-go traffic. These are the situations where humans are most likely to be bad at paying attention, make foolish mistakes, or be fatigued or distracted.


The type of driving shown in the FSD video here, daytime short-trip city driving, is likely the least useful application for self-driving.

If we're all collectively serious about wanting automated vehicles, the only sensible next step is to actually make them forgiving of human inattention, because that is the one thing you can guarantee will be a constant factor.

Level 5 drive-everywhere cars are a foolish goal. We don't need them, and the effort it would take to develop them is vast. What's needed are systems around Level 4, focusing on long highway trips and painful traffic jam situations, where the intervention of a human is never required.

This isn't an easy task. The eventual answer may require infrastructure changes or remote human intervention to pull off properly, and hardcore autonomy/AI fetishists find those solutions unsexy. But who gives a shit what they think?

The solution to eliminating the need for immediate driver handoffs and being able to get a disabled or confused AV out of traffic and danger may also require robust car-to-car communication and cooperation between carmakers, which is also a huge challenge. But it needs to happen before any meaningful acceptance of AVs can happen.

Here's the bottom line: if your AV only really works safely if there is someone in position to be potentially driving the whole time, it's not solving the real problem.

Now, if you want to argue that Tesla and other L2 systems offer a safety advantage (I'm not convinced they necessarily do, but whatever), then I think there's a way to leverage all of this impressive R&D and keep the safety benefits of these L2 systems. How? By doing it the opposite way we do it now.

What I mean is that there should be a role-reversal: if safety is the goal, then the human should be the one driving, with the AI watching, always alert, and ready to take over in an emergency.

In this inverse-L2 model, the car is still doing all the complex AI things it would be doing in a system like FSD, but it will only take over in situations where it sees that the human driver is not responding to a potential problem.

This guardian angel-type approach provides all of the safety advantages of what a good L2 system could provide, and, because it's a computer, will always be attentive and ready to take over if needed.

Driver monitoring systems won't be necessary, because the car won't drive unless the human is actually driving. And, if they get distracted or don't see a person or car, then the AI steps in to help.

All of this development can still be used! We just need to do it backwards, and treat the system as an advanced safety back-up driver system as opposed to a driver-doesnt-have-to-pay-so-much-attention system.

Andrej Karpathy and Tesla's AI team are incredibly smart and capable people. They've accomplished an incredible amount. Those powerful, pulsating, damp brains need to be directed to solving the problems that actually matter, not making the least-necessary type of automated driving better.

Once the handoff problem is solved, that will eliminate the need for flawed, trick-able driver monitoring systems, which will always be in an arms race with moron drivers who want to pretend they live in a different reality.

It's time to stop polishing the turd that is Level 2 driver-assist systems and actually put real effort into developing systems that stop putting humans in the ridiculous, dangerous space of both driving and not driving.

Until we get this solved, just drive your damn car.

More:

Tesla's AI Day Event Did A Great Job Convincing Me They're Wasting Everybody's Time - Jalopnik

Posted in Ai | Comments Off on Tesla’s AI Day Event Did A Great Job Convincing Me They’re Wasting Everybody’s Time – Jalopnik

Tesla AI Day Starts Today. Here’s What to Watch. – Barron’s

Posted: at 3:58 pm


Former defense secretary Donald Rumsfeld said there are known knowns (things people know), known unknowns (things people know they don't know), and unknown unknowns (things people don't realize they don't know). That pretty much sums up autonomous driving technology these days.

It isn't clear how long it will take the auto industry to deliver truly self-driving cars. Thursday evening, however, investors will get an education about what's state of the art when Tesla (ticker: TSLA) hosts its artificial intelligence day.

The event will likely be livestreamed on the company's website beginning around 8 p.m. Eastern Standard Time. The company's YouTube channel will likely be one place to watch the event. Other websites will carry the broadcast as well. The company didn't respond to a request for comment about the agenda for the event, but has said it will be available to watch.

Much of what will get talked about won't be a surprise, even if investors don't understand it all. Those are known unknowns.

Tesla should update investors about its driver-assistance feature dubbed "full self driving." What's more, the company will describe the benefit of vertical integration: Tesla makes the hardware (its own computers with its own microchips) and its software. Tesla might even give a more definitive timeline for when Level 4 autonomous vehicles will be ready.

Roth Capital analyst Craig Irwin doesn't believe Level 4 technology is on the horizon, though. He tells Barron's the computing power and camera resolution just isn't there yet. "Tesla will work hard to suggest tech leadership in AI for automotive," says Irwin. "Reality will probably be much less exciting than their claims."

Irwin rates Tesla shares Hold. His price target is just $150 a share.

The car industry essentially defines five levels of autonomous driving. Level 1 is nothing more than cruise control. Level 2 systems are available on cars today and combine features such as adaptive cruise and lane-keeping assistance, enabling the car to do a lot on its own. Drivers, however, still need to pay attention 100% of the time with Level 2 systems.

Level 3 systems would allow drivers to stop paying attention part of the time. Level 4 would let them stop paying attention most of the time. And Level 5 means the car does everything, always. "Level 5 autonomy isn't an easy endeavor," says Global X analyst Pedro Palandrani. "There are so many unique cases for technology to tackle, like in bad weather or dirt roads. But Level 4 is enough to change the world," he added. He is more optimistic than Irwin about the timing for Level 4 systems and hopes Tesla provides more timing detail at its event.

Beyond a technology rundown and Level 4 timing, the company might have some surprises up its sleeve for investors. Palandrani has two ideas.

For starters, Tesla might indicate it's willing to sell its hardware and software to other car companies. That would give Tesla other, unexpected sources of income. Tesla already offers its full self driving feature as a monthly subscription to owners of its cars. That's new for the car industry and opens up a source of recurring revenue for anyone with the requisite technology. Selling hardware and software to other car companies, however, would be new, and surprising, for investors.

Tesla might also talk about its advancements in robotics. CEO Elon Musk has talked often in the past about the difficulty of making "the machine that makes the machine." Some of Tesla's AI efforts might also be targeted at building, and not just driving, vehicles. "We're just making a crazy amount of machinery internally," said Musk on the company's second-quarter conference call. "This is ... not well understood."

Those are two items that can surprise. Whether they, or other tidbits, will move the stock is something else entirely.

Tesla stock dropped about 7% over Monday and Tuesday, partly because NHTSA disclosed it was looking into accidents involving Tesla's driver-assistance features. Tesla will surely stress the safety benefits of driver-assistance features on Thursday; whether it can shake off that bit of bad news, though, is harder to tell.

"Thursday becomes a much more important event in light of this week's [NHTSA] probe," says Wedbush analyst Dan Ives. "This week has been another tough week for Tesla [stock] and the Street needs some good news heading into this AI event."

Ives rates Tesla shares Buy and has a $1,000 price target for the stock. Tesla's autonomous driving leadership is part of his bullish take on shares.

If history is any guide, investors should expect volatility. Tesla stock dropped 10% the day following its battery technology event in September 2020. It took shares about seven trading days to recover, and Tesla stock gained about 86% from the battery event to year-end.

Tesla stock is down about 6% year to date, trailing the comparable gains of about 18% for the S&P 500 and 15% for the Dow Jones Industrial Average. Tesla stock hasn't moved much, in absolute terms, since March. Shares were in the high $600s back then. They closed down 3% at $665.71 on Tuesday, but are up 1.3% at $674.19 in premarket trading Wednesday.

Write to allen.root@dowjones.com

Originally posted here:

Tesla AI Day Starts Today. Here's What to Watch. - Barron's

Posted in Ai | Comments Off on Tesla AI Day Starts Today. Here’s What to Watch. – Barron’s

Five ways that Open Source Software shapes AI policy – Brookings Institution

Posted: at 3:58 pm

Open-source software (OSS), which is free to access, use, and change without restrictions, plays a central role in the development and use of artificial intelligence (AI). An AI algorithm can be thought of as a set of instructions, that is, what calculations must be done and in what order; developers then write software which implements these conceptual instructions as actual code. If that software is subsequently published in an open-source manner, where the underlying code is publicly available for anyone to use and modify, any data scientist can quickly use that algorithm with little effort. There are thousands of implementations of AI algorithms that make using AI easier in this way, as well as a critical family of emerging tools that enable more ethical AI. Simultaneously, there are a dwindling number of OSS tools in the especially important subfield of deep learning, leading to the enhanced market influence of the companies that develop that OSS, Facebook and Google. Few AI governance documents focus sufficiently on the role of OSS, which is an unfortunate oversight, as OSS quietly affects nearly every issue in AI policy. From research to ethics, and from competition to innovation, open-source code is playing a central role in AI and deserves more attention from policymakers.

OSS enables and increases AI adoption by reducing the level of mathematical and technical knowledge necessary to use AI. Writing the complex math of algorithms into code is difficult and time-consuming, which means any existing open-source alternative can be a huge benefit for data scientists. OSS benefits from both a collaborative and competitive environment, in that developers work together to find bugs just as often as they compete to write the best version of an algorithm. This frequently results in more accessible, robust, and high-quality code relative to what an average data scientist (often more of a data explorer and pragmatic problem-solver than pure mathematician) might develop. This means that well-written open-source AI code significantly expands the capacity of the average data scientist, letting them use more modern machine learning algorithms and functionality. Thus, while much attention has been paid to training and retaining AI talent, making AI easier to use, as OSS code does, may have a similarly significant impact in enabling economic growth from AI.
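
As a minimal sketch of that point (illustrative code, not from the article): with the open-source scikit-learn library, fitting a reasonably modern model takes only a few lines, and the data scientist never has to implement the underlying math. The dataset and algorithm below are arbitrary choices made for the example.

# Minimal sketch: fitting an off-the-shelf model with the open-source scikit-learn library.
# The dataset (iris) and the algorithm (random forest) are arbitrary, illustrative choices.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# The OSS library supplies the algorithm; the data scientist only configures and calls it.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))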

Open-source AI tools can also enable the broader and better use of ethical AI. Open-source tools like IBM's AI Fairness 360, Microsoft's Fairlearn, and the University of Chicago's Aequitas ease technical barriers to fighting AI bias. There is also OSS software that makes it easier for data scientists to interrogate their models, such as IBM's AI Explainability 360 or Chris Molnar's interpretable machine learning tool and book. These tools can help time-constrained data scientists who want to build more responsible AI systems, but are under pressure to finish projects and deliver for clients. While more government oversight of AI is certainly necessary, policymakers should also more frequently consider investing in open-source ethical AI software as an alternative lever to improve AI's role in society. The National Science Foundation is already funding research into AI fairness, but grant-making agencies and foundations should consider OSS as an integral component of ethical AI, and further fund its development and adoption.
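
To give a concrete, hedged sense of how such tools lower the barrier to bias auditing (a sketch, not code from the article), the snippet below uses Fairlearn, one of the open-source packages named above, to compare a toy model's selection rates across a sensitive attribute. The data and the choice of demographic parity as the metric are assumptions made for illustration only.

# Sketch: auditing group disparities with the open-source Fairlearn package.
# The labels, predictions, and sensitive attribute below are made up for illustration.
import numpy as np
from fairlearn.metrics import MetricFrame, demographic_parity_difference, selection_rate

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])                # hypothetical model outputs
group = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])  # hypothetical sensitive attribute

# Selection rate per group, plus a single disparity number (0 would mean parity).
mf = MetricFrame(metrics=selection_rate, y_true=y_true, y_pred=y_pred, sensitive_features=group)
print(mf.by_group)
print("demographic parity difference:",
      demographic_parity_difference(y_true, y_pred, sensitive_features=group))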

In 2007, a group of researchers argued that the lack of openly available algorithmic implementations is a major obstacle to scientific progress in a paper entitled "The Need for Open Source Software in Machine Learning." It's hard to imagine this problem today, as there is now a plethora of OSS AI tools for scientific discovery. As just one example, the open-source AI software Keras is being used to identify subcomponents of mRNA molecules and build neural interfaces to better help blind people see. OSS software also makes research easier to reproduce, enabling scientists to check and confirm one another's results. Even small changes in how an AI algorithm was implemented can lead to very different results; using shared OSS can mitigate this source of uncertainty. This makes it easier for scientists to critically evaluate the results of their colleagues' research, a common challenge in the many disciplines facing an ongoing replication crisis.

While OSS code is far more common today, there are still efforts to raise the percentage of academic papers which publicly release their code, currently around 50 to 70 percent at major machine learning conferences. Policymakers also have a role in supporting OSS code in the sciences, such as by encouraging federally funded AI research projects to publicly release the resulting code. Grant-making agencies might also consider funding the ongoing maintenance of OSS AI tools, which is often a challenge for critical software. The Chan Zuckerberg Initiative, which funds critical OSS projects, writes that OSS is crucial to modern scientific research, yet even the most widely used research software lacks dedicated funding.

OSS has significant ramifications for competition policy. On one hand, the public release of machine learning code broadens and better enables its use. In many industries, this will enable more AI adoption with less AI talent, likely a net good for competition. However, for Google and Facebook, the open sourcing of their deep learning tools (Tensorflow and PyTorch, respectively) may further entrench them in their already fortified positions. Almost all the developers of Tensorflow and PyTorch are employed by Google and Facebook, suggesting that the companies are not relinquishing much control. While these tools are certainly more accessible to the public, the oft-stated goal of democratizing technology through OSS is, in this case, euphemistic.

Tensorflow and PyTorch have become the most common deep learning tools in both industry and academia, leading to great benefits for their parent companies. Google and Facebook benefit more immediately from research conducted with their tools because there is no need to translate academic discoveries into a different language or framework. Further, their dominance creates a pipeline of data scientists and machine learning engineers trained in their systems and helps position them as the cutting-edge companies to work for. All told, the benefits to Google and Facebook of controlling OSS deep learning are significant and may persist far into the future. This should be accounted for in any discussions of technology sector competition.

OSS AI also has important implications for standards bodies, such as IEEE, ISO/JTC, and CEN-CENELEC, which seek to influence the industry and politics of AI. In other industries, standards bodies often add value by disseminating best practices and enabling interoperable technology. However, in AI, the diversified use of operating systems, programming languages, and tools means that interoperability challenges have already received substantial attention. Further, the AI practitioner community is somewhat informal, with many practices and standards disseminated through Twitter, blog posts, and OSS documentation. The dominance of Tensorflow and PyTorch in the deep learning subfield means that Google and Facebook have outsized influence, which they may be reluctant to cede to the consensus-driven standards bodies. So far, OSS developers have not been extensively engaged in the work of the international standards bodies, and this may significantly inhibit those bodies' influence on the AI field.

From research to ethics, and from competition to innovation, open-source code is playing a central role in the developing use of artificial intelligence. This makes the consistent absence of open-source developers from the policy discussions quite notable, since they wield meaningful influence over, and highly specific knowledge of, the direction of AI. Involving more OSS AI developers can help AI policymakers more routinely consider the influence of OSS in the pursuit of the just and equitable development of AI.

The National Science Foundation, Facebook, Google, Microsoft, and IBM are donors to the Brookings Institution. The findings, interpretations, and conclusions posted in this piece are solely those of the authors and not influenced by any donation.

Read the original here:

Five ways that Open Source Software shapes AI policy - Brookings Institution

Posted in Ai | Comments Off on Five ways that Open Source Software shapes AI policy – Brookings Institution

A man spent a year in jail on a murder charge that hinged on disputed AI evidence. Now the case has been dropped – The Register

Posted: at 3:58 pm

In brief: The case against a man accused of murder has been thrown out by a judge after prosecutors withdrew disputed evidence of an AI-identified gunshot sound.

Michael Williams, 65, who denied any wrongdoing, sat in jail for 11 months awaiting trial for allegedly killing Safarian Herring, 25.

It's said that in May last year, Williams was driving through Chicago one night, hoping to buy some cigarettes. Herring waved him down for a ride, and Williams, recognizing the younger man from the neighborhood, let him into his car. Soon after, another vehicle pulled up alongside, and someone in a passenger seat took out a gun and shot Herring in the head, Williams told police. Herring's mother said her son, an aspiring chef, had been shot at two weeks earlier at a bus stop.

Herring, who was taken to hospital by Williams, died from the gunshot wound, and Williams ended up being charged with his murder. A key piece of evidence against him came from ShotSpotter, a company that operates microphones spread across US cities including Chicago that, with the aid of machine-learning algorithms, detect and identify gunshot sounds to immediately alert the cops.

Prosecutors said ShotSpotter picked up a gunshot sound where Williams was seen on surveillance camera footage in his car, putting it all forward as proof that Williams shot Herring right there and then. Police did not cite a motive, had no eyewitnesses, and did not find the gun used in the attack. Williams did have a criminal history, though, having served time for attempted murder, robbery, and discharging a firearm when he was younger, and said he had turned his life around significantly since. He was grilled by detectives, and booked.

Crucially, Williams' lawyers, public defenders Lisa Boughton and Brendan Max, said records showed that ShotSpotter actually initially picked up what sounded like a firework a mile away, and this was later reclassified by ShotSpotter staff to be a gunshot at the intersection where and when Williams was seen on camera. ShotSpotter strongly insisted it had not improperly altered any data to favor the police's case, and said that regardless of the initial real-time alert, its evidence of the gunshot was the result of follow-up forensic analysis, which was submitted to the courts.

After Williams' lawyers asked the judge in the case to carry out an inquiry, the prosecution last month withdrew the ShotSpotter report and asked for the case to be dismissed on the basis of insufficient evidence, which the judge agreed to. Williams is now a free man again.

"I kept trying to figure out, how can they get away with using the technology like that against me," Williams told the Associated Press for an in-depth investigation into the case published this week. "That's not fair."

Startup Kapwing, which built a web application that uses computer-vision algorithms to generate pictures for people, is disappointed netizens used the code to produce NSFW material.

The software employs a combination of VQGAN and CLIP, made by researchers at the University of Heidelberg and OpenAI, respectively, to turn text prompts into images. This approach was popularised by artist Katherine Crowson in a Google Colab notebook; there's a Twitter account dedicated to showing off this type of computer art.

Kapwing had hoped its implementation of VQGAN and CLIP on the web would be used to make art from users' requests; instead, we're told, it was used to make filth.

"Since I work at Kapwing, an online video editor, making an AI art and video generator seemed like a project that would be right up our alley," said Eric Lu, co-founder and CTO at Kapwing.

"The problem? When we made it possible for anyone to generate art with artificial intelligence, barely anyone used it to make actual art. Instead, our AI model was forced to make videos for random inputs, trolling queries, and NSFW intents."

Submitted prompts ranged from "naked woman" to the downright bizarre "thong bikini covered in chocolate" or "gay unicorn at a funeral." The funny thing is, the images made by the AI aren't even that realistic or sexually explicit. The original article includes an example output for "naked woman."


"Is it that the internet just craves NSFW content so much that they will type it anywhere? Or do people have a propensity to try to abuse AI systems?" Lu continued. "Either way, the content outputted must have [been] disappointing to these users, as most of the representations outputted by our models were abstract."

Intel is shuttering its RealSense computer-vision product wing. The business unit's chips, cameras, LiDAR, hardware modules, and software were aimed at things like digital signage, 3D scanning, robotics, and facial-authentication systems.

Now the plug's been pulled, and RealSense boss Sagi Ben Moshe is departing Intel after a decade at the semiconductor goliath.

"We are winding down our RealSense business and transitioning our computer vision talent, technology and products to focus on advancing innovative technologies that better support our core businesses and IDM 2.0 strategy," an Intel spokesperson told CRN.

All RealSense products will be discontinued, though it appears its stereo cameras for depth perception will stay, to some degree, according to IEEE's Spectrum.

More here:

A man spent a year in jail on a murder charge that hinged on disputed AI evidence. Now the case has been dropped - The Register

Posted in Ai | Comments Off on A man spent a year in jail on a murder charge that hinged on disputed AI evidence. Now the case has been dropped – The Register

An AI expert explains why it’s hard to give computers something you take for granted: Common sense – The Conversation US

Posted: at 3:58 pm

Imagine you're having friends over for lunch and plan to order a pepperoni pizza. You recall Amy mentioning that Susie had stopped eating meat. You try calling Susie, but when she doesn't pick up, you decide to play it safe and just order a margherita pizza instead.

People take for granted the ability to deal with situations like these on a regular basis. In reality, in accomplishing these feats, humans are relying not on one ability but on a powerful set of universal abilities known as common sense.

As an artificial intelligence researcher, my work is part of a broad effort to give computers a semblance of common sense. It's an extremely challenging effort.

Despite being both universal and essential to how humans understand the world around them and learn, common sense has defied a single precise definition. G. K. Chesterton, an English philosopher and theologian, famously wrote at the turn of the 20th century that common sense is "a wild thing, savage, and beyond rules." Modern definitions today agree that, at minimum, it is a natural, rather than formally taught, human ability that allows people to navigate daily life.

Common sense is unusually broad and includes not only social abilities, like managing expectations and reasoning about other people's emotions, but also a naive sense of physics, such as knowing that a heavy rock cannot be safely placed on a flimsy plastic table. Naive, because people know such things despite not consciously working through physics equations.

Common sense also includes background knowledge of abstract notions, such as time, space and events. This knowledge allows people to plan, estimate and organize without having to be too exact.

Intriguingly, common sense has been an important challenge at the frontier of AI since the earliest days of the field in the 1950s. Despite enormous advances in AI, especially in game-playing and computer vision, machine common sense with the richness of human common sense remains a distant possibility. This may be why AI efforts designed for complex, real-world problems with many intertwining parts, such as diagnosing and recommending treatments for COVID-19 patients, sometimes fall flat.

Modern AI is designed to tackle highly specific problems, in contrast to common sense, which is vague and can't be defined by a set of rules. Even the latest models make absurd errors at times, suggesting that something fundamental is missing in the AI's world model. For example, given the following text:

"You poured yourself a glass of cranberry, but then absentmindedly, you poured about a teaspoon of grape juice into it. It looks OK. You try sniffing it, but you have a bad cold, so you can't smell anything. You are very thirsty. So you"

the highly touted AI text generator GPT-3 supplied

"drink it. You are now dead."

Recent ambitious efforts have recognized machine common sense as a moonshot AI problem of our times, one requiring concerted collaborations across institutions over many years. A notable example is the four-year Machine Common Sense program launched in 2019 by the U.S. Defense Advanced Research Projects Agency to accelerate research in the field after the agency released a paper outlining the problem and the state of research in the field.

The Machine Common Sense program funds many current research efforts in machine common sense, including our own, Multi-modal Open World Grounded Learning and Inference (MOWGLI). MOWGLI is a collaboration between our research group at the University of Southern California and AI researchers from the Massachusetts Institute of Technology, University of California at Irvine, Stanford University and Rensselaer Polytechnic Institute. The project aims to build a computer system that can answer a wide range of commonsense questions.

One reason to be optimistic about finally cracking machine common sense is the recent development of a type of advanced deep learning AI called transformers. Transformers are able to model natural language in a powerful way and, with some adjustments, are able to answer simple commonsense questions. Commonsense question answering is an essential first step for building chatbots that can converse in a human-like way.

In the last couple of years, a prolific body of research has been published on transformers, with direct applications to commonsense reasoning. This rapid progress as a community has forced researchers in the field to face two related questions at the edge of science and philosophy: Just what is common sense? And how can we be sure an AI has common sense or not?

To answer the first question, researchers divide common sense into different categories, including commonsense sociology, psychology and background knowledge. The authors of a recent book argue that researchers can go much further by dividing these categories into 48 fine-grained areas, such as planning, threat detection and emotions.

However, it is not always clear how cleanly these areas can be separated. In our recent paper, experiments suggested that a clear answer to the first question can be problematic. Even expert human annotators (people who analyze text and categorize its components) within our group disagreed on which aspects of common sense applied to a specific sentence. The annotators agreed on relatively concrete categories like time and space but disagreed on more abstract concepts.

Even if you accept that some overlap and ambiguity in theories of common sense is inevitable, can researchers ever really be sure that an AI has common sense? We often ask machines questions to evaluate their common sense, but humans navigate daily life in far more interesting ways. People employ a range of skills, honed by evolution, including the ability to recognize basic cause and effect, creative problem solving, estimations, planning and essential social skills, such as conversation and negotiation. As long and incomplete as this list might be, an AI should achieve no less before its creators can declare victory in machine commonsense research.

It's already becoming painfully clear that even research in transformers is yielding diminishing returns. Transformers are getting larger and more power hungry. A recent transformer developed by Chinese search engine giant Baidu has several billion parameters. It takes an enormous amount of data to effectively train. Yet, it has so far proved unable to grasp the nuances of human common sense.

Even deep learning pioneers seem to think that new fundamental research may be needed before today's neural networks are able to make such a leap. Depending on how successful this new line of research is, there's no telling whether machine common sense is five years away, or 50.


Visit link:

An AI expert explains why it's hard to give computers something you take for granted: Common sense - The Conversation US

Posted in Ai | Comments Off on An AI expert explains why it’s hard to give computers something you take for granted: Common sense – The Conversation US

Honda 2040 NIKO comes with a tiny Ai assistant, taking the car from a vehicle to your friend! – Yanko Design

Posted: at 3:58 pm

A Honda autonomous vehicle bot with a compatible AI assistant, conceptualized for the year 2040 where companionship with robotic machines is going to be a common affair.

Just imagine how overurbanization by the year 2040 will change the complexion of living. Due to increased expenses and the dreams of Generation Z, the number of single households will increase exponentially. Solitary life will be more common, and interaction with artificial intelligence will be the solution to the widespread loneliness. Jack Junseok Lee's 2040 Honda NIKO bot is that very friend; in Toy Story words, you've got a friend in him!

This smart companion has a frontal face with larger proportions to emphasize the living character, making the interaction very lively. The animated design of the fenders with covered wheels looks like the legs of a pet animal. This creates an illusion of a seemingly moving gesture like that of a living being. Most of all, NIKO has a proud stance like the Lion King while radiating the cute character of a playful puppy. According to Jack, the design philosophy of the bot is centered around creating a very lively object.

The bot inside the movable vehicle will understand the owner's emotion and current state of mind to provide empathy to them. It'll laugh or cry with them, hear their problems and give unique solutions or thoughts. It also has storage on both sides to haul any groceries or other things if your own hands are full. When in an open position, these doors act as side tables to keep things.

This is combined with a bigger, autonomous vehicle-like bot that serves as a compact commuter for short city stints to get essentials from the nearby store. Both these AI machines in a way provide the user with genuine support, just like a human would do.

Designer: Jack Junseok Lee


More here:

Honda 2040 NIKO comes with a tiny Ai assistant, taking the car from a vehicle to your friend! - Yanko Design

Posted in Ai | Comments Off on Honda 2040 NIKO comes with a tiny Ai assistant, taking the car from a vehicle to your friend! – Yanko Design

Stanford AI experts warn of biases in GPT-3 and BERT models – Fast Company

Posted: at 3:58 pm

A multidisciplinary group of Stanford University professors and students wants to start a serious discussion about the increasing use of large, frighteningly smart, foundation AI models such as OpenAI's GPT-3 (Generative Pre-trained Transformer 3) natural language model.

GPT-3 is foundational because it was developed using huge quantities of training data and computer power to reach state-of-the-art, general-purpose performance. Developers, not wanting to reinvent the wheel, are using it as the basis for their software to tackle specific tasks.

But foundation models have some very real downsides, explains Stanford computer science professor Percy Liang. "They create a single point of failure, so any defects, any biases which these models have, any security vulnerabilities ... are just blindly inherited by all the downstream tasks," he says.

Liang leads a new group assembled by Stanford's Institute for Human-Centered Artificial Intelligence (HAI) called the Center for Research on Foundation Models (CRFM). The group is studying the impacts and implications of foundation models, and it's inviting the tech companies developing them to come to the table and participate.

The profit motive encourages companies to punch the gas on emerging tech instead of braking for reflection and study, says Fei-Fei Li, who was the director of Stanford's AI Lab from 2013 to 2018 and now codirects HAI.

"Industry is working fast and hard on this, but we cannot let them be the only people who are working on this model, for multiple reasons," Li says. "A lot of innovation that could come out of these models still, I firmly believe, will come out of the research environment where revenue is not the goal."

Part of the reason for all the concern is that foundation models end up touching the experience of so many people. In 2019, researchers at Google built the transformational BERT (Bidirectional Encoder Representations from Transformers) natural language model, which now plays a role in nearly all of Google's search functions. Other companies took BERT and built new models on top of it. Researchers at Facebook, for example, used BERT as the basis for an even larger natural language model, called RoBERTa (Robustly Optimized BERT Pretraining Approach), which now underpins many of Facebook's content moderation models.

"Now almost all NLP (natural language processing) models are built on top of BERT, or maybe one of a few of these foundation models," Liang says. "So there's this incredible homogenization that's happening."
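
As a rough sketch of what "building on top of BERT" typically means in practice (an editorial illustration, not code from the article), the snippet below uses the open-source Hugging Face Transformers library to load a publicly available pretrained BERT checkpoint and reuse it as a feature extractor; the checkpoint name and the downstream use are assumptions for the example.

# Sketch: reusing a pretrained BERT checkpoint rather than training a language model from scratch.
# Assumes the open-source Hugging Face Transformers library and the public "bert-base-uncased"
# checkpoint; using the output as a sentence embedding is an illustrative downstream choice.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

inputs = tokenizer("Foundation models get reused downstream.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# The first-token ([CLS]) vector is a common, if crude, sentence representation that a
# downstream classifier could then be trained on.
sentence_embedding = outputs.last_hidden_state[:, 0, :]
print(sentence_embedding.shape)  # torch.Size([1, 768])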

In June 2020 OpenAI began making its GPT-3 natural language model available via a commercial API to other companies that then built specialized applications on top of it. OpenAI has now built a new model, Codex, that creates computer code from English text.


"You train a huge model and then you go in and you discover what it can do, discover what has emerged from the process," says Liang. That's a fascinating thing for scientists to study, he adds, but sending the models into production when they're not fully understood is dangerous.

"We don't even know what they're capable of doing, let alone when they fail," he says. "Now things get really interesting, because we're building our entire AI infrastructure on these models."

If biases are baked into models such as GPT-3 and BERT, they may infect applications built on top of them. For example, a recent study by Stanford HAI researchers involved teaching GPT-3 to compose stories beginning with the phrase "two Muslims walk into a ...". Sixty-six percent of the text the model provided involved violent themes, a far higher percentage than for other groups. Other researchers have uncovered other instances of deep-rooted biases in foundation models: In 2019, for instance, BERT was shown to associate terms such as "programmer" with men over women.

To be sure, companies employ ethics teams and carefully select training data that will not introduce biases into their models. And some take steps to prevent their foundation models from providing the basis for unethical applications. OpenAI, for example, pledges to cut off API access to any application used for harassment, spam, radicalization, or astroturfing.

Still, private companies won't necessarily comply with a set of industry standards for ensuring unbiased models. And there is no regulatory body at the state or federal level that's ready with policies that might keep large AI models from impacting consumers, especially those in minority or underrepresented groups, in negative ways. Li says lawmakers have attended past HAI workshops, hoping to gain insights on what policies might look like.

She also stresses that it's the university setting that can provide all the necessary perspectives for defining policies and standards.

"We not only have deep experts from philosophy, political science, and history departments, we also have a medical school, business school, and law school, and we also have experts in application areas that come to work on these critical technologies with us," Li says. "And with all due respect to industry, they cannot have the law school and medical school on their campus." (Li worked at Google as chief scientist for AI and machine learning from 2017 to 2018.)

One of the first products of CRFM's work is a 200-page research paper on foundation models. The paper, which is being published today, was cowritten by more than 100 authors of different professional disciplines. It explores 26 aspects of foundation models, including the legal ramifications, environmental and economic impacts, and ethical issues.

CRFM will also hold a (virtual) workshop later this month at which its members will discuss foundation models with visiting academics and people from the tech industry.

See original here:

Stanford AI experts warn of biases in GPT-3 and BERT models - Fast Company

Posted in Ai | Comments Off on Stanford AI experts warn of biases in GPT-3 and BERT models – Fast Company

How Moderna, Home Depot, and others are succeeding with AI – MIT Sloan News

Posted: at 3:58 pm


When pharmaceutical company Moderna announced the first clinical trial of a COVID-19 vaccine, it was a proud moment but not a surprising one for Dave Johnson, the company's chief data and artificial intelligence officer.

When Johnson joined the company in 2014, he helped put in place automated processes and AI algorithms to increase the amount of small-scale messenger RNA (mRNA) needed to run clinical experiments. This groundwork contributed to Moderna releasing one of the first COVID-19 vaccines (using mRNA) even as the world had only started to understand the virus threat.

"The whole COVID vaccine development, we're immensely proud of the work that we've done there, and we're immensely proud of the superhuman effort that our people went through to bring it to market so quickly," Johnson said during a bonus episode of the MIT Sloan Management Review podcast Me, Myself, and AI.

"But a lot of it was built on this infrastructure that we had put in place where we didn't build algorithms specifically for COVID; we just put them through the same pipeline of activity that we've been doing," Johnson said. "We just turned it as fast as we could."

Successfully using AI in business is at the heart of the podcast, which recently finished its second season. The podcast is hosted by Sam Ransbotham, professor of information systems at Boston College, and Shervin Khodabandeh, senior partner with Boston Consulting Group and co-lead of its AI practice in North America. The series features leaders who are achieving big wins with AI.

Here's a look at some of the highlights from this season.

If you're frantically searching the Home Depot website for a way to patch a hole in your wall, chances are you're not thinking of the people who've generated the recommendation for the correct brackets to use with your new wall-mounted mirror or the project guide for the repairs you're doing.

But Huiming Qu, the Home Depot's senior director of data science and machine learning products, marketing, and online, is not only thinking about those data scientists and engineers, she's leading them, and doing it in a way she hopes will leave both her team and customers happy. To do this, Qu's team pulls as much data as it can from customer visits to the site, such as what was in their carts and what their prior searches were.

Qu's team then weaves that information into an extremely, extremely light test version of an algorithm to cut down on development time and to figure out if that change will be possible within Home Depot's digital infrastructure.

"It takes a cross-functional team iteratively to move a lot faster, to break down that bigger problem, bigger goals, to many smaller ones that we can achieve very quickly," Qu said.

When it comes to AI and machine learning at Google, the tech company applies three principles to innovation: focus on the user, rapidly prototype, and think in 10x.

"We want to make sure we're solving for a problem that also has the scale that will be worth it and really advances whatever we're trying to do, not in a small way, but in a really big way," said Will Grannis, managing director of Google Cloud's Office of the CTO.

But before Google puts too many resources behind these 10x or moonshot solutions, engineers are encouraged to take on "roof shot" projects.

Rather than aiming for the sky right out of the gate, engineers only have to get an idea to the roof, Grannis said. A moonshot is often the product of a series of smaller roof shots, he said, and this approach allows him to see who is willing to put in the effort to see something through from start to finish.

"If people don't believe in the end state, the big transformation, they're usually much less likely to journey across those roof shots and to keep going when things get hard," Grannis said. "My job is to create an environment where people feel empowered, encouraged, and excited to try and [I] try to demotivate them as little as possible, because they'll find their way to the roof shot, and then the next one, and then the next one, and then pretty soon you're three years in, and I couldn't stop a project if I wanted to."

JoAnn Stonier, chief data officer at Mastercard, is using AI and machine learning to prevent and uncover bias, even though most datasets will have some bias in them to begin with.

And that's OK. The 1910 U.S. voter rolls, for example, are a dataset, Stonier said. They could be used to study something like the voting habits of early 20th-century white men. But you would also need to acknowledge that women and people of color are missing from that dataset, so your study wouldn't reflect the entire U.S. population in 1910.

"The problem is, if you don't remember that, or you're not mindful of that, then you have an inquiry that's going to learn off of a dataset that is missing characteristics that [are] going to be important to whatever that other inquiry is," Stonier said. "Those are some of the ways that I think we can actually begin to design a better future, but it means really being very mindful of what's inherent in the dataset, what's there, what's missing but also can be imputed."

The complete two seasons of Me, Myself, and AI can be listened to on Apple Podcasts and Spotify. Transcripts of the Me, Myself, and AI podcast are also available.

Excerpt from:

How Moderna, Home Depot, and others are succeeding with AI - MIT Sloan News

Posted in Ai | Comments Off on How Moderna, Home Depot, and others are succeeding with AI – MIT Sloan News
