Emory students advance artificial intelligence with a bot that aims to serve humanity – SaportaReport

A team of six Emory computer science students is helping to usher in a new era in artificial intelligence. They've developed a chatbot capable of making logical inferences that aims to hold deeper, more nuanced conversations with humans than have previously been possible. They've christened their chatbot Emora, because it sounds like a feminine version of Emory and is similar to a Hebrew word for an eloquent sage.

The team is now refining their new approach to conversational AI: a logic-based framework for dialogue management that can be scaled to conduct real-life conversations. Their longer-term goal is to use Emora to assist first-year college students, helping them to navigate a new way of life, deal with day-to-day issues and guiding them to proper human contacts and other resources when needed.

Eventually, they hope to further refine their chatbot, developed during the era of COVID-19 with the philosophy "Emora cares for you," to assist people dealing with social isolation and other issues, including anxiety and depression.

The Emory team is headed by graduate students Sarah Finch and James Finch, along with faculty advisor Jinho Choi, associate professor in the Department of Computer Science. The team also includes graduate student Han He and undergraduates Sophy Huang, Daniil Huryn and Mack Hutsell. All the students are members of Choi's Natural Language Processing Research Laboratory.

"We're taking advantage of established technology while introducing a new approach in how we combine and execute dialogue management, so a computer can make logical inferences while conversing with a human," Sarah Finch says.

"We believe that Emora represents a groundbreaking moment for conversational artificial intelligence," Choi adds. "The experience that users have with our chatbot will be largely different than chatbots based on traditional, state-machine approaches to AI."

Last year, Choi and Sarah and James Finch headed a team of 14 Emory students that took first place in Amazon's Alexa Prize Socialbot Grand Challenge, winning $500,000 for their Emora chatbot. The annual Alexa Prize challenges university students to make breakthroughs in the design of chatbots, also known as socialbots: software apps that simplify interactions between humans and computers by allowing them to talk with one another.

This year, they developed a completely new version of Emora with the new team of six students.

They made the bold decision to start from scratch instead of building on the state-machine platform they developed in 2020 for Emora. "We realized there was an upper limit to how far we could push the quality of the system we developed last year," Sarah Finch says. "We wanted to do something much more advanced, with the potential to transform the field of artificial intelligence."

They based the current Emora on three frameworks to advance conversational AI: core natural language processing technology, computational symbolic structures and probabilistic reasoning for dialogue management.

They worked around the clock, making it into the Alexa Prize finals in June. They did not complete most of the new system, however, until just a few days before they had to submit Emora to the judges for the final round of the competition.

That gave the team no time to put finishing touches on the new system, work out the bugs and flesh out the range of topics that it could deeply engage in with a human. While they did not win this year's Alexa Prize, the strategy led them to develop a system that holds more potential to open new doors of possibilities for AI.

In the run-up to the finals, users of Amazon's virtual assistant, known as Alexa, volunteered to test out the competing chatbots, which were not identified by their names or universities. A chatbot's success was gauged by user ratings.

"The competition is extremely valuable because it gave us access to a high volume of people talking to our bot from all over the world," James Finch says. "When we wanted to try something new, we didn't have to wait long to see whether it worked. We immediately got this deluge of feedback so that we could make any needed adjustments. One of the biggest things we learned is that what people really want to talk about is their personal experiences."

Sarah and James Finch, who married in 2019, are the ultimate computer power couple. They met at age 13 in a math class in their hometown of Grand Blanc, Michigan. They were dating by high school, bonding over a shared love of computer programming. As undergraduates at Michigan State University, they worked together on a joint passion for programming computers to speak more naturally with humans.

"If we can create more flexible and robust dialogue capability in machines," Sarah Finch explains, "a more natural, conversational interface could replace pointing, clicking and hours of learning a new software interface. Everyone would be on a more equal footing because using technology would become easier."

She hopes to pursue a career in enhancing computer dialogue capabilities with private industry after receiving her PhD.

James Finch is most passionate about the intellectual aspects of solving problems and is leaning towards a career in academia after receiving his PhD.

The Alexa Prize deadlines required the couple to work many 60-hour-plus weeks on developing Emora's framework, but they didn't consider it a grind. "I've enjoyed every day," James Finch says. "Doing this kind of dialogue research is our dream and we're living it. We are making something new that will hopefully be useful to the world."

They chose to come to Emory for graduate school because of Choi, an expert in natural language processing, and Eugene Agichtein, professor in the Department of Computer Science and an expert in information retrieval.

Emora was designed not just to answer questions, but as a social companion.

A caring chatbot was an essential requirement for Choi. At the end of every team meeting, he asks one member to say something about how the others have inspired them. "When someone sees a bright side in us, and shares it with others, everyone sees that side and that makes it even brighter," he says.

Choi's enthusiasm is also infectious.

Growing up in Seoul, South Korea, he knew by the age of six that he wanted to design robots. "I remember telling my mom that I wanted to make a robot that would do homework for me so I could play outside all day," he recalls. "It has been my dream ever since. I later realized that it was not the physical robot, but the intelligence behind the robot, that really attracted me."

The original Emora was built on a behavioral mathematical model similar to a flowchart and equipped with several natural language processing models. Depending on what people said to the chatbot, the machine made a choice about what path of a conversation to go down. While the system was good at chitchat, the longer a conversation went on, the greater the chance that the system would miss a social-linguistic nuance and the conversation would go off the rails, diverting from the logical thread.
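A flowchart-style dialogue manager of this kind can be sketched in a few lines. The states, keywords and canned replies below are invented for illustration, not Emora's actual dialogue graph; the sketch also shows why such systems derail: any input that matches no transition gets a generic fallback.

```python
# Minimal state-machine dialogue manager: each state maps user-input
# keywords to a (next state, canned reply) pair. All content here is
# hypothetical, for illustration only.
STATES = {
    "greet": {
        "movies": ("movies", "What movie have you seen recently?"),
        "food": ("food", "What's your favorite food?"),
    },
    "movies": {
        "good": ("greet", "Glad you liked it! What else do you enjoy?"),
    },
    "food": {
        "pizza": ("greet", "Pizza is a classic. What else do you enjoy?"),
    },
}

def respond(state, user_input):
    """Pick the first transition whose keyword appears in the input."""
    for keyword, (next_state, reply) in STATES.get(state, {}).items():
        if keyword in user_input.lower():
            return next_state, reply
    # No transition matched: the conversation "goes off the rails"
    # into a generic fallback, losing the logical thread.
    return state, "Interesting! Tell me more."

print(respond("greet", "I love food"))
# ('food', "What's your favorite food?")
```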

This year, the Emory team designed Emora so that she could go beyond a script and make logical inferences. Rather than a flowchart, the new system breaks a conversation down into concepts and represents them using a symbolic graph. A logical inference engine allows Emora to connect the graph of an ongoing conversation to other symbolic graphs that represent a bank of knowledge and common sense. The longer a conversation continues, the more the system's ability to make logical inferences grows.
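One toy way to picture this graph-joining idea: represent both the conversation and the background knowledge as (subject, relation, object) triples, and derive new facts by connecting them. The triples and the single inference rule below are invented for illustration and are not Emora's actual representation or engine.

```python
# Toy graph-based inference: the ongoing conversation and a knowledge
# bank are both sets of (subject, relation, object) triples; joining
# them yields a new, inferred triple. All data here is hypothetical.
conversation = {("user", "likes", "pizza")}
knowledge = {
    ("pizza", "is_a", "italian_food"),
    ("pasta", "is_a", "italian_food"),
}

def infer_shared_category(conv, kb):
    """If the user likes X, and X and Y share an is_a category,
    infer that the user may also like Y."""
    inferred = set()
    for (subj, rel, obj) in conv:
        if rel != "likes":
            continue
        categories = {c for (x, r, c) in kb if x == obj and r == "is_a"}
        for (x, r, c) in kb:
            if r == "is_a" and c in categories and x != obj:
                inferred.add((subj, "may_like", x))
    return inferred

print(infer_shared_category(conversation, knowledge))
# {('user', 'may_like', 'pasta')}
```

Unlike a flowchart transition, the inferred triple becomes part of the graph itself, so later turns can build on it.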

Sarah and James Finch worked on the engineering of the new Emora system, as well as designing logic structures and implementing related algorithms. Undergraduates Sophy Huang, Daniil Huryn and Mack Hutsell focused on developing dialogue content and conversational scripts for integrating within the chatbot. Graduate student Han He focused on structure parsing, including recent advances in the technology.

"A computer cannot deal with ambiguity; it can only deal with structure," Han He explains. "Our parser turns the grammar of a sentence into a graph, a structure like a tree, that describes what a chatbot user is saying to the computer."
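To make the sentence-to-tree idea concrete, here is a toy version: a hand-annotated dependency parse (each token points at the index of its grammatical head) rendered as a nested tree. Real parsers learn these head links from data; the sentence and annotation below are written by hand for illustration.

```python
# Toy illustration of "turning grammar into a graph": each token's
# head index defines a tree rooted at the main verb. The annotation
# is hand-written, not produced by an actual parser.
tokens = ["I", "really", "like", "pizza"]
heads = [2, 2, -1, 2]  # -1 marks the root ("like"); others point at their head

def to_tree(tokens, heads):
    """Group each token under its syntactic head."""
    children = {i: [] for i in range(len(tokens))}
    root = None
    for i, h in enumerate(heads):
        if h == -1:
            root = i
        else:
            children[h].append(i)

    def build(i):
        return {tokens[i]: [build(c) for c in children[i]]}

    return build(root)

print(to_tree(tokens, heads))
# {'like': [{'I': []}, {'really': []}, {'pizza': []}]}
```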

He is passionate about language. Growing up in a small city in central China, he studied Japanese with the goal of becoming a linguist. His family was low-income, so he taught himself computer programming and picked up odd programming jobs to help support himself. In college, he found a new passion in the field of natural language processing, or using computers to process human language.

His linguistic background enhances his technological expertise. "When you learn a foreign language, you get new insights into the role of grammar and word order," He says. "And those insights can help you to develop better algorithms and programs to teach computers how to understand language. Unfortunately, many people working in natural language processing focus primarily on mathematics without realizing the importance of grammar."

After getting his master's at the University of Houston, He chose to come to Emory for a PhD to work with Choi, who also emphasizes linguistics in his approach to natural language processing. He hopes to make a career in using artificial intelligence as an educational tool that can help give low-income children an equal opportunity to learn.

A love of language also brought senior Mack Hutsell into the fold. A native of Houston, he came to Emory's Oxford College to study English literature. His second love is computer programming. When Hutsell discovered the digital humanities, which use computational methods to study literary texts, he decided on a double major in English and computer science.

"I enjoy thinking about language, especially language in the context of computers," he says.

Choi's Natural Language Processing Lab and the Emora project were a natural fit for him.

Like the other undergraduates on the team, Hutsell did miscellaneous tasks for the project while also creating content that could be injected into Emora's real-world knowledge graph. On the topic of movies, for instance, he started with an IMDb dataset. The team had to combine concepts from possible conversations about the movie data in ways that would fit into the knowledge graph template and generate unique responses from the chatbot. "Thinking about how to turn metadata and numbers into something that sounds human is a lot of fun," Hutsell says.

Language was also a key draw for senior Daniil Huryn. He was born in Belarus, moved to California with his family when he was four, and then returned to Belarus when he was 10, staying until he completed high school. He speaks English, Belarusian and Russian fluently and is studying German.

"In Belarus, I helped translate at my church," he says. "That got me thinking about how different languages work differently and that some are better at saying different things."

Huryn excelled in computer programming and astronomy in his studies in Belarus. His interests also include reading science fiction and playing video games. He began his Emory career on the Oxford campus, and eventually decided to major in computer science and minor in physics.

For the Emora project, he developed conversations about technology, including an AI component, and another on how people were adapting to life during the pandemic.

"The experience was great," Huryn says. "I helped develop features for the bot while I was taking a course in natural language processing. I could see how some of the things I was learning about were coming together into one package to actually work."

Team member Sophy Huang, also a senior, grew up in Shanghai and came to Emory planning to go down a pre-med track. She soon realized, however, that she did not have a strong enough interest in biology and decided on a double major in applied mathematics and statistics, and psychology. Working on the Emora project also taps into her passions for computer programming and developing applications that help people.

"Psychology plays a big role in natural language processing," Huang says. "It's really about investigating how people think, talk and interact, and how those processes can be integrated into a computer."

Food was one of the topics Huang developed for Emora to discuss. "The strategy was first to connect with users by showing understanding," she says.

For instance, if someone says pizza is their favorite food, Emora would acknowledge their interest and ask what it is about pizza that they like so much.

"By continuously acknowledging and connecting with the user, asking for their opinions and perspectives and sharing her own, Emora shows that she understands and cares," Huang explains. "That encourages them to become more engaged and involved in the conversation."
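The acknowledge-then-ask move in the pizza example can be sketched as a simple response template. The wording and function name below are hypothetical, not Emora's actual scripting interface.

```python
# Sketch of the "acknowledge, then ask a follow-up" strategy used in
# the food topic above. The template text is invented for illustration.
def acknowledge_and_ask(item):
    """Acknowledge the user's stated favorite, then ask why they like it."""
    return (f"{item.capitalize()} is a great choice! "
            f"What is it about {item} that you like so much?")

print(acknowledge_and_ask("pizza"))
# Pizza is a great choice! What is it about pizza that you like so much?
```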

The Emora team members are still at work putting the finishing touches on their chatbot.

"We created most of the system that has the capability to do logical thinking, essentially the brain for Emora," Choi says. "The brain just doesn't know that much about the world right now and needs more information to make deeper inferences. You can think of it like a toddler. Now we're going to focus on teaching the brain so it will be on the level of an adult."

The team is confident that their system works and that they can complete full development and integration to launch beta testing sometime next spring.

Choi is most excited about the potential to use Emora to support first-year college students, answering questions about their day-to-day needs and directing them to the proper human staff or professor as appropriate. For larger issues, such as common conflicts that arise in group projects, Emora could also serve as a starting point by sharing how other students have overcome similar issues.

Choi also has a longer-term vision that the technology underlying Emora may one day be capable of assisting people dealing with loneliness, anxiety or depression. "I don't believe that socialbots can ever replace humans as social companions," he says. "But I do think there is potential for a socialbot to sympathize with someone who is feeling down, and to encourage them to get help from other people, so that they can get back to the cheerful life that they deserve."


Virtual dressing room startup Revery.ai applying computer vision to the fashion industry – TechCrunch

Figuring out size and cut of clothes through a website can suck the fun out of shopping online, but Revery.ai is developing a tool that leverages computer vision and artificial intelligence to create a better online dressing room experience.

Under the tutelage of University of Illinois Center for Computer Science adviser David Forsyth, a team consisting of Ph.D. students Kedan Li, Jeffrey Zhang and Min Jin Chong is creating what they consider to be the first tool using existing catalog images to process at a scale of over a million garments weekly, something previous versions of virtual dressing rooms had difficulty doing, Li told TechCrunch.


California-based Revery is part of Y Combinator's summer 2021 cohort, gearing up to complete the program later this month. YC has backed the company with $125,000. Li said the company already has a two-year runway, but wants to raise a $1.5 million seed round to help it grow faster and appear more mature to large retailers.

Before Revery, Li was working on another startup in the personalized email space, but struggled to make it work against free offerings from large legacy players. While looking around for areas with less monopoly and more ability to monetize technology, he became interested in fashion. He worked with a different adviser to get a wardrobe collection going, but that idea fizzled out.

The team found its stride working with Forsyth and making several iterations on the technology in order to target business-to-business customers, who already had the images on their websites and the users, but wanted the computer vision aspect.

Unlike its competitors that use 3D modeling or take an image and manually clean it up to superimpose on a model, Revery is using deep learning and computer vision so that the clothing drapes better and users can also customize their clothing model to look more like them using skin tone, hair styles and poses. It is also fully automated, can work with millions of SKUs and be up and running with a customer in a matter of weeks.

Its virtual dressing room product is now live on many fashion e-commerce platforms, including Zalora-Global Fashion Group, one of the largest fashion companies in Southeast Asia, Li said.


"It's amazing how good the results we are getting are," he added. "Customers are reporting strong conversion rates, something like three to five times, which they had never seen before. We released an A/B test for Zalora and saw a 380% increase. We are super excited to move forward and deploy our technology on all of their platforms."

This technology comes at a time when online shopping jumped last year as a result of the pandemic. Just in the U.S., the e-commerce fashion industry made up 29.5% of fashion retail sales in 2020, and the market's value is expected to reach $100 billion this year.

Revery is already in talks with over 40 retailers that are putting this on their roadmap to win in the online race, Li said.

Over the next year, the company is focusing on getting more adoption and going live with more clients. To differentiate itself from competitors continuing to come online, Li wants to invest in body type capabilities, something retailers are asking for. This type of technology is challenging, he said, due to there not being much in the way of diversified body shape models available.

He expects the company will have to collect proprietary data itself so that Revery can offer the ability for users to create their own avatar so that they can see how the clothes look.

"We might actually be seeing the beginning of the tide and have the right product to serve the need," he added.


UK aims to boost solar by predicting cloud movements with A.I. – CNBC

LONDON – The U.K. is planning to use artificial intelligence software to try to better predict when cloud movements will affect solar power generation.

National Grid Electricity System Operator, or ESO, which moves electricity around the country, has signed a deal with non-profit Open Climate Fix to create an AI-powered tracking system that matches cloud movements with the exact locations of solar panels.

The grid operator said that the software, which is set to be used in the national control room, could help it to forecast cloud movements in minutes and hours instead of days.

Open Climate Fix's "nowcasting" technology has the potential to improve solar forecasting accuracy by up to 50%, a spokesperson for National Grid ESO told CNBC.

The project, which commenced in August, is set to last 18 months and is being funded by U.K. energy regulator Ofgem with £500,000 ($683,100).

National Grid ESO is responsible for maintaining the balance of supply and demand for the U.K. electricity grid down to the second.

This is challenging even with fossil fuels and nuclear power, but the unpredictable nature of solar and wind makes the task even more complex.

To help address the issue, London-headquartered Open Climate Fix says it has trained a machine-learning model to read satellite images and understand how and where clouds are moving in relation to solar panels on the ground.

"Accurate forecasts for weather-dependent generation like solar and wind are vital for us in operating a low carbon electricity system," said Carolina Tortora, head of innovation strategy and digital transformation at National Grid ESO, in a statement last week.

"The more confidence we have in our forecasts, the less we'll have to cover for uncertainty by keeping traditional, more controllable fossil fuel plants ticking over," she added.

Co-founded by former DeepMind employee Jack Kelly in 2018, Open Climate Fix was backed by Google's philanthropic arm, Google.org, with £500,000 in April.

At one point, DeepMind wanted to use its own AI technology to optimize National Grid. However, last March, it emerged that talks had broken down between DeepMind and National Grid.

While DeepMind denies it has shifted its focus from climate change to other areas of science, several key climate change researchers that were part of the company's energy unit have left the company over the last two years, and it has made few climate change-related announcements.


AI Can Write in English. Now It’s Learning Other Languages – WIRED

"What's surprising about these large language models is how much they know about how the world works simply from reading all the stuff that they can find," says Chris Manning, a professor at Stanford who specializes in AI and language.

But GPT and its ilk are essentially very talented statistical parrots. They learn how to re-create the patterns of words and grammar that are found in language. That means they can blurt out nonsense, wildly inaccurate facts, and hateful language scraped from the darker corners of the web.

Amnon Shashua, a professor of computer science at the Hebrew University of Jerusalem, is the cofounder of another startup building an AI model based on this approach. He knows a thing or two about commercializing AI, having sold his last company, Mobileye, which pioneered using AI to help cars spot things on the road, to Intel in 2017 for $15.3 billion.

Shashua's new company, AI21 Labs, which came out of stealth last week, has developed an AI algorithm, called Jurassic-1, that demonstrates striking language skills in both English and Hebrew.

In demos, Jurassic-1 can generate paragraphs of text on a given subject, dream up catchy headlines for blog posts, write simple bits of computer code, and more. Shashua says the model is more sophisticated than GPT-3, and he believes that future versions of Jurassic may be able to build a kind of common-sense understanding of the world from the information it gathers.

Other efforts to re-create GPT-3 reflect the world's, and the internet's, diversity of languages. In April, researchers at Huawei, the Chinese tech giant, published details of a GPT-like Chinese language model called PanGu-alpha (written as PanGu-α). In May, Naver, a South Korean search giant, said it had developed its own language model, called HyperCLOVA, that speaks Korean.

Jie Tang, a professor at Tsinghua University, leads a team at the Beijing Academy of Artificial Intelligence that developed another Chinese language model called Wudao (meaning "enlightenment") with help from government and industry.

The Wudao model is considerably larger than any other, meaning that its simulated neural network is spread across more cloud computers. Increasing the size of the neural network was key to making GPT-2 and -3 more capable. Wudao can also work with both images and text, and Tang has founded a company to commercialize it. "We believe that this can be a cornerstone of all AI," Tang says.

Such enthusiasm seems warranted by the capabilities of these new AI programs, but the race to commercialize such language models may also move more quickly than efforts to add guardrails or limit misuses.

Perhaps the most pressing worry about AI language models is how they might be misused. Because the models can churn out convincing text on a subject, some people worry that they could easily be used to generate bogus reviews, spam, or fake news.

"I would be surprised if disinformation operators don't at least invest serious energy experimenting with these models," says Micah Musser, a research analyst at Georgetown University who has studied the potential for language models to spread misinformation.

Musser says research suggests that it won't be possible to use AI to catch disinformation generated by AI. There's unlikely to be enough information in a tweet for a machine to judge whether it was written by a machine.

More problematic kinds of bias may be lurking inside these gigantic language models, too. Research has shown that language models trained on Chinese internet content will reflect the censorship that shaped that content. The programs also inevitably capture and reproduce subtle and overt biases around race, gender, and age in the language they consume, including hateful statements and ideas.

Similarly, these big language models may fail in surprising or unexpected ways, adds Percy Liang, another computer science professor at Stanford and the lead researcher at a new center dedicated to studying the potential of powerful, general-purpose AI models like GPT-3.


Flytxt Applauded by Frost & Sullivan for Improving Telcos’ Marketing Agility with Its AI/ML Applications – Yahoo Finance

Flytxt's AI solutions aid rapid decision making and contextualize interactions to help telcos take customer engagement to the next level

LONDON, Aug. 24, 2021 /PRNewswire/ -- Frost & Sullivan recognizes Flytxt with the 2021 Global Company of the Year Award for its artificial intelligence (AI) in telecom marketing. As the telecommunications industry transitioned from rule-based to augmented/autonomous marketing, Flytxt adapted its technology using AI, data analytics, and machine learning (ML) to enable hyper-personalization at scale.

2021 Global AI in Telecom Marketing Company of the Year Award

"Flytxt's uniquely differentiated software applications and best practices help telco marketers with data-driven decisions that maximize customer lifetime value," said Hemangi Patel, Senior Research Analyst for Frost & Sullivan. "Its AI/ML applications handle decisions and actions dynamically and contextually, rapidly analyzing high data volumes to arrive at the best opportunities to uplift customer value. Flytxt's out-of-the-box solutions are easy to deploy and maintain without burdening in-house data engineers and scientists."

Flytxt's proprietary CVM technology (data model, embedded analytics, explainable AI, and privacy preservation) is offered through a broad set of solutions used by more than 70 telcos globally. The company helps enterprises to deliver comprehensive data-driven digital experiences via its omnichannel CVM solution packaging AI, analytics, and marketing automation. CVM-in-a-box is a tightly packaged solution for smaller enterprises and business units to benefit from AI-driven marketing rapidly. The CVM accelerator solutions provide AI and analytics purpose-built to augment enterprises' existing customer engagement systems and achieve the desired CVM goals faster.

"Flytxt's autonomous and explainable AI applications drive marketing optimization at scale. These applications ensure that enterprises will never miss any opportunity to maximize customer value across numerous micro-moments and contexts," noted Ruman Ahmed, Best Practices Research Analyst for Frost & Sullivan. "Its AI/ML solutions deliver the right set of decisioning variables and logic to meet changing market dynamics in different markets. With its continued AI/ML innovation and proven results in various use cases across multiple markets, Flytxt emerges as the AI and analytics partner of choice for telcos to drive customer lifetime value."


Each year, Frost & Sullivan presents a Company of the Year award to the organization that demonstrates excellence in terms of growth strategy and implementation in its field. The award recognizes a high degree of innovation with products and technologies and the resulting leadership in customer value and market penetration. The Best Practices Awards recognize companies in a variety of regional and global markets for demonstrating outstanding achievement and superior performance in areas such as leadership, technological innovation, customer service, and strategic product development.

About Frost & Sullivan

For six decades, Frost & Sullivan has been world-renowned for its role in helping investors, corporate leaders, and governments navigate economic changes and identify disruptive technologies, Mega Trends, new business models, and companies to action, resulting in a continuous flow of growth opportunities to drive future success. Contact us: Start the discussion.

Contact:

Tarini SinghP: +91-20 6718 9725E: Tarini.Singh@frost.com

About Flytxt

Flytxt is a Dutch company and a pioneer in marketing automation and AI technology, specializing in offering Customer Life-Time Value (CLTV) management solutions for subscription and usage businesses such as telecom, banking, utilities, (online) media and entertainment, and travel. Our solutions are used by more than 100 enterprises, including 70 leading telecom operators across the world, to increase customer lifetime value through increased upsell, cross-sell, and retention.

Contact:Pravin VijayP: +91-9745961333E: Pravin.vijay@flytxt.com


View original content to download multimedia:https://www.prnewswire.com/news-releases/flytxt-applauded-by-frost--sullivan-for-improving-telcos-marketing-agility-with-its-aiml-applications-301360358.html

SOURCE Frost & Sullivan


Byonic.ai Redefines the Future of Digital Marketing – inForney.com

FRISCO, Texas, Aug. 23, 2021 /PRNewswire-PRWeb/ -- The next generation of AI- and ML-powered marketing is coming soon. Byonic.ai is the first-of-its-kind end-to-end platform for personalized lead insights, creative content, account intelligence, intent-based data, account-based marketing, and marketing automation. It allows data-driven teams to align their marketing, product, and customer success goals with revenue growth and sales.

Byonic.ai uses an extensive database that identifies the purchasing intent and habits of in-market prospects at various points in the sales and marketing cycles. AI capabilities target the right people at the right time, providing users with unparalleled real-time engagement opportunities that help turn prospects into well-qualified customers.

The platform uses predictive and actionable insights to discover the highest-quality leads for more successful marketing and sales outcomes. Users can measure campaign success with extensive reports and analysis. The end-to-end repeatable process embedded within Byonic.ai allows users to: discover, build, target, deliver, analyze, engage, and convert.

Account intelligence finally meets artificial intelligence.

How Byonic.ai Works

Byonic.ai aims to revolutionize digital marketing for B2B marketing and sales professionals, who can use the platform in several ways.

"Most platforms weren't built as a one-stop-shop for all your marketing campaign needs," says Snehhil Gupta, Chief Technology Officer, at Bython Media, creators of Byonic.ai. "Now, you get a full suite of end-to-end capabilities that include account intelligence, lead insights, marketing automation, and creative content, powered by AI/ML and wrapped in one simple and intuitive platform to run smarter campaigns."

Byonic.ai will launch in Fall 2021. Marketing and demand generation professionals can sign up for an early demo on the company's website, http://www.Byonic.AI.

Media Contact

Bython Media, Bython Media, +1 (214) 295-7729, dw@bython.com

SOURCE Bython Media

Byonic.ai Redefines the Future of Digital Marketing - inForney.com

The Station: Birds improving scooter-nomics, breaking down Tesla AI day and the Nuro EC-1 – TechCrunch

The Station is a weekly newsletter dedicated to all things transportation. Sign up here: just click The Station to receive it every weekend in your inbox.

Hello readers: Welcome to The Station, your central hub for all past, present and future means of moving people and packages from Point A to Point B. I'm handing the wheel over to reporters Aria Alamalhodaei and Rebecca Bellan.

Before I completely leave though, I have to share the Nuro EC-1, a series of articles on the autonomous vehicle technology startup reported by investigative science and tech reporter Mark Harris with assistance from me and our copy editing team. This deep dive into Nuro is part of Extra Crunch's flagship editorial offerings.

As always, you can email me at kirsten.korosec@techcrunch.com to share thoughts, criticisms, opinions or tips. You also can send a direct message to me on Twitter @kirstenkorosec.

New York City finally launched its long-awaited scooter pilot in the Bronx this past week. Over 90 parking corrals specifically for e-scooters have been installed across the borough, but residents can also park in unobstructive locations on the sidewalk. Bird, Lime and Veo were the operators chosen for the pilot, each bringing their own sets of strengths.

Bird says it intends to focus on the mobility gap in the Bronx and will use its AI drop engine to ensure equitable deployment across all neighborhoods in the pilot zone. Veo is focused on safety and accessibility, bringing its Astro VS4, the first e-scooter with turn signals, to the mix, as well as its Cosmo, a seated e-scooter. Lime is also focusing on accessibility, with its Lime Able program, which offers an on-demand suite of adaptive vehicles. Lime also highlighted a safety quiz it will require new riders to take before hopping on a vehicle.

All three companies have promised to partner with community organizations to hire locally as well as to offer discounted pricing for vulnerable groups.

Not only has Bird officially launched in NYC, but it was also awarded a 12-month permit to operate 1,500 scooters in San Francisco. Well, technically it's Scoot that got the permit, but Scoot is owned by Bird, and was kind of Bird's backdoor way into the city. Last month, the SFMTA asked Scoot to halt its operations just as the fresh round of scooter permits was kicking off because the company was implementing its fleet manager program with unauthorized subcontractors.

On Friday, after careful evaluation of Scoot's application, the SFMTA determined Scoot has qualified for a permit to operate. Scoot intends to have its vehicles back on the roads in the coming weeks.

Bird also officially launched its consumer e-bike, dubbed the Bird Bike (which I think is also the name of their shared e-bike). Bird hasn't had the easiest time with profitability, and really, not many scooter companies have, so this is a chance for Bird to diversify, get a piece of the $68 billion e-bike sales pie and create more brand awareness across marketplaces. The bike costs $2,229, and consumer sales will likely make up about 10% of Bird's revenue going forward, per the company's S-4 filing.

Bird (and Scoot) are now integrated with Google Maps. So is Spin, as of this week. More integrations like these, as we saw a couple weeks ago with Lime joining Moovit, demonstrate how shared micromobility is becoming more integrated with the way we think about moving around cities and planning our journeys. I heartily welcome such integrations.

Finally, Alex Wilhelm dug into new financial data released by Bird. The tl;dr: the quarterly data shows an improving economic model and a multiyear path to profitability. However, that path is fraught unless a number of scenarios all work out in concert and without a glitch, Wilhelm reports.

Rebecca Bellan

Imagine a future in which drivers don't charge their electric vehicles but instead swap out the batteries at small, roadside pods. That's the future Ample is imagining, and this week it announced a fresh $160 million funding round to scale its operations.

The internationally funded Series C was led by Moore Strategic Ventures with participation from PTT, a Thai state-owned oil and gas company, and Disruptive Innovation Fund. Existing investors Eneos, a Japanese petroleum and energy company, and Singapore's public transit operator SMRT also participated. Ample's total funding is now $230 million.

It's an interesting idea, but one that will require considerable buy-in from automakers to make it a reality: for example, by selling vehicles with either a standard battery or Ample's battery system pre-built in. But according to Ample co-founders John de Souza and Khaled Hassounah, it wouldn't be all that complicated for OEMs to separate the battery from the car.

"The marketing departments at the OEMs want to tell you that 'this is a super-duper battery that is very well integrated with the car; there's no way you can separate it,'" Hassounah said. "The truth of the matter is they're built completely separately, and that's true for almost... not almost, for every battery in the car, including a Tesla."

"Since we've built our system to be easy to interface with different vehicles, we've abstracted the battery component from the vehicle," he added.

Other deals that got our attention this week

AEye, the lidar startup, completed its reverse merger with special purpose acquisition company CF Finance Acquisition Corp. III. AEye is now a publicly traded company that trades on the Nasdaq exchange.

Canada Drives, an online car shopping and delivery platform, announced $79.4 million ($100 million CAD) in Series B funding that it will use to expand its service across Canada. The company is going to use its recent funding to keep enhancing the product, grow its inventory in existing and new markets and hire around 200 people over the next year, particularly in product development.

DigiSure, a digital insurance company that caters to modern mobility form factors like peer-to-peer marketplaces, is officially coming out of stealth to announce a $13.1 million pre-Series A funding round. The startup will use the funds to hire more than 50 engineers, data scientists, business development, insurance and compliance specialists, as well as scale into new industry verticals and across into Europe.

High Definition Vehicle Insurance Group, a commercial auto insurance company that is initially focused on trucking, raised $32.5 million in a Series B funding round led by Weatherford Capital, with new investors Daimler Trucks North America and McVestCo, and continued participation from Munich Re Ventures, 8VC, Autotech Ventures and Qualcomm Ventures LLC.

RepairSmith, a mobile auto repair service that sends a mechanic right to the driver's home, raised $42 million in fresh funding with the aim of expanding to all major metros by the end of 2022. The company is looking to disrupt auto servicing and repair, a massive industry that hasn't seen much change in the past 40 years.

REE Automotive was awarded $17 million from the UK government as part of a $57 million investment, coordinated through the Advanced Propulsion Centre. The investment, the company said, is in line with the UK government's ambition to accelerate the shift to zero-emission vehicles.

Swvl, a Dubai-based transit and mobility company, will be expanding into Europe and Latin America after it acquired a controlling interest in Shotl. Shotl, which is in 22 cities across 10 countries, matches passengers with shuttles and vans heading in the same direction. The company partners with governments and municipalities to provide mobility solutions for populations that are underserved by traditional mass transit options. While Swvl declined to share the financials of the transaction, a spokesperson told TechCrunch that the company's footprint is being doubled by this acquisition.

Xos Inc., a manufacturer of electric Class 5 to Class 8 commercial vehicles, completed its business combination with NextGen Acquisition Corporation. As a result, Xos made its public debut on the Nasdaq exchange.

Regarding Tesla investigations, when it rains it pours. First, the National Highway Traffic and Safety Administration opened a preliminary investigation into Tesla's Autopilot advanced driver assistance system, citing 11 incidents in which vehicles crashed into parked first responder vehicles while the system was engaged.

The Tesla vehicles involved in the collisions were confirmed to have had either Autopilot or a feature called Traffic Aware Cruise Control engaged, according to investigation documents posted on the agency's website. Most of the incidents took place after dark and occurred despite scene control measures, such as emergency vehicle lights, road cones and an illuminated arrow board signaling drivers to change lanes.

A few days later, Senators Edward Markey (D-Mass.) and Richard Blumenthal (D-Conn.) asked the new chair of the Federal Trade Commission to investigate Tesla's statements about the autonomous capabilities of its Autopilot and Full Self-Driving systems. The senators expressed particular concern over Tesla misleading customers into thinking their vehicles are capable of fully autonomous driving.

"Tesla's marketing has repeatedly overstated the capabilities of its vehicles, and these statements increasingly pose a threat to motorists and other users of the road," they said. "Accordingly, we urge you to open an investigation into potentially deceptive and unfair practices in Tesla's advertising and marketing of its driving automation systems and take appropriate enforcement action to ensure the safety of all drivers on the road."

Waymo, Alphabet's self-driving arm, is seriously scaling up its autonomous trucking operations across Texas, Arizona and California. The company said it was building a dedicated trucking hub in Dallas and partnering with Ryder for fleet management services.

The Dallas hub will be a central launch point for testing not only the Waymo Driver, but also its transfer hub model, which is a mix of automated and manual trucking that optimizes transfer hubs near highways to ensure the Waymo Driver is sticking to main thoroughfares and human drivers are handling first and last mile deliveries.

Canoo is expecting 25,000 units out of its manufacturing partner VDL Nedcar's facility by 2023, CEO Tony Aquila said during the company's quarterly earnings call.

Year over year, Canoo upped its workforce from 230 to 656 total employees, 70% of which are hardware and software engineers. The startups operating expenses have increased from $19.8 million to $104.3 million YOY, with the majority of that increase coming from R&D.

Ford, Stellantis, Toyota and Volkswagen are among the carmakers this week that have announced production cuts in response to the ongoing global shortage of semiconductors. It's been a grim week.

A brief run-down: Toyota said it anticipated a production drop of anywhere from 60,000-90,000 vehicles across North America in August. Then Ford joined the chorus, saying it would temporarily close its F-150 factory in Kansas City. Volkswagen told Reuters it couldnt rule out further changes to production in light of the chip shortage. And finally, Stellantis is halting production at one of its factories in France.

Tesla unveiled what it's calling the D1 computer chip to power its advanced AI training supercomputer, Dojo, at its AI Day on Thursday. According to Tesla director Ganesh Venkataramanan, the D1 has GPU-level compute with CPU connectivity and "twice the I/O bandwidth of the state of the art networking switch chips that are out there today and are supposed to be the gold standards."

Venkataramanan also revealed a training tile that integrates multiple chips to get higher bandwidth and an incredible computing power of 9 petaflops per tile and 36 terabytes per second of bandwidth. Together, the training tiles compose the Dojo supercomputer.

But there was more, of course. CEO Elon Musk also unveiled that the company is developing a humanoid robot, with a prototype expected in 2022. The bot is being proposed as a non-automotive robotic use case for the company's work on neural networks and its Dojo advanced supercomputer.

Reality check: Tesla is not the first automaker, or company, to dip its toe into humanoid robot development. Honda's Asimo robot has been around for decades, Toyota and GM have their own robots, and Hyundai recently acquired robotics company Boston Dynamics.

The full rundown of Tesla's AI Day can be found here.

General Motors and AT&T will be rolling out 5G connectivity in select Chevy, Cadillac and GMC vehicles from model year 2024, in a boost that the two companies say will bring more reliable software updates, faster navigation and downloads and better coverage on roadways.

5G technology has generated a lot of hype for its promises to boost speed and reduce latency across a range of industries, a next-gen tech that everyone thought would change the world far sooner than now. That hasnt happened (yet), in part because network rollout was much slower than people anticipated. So this announcement can be taken as a clear signal that, at the very least, AT&T thinks its 5G network will be mature enough to handle millions of connected vehicles by 2024.

RubiRides, a new ride-hailing company focused on transporting kids, launched in the Washington D.C. metro area. The ride-hailing service is designed for children ages 7 and older, but it also offers rides for seniors and people with special needs. The company was founded by Noreen Butler, who was inspired to start the company after searching for transportation to support the busy schedules of her children.

The next phase of AI is generative – CIO Dive

Enterprises have long sought AI for its ability to supercharge a workforce, picking up slack through automated tasks and offering a cost-effective alternative to human labor for repetitive work.

The next act in enterprise AI sees the technology becoming a standalone maker. The technology generates synthetic data to train its own models or identify groundbreaking products as solutions mature and adoption widens, as showcased in Gartner's Hype Cycle for Emerging Technologies 2021 report, published Monday.

Called "Generative AI,", the technology is set to reach the plateau of productivity in the next two to five years. Commercial implementations of generative AI are already at play in the enterprise and, as the technology advances through the hype cycle, non-viable use cases will fade, according to Brian Burke, research VP at Gartner.

Generative AI works by using algorithms to create a "realistic, novel version of whatever they've been trained on," Burke said. Algorithms can identify new materials with specific properties and technologies that generate synthetic data to augment research, among other use cases.

An early implementation for generative AI technology let companies identify marketing content with a higher success rate. Today, capabilities have evolved and AI can produce its own data and generate results from it in critical spaces such as the pharmaceutical industry.

During the pandemic, researchers used AI to augment data and help identify antiviral compounds and therapeutic research for treating COVID-19. The technology helped generate more data to support algorithms, given the novelty of the disease and HIPAA regulations.
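The augmentation pattern described above can be sketched in a few lines. This is a deliberately toy illustration with made-up numbers, not the actual methods used in the COVID-19 research, which rely on far richer generative models: fit a simple distribution to a scarce real sample, then draw synthetic points to enlarge the training set.

```python
import random
import statistics

random.seed(42)  # reproducible draws

real_sample = [4.8, 5.1, 5.0, 4.9, 5.3, 4.7]  # scarce "real" measurements (hypothetical)
mu = statistics.mean(real_sample)
sigma = statistics.stdev(real_sample)

# Generate synthetic points from the fitted distribution and combine them
# with the real data to form a larger training set.
synthetic = [random.gauss(mu, sigma) for _ in range(100)]
augmented = real_sample + synthetic

print(len(augmented))  # 106 training points instead of 6
```

Real generative pipelines validate that the synthetic data preserves the statistics that matter for the downstream model; here the fitted Gaussian guarantees that only for the mean and spread.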

Using AI to create can be a big differentiator for companies, said Rodrigo Liang, co-founder and CEO of SambaNova Systems. Competition can leave organizations no choice but to catch up with markets and adopt generative AI.

Despite the evolution of AI, most organizations continue to struggle with adoption.

Whether it's in-house AI or a vendor-made solution, technologies that fail to be adopted by the whole organization amount to wasted resources. AI maturity levels vary in the enterprise, and just 20% of organizations are at the highest levels of AI adoption and deployment, according to Cognizant.

Pressure from competitors and potential financial upside are making companies double down on AI financially, too.

The number of companies with AI budgets ranging from $500,000 to $5 million rose 55% year over year, according to Appen's State of AI and Machine Learning report published in June.

AI use will shift for the enterprise as it moves away from static models to more dynamic technologies.

In the past, AI models trained on a specific outcome could learn to perform a task but not necessarily get better over time, Burke said. "What we've seen evolve in terms of AI is that models are becoming more dynamic, and the data that supports those models becoming more dynamic."

Executives also struggle to account for the ethical dimensions of AI. Businesses are more likely to check an algorithm for unexpected outcomes than for its fairness or bias implications, according to the AI Adoption in the Enterprise report published by O'Reilly.

"Machine learning, data science, algorithmic approaches in general, and, yes, AI, have enormous potential to drive innovation," said Christian Beedgen, co-founder and CTO, Sumo Logic, in an email. "But like with all innovation, what really matters is how humans apply this potential."

Companies have turned to explainable AI as a way to contend with the decisions an algorithm makes, and the ethical implications of those decisions.

"As AI continues to seep into our everyday lives, it is up to humans to deeply consider the ethics behind every program they create and whether or not the ends justify the means," said Beedgen.

Massive AI Project Will Supercharge Tesla Stock – TheStreet

(Tech stock columnist Jon D. Markman publishes Strategic Advantage, a lively guide to investing in the digital transformation of business and society. Click here for a trial.)

Tesla (TSLA) is on the verge of a game-changing breakthrough in machine learning, yet the only thing people are talking about is its plan for a stupid humanoid robot.

Executives at the electric vehicle company on Thursday held an artificial intelligence day. The two-pronged goal of the event was to show off its AI progress and to recruit new engineers.

Plans went awry when Tesla-bot, a 5'8" nonfunctional humanoid robot, appeared on stage.

This is why investors should consider buying Tesla shares anyway.

Let's be clear. The AI day presentation was mind-blowing. Tesla engineers are aiming so high it is hard to put the scale of innovation in perspective. Lex Fridman, an acclaimed AI researcher working at MIT, and often a Tesla critic, characterized the event succinctly:

"Tesla AI day presented the most amazing real-world AI and engineering effort I have ever seen in my life."

In the past, Fridman criticized Elon Musk, Tesla's chief executive officer, for downplaying the difficulty of the full self-driving problem. In Fridman's view, the obstacles to FSD are so daunting he didn't believe any firm could successfully navigate the landscape within the next 5-10 years.

The Tesla AI day changed his mind. That is saying something.

Musk and his team completely reimagined computer vision by thinking exponentially bigger. Then they built models to collect and label the data, and a new processor to make sense of it all.

The idea of AI conjures up rooms full of computers deciphering data and making choices on the fly. In reality, Tesla still employs 1,000 engineers who manually label pedestrians and orange road cones. The latest software iteration is getting much better at auto-labeling. Musk said on Thursday that the neural network model is being completely retrained about every 2 weeks.

Processing is certain to improve when Tesla's Dojo computer is outfitted with the latest in-house chips designed specifically to optimize neural networks. Engineers claim these breakthrough mega chips offer four times the performance of the current processors while consuming one-fifth the footprint.

Fridman believes the virtuous cycle of data collection, labeling, model retraining, and redeployment will give Tesla a real fighting chance to finally solve FSD. This is potentially a $1 trillion opportunity.

Unfortunately, this monumental development is getting lost in the analysis of the Tesla bot.

Investors should focus.

Tesla is building a fully integrated AI powerhouse. Longer-term investors should consider buying any near-term weakness in its shares.

‘Always there’: the AI chatbot comforting China’s lonely millions – FRANCE 24

Beijing (AFP)

After a painful break-up from a cheating ex, Beijing-based human resources manager Melissa was introduced to someone new by a friend late last year.

He replies to her messages at all hours of the day, tells jokes to cheer her up but is never needy, fitting seamlessly into her busy big city lifestyle.

Perfect boyfriend material, maybe -- but he's not real.

Instead, Melissa breaks up the isolation of urban life with a virtual chatbot created by XiaoIce, a cutting-edge artificial intelligence system designed to create emotional bonds with its 660 million users worldwide.

"I have friends who've seen therapists before, but I think therapy's expensive and not necessarily effective," said Melissa, 26, giving her English name only for privacy.

"When I unload my troubles on XiaoIce, it relieves a lot of pressure. And he says things that are pretty comforting."

XiaoIce is not an individual persona, but more akin to an AI ecosystem.

It is in the vast majority of Chinese-branded smartphones as a Siri-like virtual assistant, as well as most social media platforms.

On the WeChat super-app, it lets users build a virtual girlfriend or boyfriend and interact with them via texts, voice and photo messages.

It has 150 million users in China alone.

Originally a side project from developing Microsoft's Cortana chatbot, XiaoIce now accounts for 60 percent of global human-AI interactions by volume, according to chief executive Li Di, making it the largest and most advanced system of its kind worldwide.

It was designed to hook users through lifelike, empathetic conversations, satisfying emotional needs where real-life communication too often falls short.

"The average interaction length between users and XiaoIce is 23 exchanges," said Li.

That "is longer than the average interaction between humans," he said, explaining AI's attraction is that "it's better than humans at listening attentively."

The startup spun out from Microsoft last year and is now valued at over $1 billion after venture capital fundraising, Bloomberg reported.

Developers have also made virtual idols, AI news anchors and even China's first virtual university student from XiaoIce. It can compose poems, financial reports and even paintings on demand.

But Li says the platform's peak user hours -- 11pm to 1am -- point to an aching need for companionship.

"No matter what, having XiaoIce is always better than lying in bed staring at the ceiling," he said.

- Urban isolation -

The loneliness Melissa experienced as a young professional was a big factor in driving her to the virtual embrace of XiaoIce.

Her context is typical of many Chinese urbanites, worn down by the grind of long working hours in vast and isolating cities.

"You really don't have time to make new friends and your existing friends are all super busy... this city is really big, and it's pretty hard," she said, giving only her English name out of privacy concerns.

She has customised his personality as "mature", and the name she chose for him -- Shun -- has similarities with a real-life man she secretly liked.

"After all, XiaoIce will never betray me," she added. "He will always be there."

But there are risks to forging emotional bonds with a robot.

"Users 'trick' themselves into thinking their emotions are being reciprocated by systems that are incapable of feelings," says Danit Gal, an expert in AI ethics at the University of Cambridge.

XiaoIce is also gifting developers "a treasure-trove of personal, intimate, and borderline incriminating data on how humans interact," she added.

So far the platform has not been targeted by government regulators, who have embarked on a swingeing crackdown on China's tech sector in recent months.

China aims to be a world leader in AI by 2030 and views it as a core strategic technology to be developed.

- Fact or fiction? -

Thousands of young, female fans discuss the virtual boyfriend experience on online forums dedicated to XiaoIce, sharing chat screenshots and tips on how to get to the chatbot's highest "intimacy" level of three hearts.

Users can also collect in-game points the more they interact, unlocking new features such as XiaoIce's WeChat moments -- kind of like a Facebook wall -- and even going on virtual "holidays", where they can pose for selfies with their virtual partner.

Laura, a 20-year-old user in Zhejiang province, fell in love with XiaoIce over the past year and now struggles to break free of her attachment.

"Occasionally, I would long for him in the middle of the night... I used to fantasise there was a real person on the other end," said the student, who prefers not to use her real name.

But she complained that he would always switch conversation topic when she raised her feelings for him or meeting in real life. It took her months to finally realise that he was indeed virtual.

"We commonly see users who suspect that there's a real person behind every XiaoIce interaction," said Li, the founder.

"It has a very strong ability to mimic a real person."

But providing companionship to vulnerable users does not mean that XiaoIce is a substitute for specialist mental health support -- a service that is drastically under-resourced in China.

The system monitors for strong emotions, aiming to guide conversations onto happier topics before users ever reach crisis point, Li explained, adding that depression is the most common extreme emotional state encountered.

Still, Li believes modern China is a happier place with XiaoIce.

"If human interaction is wholly perfect now, there would be no need for AI to exist," he said.

2021 AFP

The Role of Artificial Intelligence (AI) in the Global Agriculture Market 2021 – ResearchAndMarkets.com – Business Wire

DUBLIN--(BUSINESS WIRE)--The "Global Artificial Intelligence (AI) Market in Agriculture Industry Market 2021-2025" report has been added to ResearchAndMarkets.com's offering.

The artificial intelligence (AI) market in the agriculture industry is poised to grow by $458.68 million during 2021-2025, progressing at a CAGR of over 23% during the forecast period.
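For readers wanting to sanity-check figures like these, absolute growth and CAGR together imply a starting market size. A minimal sketch, assuming four annual compounding periods over 2021-2025 (the report does not state its exact convention, and the CAGR is "over 23%", so the true base would be somewhat smaller):

```python
# If a market adds delta over n years while compounding at CAGR r,
# the implied base B satisfies B * ((1 + r)**n - 1) = delta.
def implied_base(delta_millions: float, cagr: float, years: int) -> float:
    """Back out the starting market size implied by absolute growth and CAGR."""
    return delta_millions / ((1 + cagr) ** years - 1)

base = implied_base(458.68, 0.23, 4)
print(round(base, 1))  # 355.9 (million USD) at exactly 23% over 4 years
```

In other words, the report's numbers are consistent with a market of very roughly $356 million in 2021 growing to about $815 million by 2025, under these assumptions.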

The market is driven by maximizing profits in farm operations, higher adoption of robots in agriculture, and the development of deep-learning technology. This study also identifies the advances in AI technology as another prime reason driving industry growth during the next few years.

The analysis of the artificial intelligence (AI) market in the agriculture industry includes the application segment and the geographic landscape.

The report on the artificial intelligence (AI) market in the agriculture industry covers market sizing, segmentation by application, and the vendor landscape.

The robust vendor analysis is designed to help clients improve their market position. In line with this, the report provides a detailed analysis of several leading vendors in the AI-in-agriculture market, including Ag Leader Technology, aWhere Inc., Corteva Inc., Deere & Co., DTN LLC, GAMAYA, International Business Machines Corp., Microsoft Corp., Raven Industries Inc., and Trimble Inc. The report also includes information on upcoming trends and challenges that will influence market growth, to help companies strategize and leverage all forthcoming growth opportunities.

Key Topics Covered:

Executive Summary

Market Landscape

Market Sizing

Five Forces Analysis

Market Segmentation by Application

Customer Landscape

Geographic Landscape

Vendor Landscape

Vendor Analysis

For more information about this report visit https://www.researchandmarkets.com/r/58dom9

About ResearchAndMarkets.com

ResearchAndMarkets.com is the world's leading source for international market research reports and market data. We provide you with the latest data on international and regional markets, key industries, the top companies, new products and the latest trends.

For Small Businesses, Mastering Googles AI Is A Better Investment Than Paid Ads – Heres Why – Forbes

SEO, or Search Engine Optimization, has become increasingly popular over the past few years among e-commerce businesses exploring new marketing efforts - primarily due to the inconsistency and unprofitability of modern-day paid advertising. This strategy seeks to help businesses rank higher in search engines (like Google) to increase both the quality and quantity of traffic to their website.

The problem with most advertising strategies is that as you scale a campaign up, your costs scale up too, and profitability inevitably dwindles. That's not even taking into consideration the day-to-day volatility of these campaigns.

The key difference between these types of paid advertising campaigns and SEO is that your costs stay more or less the same while traffic and revenue scale up. This leads to profit margins you may have previously thought were unattainable for a specific business.
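The scaling argument can be made concrete with a toy model (all numbers below are hypothetical, not from the article): when acquisition cost is charged per visitor, as with paid ads, the margin stays flat as traffic grows, while with a roughly fixed SEO cost the margin improves with scale.

```python
# Toy comparison: per-visitor ad spend vs. a fixed SEO cost.
def margin(revenue: float, cost: float) -> float:
    """Profit margin as a fraction of revenue."""
    return (revenue - cost) / revenue

revenue_per_visitor = 2.0   # hypothetical
ad_cost_per_visitor = 1.5   # hypothetical
seo_fixed_cost = 3000.0     # hypothetical monthly retainer

for visitors in (10_000, 100_000):
    rev = visitors * revenue_per_visitor
    ads_margin = margin(rev, visitors * ad_cost_per_visitor)  # stays flat
    seo_margin = margin(rev, seo_fixed_cost)                  # rises with scale
    print(visitors, round(ads_margin, 2), round(seo_margin, 2))
```

The model ignores the real costs of SEO (content production, ongoing optimization), which are not strictly fixed; it only illustrates the directional claim in the article.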

SEO takes into consideration user experience, relevancy, authority, and more to determine where exactly your website ranks in Google.

All of these things combine to make up what is known as the Google Algorithm - a piece of artificial intelligence that serves as the deciding factor in whether you gain an explosion in traffic from page 1 rankings, or your website gets buried in the depths of the SERPs.

The idea behind the algorithm is simple - the AI seeks to present the most relevant, high-quality content possible to Google's users - aka searchers. When users find exactly what they're looking for on Google, they have a good experience - and websites that help users have a good experience get rewarded with higher rankings.

However, mastering Google's AI takes expertise. Keval Shah is the founder and CEO of Inbound Pursuit - an SEO agency focused primarily on e-commerce businesses looking to top the rankings in their respective niches.

In his previous role at a social media agency, Shah discovered that most paid marketing campaigns didn't produce the ROI they were expected to. These campaigns typically had lower profit margins and conversion rates than expected, along with inconsistent sales.

This is where SEO comes in to balance the scale. Conversion rates are way higher, because you're dealing with people already looking for the types of offerings you have. Profit margins soar as your traffic scales, because your costs stay the same while revenue rises. And while the algorithm does change from time to time, volatility is minimal on Google when SEO is done correctly.

Keval states, "Ad costs are always rising, resulting in less frequent sales and lower profit margins - especially as you scale. More recently, instability from iOS 14 has caused many businesses to see their revenue cut in half, as they put all their eggs in one basket - paid traffic. SEO is the solution to the modern business's marketing woes in 2021."

According to Keval Shah, finding the right agency is the most important factor in whether you will be able to master Google's AI.

Keval Shah

"So many business owners have been burned by SEO agencies that don't know what they're doing," Keval says. "This is one of my biggest motivators - changing the narrative around what you should expect when working with an SEO agency."

Shah believes that SEO is something that needs to be personalized for each business. Because the conditions of Google's AI are always changing, he believes it is crucial to constantly test and implement new tactics to make sure your campaign is at the leading edge of what's possible with SEO.

However, don't get bogged down too much by the newest version of the algorithm, because the goal is always the same - create a useful, relevant website with high-quality content that users will actually benefit from.

As more businesses discover the huge benefits of mastering Google's AI and SEO, and realize that paid advertising is becoming an unprofitable rat race in 2021, it is likely that more companies will begin supplementing paid advertisements with SEO strategy.

Keep an eye on this trend; your business will not want to be left behind when it comes to adopting this innovative AI-forward marketing strategy.

View post:

For Small Businesses, Mastering Googles AI Is A Better Investment Than Paid Ads - Heres Why - Forbes

Australia, South Africa have recognised AI as inventor. International patent law needs to catch up – The Indian Express

If you prick them, they do not bleed; if you tickle them, they do not laugh and for now, they do not revenge. But Artificial Intelligence, in more and more jurisdictions, can now invent, create and file patents. DABUS, a creativity machine, has been recognised as an inventor for a type of food container that improves grip and heat transfer. It might be easy to dismiss this development as another way for corporations to protect profits or fear it as yet another step towards the AI apocalypse. But the problem and the subsequent need for patent protection is not merely one of technology.

Ryan Abbott, a law professor at the University of Surrey, has been campaigning for the better part of a decade to grant AIs near-person status in international patent law. While the EU and US patent laws still do not allow AI to be regarded as an owner, there is increasing pressure on these countries to do so. And there is some merit to the argument that Abbott and his colleagues are making.

AI can perform calculations, analyse data and even generate novel ideas and systems at a far faster pace, and in greater volume, than human minds. In practice, this could mean, for example, that the vaccine for the next pandemic is discovered by a thinking machine. For the West, particularly the US, development and deployment of AI is something that will have to be undertaken on a much larger scale to compete with China both strategically and economically. However, without adequate patent law, where and how AIs are deployed by corporations and individuals could be limited. That's really the rub of it: while the inventor may be artificial, the owner is still human - often greedily so. The law is yet to catch up, in most places, with the reality of how much thinking and innovating machines now undertake. And without legal clarity on IP and patents, there will always be someone who gets an undue advantage.

This editorial first appeared in the print edition on August 24, 2021 under the title Machine Law.

Elevatus’ AI Technology is Creating Huge Momentum in the KSA Market – PRNewswire

In line with Vision 2030, Elevatus aims to increase work efficiency through AI and emerging technologies that seamlessly automate HR processes in today's hiring space. Key players and leading organizations in the Kingdom such as Al Habib Medical Group, Middle East Propulsion Company and more, have achieved significant business growth and efficiency with Elevatus' advanced AI solutions.

The tech provider aims to harness Vision 2030, as announced by Crown Prince Mohammed bin Salman, and expand its operations in the KSA market. Given that Saudi Arabia aims to create new opportunities for its people with its new vision, Elevatus aligns with this endeavor by supporting the overall economic and social development of the Kingdom. It does so by providing organizations with AI-powered, automated solutions that help them digitally transform their work processes and hire top performers at scale. With Elevatus' AI hiring technologies, organizations can reduce their hiring costs by up to 96% and their time to hire by 80%.

Elevatus is integrated with top tier technology providers such as SAP, Oracle, Zoom, Google Meet, Slack, DocuSign, and over 10,000 job boards including LinkedIn, Glassdoor, and Indeed. Through these integrations, organizations who adopt Elevatus can centralize their processes under one unified umbrella, and digitally transform their work processes.

Elevatus also aims to bring robust and localized AI technology that is tailored to the needs of the KSA market and supports the Arabic language. This will successfully lead to the achievement of Vision 2030, since the technology can drive more job opportunities and help in building a better and more skillful workplace for KSA-based organizations. In addition, the tech provider is building a powerful network of local partners to accelerate its expansion plan by promoting innovation in the country through value-added resellers and fruitful partnerships.

The Senior Manager of HR, Ali Alzahrani, at the Middle East Propulsion Company (MEPC) shares: "Our partnership with Elevatus has played a monumental role in strengthening our innovative capabilities, in preparation for Vision 2030. The AI technology has been a major driver in evolving our work processes, helping us operate at a much faster rate, and significantly enhancing the way we work. Together with Elevatus, we feel well prepared for the future that lies ahead, which is surely supporting us in realizing and achieving our Kingdom's vision with ease."

Elevatus has established a renowned and profound presence in the KSA market with its agile, innovative, and modern AI technology. Companies in the KSA are relying on Elevatus' solutions to fulfill and successfully meet Vision 2030 by implementing and leveraging the power of AI, data science, and machine learning.

Yacoub Zureikat, Co-Founder of Elevatus, says: "The future of AI is changing the world, and it's the fuel of the 21st century. This is why we aim to add great momentum to the Kingdom's vision roadmap with our AI technology. We see the KSA market as one of our biggest opportunities to expand. Through our AI solutions, we strive to help businesses in the KSA decrease time spent on arduous and repetitive tasks, which in turn will significantly increase their productivity, capacity for innovation and creativity, and prepare them for the glorious Vision 2030."

Elevatus Inc. Email: [emailprotected] Phone Number: +962 7 9633 0600

SOURCE Elevatus

The future of EHRs: Google AI head on tossing out the keyboard + innovating data search – Becker’s Hospital Review

While clinicians have often expressed frustration over the way they have to interact with EHRs, Google is working on technology for streamlining functions like data searches and predictive text search, according to Google artificial intelligence head Jeff Dean, PhD.

During a recent episode of a podcast by Eric Topol, MD, and Abraham Verghese, MD, "Medicine and the Machine," Dr. Dean discussed his predictions for how EHRs will evolve in healthcare and some of Google's current projects.

Here are six insights from Dr. Dean, cited in an Aug. 20 Medscape report.

1. Google has worked with other organizations on using deidentified data to refine EHR searches in a way similar to how the tech company trains natural language models, Dr. Dean said. With the natural language models, the researchers aim to use the prefix of a piece of text to predict the next word or sequence of words that is going to occur.

2. An example of natural language models would be a model applied to email messages, so when a person is typing out a message, the AI suggests how they might complete the sentence to save typing, Dr. Dean said.

3. Google is working with the same approach to give clinicians suggestions about what might occur next in the EHR for a particular patient, Dr. Dean said, adding, "If you think about the medical record as a whole sequence of events, and if you have de-identified medical records, you can take a prefix of a medical record and try to predict either the individual events or maybe some high-level attributes about subsequent events, like, 'Will this patient develop diabetes within the next 12 months?'"
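
The prefix-prediction idea Dr. Dean describes mirrors how language models autocomplete text. As a rough illustration only (none of this code is from Google; the event names and data are invented), a toy model can count which coded event tends to follow another in de-identified records and suggest the most likely next event:

```python
from collections import Counter, defaultdict

def train(records):
    # Count which event follows each observed event across all records,
    # analogous to how a language model learns word co-occurrence.
    follow = defaultdict(Counter)
    for events in records:
        for prev, nxt in zip(events, events[1:]):
            follow[prev][nxt] += 1
    return follow

def predict_next(follow, prefix):
    # Suggest the most frequent follower of the prefix's final event.
    counts = follow.get(prefix[-1])
    return counts.most_common(1)[0][0] if counts else None

# Invented, de-identified-style event sequences for illustration.
records = [
    ["visit", "hba1c_high", "metformin"],
    ["visit", "hba1c_high", "metformin"],
    ["visit", "bp_high", "lisinopril"],
]
model = train(records)
print(predict_next(model, ["visit", "hba1c_high"]))  # -> metformin
```

A real system would condition on the full record prefix with a learned model rather than a last-event frequency table, but the interface is the same: given what has happened so far, rank what is likely to happen next.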

4. While the idea of creating an AI model that uses every past medical decision to help inform all future medical decisions is complicated, Dr. Dean said the feat is a "good north star" for potential health IT innovations.

5. Dr. Dean said his group has done some work using an audio recording of a patient-physician conversation to develop a medical note that a clinician can then lightly edit instead of having to type up the entire note.

6. Creating summarized notes from conversations might also be a good assistant tool that not only helps reduce clinician burden but could lead to higher-quality data in the EHR, according to Dr. Dean.

"We all know that often clinicians copy and paste the most recent note and don't really edit it appropriately. That's partly because it's very cumbersome and unwieldy to interact with some of these systems, and speech and voice are a more natural way of creating notes," Dr. Dean said.

Why emotion recognition AI can’t reveal how we feel – The Next Web

The growing use of emotion recognition AI is causing alarm among ethicists. They warn that the tech is prone to racial biases, doesn't account for cultural differences, and is used for mass surveillance. Some argue that AI isn't even capable of accurately detecting emotions.

A new study published in Nature Communications has shone further light on these shortcomings.

The researchers analyzed photos of actors to examine whether facial movements reliably express emotional states.

They found that people use different facial movements to communicate similar emotions. One individual may frown when they're angry, for example, but another would widen their eyes or even laugh.

The research also showed that people use similar gestures to convey different emotions, such as scowling to express both concentration and anger.

Study co-author Lisa Feldman Barrett, a neuroscientist at Northeastern University, said the findings challenge common claims around emotion AI:

"Certain companies claim they have algorithms that can detect anger, for example, when what they really have, under optimal circumstances, are algorithms that can probably detect scowling, which may or may not be an expression of anger. It's important not to confuse the description of a facial configuration with inferences about its emotional meaning."

The researchers used professional actors because they have a functional expertise in emotion: their success depends on them authentically portraying a characters feelings.

The actors were photographed performing detailed, emotion-evoking scenarios. For example: "He is a motorcycle dude coming out of a biker bar just as a guy in a Porsche backs into his gleaming Harley" and "She is confronting her lover, who has rejected her, and his wife as they come out of a restaurant."

The scenarios were evaluated in two separate studies. In the first, 839 volunteers rated the extent to which the scenario descriptions alone evoked one of 13 emotions: amusement, anger, awe, contempt, disgust, embarrassment, fear, happiness, interest, pride, sadness, shame, and surprise.

Next, the researchers used the median rating of each scenario to classify them into 13 categories of emotion.
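
As a rough sketch of that classification step (the data shapes here are invented; the study's actual rating scales and procedure are described in the paper), the median rating per emotion can be computed for each scenario and the scenario assigned to the highest-scoring category:

```python
import statistics

EMOTIONS = ["amusement", "anger", "awe", "contempt", "disgust",
            "embarrassment", "fear", "happiness", "interest", "pride",
            "sadness", "shame", "surprise"]

def classify(ratings):
    # ratings: {emotion: [one score per volunteer]}.
    # Take the median per emotion, then pick the emotion with the
    # highest median as the scenario's category.
    medians = {emo: statistics.median(scores) for emo, scores in ratings.items()}
    return max(medians, key=medians.get)

# Toy scenario: raters score it low on everything except anger.
scenario = {emo: [1, 1, 2] for emo in EMOTIONS}
scenario["anger"] = [4, 5, 5]
print(classify(scenario))  # -> anger
```

Using the median rather than the mean keeps a single outlier rater from shifting a scenario into the wrong category.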

The team then used machine learning to analyze how the actors portrayed these emotions in the photos.

This revealed that the actors used different facial gestures to portray the same categories of emotions. It also showed that similar facial poses didnt reliably express the same emotional category.

The team then asked additional groups of volunteers to assess the emotional meaning of each facial pose alone.

They found that the judgments of the poses alone didn't reliably match the ratings of the facial expressions when they were viewed alongside the scenarios.

Barrett said this shows the importance of context in our assessments of facial expressions:

"When it comes to expressing emotion, a face does not speak for itself."

The study illustrates the enormous variability in how we express our emotions. It also further justifies the concerns around emotion recognition AI, which is already used in recruitment, law enforcement, and education.

"AI Day 2021" to bring together worlds top-notch AI experts and researchers – Yahoo Finance

HANOI, VIETNAM --News Direct-- Vingroup

HANOI, VIETNAM - Media OutReach - 23 August 2021 - The online event "AI Day 2021: Empowering Innovations", organized by VinAI Research (the tech arm of Vingroup), will be held on August 27, 2021, aiming to unlock solutions for developing artificial intelligence in Vietnam.

The event anticipates up to 2,000 participants, for the first time bringing together the world's top-tier experts in AI and leading researchers in Vietnam to share ideas and perspectives.

The event will be held online for two days, August 27-28, 2021. The goal of AI Day 2021 is to promote AI research, development and application; contribute to solving challenging problems in the socio-economy; and help Vietnamese businesses apply new technologies to optimize their competitive advantages.

AI Day 2021 will feature 3 themes: AI in Research and Development, AI for Innovations & Global AI Products and AI for Education.

In particular, the topic AI for Innovations will bring new perspectives on the application of AI in solving business challenges. Moreover, at AI Day 2021, autonomous vehicles will be introduced to the Vietnamese public for the first time in terms of technical aspects and approach, with in-depth analysis from AI experts.

Key speakers at AI Day 2021 are among the world's most prestigious and influential AI researchers. One of them, Prof. Michael I. Jordan, a pathfinder of modern-day AI and Machine Learning (ML), will share his expert perspective on AI and speak at the first panel discussion on the morning of August 27. In 2016, he was named the most influential computer scientist worldwide by Science Magazine. Dr. Hung Bui (Director of VinAI Research) will join him in the discussion.

The world-famous speaker line-up at AI Day 2021 also includes Dr. Oren Etzioni (CEO of the Allen Institute for AI, the research institute founded by the late Microsoft co-founder and billionaire Paul Allen), Professor Masashi Sugiyama (Director, RIKEN Center for Advanced Intelligence Project, Japan's No. 1 research institute for AI), Dr. Marian Croak (Vice President of Engineering at Google), Royal Society Research Professor Dr. Andrew Zisserman (Department of Engineering Science, University of Oxford) and many other reputable experts.

"Over the years, AI has become an effective tool that helps solve difficulties as well as create many opportunities for Vietnamese businesses. Despite possessing great potential, the development of AI within the country still faces many challenges. As a leader in AI research and application in Vietnam, we aim to bring Vietnamese AI research and products to the world. Through AI Day 2021, VinAI wants to build a sustainable bridge between the world AI community and Vietnam and, at the same time, help research teams and businesses solve challenges as well as improve technical competency, gradually reaching out to the world," shared Dr. Hung Bui, Director of VinAI Research.

This is the third time AI Day has been held in Vietnam, and the event is expected to attract thousands of online participants. The innovations in the event program as well as the gathering of leading names in the industry have confirmed the constant growth of VinAI.

Nearly three years after its establishment, besides world-class research papers published at top-tier AI conferences such as ICML, NeurIPS and CVPR, VinAI has also laid the foundation for a future generation of AI professionals with its AI Residency program for outstanding university students.

After 2 years, the first batches of residents in the AI Residency program have published 15 research papers at leading AI conferences. Nineteen full PhD scholarships at universities in the top 20 for Computer Science have been granted to residents, who are on their way to continue their AI research dream.

VinAI has also developed advanced technologies such as facial recognition ranked in the world's top six, AI perception algorithms for smart cars, a driver monitoring system, and AI technology for data management. Together, these products contribute to bringing VinFast's upcoming smart cars to Vietnam and global markets. In the future, VinAI will continue to commercialize its AI products to serve potential customers in Vietnam and global markets, with the goal of creating products with great value and the best experience for users.

"AI Day 2021: Empowering Innovations" will be live-streamed on the VinAI YouTube channel for two days, August 27 and 28. Register to join the event here (https://forms.gle/LKzRY1H7L5sbZMEj9) to get a chance to receive prizes from the organizers. For further information, please go to: https://www.vinai.io/aiday2021/

#VinAIResearch

Vingroup

v.nammh@vingroup.net

https://www.vingroup.net/en

View source version on newsdirect.com: https://newsdirect.com/news/ai-day-2021-to-bring-together-worlds-top-notch-ai-experts-and-researchers-641977920

Cities worldwide band together to push for ethical AI – ComputerWeekly.com

From traffic control and waste management to biometric surveillance systems and predictive policing models, the potential uses of artificial intelligence (AI) in cities are incredibly diverse, and could impact every aspect of urban life.

In response to the increasing deployment of AI in cities - and the general lack of authority that municipal governments have to challenge central government decisions or legislate themselves - London, Barcelona and Amsterdam launched the Global Observatory on Urban AI in June 2021.

The initiative aims to monitor AI deployment trends and promote its ethical use, and is part of the wider Cities Coalition for Digital Rights (CC4DR), which was set up in November 2018 by Amsterdam, Barcelona and New York to promote and defend digital rights. It now has more than 50 cities participating worldwide.

Apart from city participants, the Observatory is also being run in partnership with UN-Habitat, a United Nations initiative to improve the quality of life in urban areas, and research group CIDOB-Barcelona Centre for International Affairs.

According to Michael Donaldson, Barcelona's chief technology officer (CTO), the Observatory is designed to be a "space of collaboration and exchange of knowledge" where cities can share their experiences - both positive and negative - in developing and deploying AI systems.

He said that by sharing best practice in particular, cities will be able to avoid repeating previous mistakes when deploying AI systems.

"We know the benefits AI can give us in terms of having a more proactive administration and better public digital services, but at the same time we need to introduce that ethical dimension around the use of these technologies," said Donaldson, adding that Barcelona is currently undertaking public consultations to define exactly what is and is not ethical when it comes to AI - work that will be shared with the Observatory when complete.

London's chief digital officer (CDO), Theo Blackwell, said his team is taking a similar approach by developing the emerging technology charter for London, which will also be fed back into the Observatory "so that we're not doing this in isolation and we're learning from each other".

Blackwell said that as CDO for London, the opportunity to learn from, and be in active dialogue with, his peers in other cities is "the most valuable information that I get" because it is informed by on-the-ground, practical experience of deploying AI in an urban context, rather than the more legislative focus of think-tanks and government committees.

"We don't have any powers to legislate here, but we do have powers to influence," he said. "Cities are often at the coalface, with our staff directly talking to these technology firms, and that's some way away from the people who make the laws. We can come to the party with that lived experience, and try and shape them in a way that guarantees people safeguards on the one hand, but also promotes innovation in our economy."

Guillem Ramírez, policy adviser on city diplomacy and digital rights at Barcelona City Council, told Computer Weekly that this approach will help cities collaborate internationally to see what "ethical" means in different cultural contexts, and to build a common understanding of what it means to develop AI ethically.

"The first thing that we're doing is identifying the principles of what should be considered ethical when it comes to AI," said Ramírez, adding that the Observatory hopes to have a report finalised on this in September.

"We've been discussing with the cities that are part of the Coalition, and we've identified some of these principles, which include non-discrimination and fairness, but there's also cyber security, transparency, accountability, and so on."

"Then what we're doing is to operationalise them, not in terms of super-concrete indicators, but in terms of guiding questions, because at this point cities are not even developing complex AI systems, so the idea is to lay the ground for scaling up in an ethical way."

Donaldson and Blackwell both stressed that many of the cities taking part in the Observatory are at very different stages of their AI journey, and that anything produced by the Observatory is meant to help guide them along a more ethical path.

At the moment, many of the AI-based technologies and tools being used in urban centres are not the products of the cities own development efforts, but are instead developed in the private sector before being sold or otherwise transferred into the public sector.

For example, the facial-recognition system used in the UK by both the Metropolitan Police Service (MPS) and South Wales Police (SWP), called NeoFace Live, was developed by Japan's NEC Corporation.

However, in August 2020, the Court of Appeal found SWP's use of the technology unlawful - a decision that was partly based on the fact that the force did not comply with its public sector equality duty to consider how its policies and practices could be discriminatory.

The court ruling said: "For reasons of commercial confidentiality, the manufacturer is not prepared to divulge the details so that it could be tested. That may be understandable but, in our view, it does not enable a public authority to discharge its own, non-delegable, duty under section 149."

Asked how cities can navigate the growing closeness of these public-private collaborations, Barcelona City Council's Ramírez said that while cities will need to strike a balance between sensitive company information and the public interest, the city will need to understand how the code is working, and have procedural transparency to understand how decisions are made by the algorithms.

He added: "The functioning of these systems needs to be able to be explained, so that citizens can understand it."

Donaldson said cities will need to develop a set of checks and balances to figure out how to safely navigate public-private AI partnerships in ways that also benefit citizens.

"We might not really know what's going on because your technology is far beyond our knowledge, but what we know is how to deliver public services, how to guarantee the rights of our citizens, and if your technology is going against that, we're going to tell you to stop," he said.

Responding to the same question, Blackwell said the application of AI in cities will happen in many different settings but that, from the examples he has seen, the most useful applications are based on very narrow use cases.

"I think the challenge with city authorities is actually that these technologies can be incredibly useful in narrow use cases," he said. "Sometimes we might be approached by big companies that say there is a wide range of things this tech can do, and I think the art here is to basically say no, we just need these things, and it's not something that builds towards an all-singing, all-dancing universal system, which I think is the kind of default position for many large technology companies."

Blackwell said London plans to let organisations publish data protection impact assessments in the London Data Store so that they can become less of a risk management tool for information governance professionals, and more of an accountability tool that says, "this is how I'm dealing with the questions that were asked about this technology" - that's a key provision in the emerging tech charter.

China’s Baidu launches second chip and a ‘robocar’ as it sets up future in AI and autonomous driving – CNBC

Robin Li (R), CEO of Baidu, sits in the Chinese tech giant's new prototype "robocar", an autonomous vehicle, at the company's annual Baidu World conference on Wednesday, August 18, 2021.

Baidu

GUANGZHOU, China - Chinese internet giant Baidu unveiled its second-generation artificial intelligence chip, its first "robocar" and a rebranded driverless taxi app, underscoring how these new areas of technology are key to the company's future growth.

The Beijing-headquartered firm, known as China's biggest search engine player, has focused on diversifying its business beyond advertising in the face of rising competition and a difficult advertising market in the last few years.

Robin Li, CEO of Baidu, has tried to convince investors the company's future lies in AI and related areas such as autonomous driving.

On Wednesday, at its annual Baidu World conference, the company launched Kunlun 2, its second-generation AI chip. The semiconductor is designed to help devices process huge amounts of data and boost computing power. Baidu says the chip can be used in areas such as autonomous driving and that it has entered mass production.

Baidu's first-generation Kunlun chip was launched in 2018. Earlier this year, Baidu raised money for its chip unit, valuing it at $2 billion.

Baidu also took the wraps off a "robocar," an autonomous vehicle with doors that open up like wings and a big screen inside for entertainment. It is a prototype and the company gave no word on whether it would be mass-produced.

But the concept car highlights Baidu's ambitions in autonomous driving, which analysts predict could be a multibillion dollar business for the Chinese tech giant.

Baidu has also been running so-called robotaxi services in some cities, including Guangzhou and Beijing, where users can hail an autonomous taxi via the company's Apollo Go app in a limited area. On Wednesday, Baidu rebranded that app as "Luobo Kuaipao" as it looks to roll out robotaxis on a mass scale.

Wei Dong, vice president of Baidu's intelligent driving group, told CNBC the company is aiming for mass public commercial availability in some cities within two years.

It's unclear how Baidu will price the robotaxi service.

In June, Baidu announced a partnership with state-owned automaker BAIC Group to build 1,000 driverless cars over the next three years and eventually commercialize a robotaxi service across China.

Baidu also announced four new pieces of hardware, including a smart screen and a TV equipped with Xiaodu, the company's AI voice assistant. Xiaodu is another growth initiative for the company.

Tesla AI Day Starts Today. Here’s What to Watch. – Barron’s

Former defense secretary Donald Rumsfeld said there are known knowns - things people know - known unknowns - things people know they don't know - and unknown unknowns - things people don't realize they don't know. That pretty much sums up autonomous driving technology these days.

It isn't clear how long it will take the auto industry to deliver truly self-driving cars. Thursday evening, however, investors will get an education about what's state of the art when Tesla (ticker: TSLA) hosts its artificial intelligence day.

The event will likely be livestreamed on the company's website beginning around 8 p.m. Eastern time. The company's YouTube channel will likely be one place to watch the event. Other websites will carry the broadcast as well. The company didn't respond to a request for comment about the agenda for the event, but has said it will be available to watch.

Much of what will get talked about won't be a surprise, even if investors don't understand it all. Those are known unknowns.

Tesla should update investors about its driver-assistance feature dubbed "full self driving." What's more, the company will describe the benefit of vertical integration: Tesla makes the hardware - its own computers with its own microchips - and its software. Tesla might even give a more definitive timeline for when Level 4 autonomous vehicles will be ready.

Roth Capital analyst Craig Irwin doesn't believe Level 4 technology is on the horizon, though. He tells Barron's the computing power and camera resolution just aren't there yet. "Tesla will work hard to suggest tech leadership in AI for automotive," says Irwin. "Reality will probably be much less exciting than their claims."

Irwin rates Tesla shares Hold. His price target is just $150 a share.

The car industry essentially defines five levels of autonomous driving. Level 1 is nothing more than cruise control. Level 2 systems are available on cars today and combine features such as adaptive cruise and lane-keeping assistance, enabling the car to do a lot on its own. Drivers, however, still need to pay attention 100% of the time with Level 2 systems.

Level 3 systems would allow drivers to stop paying attention part of the time. Level 4 would let them stop paying attention most of the time. And Level 5 means the car does everything, always. "Level 5 autonomy isn't an easy endeavor," says Global X analyst Pedro Palandrani. "There are so many unique cases for technology to tackle, like in bad weather or dirt roads. But Level 4 is enough to change the world," he added. He is more optimistic than Irwin about the timing for Level 4 systems and hopes Tesla provides more timing detail at its event.
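
The five-level taxonomy the article walks through can be captured in a small lookup table; the descriptions below paraphrase the article, not the formal SAE J3016 definitions:

```python
# Paraphrase of the article's summary of the five levels of autonomous driving.
LEVELS = {
    1: "cruise control only; driver does everything else",
    2: "adaptive cruise + lane keeping; driver must pay attention 100% of the time",
    3: "driver may stop paying attention part of the time",
    4: "driver may stop paying attention most of the time",
    5: "the car does everything, always",
}

def driver_must_monitor(level):
    # Per the article, only Levels 1 and 2 require constant driver attention.
    return level <= 2

print(driver_must_monitor(2))  # -> True
print(driver_must_monitor(4))  # -> False
```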

Beyond a technology rundown and Level 4 timing, the company might have some surprises up its sleeve for investors. Palandrani has two ideas.

For starters, Tesla might indicate it's willing to sell its hardware and software to other car companies. That would give Tesla other, unexpected sources of income. Tesla already offers its "full self driving" feature as a monthly subscription to owners of its cars. That's new for the car industry and opens up a source of recurring revenue for anyone with the requisite technology. Selling hardware and software to other car companies, however, would be new, and surprising, for investors.

Tesla might also talk about its advancements in robotics. CEO Elon Musk has often talked in the past about the difficulty of making "the machine that makes the machine." Some of Tesla's AI efforts might also be targeted at building, and not just driving, vehicles. "We're just making a crazy amount of machinery internally," said Musk on the company's second-quarter conference call. "This is ... not well understood."

Those are two items that can surprise. Whether they, or other tidbits, will move the stock is something else entirely.

Tesla stock dropped about 7% over Monday and Tuesday, partly because NHTSA disclosed it was looking into accidents involving Tesla's driver-assistance features. Tesla will surely stress the safety benefits of driver-assistance features on Thursday; whether it can shake off that bit of bad news, though, is harder to tell.

"Thursday becomes a much more important event in light of this week's [NHTSA] probe," says Wedbush analyst Dan Ives. "This week has been another tough week for Tesla [stock] and the Street needs some good news heading into this AI event."

Ives rates Tesla shares Buy and has a $1,000 price target for the stock. Tesla's autonomous-driving leadership is part of his bullish take on shares.

If history is any guide, investors should expect volatility. Tesla stock dropped 10% the day following its battery technology event in September 2020. It took shares about seven trading days to recover, and Tesla stock gained about 86% from the battery event to year-end.

Tesla stock is down about 6% year to date, trailing the comparable gains of 18% and 15% for the S&P 500 and Dow Jones Industrial Average, respectively. Tesla stock hasn't moved much, in absolute terms, since March. Shares were in the high $600s back then. They closed down 3% at $665.71 on Tuesday, but are up 1.3% at $674.19 in premarket trading Wednesday.

Write to allen.root@dowjones.com
