Daily Archives: June 11, 2022

Ron Paul: Respect The Fed? No, End The Fed – OpEd – Eurasia Review

Posted: June 11, 2022 at 2:15 am

President Joe Biden has unveiled a three-part plan to fight inflation, or at least to make people think he is fighting inflation. One part of the plan involves having government agencies fix the supply chain problems that have led to shortages of numerous products. Of course, any attempt by the government to solve the supply chain problems (which were caused by prior government interventions such as shutting down the economy for over a year) will not just fail to solve the supply shortages but will create new problems.

Deficit reduction is another part of Biden's anti-inflation plan. However, Biden is not proposing cutting welfare or warfare spending. Instead, his deficit reduction plan consists of tax reforms to increase revenue, which is DC-speak for tax increases. History shows that tax increases unaccompanied by spending cuts end up increasing the deficit.

The last and most important part of Biden's inflation plan is recognizing that the Federal Reserve has the primary responsibility to control inflation. President Biden has pledged to respect the Fed's independence, unlike former President Trump, who Biden accused of demeaning the Fed by subjecting the central bank to mean tweets.

It is hard to believe that someone who has been in DC as long as Joe Biden really thinks Donald Trump was the first President to try to influence the Fed's conduct of monetary policy. Since the Fed's creation, Presidents have used public and private pressure to convince the Fed to tailor monetary policy to advance their policy and political goals. When it comes to demeaning the Fed, Trump has nothing on Lyndon Johnson, who, frustrated over the Fed's refusal to tailor monetary policy to finance the Great Society and the Vietnam War, threw the Fed chairman against a wall.

By passing the buck on inflation, Biden no doubt hopes to deflect blame from himself and his party before the midterm elections. Unlike Biden's previous inflation scapegoats, greedy corporations and Vladimir Putin, the Fed actually is responsible for creating and controlling inflation.

Price increases in specific sectors of the economy may be caused by a variety of factors, but economy-wide price increases are always the result of the Federal Reserves easy money policies. Inflation is actually the act of money-creation by the central bank. Widespread price increases are a symptom, not a cause, of inflation.

Federal Reserve Chairman Jerome Powell remains committed to more rate increases this year. However, even if the Fed follows through on all its projected rate increases, rates will still be at historic lows. While there are those on the Fed board who want more and bigger rate increases, others worry that going too far too fast in increasing rates will cause a recession. Already, many economic experts are saying America should be prepared for an increase in unemployment caused by the Fed's efforts to vanquish inflation. This tradeoff between high prices and high unemployment illustrates the insanity of our monetary policy.

Treasury Secretary and former Fed Chair Janet Yellen and Chairman Powell have both admitted they were wrong to publicly dismiss inflation as "transitory." The fact that the two most recent Fed chairs made such a huge blunder (or purposely refused to admit what was clear to many people for over a year) shows the folly of relying on a secretive central bank to manage monetary policy. Instead of respecting the Fed's independence, President Biden should work with Congress to audit, then end the Fed.

This article was published by the Ron Paul Institute.



Grassley Joins Barrasso on Letter to HHS Secretary Becerra on Transitioning from the COVID-19 Public Health Emergency – Senator Chuck Grassley

Posted: at 2:15 am

WASHINGTON – Sen. Chuck Grassley (R-Iowa) joined Sen. John Barrasso (R-Wyo.) and 24 Senate colleagues in urging Department of Health and Human Services (HHS) Secretary Xavier Becerra to provide Congress, patients and providers with additional insight on the Department's plans for transitioning out of the COVID-19 public health emergency.

The letter specifically requests information on how changes in temporary, pandemic-related policies will affect Medicare, Medicaid and Children's Health Insurance Program (CHIP) patients and providers in the coming months.

"As the American people return to normalcy, workers, families, frontline health care providers, and a range of other stakeholders need transparency and certainty regarding the path forward," the Senators wrote. "This unpredictable patchwork of mandates and questionable authorities will continue to erode the public's confidence in government health agencies. For frontline health care providers and patients, the administration's erratic approach to transitioning beyond a perpetual state of pandemic emergency could prove particularly problematic."

In addition to Grassley and Barrasso, the letter was signed by Sens. John Boozman (R-Ark.), Mike Braun (R-Ind.), Richard Burr (R-N.C.), Shelley Moore Capito (R-W.Va.), Bill Cassidy (R-La.), John Cornyn (R-Texas), Mike Crapo (R-Idaho), Steve Daines (R-Mont.), Joni Ernst (R-Iowa), Deb Fischer (R-Neb.), Jim Inhofe (R-Okla.), James Lankford (R-Okla.), Cynthia Lummis (R-Wyo.), Roger Marshall (R-Kansas), Rand Paul (R-Ky.), Rob Portman (R-Ohio), Jim Risch (R-Idaho), Marco Rubio (R-Fla.), Ben Sasse (R-Neb.), Rick Scott (R-Fla.), Tim Scott (R-S.C.), Dan Sullivan (R-Alaska), John Thune (R-S.D.) and Todd Young (R-Ind.).



Guardians rally for 3 in 9th, send A’s to 10th straight loss – RiverBender.com

Posted: at 2:15 am

AP – Jun 11, 2022

(AP photo gallery by Ron Schwane: scenes from the Guardians' 3-2 win over the Athletics on Friday, June 10, 2022, in Cleveland, including José Ramírez's ninth-inning home run and Oscar Gonzalez scoring the winning run on Luke Maile's sacrifice fly.)

CLEVELAND (AP) – José Ramírez doubled twice, then homered to begin a three-run rally in the bottom of the ninth inning as the Cleveland Guardians sent Oakland to its 10th straight loss, beating the Athletics 3-2 on Friday night.

The A's are stuck in their first double-digit skid since 2011 and have been outscored 60-20 during the streak. Oakland has the worst record in the American League at 20-40 and has not won since May 29 against Texas.


"Every time you think you're going to get a break, they generally don't go your way," A's manager Mark Kotsay said. "It's never easy getting out of these situations. You have to earn them yourself."

Ramírez, who leads the majors with 56 RBIs, hit his 16th homer to lead off the ninth against Dany Jiménez (2-4). Cleveland then loaded the bases with no outs and Owen Miller delivered the tying sacrifice fly.

Sam Moll relieved and gave up an infield single to Steven Kwan that again loaded the bases. Luke Maile followed with a sacrifice fly that scored rookie Oscar Gonzalez, setting off a celebration in the rain that unexpectedly arrived during the inning.

"José is the best player in baseball, I've said it 50 times," Guardians designated hitter Josh Naylor said. "He's incredibly clutch. When he comes up in a close game, you know something is going to go down. He's incredible."

Ramírez was the only baserunner to get past second until the ninth for the young Guardians, who have won nine of 11 and moved two games above .500.

Gonzalez went 1 for 4, giving him hits in 13 of his first 14 career games. Roger Maris held the previous Cleveland franchise mark with 12.

"Sometimes you just get out of their way because you don't want to make them nervous," Guardians manager Terry Francona said. "We're going up against some men and we've got some kids, and they're doing OK."

Oakland right-hander Paul Blackburn pitched eight shutout innings in the longest outing of his career, allowing four hits and striking out three to lower his road ERA to 0.93.

Converted outfielder Anthony Gose (2-0) struck out two of the three batters he faced in the ninth. Cleveland starter Triston McKenzie worked six innings, allowing solo homers by Seth Brown and Sean Murphy.


Brown homered in the first and Murphy went deep in the second. The A's have 37 home runs -- the second fewest in baseball -- and only managed five hits to drop their league-low batting average to .209.

"That's a good team and they're hot right now," Blackburn said. "Times like this are tough for anybody, but you try to come in every day with a clear mind and not look at any streak."

DOWNWARD SPIRAL

Athletics RHP Lou Trivino, who posted a team-high 22 saves in 2021, is tied for the most losses by a reliever in the American League with five. The deposed closer has a 9.20 ERA in 21 appearances this season, allowing 15 earned runs in 14 2/3 innings. "Lou is one of the guys in the bullpen that we need to have success," Kotsay said. "And he's had it here before."
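For readers unfamiliar with the statistic, the 9.20 figure follows from the standard earned run average formula (nine times earned runs divided by innings pitched), using the 15 earned runs and 14 2/3 innings cited above:

\[
\text{ERA} = \frac{9 \times \text{ER}}{\text{IP}} = \frac{9 \times 15}{14\tfrac{2}{3}} = \frac{135}{44/3} \approx 9.20
\]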

TRAINER'S ROOM

Athletics: 2B Jed Lowrie (wrist, shoulder soreness) was not in the lineup after being involved in a collision on the bases Thursday. Kotsay said Lowrie is pretty sore and has been in for treatment, but there is no guarantee he'll be available off the bench. Lowrie has gone hitless in nine straight at-bats as part of a 5-for-42 slump.

Guardians: RHP Aaron Civale (left gluteal soreness), who was injured May 20 against Detroit, will make a second rehab start for Triple-A Columbus. Civale threw 50 pitches in two innings Thursday, allowing two runs at Indianapolis. "By his account, Aaron was a little rusty, so he'll pitch again in five days," manager Terry Francona said.

UP NEXT

Athletics: RHP Frankie Montas (2-6, 3.06 ERA) seeks to stop his career-long losing streak at five. Montas has a 2.87 ERA and is holding opponents to a .214 average over his past nine starts, but has not earned a win.

Guardians: RHP Zach Plesac (2-4, 4.72 ERA) has one win in his last seven starts, striking out eight over six innings in a 3-2 victory at Baltimore on June 5. Plesac has a 1-3 record with a 6.21 ERA during the timeframe.

___

More AP baseball: https://apnews.com/hub/MLB and https://twitter.com/AP_Sports



Why it’s so hard to market enterprise AI/ML products and what to do about it – TechCrunch

Posted: at 2:14 am

Mike Tong has over a decade of experience leading GTM strategy and operations for tech and data companies as part of McKinsey TMT, AtSpoke, Splunk and the VC firm B Capital.

In 2019, I led the sales team and growth strategy for a venture-backed AI company called atSpoke. The company, which Okta ultimately acquired, used AI to augment traditional IT services management and internal company communication.

At a very early stage, our conversion rate was high. As long as our sales team could talk to a prospect, and that prospect spent time with the product, they would more often than not become a customer. The problem was getting enough strong prospects to connect with the sales team.

The traditional SaaS playbook for demand generation didn't work. Buying ads and building communities focused on AI were both expensive and drew in enthusiasts who lacked buying power. Buying search terms for our specific value propositions (e.g., auto-routing requests) didn't work because the concepts were new and no one was searching for those terms. Finally, terms like "workflows" and "ticketing," which were more common, brought us into direct competition with whales like ServiceNow and Zendesk.

In my role advising growth-stage enterprise tech companies as part of B Capital Group's platform team, I observe similar dynamics across nearly every AI, ML and advanced predictive analytics company I speak with. Healthy pipeline generation is the bugbear of this industry, yet there is very little content on how to address it.

There are four key challenges that stand in the way of demand generation for AI and ML companies, and tactics for addressing those challenges. While there is no silver bullet (no secret AI buyer conference in Santa Barbara or ML enthusiast Reddit thread), these tips should help you structure your approach to marketing.

If you're reading this, you likely know the story of Salesforce and SaaS as a category, but the brilliance bears repeating. When the company started in 1999, software as a service didn't exist. In the early days, no one was thinking, "I need to find a SaaS CRM solution." The business press called the company an "online software service" or a "web service."

Salesforce's early marketing focused on the problems of traditional sales software. The company memorably staged an "end of software" protest in 2000. (Salesforce still uses that messaging.) CEO Marc Benioff also made a point of repeating the term "software as a service" until it caught on. Salesforce created the category they dominated.

AI and ML companies face a similar dynamic. While terms like "machine learning" are not new, specific solutions areas like "decision intelligence" don't fall within a clear category. In fact, even grouping AI/ML companies is awkward, as there is so much crossover with business intelligence (BI), data, predictive analytics and automation. Companies in even newer categories can map to terms like "continuous integration" or "container management."



Give this AI a few words of description and it produces a stunning image – but is it art? – The Conversation

Posted: at 2:14 am

A picture may be worth a thousand words, but thanks to an artificial intelligence program called DALL-E 2, you can have a professional-looking image with far fewer.

DALL-E 2 is a new neural network algorithm that creates a picture from a short phrase or sentence that you provide. The program, which was announced by the artificial intelligence research laboratory OpenAI in April 2022, hasn't been released to the public. But a small and growing number of people, myself included, have been given access to experiment with it.

As a researcher studying the nexus of technology and art, I was keen to see how well the program worked. After hours of experimentation, it's clear that DALL-E, while not without shortcomings, is leaps and bounds ahead of existing image generation technology. It raises immediate questions about how these technologies will change how art is made and consumed. It also raises questions about what it means to be creative when DALL-E 2 seems to automate so much of the creative process itself.

OpenAI researchers built DALL-E 2 from an enormous collection of images with captions. They gathered some of the images online and licensed others.

Using DALL-E 2 looks a lot like searching for an image on the web: you type a short phrase into a text box, and it gives back six images.

But instead of being culled from the web, the program creates six brand-new images, each of which reflects some version of the entered phrase. (Until recently, the program produced 10 images per prompt.) For example, when some friends and I gave DALL-E 2 the text prompt "cats in devo hats," it produced 10 images that came in different styles.
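To make the prompt-in, images-out loop concrete, here is a minimal Python sketch. It is illustrative only: the DALL-E 2 research preview described in this article was accessed through a web interface, not a public API, so the sketch uses the image-generation endpoint OpenAI later exposed through its pre-1.0 Python SDK; the API key is a placeholder and the batch size of six simply mirrors the article's description.

```python
# Illustrative only: the DALL-E 2 research preview described in this article had no
# public API in mid-2022. This sketch shows the same prompt-in, images-out loop using
# the image endpoint OpenAI later exposed via its (pre-1.0) Python SDK.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder: assumes you have an API key with image access

def generate_images(prompt: str, n: int = 6) -> list[str]:
    """Send a short text prompt and get back a list of image URLs."""
    response = openai.Image.create(prompt=prompt, n=n, size="1024x1024")
    return [item["url"] for item in response["data"]]

if __name__ == "__main__":
    # The article's example prompt; each run returns a different batch of candidates.
    for url in generate_images("cats in devo hats", n=6):
        print(url)
```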

Nearly all of them could plausibly pass for professional photographs or drawings. While the algorithm did not quite grasp "Devo hat" (the strange helmets worn by the New Wave band Devo), the headgear in the images it produced came close.

Over the past few years, a small community of artists have been using neural network algorithms to produce art. Many of these artworks have distinctive qualities that almost look like real images, but with odd distortions of space, a sort of cyberpunk Cubism. The most recent text-to-image systems often produce dreamy, fantastical imagery that can be delightful but rarely looks real.

DALL-E 2 offers a significant leap in the quality and realism of the images. It can also mimic specific styles with remarkable accuracy. If you want images that look like actual photographs, it'll produce six life-like images. If you want prehistoric cave paintings of Shrek, it'll generate six pictures of Shrek as if they'd been drawn by a prehistoric artist.

It's staggering that an algorithm can do this. Each set of images takes less than a minute to generate. Not all of the images will look pleasing to the eye, nor do they necessarily reflect what you had in mind. But, even with the need to sift through many outputs or try different text prompts, there's no other existing way to pump out so many great results so quickly, not even by hiring an artist. And, sometimes, the unexpected results are the best.

In principle, anyone with enough resources and expertise can make a system like this. Google Research recently announced an impressive, similar text-to-image system, and one startup, HuggingFace, is publicly developing their own version that anyone can try right now on the web, although it's not yet as good as DALL-E or Google's system.

It's easy to imagine these tools transforming the way people make images and communicate, whether via memes, greeting cards, advertising and, yes, art.

I had a moment early on while using DALL-E 2 to generate different kinds of paintings, in all different styles, like "Odilon Redon painting of Seattle," when it hit me that this was better than any painting algorithm I've ever developed. Then I realized that it is, in a way, a better painter than I am.

In fact, no human can do what DALL-E 2 does: create such a high-quality, varied range of images in mere seconds. If someone told you that a person made all these images, of course you'd say they were creative.

But this does not make DALL-E 2 an artist. Even though it sometimes feels like magic, under the hood it is still a computer algorithm, rigidly following instructions from the algorithm's authors at OpenAI.

If these images succeed as art, they are products of how the algorithm was designed, the images it was trained on, and, most importantly, how artists use it.

You might be inclined to say there's little artistic merit in an image produced by a few keystrokes. But in my view, this line of thinking echoes the classic take that photography cannot be art because a machine did all the work. Today the human authorship and craft involved in artistic photography are recognized, and critics understand that the best photography involves much more than just pushing a button.

Even so, we often discuss works of art as if they directly came from the artist's intent. The artist intended to show a thing, or express an emotion, and so they made this image. DALL-E 2 does seem to shortcut this process entirely: you have an idea and type it in, and you're done.

But when I paint the old-fashioned way, I've found that my paintings come from the exploratory process, not just from executing my initial goals. And this is true for many artists.

Take Paul McCartney, who came up with the track "Get Back" during a jam session. He didn't start with a plan for the song; he just started fiddling and experimenting and the band developed it from there.

Picasso described his process similarly: "I don't know in advance what I am going to put on canvas any more than I decide beforehand what colors I am going to use ... Each time I undertake to paint a picture I have a sensation of leaping into space."

In my own explorations with DALL-E 2, one idea would lead to another, which led to another, and eventually I'd find myself in a completely unexpected, magical new terrain, very far from where I'd started.

I would argue that the art, in using a system like DALL-E 2, comes not just from the final text prompt, but from the entire creative process that led to that prompt. Different artists will follow different processes and end up with different results that reflect their own approaches, skills and obsessions.

I began to see my experiments as a set of series, each a consistent dive into a single theme, rather than a set of independent wacky images.

Ideas for these images and series came from all around, often linked by a set of stepping stones. At one point, while making images based on contemporary artists' work, I wanted to generate an image of site-specific installation art in the style of the contemporary Japanese artist Yayoi Kusama. After trying a few unsatisfactory locations, I hit on the idea of placing it in La Mezquita, a former mosque and church in Córdoba, Spain. I sent the picture to an architect colleague, Manuel Ladrón de Guevara, who is from Córdoba, and we began riffing on other architectural ideas together.

This became a series on imaginary new buildings in different architects' styles.

So I've started to consider what I do with DALL-E 2 to be both a form of exploration and a form of art, even if it's often amateur art, like the drawings I make on my iPad.

Indeed some artists, like Ryan Murdoch, have advocated for prompt-based image-making to be recognized as art. He points to the experienced AI artist Helena Sarin as an example.

"When I look at most stuff from Midjourney," another popular text-to-image system, "a lot of it will be interesting or fun," Murdoch told me in an interview. "But with [Sarin's] work, there's a through line. It's easy to see that she has put a lot of thought into it, and has worked at the craft, because the output is more visually appealing and interesting, and follows her style in a continuous way."

Working with DALL-E 2, or any of the new text-to-image systems, means learning its quirks and developing strategies for avoiding common pitfalls. It's also important to know about its potential harms, such as its reliance on stereotypes, and potential uses for disinformation. Using DALL-E 2, you'll also discover surprising correlations, like the way everything becomes old-timey when you use an old painter, filmmaker or photographer's style.

When I have something very specific I want to make, DALL-E 2 often can't do it. The results would require a lot of difficult manual editing afterward. It's when my goals are vague that the process is most delightful, offering up surprises that lead to new ideas that themselves lead to more ideas and so on.

These text-to-image systems can help users imagine new possibilities as well.

Artist-activist Danielle Baskin told me that she always works to show alternative realities by real example: either by setting scenarios up in the physical world or doing meticulous work in Photoshop. DALL-E 2, however, is an amazing shortcut because it's so good at realism. And that's key to helping others bring possible futures to life, whether it's satire, dreams or beauty.

She has used it to imagine an alternative transportation system and plumbing that transports noodles instead of water, both of which reflect her artist-provocateur sensibility.

Similarly, artist Mario Klingemann's architectural renderings with the tents of homeless people could be taken as a rejoinder to my architectural renderings of fancy dream homes.

It's too early to judge the significance of this art form. I keep thinking of a phrase from the excellent book "Art in the After-Culture": "The dominant AI aesthetic is novelty."

Surely this would be true, to some extent, for any new technology used for art. The first films by the Lumière brothers in the 1890s were novelties, not cinematic masterpieces; it amazed people to see images moving at all.

AI art software develops so quickly that there's continual technical and artistic novelty. It seems as if, each year, there's an opportunity to explore an exciting new technology, each more powerful than the last, and each seemingly poised to transform art and society.



How AI is driving IAM's shift to digital identity – VentureBeat

Posted: at 2:14 am


Identity and access management (IAM) provider ForgeRock recently held its annual IDLive conference in Austin, Texas. One of the most compelling sessions involved ForgeRock CTO Eve Maler, who discussed the future of IAM and how it's now being heavily infused with artificial intelligence (AI) to make it more effective.

The future that Maler described is very much aligned with the company's mission to help people safely and simply access the connected world and its vision of never having to log in again. While IAM has historically been a part of the IT plumbing to manage employee access within companies, it has emerged as a technology with a significant impact on all users (employees, consumers, citizens and others) in the new post-pandemic digital world that is evolving into Web3.

It's well-documented that the past two years have greatly accelerated digital transformation. We're now in the experience era, in which businesses define themselves by their ease of use and low customer friction. In fact, one datapoint presented during CEO Fran Rosch's keynote is that 90% of businesses now compete on the basis of customer experience. This is consistent with what we see at ZK Research, and we'll add the datapoint that two-thirds of millennials admitted to dropping a brand in 2021 because of a single bad experience.

IAM has a direct impact on user experiences, from the time a customer first signs up for a new service to every subsequent time she accesses that company's products and services. Often, the one bad experience that causes a consumer to drop a brand is the registration or login experience.

The first notable point from Maler's presentation was a more expanded vision of digital identity that replaces the traditional concept of identity in the context of IAM. The latter is an old-school construct for a more traditional workforce environment. Today, a digital identity isn't just our given credential, but it also encapsulates the devices we use, our patterns of behavior, our location and so on.

Our digital identities are used not only at the time of access, but throughout our digital interactions with a company. Traditional IAM solutions that focus only on authenticating users during login might not detect a user whose credentials were stolen and then used by a threat actor overseas. But a modern IAM platform detects anomalous behavior, even after a user has logged in, and can trigger an alert to block access.

That's a basic example, but to realize its vision of simplicity, the ForgeRock platform must work across all systems. "It doesn't matter if there is a heterogeneous environment, no gaps, no lack of scale or performance, it all just has to work," Maler said. This is certainly a bold vision, and AI is the enabler to, as Maler put it, "make the right, intelligent decisions."

The reason AI is needed is to analyze and find insights into increasingly large amounts of data. "We are seeing an ocean of data and our customers are drowning in it and are unable to make the right decisions," Maler said. Most tools that make use of the data are inflexible and a bit dumb, which leads to coarse-grained decisions, resulting in poor experiences. This creates an opportunity for much more automation across the identity lifecycle.

The addition of AI to digital identity will cause this market to shift again, and that shift will be to zero-trust identity (ZTI). Zero trust is obviously a big topic because companies are looking to use the technology to help with the transition to hybrid work.

Most zero trust is done in the network layer, but that causes problems because it's easy for bad actors to hide from the network. When zero trust is used in identity, it follows the digital identity. Maler gave an example of ForgeRock's recently released Autonomous Access product that uses AI/ML to process all the signals associated with a user's digital identity to either give them seamless access, intervene with stepped-up authentication when unsure of the user's identity, or block them when they are fraudulent.
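To illustrate the three-way outcome described above (seamless access, stepped-up authentication, or a block), here is a minimal, hypothetical sketch of a risk-scored access decision. It is not ForgeRock's implementation; the signal names, weights, and thresholds are assumptions, and in a real system the behavioral score would come from an upstream ML model rather than being supplied by hand.

```python
# Hypothetical sketch of a risk-scored, zero-trust access decision.
# Signal names, weights, and thresholds are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class LoginSignals:
    new_device: bool           # device not previously seen for this identity
    unusual_location: bool     # geo-velocity / location anomaly flag
    anomalous_behavior: float  # 0.0-1.0 score from an (assumed) upstream behavioral model

def risk_score(s: LoginSignals) -> float:
    """Combine signals into a single 0-1 risk score (illustrative weights)."""
    score = 0.4 * s.anomalous_behavior
    score += 0.3 if s.new_device else 0.0
    score += 0.3 if s.unusual_location else 0.0
    return min(score, 1.0)

def access_decision(s: LoginSignals) -> str:
    """Return seamless access, step-up authentication, or block, based on risk."""
    score = risk_score(s)
    if score < 0.3:
        return "allow"      # seamless, low-friction access
    if score < 0.7:
        return "step_up"    # e.g., prompt for a second factor
    return "block"          # likely fraudulent; deny and alert

if __name__ == "__main__":
    signals = LoginSignals(new_device=True, unusual_location=False, anomalous_behavior=0.2)
    print(access_decision(signals))  # -> "step_up"
```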

During her presentation, Maler discussed four ways AI will enable zero-trust identity in the future:

Security professionals need to understand that the technology environment has changed. The IT organization no longer has control over apps, where people work, the network, or other infrastructure. In the business-to-consumer world, this IT control is nonexistent. Security controls need to shift to digital identity, and the IAM industry must evolve away from legacy constructs, such as allow/deny access, to an AI-powered analytics system that is always on.




How to use human-centered AI with Forethought and NEA – TechCrunch

Posted: at 2:13 am

Deon Nicholas is the CEO and co-founder of Forethought, the AI company whose mission is to transform customer experiences with human-centered AI. Forethought has raised over $100 million in venture capital, including from NEA, which led Forethought's $9.52 million Series A. Hear from NEA's Vanessa Larco on what convinced the firm to invest in Forethought, and see Forethought's early pitch deck that promised to up-end the customer service industry.

This event opens on June 29 at 11:30 a.m. PDT/2:30 p.m. EDT with networking and pitch-practice submissions. The interview begins at 12 p.m. PDT followed by the TCL Pitch Practice at 12:30 p.m. PDT. Register here for free.

TechCrunch Live records weekly on Wednesdays at 11:30 a.m. PDT/2:30 p.m. EDT. Join us! Click here to register for free and gain access to Forethought's pitch deck, enter the pitch-practice session and access the livestream, where you can ask the speakers questions.



The next frontier for AI in China could add $600 billion to its economy – McKinsey

Posted: at 2:13 am

In the past decade, China has built a solid foundation to support its AI economy and made significant contributions to AI globally. Stanford University's AI Index, which assesses AI advancements worldwide across various metrics in research, development, and economy, ranks China among the top three countries for global AI vibrancy. On research, for example, China produced about one-third of both AI journal papers and AI citations worldwide in 2021. In economic investment, China accounted for nearly one-fifth of global private investment funding in 2021, attracting $17 billion for AI start-ups.

Today, AI adoption is high in China in finance, retail, and high tech, which together account for more than one-third of the country's AI market (see sidebar "Five types of AI companies in China"). In tech, for example, leaders Alibaba and ByteDance, both household names in China, have become known for their highly personalized AI-driven consumer apps. In fact, most of the AI applications that have been widely adopted in China to date have been in consumer-facing industries, propelled by the world's largest internet consumer base and the ability to engage with consumers in new ways to increase customer loyalty, revenue, and market valuations.

So what's next for AI in China?

In the coming decade, our research indicates that there is tremendous opportunity for AI growth in new sectors in China, including some where innovation and R&D spending have traditionally lagged global counterparts: automotive, transportation, and logistics; manufacturing; enterprise software; and healthcare and life sciences. (See sidebar "About the research.") In these sectors, we see clusters of use cases where AI can create upwards of $600 billion in economic value annually. (To provide a sense of scale, the 2021 gross domestic product in Shanghai, China's most populous city of nearly 28 million, was roughly $680 billion.) In some cases, this value will come from revenue generated by AI-enabled offerings, while in other cases, it will be generated by cost savings through greater efficiency and productivity. These clusters are likely to become battlegrounds for companies in each sector that will help define the market leaders.

Unlocking the full potential of these AI opportunities typically requires significant investments (in some cases, much more than leaders might expect) on multiple fronts, including the data and technologies that will underpin AI systems, the right talent and organizational mindsets to build these systems, and new business models and partnerships to create data ecosystems, industry standards, and regulations. In our work and global research, we find many of these enablers are becoming standard practice among companies getting the most value from AI.

To help leaders and investors marshal their resources to accelerate, disrupt, and lead in AI, we dive into the research, first sharing where the biggest opportunities lie in each sector and then outlining the core enablers to be tackled first.

We looked at the AI market in China to determine where AI could deliver the most value in the future. We studied market projections at length and dug deep into country and segment-level reports worldwide to see where AI was delivering the greatest value across the global landscape. We then spoke in depth with experts across sectors in China to understand where the greatest opportunities could emerge next. Our research led us to several sectors: automotive, transportation, and logistics, which are collectively expected to contribute the majority (around 64 percent) of the $600 billion opportunity; manufacturing, which will drive another 19 percent; enterprise software, contributing 13 percent; and healthcare and life sciences, at 4 percent of the opportunity.

Within each sector, our analysis shows the value-creation opportunity concentrated within only two to three domains. These are typically in areas where private-equity and venture-capital-firm investments have been high in the past five years and successful proof of concepts have been delivered.

China's auto market stands as the largest in the world, with the number of vehicles in use surpassing that of the United States. The sheer size (which we estimate will grow to more than 300 million passenger vehicles on the road in China by 2030) provides a fertile landscape of AI opportunities. Indeed, our research finds that AI could have the greatest potential impact on this sector, delivering more than $380 billion in economic value. This value creation will likely be generated predominantly in three areas: autonomous vehicles, personalization for auto owners, and fleet asset management.

Autonomous, or self-driving, vehicles. Autonomous vehicles make up the largest portion of value creation in this sector ($335 billion). Some of this new value is expected to come from a reduction in financial losses, such as medical, first-responder, and vehicle costs. Roadway accidents stand to decrease an estimated 3 to 5 percent annually as autonomous vehicles actively navigate their surroundings and make real-time driving decisions without being subject to the many distractions, such as text messaging, that tempt humans. Value would also come from savings realized by drivers as cities and enterprises replace passenger vans and buses with shared autonomous vehicles.

Already, significant progress has been made by both traditional automotive OEMs and AI players to advance autonomous-driving capabilities to level 4 (where the driver doesn't need to pay attention but can take over controls) and level 5 (fully autonomous capabilities in which inclusion of a steering wheel is optional). For instance, WeRide, which achieved level 4 autonomous-driving capabilities, completed a pilot of its Robotaxi in Guangzhou, with nearly 150,000 trips in one year without any accidents with active liability.

Personalized experiences for car owners. By using AI to analyze sensor and GPS data (including vehicle-parts conditions, fuel consumption, route selection, and steering habits), car manufacturers and AI players can increasingly tailor recommendations for hardware and software updates and personalize car owners' driving experience. Automaker NIO's advanced driver-assistance system and battery-management system, for instance, can track the health of electric-car batteries in real time, diagnose usage patterns, and optimize charging cadence to improve battery life span while drivers go about their day. Our research finds this could deliver $30 billion in economic value by reducing maintenance costs and unanticipated vehicle failures, as well as generating incremental revenue for companies that identify ways to monetize software updates and new capabilities.

Fleet asset management. AI could also prove critical in helping fleet managers better navigate China's immense network of railway, highway, inland waterway, and civil aviation routes, which are some of the longest in the world. Our research finds that $15 billion in value creation could emerge as OEMs and AI players specializing in logistics develop operations research optimizers that can analyze IoT data and identify more fuel-efficient routes and lower-cost maintenance stops for fleet operators. One automotive OEM in China now offers fleet owners and operators an AI-driven management system for monitoring fleet locations, tracking fleet conditions, and analyzing trips and routes. It is estimated to save up to 15 percent in fuel and maintenance costs.
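As a rough illustration of the kind of operations-research optimization described above, the following sketch picks the route with the lowest estimated fuel cost across a toy road graph. The graph, place names, and per-leg costs are invented; a production system would derive edge costs from IoT telemetry and handle far larger networks and many more constraints.

```python
# Toy illustration of route optimization for a fleet: choose the path that minimizes
# estimated fuel cost across a hypothetical road graph. All numbers are made up.
import heapq

def cheapest_route(graph, start, goal):
    """Dijkstra's algorithm over edge weights interpreted as fuel cost (liters)."""
    queue = [(0.0, start, [start])]
    best = {}
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in best and best[node] <= cost:
            continue
        best[node] = cost
        for nxt, fuel in graph.get(node, {}).items():
            heapq.heappush(queue, (cost + fuel, nxt, path + [nxt]))
    return float("inf"), []

if __name__ == "__main__":
    # Hypothetical depots and stops with estimated fuel cost per leg (liters).
    road_graph = {
        "depot": {"hub_a": 42.0, "hub_b": 55.0},
        "hub_a": {"customer": 38.0},
        "hub_b": {"customer": 20.0},
    }
    fuel, route = cheapest_route(road_graph, "depot", "customer")
    print(f"{' -> '.join(route)}: {fuel:.1f} L")  # depot -> hub_b -> customer: 75.0 L
```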

In manufacturing, China is evolving its reputation from a low-cost manufacturing hub for toys and clothes to a leader in precision manufacturing for processors, chips, engines, and other high-end components. Our findings show AI can help facilitate this shift from manufacturing execution to manufacturing innovation and create $115 billion in economic value.

The majority of this value creation ($100 billion) will likely come from innovations in process design through the use of various AI applications, such as collaborative robotics that create the next-generation assembly line, and digital twins that replicate real-world assets for use in simulation and optimization engines. With digital twins, manufacturers, machinery and robotics providers, and system automation providers can simulate, test, and validate manufacturing-process outcomes, such as product yield or production-line productivity, before commencing large-scale production so they can identify costly process inefficiencies early. One local electronics manufacturer uses wearable sensors to capture and digitize hand and body movements of workers to model human performance on its production line. It then optimizes equipment parameters and setups (for example, by changing the angle of each workstation based on the worker's height) to reduce the likelihood of worker injuries while improving worker comfort and productivity.
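The following is a deliberately simplified sketch of the "simulate before you commit" idea behind digital twins: a Monte Carlo model of a two-station production line that compares expected yield under the current setup and a proposed change. The line structure and defect rates are invented for illustration and are far cruder than a real digital twin.

```python
# Minimal "digital twin" style what-if sketch: estimate end-of-line yield for two
# candidate setups before committing to physical changes. All rates are invented.
import random

def simulate_yield(defect_rate_station1: float, defect_rate_station2: float,
                   units: int = 100_000, seed: int = 0) -> float:
    """Monte Carlo estimate of yield for a two-station serial production line."""
    rng = random.Random(seed)
    good = 0
    for _ in range(units):
        if rng.random() < defect_rate_station1:
            continue  # unit scrapped at station 1
        if rng.random() < defect_rate_station2:
            continue  # unit scrapped at station 2
        good += 1
    return good / units

if __name__ == "__main__":
    baseline = simulate_yield(0.03, 0.05)   # current setup (assumed defect rates)
    candidate = simulate_yield(0.03, 0.02)  # proposed change at station 2
    print(f"baseline yield:  {baseline:.3f}")   # roughly 0.92
    print(f"candidate yield: {candidate:.3f}")  # roughly 0.95
```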

The remainder of value creation in this sector ($15 billion) is expected to come from AI-driven improvements in product development. Companies could use digital twins to rapidly test and validate new product designs to reduce R&D costs, improve product quality, and drive new product innovation. On the global stage, Google has offered a glimpse of what's possible: it has used AI to rapidly assess how different component layouts will alter a chip's power consumption, performance metrics, and size. This approach can yield an optimal chip design in a fraction of the time design engineers would take alone.

As in other countries, companies based in China are undergoing digital and AI transformations, leading to the emergence of new local enterprise-software industries to support the necessary technological foundations.

Solutions delivered by these companies are estimated to deliver another $80 billion in economic value. Offerings for cloud and AI tooling are expected to provide more than half of this value creation ($45 billion). In one case, a local cloud provider serves more than 100 local banks and insurance companies in China with an integrated data platform that enables them to operate across both cloud and on-premises environments and reduces the cost of database development and storage. In another case, an AI tool provider in China has developed a shared AI algorithm platform that can help its data scientists automatically train, predict, and update the model for a given prediction problem. Using the shared platform has reduced model production time from three months to about two weeks.

AI-driven software-as-a-service (SaaS) applications are expected to contribute the remaining $35 billion in economic value in this category. Local SaaS application developers can apply multiple AI techniques (for instance, computer vision, natural-language processing, machine learning) to help companies make predictions and decisions across enterprise functions in finance and tax, human resources, supply chain, and cybersecurity. A leading financial institution in China has deployed a local AI-driven SaaS solution that uses AI bots to offer personalized training recommendations to employees based on their career path.

In recent years, China has stepped up its investment in innovation in healthcare and life sciences with AI. China's 14th Five-Year Plan targets 7 percent annual growth by 2025 for R&D expenditure, of which at least 8 percent is devoted to basic research.

One area of focus is accelerating drug discovery and increasing the odds of success, which is a significant global issue. In 2021, global pharma R&D spend reached $212 billion, compared with $137 billion in 2012, with an approximately 5 percent compound annual growth rate (CAGR). Drug discovery takes 5.5 years on average, which not only delays patients' access to innovative therapeutics but also shortens the patent protection period that rewards innovation. Despite improved success rates for new-drug development, only the top 20 percent of pharmaceutical companies worldwide realized a breakeven on their R&D investments after seven years.
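The roughly 5 percent figure can be checked with the standard compound-annual-growth-rate formula, assuming nine compounding years between 2012 and 2021:

\[
\text{CAGR} = \left(\frac{\$212\ \text{billion}}{\$137\ \text{billion}}\right)^{1/9} - 1 \approx 0.049 \approx 5\%
\]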

Another top priority is improving patient care, and Chinese AI start-ups today are working to build the country's reputation for providing more accurate and reliable healthcare in terms of diagnostic outcomes and clinical decisions.

Our research suggests that AI in R&D could add more than $25 billion in economic value in three specific areas: faster drug discovery, clinical-trial optimization, and clinical-decision support.

Rapid drug discovery. Novel drugs (patented prescription drugs) currently account for less than 30 percent of the total market size in China (compared with more than 70 percent globally), indicating a significant opportunity from introducing novel drugs empowered by AI in discovery. We estimate that using AI to accelerate target identification and novel molecules design could contribute up to $10 billion in value. Already more than 20 AI start-ups in China funded by private-equity firms or local hyperscalers are collaborating with traditional pharmaceutical companies or independently working to develop novel therapeutics. Insilico Medicine, by using an end-to-end generative AI engine for target identification, molecule design, and lead optimization, discovered a preclinical candidate for pulmonary fibrosis in less than 18 months at a cost of under $3 million. This represented a significant reduction from the average timeline of six years and an average cost of more than $18 million from target discovery to preclinical candidate. This antifibrotic drug candidate has now successfully completed a Phase 0 clinical study and entered a Phase I clinical trial.

Clinical-trial optimization. Our research suggests that another $10 billion in economic value could result from optimizing clinical-study designs (process, protocols, sites), optimizing trial delivery and execution (hybrid trial-delivery model), and generating real-world evidence. These AI use cases can reduce the time and cost of clinical-trial development, provide a better experience for patients and healthcare professionals, and enable higher quality and compliance. For instance, a global top 20 pharmaceutical company leveraged AI in combination with process improvements to reduce the clinical-trial enrollment timeline by 13 percent and save 10 to 15 percent in external costs. The global pharmaceutical company prioritized three areas for its tech-enabled clinical-trial development. To accelerate trial design and operational planning, it utilized the power of both internal and external data for optimizing protocol design and site selection. For streamlining site and patient engagement, it established an ecosystem with API standards to leverage internal and external innovations. To establish a clinical-trial development cockpit, it aggregated and visualized operational trial data to enable end-to-end clinical-trial operations with full transparency so it could predict potential risks and trial delays and proactively take action.


Clinical-decision support. Our findings indicate that the use of machine learning algorithms on medical images and data (including examination results and symptom reports) to predict diagnostic outcomes and support clinical decisions could generate around $5 billion in economic value. A leading AI start-up in medical imaging now applies computer vision and machine learning algorithms on optical coherence tomography results from retinal images. It automatically searches and identifies the signs of dozens of chronic illnesses and conditions, such as diabetes, hypertension, and arteriosclerosis, expediting the diagnosis process and increasing early detection of disease.

During our research, we found that realizing the value from AI would require every sector to drive significant investment and innovation across six key enabling areas (exhibit). The first four areas are data, talent, technology, and significant work to shift mindsets as part of adoption and scaling efforts. The remaining two, ecosystem orchestration and navigating regulations, can be considered collectively as market collaboration and should be addressed as part of strategy efforts.


Some specific challenges in these areas are unique to each sector. For example, in automotive, transportation, and logistics, keeping pace with the latest advances in 5G and connected-vehicle technologies (commonly referred to as V2X) is crucial to unlocking the value in that sector. Those in healthcare will want to stay current on advances in AI explainability; for providers and patients to trust the AI, they must be able to understand why an algorithm made the decision or recommendation it did.

Broadly speaking, four of these areas (data, talent, technology, and market collaboration) stood out as common challenges that we believe will have an outsized impact on the economic value achieved. Without them, tackling the others will be much harder.

For AI systems to work properly, they need access to high-quality data, meaning the data must be available, usable, reliable, relevant, and secure. This can be challenging without the right foundations for storing, processing, and managing the vast volumes of data being generated today. In the automotive sector, for instance, the ability to process and support up to two terabytes of data per car and road data daily is necessary for enabling autonomous vehicles to understand what's ahead and delivering personalized experiences to human drivers. In healthcare, AI models need to take in vast amounts of omics data to understand diseases, identify new targets, and design new molecules.

Companies seeing the highest returns from AI (those with more than 20 percent of earnings before interest and taxes, or EBIT, contributed by AI) offer some insights into what it takes to achieve this. McKinsey's 2021 Global AI Survey shows that these high performers are much more likely to invest in core data practices, such as rapidly integrating internal structured data for use in AI systems (51 percent of high performers versus 32 percent of other companies), establishing a data dictionary that is accessible across their enterprise (53 percent versus 29 percent), and developing well-defined processes for data governance (45 percent versus 37 percent).

Participation in data sharing and data ecosystems is also crucial, as these partnerships can lead to insights that would not be possible otherwise. For instance, medical big data and AI companies are now partnering with a wide range of hospitals and research institutes, integrating their electronic medical records (EMR) with publicly available medical-research data and clinical-trial data from pharmaceutical companies or contract research organizations. The goal is to facilitate drug discovery, clinical trials, and decision making at the point of care so providers can better identify the right treatment procedures and plan for each patient, thus increasing treatment effectiveness and reducing chances of adverse side effects. One such company, Yidu Cloud, has provided big data platforms and solutions to more than 500 hospitals in China and has, upon authorization, analyzed more than 1.3 billion healthcare records since 2017 for use in real-world disease models to support a variety of use cases including clinical research, hospital management, and policy making.

In our experience, we find it nearly impossible for businesses to deliver impact with AI without business domain knowledge. Knowing what questions to ask in each domain can determine the success or failure of a given AI effort. As a result, organizations in all four sectors (automotive, transportation, and logistics; manufacturing; enterprise software; and healthcare and life sciences) can benefit from systematically upskilling existing AI experts and knowledge workers to become AI translators: individuals who know what business questions to ask and can translate business problems into AI solutions. We like to think of their skills as resembling the Greek letter pi (π). This group has not only a broad mastery of general management skills (the horizontal bar) but also spikes of deep functional knowledge in AI and domain expertise (the vertical bars).

To build this talent profile, some companies upskill technical talent with the requisite skills. One AI start-up in drug discovery, for instance, has created a program to train newly hired data scientists and AI engineers in pharmaceutical domain knowledge such as molecule structure and characteristics. Company executives credit this deep domain knowledge among its AI experts with enabling the discovery of nearly 30 molecules for clinical trials. Other companies seek to arm existing domain talent with the AI skills they need. An electronics manufacturer has built a digital and AI academy to provide on-the-job training to more than 400 employees across different functional areas so that they can lead various digital and AI projects across the enterprise.

McKinsey has found through past research that having the right technology foundation is a critical driver for AI success. For business leaders in China, our findings highlight four priorities in this area:

Increasing digital adoption. There is room across industries to increase digital adoption. In hospitals and other care providers, many workflows related to patients, personnel, and equipment have yet to be digitized. Further digital adoption is required to provide healthcare organizations with the necessary data for predicting a patient's eligibility for a clinical trial or providing a physician with intelligent clinical-decision-support tools.

The same holds true in manufacturing, where digitization of factories is low. Implementing IoT sensors across manufacturing equipment and production lines can enable companies to accumulate the data necessary for powering digital twins.

Implementing data science tooling and platforms. The cost of algorithmic development can be high, and companies can benefit greatly from using technology platforms and tooling that streamline model deployment and maintenance, just as they benefit from investments in technologies to improve the efficiency of a factory production line. Some essential capabilities we recommend companies consider include reusable data structures, scalable computation power, and automated MLOps capabilities. All of these contribute to ensuring AI teams can work efficiently and productively.

Advancing cloud infrastructures. Our research finds that while the percent of IT workloads on cloud in China is almost on par with global survey numbers, the share on private cloud is much bigger due to security and data compliance concerns. As SaaS vendors and other enterprise-software providers enter this market, we advise that they continue to advance their infrastructures to address these concerns and provide enterprises with a clear value proposition. This will require further advances in virtualization, data-storage capacity, performance, elasticity and resilience, and technological agility to customize business capabilities, which enterprises have come to expect from their vendors.

Investments in AI research and advanced AI techniques. Many of the use cases described here will require fundamental advances in the underlying technologies and techniques. For instance, in manufacturing, additional research is needed to improve the performance of camera sensors and computer vision algorithms to detect and recognize objects in dimly lit environments, which can be common on factory floors. In life sciences, further innovation in wearable devices and AI algorithms is necessary to enable the collection, processing, and integration of real-world data in drug discovery, clinical trials, and clinical-decision-support processes. In automotive, advances for improving self-driving model accuracy and reducing modeling complexity are required to enhance how autonomous vehicles perceive objects and perform in complex scenarios.

For conducting such research, academic collaborations between enterprises and universities can advance what's possible.

AI can present challenges that transcend the capabilities of any one company, which often gives rise to regulations and partnerships that can further AI innovation. In many markets globally, we've seen new regulations, such as the General Data Protection Regulation (GDPR) in Europe and the California Consumer Privacy Act in the United States, begin to address emerging issues such as data privacy, which is considered a top AI-related risk in our 2021 Global AI Survey. And proposed European Union regulations designed to address the development and use of AI more broadly will have implications globally.

Our research points to four areas where additional efforts could help China unlock the full economic value of AI:

Data privacy and sharing. For individuals to share their data, whether it's healthcare or driving data, they need to have an easy way to give permission to use their data and trust that it will be used appropriately by authorized entities and safely shared and stored. Guidelines related to privacy and sharing can create more confidence and thus enable greater AI adoption. A 2019 law enacted in China to improve citizen health, for instance, promotes the use of big data and AI by developing technical standards on the collection, storage, analysis, and application of medical and health data.

Meanwhile, there has been significant momentum in industry and academia to build methods and frameworks that help mitigate privacy concerns. For example, the number of accepted papers mentioning privacy at NeurIPS (the Conference on Neural Information Processing Systems), a leading machine learning conference, has increased sixfold in the past five years.
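
To give a flavor of those methods, the sketch below uses one common building block, differential privacy: calibrated Laplace noise is added to an aggregate statistic so that no individual record can be inferred from the output. The data and the epsilon value are illustrative only.

```python
import numpy as np

def private_mean(values, lower, upper, epsilon):
    """Differentially private mean: clip values, then add Laplace noise
    scaled to how much any single record could shift the average."""
    values = np.clip(values, lower, upper)
    sensitivity = (upper - lower) / len(values)
    noise = np.random.laplace(scale=sensitivity / epsilon)
    return values.mean() + noise

ages = np.random.randint(18, 90, size=1000)      # synthetic records
print(private_mean(ages, lower=18, upper=90, epsilon=1.0))
```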

Market alignment. In some cases, new business models enabled by AI will raise fundamental questions around the use and delivery of AI among the various stakeholders. In healthcare, for instance, as companies develop new AI systems for clinical-decision support, debate will likely emerge among government and healthcare providers and payers as to when AI is effective in improving diagnosis and treatment recommendations and how providers will be reimbursed when using such systems. In transportation and logistics, issues around how government and insurers determine culpability have already arisen in China following accidents involving both autonomous vehicles and vehicles operated by humans. Settlements in these accidents have created precedents to guide future decisions, but further codification can help ensure consistency and clarity.

Standard processes and protocols. Standards enable the sharing of data within and across ecosystems. In the healthcare and life sciences sectors, academic medical research, clinical-trial data, and patient medical data need to be well structured and documented in a uniform manner to accelerate drug discovery and clinical trials. A 2018 push by China's National Health Commission to build a data foundation for electronic medical records (EMRs) and disease databases has led to some movement here, with the creation of a standardized disease database and EMRs for use in AI. However, standards and protocols around how the data are structured, processed, and connected can be beneficial for further use of the raw-data records.

Likewise, standards can also eliminate process delays that can derail innovation and scare off investors and talent. An example involves the acceleration of drug discovery using real-world evidence in Hainan's medical tourism zone; translating that success into transparent approval protocols can help ensure consistent licensing across the country and ultimately would build trust in new discoveries. On the manufacturing side, standards for how organizations label the various features of an object (such as the size and shape of a part or the end product) on the production line can make it easier for companies to leverage algorithms from one factory to another without having to undergo costly retraining efforts.
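
As a hypothetical illustration of what such a labeling standard might look like in practice, the sketch below defines a shared schema for part labels so that data collected at one factory can be consumed, without remapping, by models trained at another; every field name here is invented.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class PartLabel:
    """Hypothetical shared schema for labeling parts on a production line."""
    part_id: str
    shape: str          # e.g. "cylinder", "bracket"
    width_mm: float
    height_mm: float
    defect: bool

# Factory A and Factory B both emit labels in the same structure,
# so a vision model trained on one site's data can read the other's.
label = PartLabel(part_id="A-10293", shape="bracket",
                  width_mm=42.0, height_mm=18.5, defect=False)
print(json.dumps(asdict(label)))
```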

Patent protections. Traditionally, in China, new innovations are rapidly folded into the public domain, making it difficult for enterprise-software and AI players to realize a return on their sizable investment. In our experience, patent laws that protect intellectual property can increase investors' confidence and attract more investment in this area.

AI has the potential to reshape key sectors in China. However, among the business domains in these sectors with the most valuable use cases, there is no low-hanging fruit where AI can be implemented with little additional investment. Rather, our research finds that unlocking the maximum potential of this opportunity will be possible only with strategic investments and innovations across several dimensions, with data, talent, technology, and market collaboration being foremost. Working together, enterprises, AI players, and government can address these conditions and enable China to capture the full value at stake.

See the rest here:

The next frontier for AI in China could add $600 billion to its economy - McKinsey


The 15 Best AI Tools To Know – Built In

Posted: at 2:13 am

Once an idea that existed only in sci-fi, artificial intelligence now plays a role in our daily lives. In fact, we expect it from our tech products. No one wants to reconfigure their entire tech suite every time a new update is launched. We need technology that can process code for us, solve problems independently, and learn from past mistakes so we have free time to focus on the big-picture issues.

That's where AI comes in. It makes projects run smoother, data cleaner, and our lives easier. Around 37 percent of companies use AI to run their businesses, according to the tech research firm Gartner. That number should only grow in the coming years, considering that the number of companies using artificial intelligence jumped 270 percent from 2015 to 2019.

AI is already a staple of the business world and helps thousands of companies compete in today's evolving tech landscape. If your company hasn't already adopted artificial intelligence, here are the top 15 tools you can choose from.

Specialty: Cybersecurity

Companies that conduct any aspect of their business online need to evaluate their cybersecurity. Symantec Endpoint Protection is one tool that secures digital assets with machine learning technology. As the program encounters different security threats, it can independently learn over time how to distinguish between good and malicious files. This alleviates the human responsibility of configuring software and running updates, because the platform's AI interface can automatically download new updates and learn from each security threat to better combat malware, according to Symantec's website.

Specialty: Recruiting

Rather than siloing recruiting, background checks, resume screening and interview assessments, Outmatch aims to centralize all recruiting steps in one end-to-end, AI-enabled platform. The company's AI-powered hiring workflow helps recruiting teams streamline their operations and cut back on spending by up to 40 percent, according to Outmatch's website. With Outmatch's tools, users can automate reference checks, interview scheduling, and candidate behavioral and cognitive screening.

Specialty: Business intelligence

Tableau is a data visualization software platform with which companies can make industry forecasts and form business strategies. Tableau's AI and augmented analytics features help users get access to data insights more quickly than they would through manual methods, according to the company's site. Some names among Tableau's client base include Verizon, Lenovo, Hello Fresh and REI Co-op.

Specialty: Business intelligence

Salesforce is a cloud-enabled, machine-learning-integrated software platform that companies can use to manage their customer service, sales and product development operations. The company's AI platform, called Einstein AI, acts as a smart assistant that can offer recommendations and automate repetitive data input to help employees make more data-informed decisions, according to the platform's site. Scalable for companies ranging in size from startups to major corporations, Salesforce also offers a variety of apps that can be integrated into its platform so companies can customize their interface to meet their specific needs.

Specialty: Business intelligence

H2O.ai is a machine learning platform that helps companies approach business challenges with the help of real-time data insights. From fraud detection to predictive customer support, H2O.ai's tools can handle a broad range of business operations and free up employee time to focus on broader company strategies. Traditionally long-term projects can be accomplished by the company's Driverless AI in hours or minutes, according to H2O.ai's site.
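
As a rough illustration of the workflow the open-source h2o Python package exposes (a separate offering from the commercial Driverless AI product mentioned above), the sketch below runs AutoML against a hypothetical CSV; the file path and target column are placeholders.

```python
import h2o
from h2o.automl import H2OAutoML

h2o.init()

# Hypothetical dataset with a binary "churn" target column
frame = h2o.import_file("customers.csv")
frame["churn"] = frame["churn"].asfactor()
features = [c for c in frame.columns if c != "churn"]

# Train a leaderboard of candidate models automatically
aml = H2OAutoML(max_models=10, max_runtime_secs=600, seed=1)
aml.train(x=features, y="churn", training_frame=frame)
print(aml.leaderboard.head())
```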

Specialty: Software development

Specifically designed for developers and engineers, Oracle AI uses machine learning principles to analyze customer feedback and create accurate predictive models based on extracted data. Oracle's platform can automatically pull data from open source frameworks so that developers don't need to create applications or software from scratch, according to the company's site. Its platform also offers chatbot tools that evaluate customer needs and connect them with appropriate resources or support.

Specialty: Coding

Caffe is an open source machine learning framework with which developers and coders can define, design and deploy their software products. Developed by Berkeley AI Research, Caffe is used by researchers, startups and corporations to launch digital projects, and can be integrated with Python to fine-tune code models, test projects and automatically solve bug issues, according to Caffe's site.
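
In practice, the Python integration mentioned above typically means loading a network definition and pretrained weights through Caffe's pycaffe bindings and running inference. A minimal sketch, with the file names as placeholders:

```python
import numpy as np
import caffe

caffe.set_mode_cpu()

# Hypothetical files: a network definition and matching pretrained weights
net = caffe.Net("deploy.prototxt", "weights.caffemodel", caffe.TEST)

# Feed a dummy input shaped like the network's data blob and run a forward pass
net.blobs["data"].data[...] = np.random.rand(*net.blobs["data"].data.shape)
outputs = net.forward()
print({name: blob.shape for name, blob in outputs.items()})
```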

Specialty: Business intelligence

SAS is an AI data management program that relies on open source and cloud-enablement technologies to help companies direct their progress and growth. SAS's platform can handle an array of business functions including customer intelligence, risk assessment, identity verification and business forecasting to help companies better control their direction, according to the company's site.

Specialty: Code development

Specifically designed for integration with Python, Theano is an AI-powered library that developers can use to develop, optimize and successfully launch code projects. Because it's built with machine learning capabilities, Theano can independently diagnose and solve bugs or system malfunctions with minimal external support, according to the product's site.
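
Concretely, Theano's core workflow is to declare symbolic variables, build an expression graph (with automatic differentiation), and compile it into an optimized function, as in this small sketch:

```python
import theano
import theano.tensor as T

# Declare symbolic inputs and build an expression graph
x = T.dmatrix("x")
y = T.nnet.sigmoid(T.dot(x, x.T)).sum()
grad = T.grad(y, x)                      # symbolic gradient of y w.r.t. x

# Compile the graph into an optimized callable
f = theano.function([x], [y, grad])

value, gradient = f([[1.0, 2.0], [3.0, 4.0]])
print(value, gradient.shape)
```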

Specialty: Software development

OpenNN is an open source software library that uses neural network technology to more quickly and accurately interpret data. A more advanced AI tool, OpenNN's advantage is being able to load and analyze massive data sets and train models faster than its competitors, according to its website.

Specialty: Software development

Another open source platform, TensorFlow is specifically designed to help companies build machine learning projects and neural networks. TensorFlow is capable of JavaScript integration and can help developers easily build and train machine learning models to fit their company's specific business needs. Some of the companies that rely on its services are Airbnb, Google, Intel and Twitter, according to TensorFlow's site.
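
For a sense of the developer experience, here is a minimal Keras model trained on synthetic data, showing the build-compile-fit flow TensorFlow's Python API provides; the data and layer sizes are arbitrary.

```python
import numpy as np
import tensorflow as tf

# Synthetic binary-classification data
X = np.random.rand(256, 8).astype("float32")
y = (X.sum(axis=1) > 4.0).astype("float32")

# Build, compile, and train a small feed-forward model
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(8,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=3, batch_size=32, verbose=0)
print(model.predict(X[:3], verbose=0))
```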

Specialty: Business intelligence

Tellius is a business intelligence platform that relies on AI technologies to help companies get a better understanding of their strategies, successes and growth areas. Tellius's platform offers an intelligent search function that can organize data and make it easy for employees to understand, helping them visualize the factors driving their business outcomes. According to Tellius's site, users can ask questions within the platform to discover through-lines in their data, sort through hefty data sets and gather actionable insights.

Specialty: Sales

Gong.io is an AI-driven sales platform that companies can use to analyze customer interactions, forecast future deals and visualize sales pipelines. Gong.io's biggest asset is its transparency, which gives everyone from employees to leaders insight into team performance, direction changes and upcoming projects. It automatically transforms individual pieces of customer feedback into overall trends that companies can use to discover weak points and pivot their strategies as needed, according to Gong.io's site.

Specialty: Business intelligence

Zia, a product offering from business software company Zoho, is a cloud-integrated AI platform built to help companies gather organizational knowledge and turn customer feedback into strategy. Zia's AI tools can analyze customer sales patterns, client schedules and workflow patterns to help employees on every team increase their productivity and success rates, according to the company's site.

Specialty: Scheduling

TimeHero is an AI-enabled time management platform that helps users manage their project calendars, to-do lists and schedules as needed. The platform's machine learning capabilities can automatically remind employees when meetings take place, when to send emails and when certain projects are due, according to TimeHero's site. Individual TimeHero users can sync their personal calendars with those of their team so that they can collaborate more efficiently on projects and work around each other's due dates.

Read more from the original source:

The 15 Best AI Tools To Know - Built In


For the average AI shop, sparse models and cheap memory will win – The Register

Posted: at 2:13 am

As compelling as the leading large-scale language models may be, the fact remains that only the largest companies have the resources to actually deploy and train them at meaningful scale.

For enterprises eager to leverage AI for competitive advantage, a cheaper, pared-down alternative may be a better fit, especially if it can be tuned to particular industries or domains.

That's where an emerging set of AI startups is hoping to carve out a niche: by building sparse, tailored models that, while perhaps not as powerful as GPT-3, are good enough for enterprise use cases and run on hardware that ditches expensive high-bandwidth memory (HBM) for commodity DDR.

German AI startup Aleph Alpha is one such example. Founded in 2019, the Heidelberg-based company's Luminous natural-language model boasts many of the same headline-grabbing features as OpenAI's GPT-3: copywriting, classification, summarization, and translation, to name a few.

The startup has teamed up with Graphcore to explore and develop sparse language models on the British chipmaker's hardware.

"Graphcore's IPUs present an opportunity to evaluate the advanced technological approaches such as conditional sparsity," Aleph Alpha CEO Jonas Andrulius said in a statement. "These architectures will undoubtedly play a role in Aleph Alpha's future research."

Conditionally sparse models, sometimes called mixture-of-experts or routed models, only process data against the applicable parameters, something that can significantly reduce the compute resources needed to run them.
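
To make the idea concrete, here is a toy sketch of top-1 routing in a mixture-of-experts layer using NumPy; it is not Graphcore's or Aleph Alpha's implementation, and all shapes and weights are invented. Only the expert chosen for each token does any work, which is where the compute savings come from.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, n_experts = 64, 8
router_w = rng.standard_normal((d_model, n_experts))
experts = [rng.standard_normal((d_model, d_model)) for _ in range(n_experts)]

def moe_forward(tokens):
    """tokens: (n_tokens, d_model). Route each token to its top-1 expert."""
    scores = tokens @ router_w          # routing logits per token
    chosen = scores.argmax(axis=1)      # index of the selected expert
    out = np.empty_like(tokens)
    for e in range(n_experts):
        mask = chosen == e
        if mask.any():                  # idle experts are skipped entirely
            out[mask] = tokens[mask] @ experts[e]
    return out

print(moe_forward(rng.standard_normal((10, d_model))).shape)
```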

"For example, if a language model was trained in all the languages on the internet, and then is asked a question in Russian, it wouldn't make sense to run that data through the entire model, only the parameters related to the Russian language," explained Graphcore CTO Simon Knowles in an interview with The Register.

"It's completely obvious. This is how your brain works, and it's also how an AI ought to work," he said. "I've said this many times, but if an AI can do many things, it doesn't need to access all of its knowledge to do one thing."

Knowles, whose company builds accelerators tailored for these kinds of models, unsurprisingly believes they're the future of AI. "I'd be surprised if, by next year, anyone is building dense language models," he added.

Sparse language models aren't without their challenges. One of the most pressing, according to Knowles, has to do with memory. The HBM used in high-end GPUs to achieve the bandwidth and capacities required by these models is expensive and attached to an even more expensive accelerator.

This isn't an issue for dense language models, where you might need all of that compute and memory, but it poses a problem for sparse models, which favor memory over compute, he explained.

Interconnect tech, like Nvidia's NVLink, can be used to pool memory across multiple GPUs, but if the model doesn't require all that compute, the GPUs could be left sitting idle. "It's a really expensive way to buy memory," Knowles said.

Graphcore's accelerators attempt to sidestep this challenge by borrowing a technique as old as computing itself: caching. Each IPU features a relatively large SRAM cache (1GB) to satiate the bandwidth requirements of these models, while raw capacity is achieved using large pools of inexpensive DDR4 memory.

"The more SRAM you've got, the less DRAM bandwidth you need, and this is what allows us to not use HBM," Knowles said.

By decoupling memory from the accelerator, it's far less expensive (the cost of a few commodity DDR modules) for enterprises to support larger AI models.

In addition to supporting cheaper memory, Knowles claims the company's IPUs also have an architectural advantage over GPUs, at least when it comes to sparse models.

Instead of running on a small number of large matrix multipliers, like you find in a tensor processing unit, Graphcore's chips feature a large number of smaller matrix math units that can address memory independently.

This provides greater granularity for sparse models, where you need the freedom to fetch relevant subsets, and the smaller the unit you're obliged to fetch, the more freedom you have, he explained.

Put together, Knowles argues, this approach enables Graphcore's IPUs to train large AI/ML models with hundreds of billions or even trillions of parameters at a substantially lower cost than GPUs.

However, the enterprise AI market is still in its infancy, and Graphcore faces stiff competition in this space from larger, more established rivals.

So while development of ultra-sparse, cut-rate language models for AI is unlikely to abate anytime soon, it remains to be seen whether it'll be Graphcore's IPUs or someone else's accelerator that ends up powering enterprise AI workloads.

Original post:

For the average AI shop, sparse models and cheap memory will win - The Register
