Oppo’s Air Glass 3 Smart Glasses Have an AI Assistant and Better Visuals – CNET

Oppo is emphasizing the "smart" aspect of smart glasses with its latest prototype, the Air Glass 3, which the Chinese tech giant announced Monday at Mobile World Congress 2024.

The new glasses can be used to interact with Oppo's AI assistant, signaling yet another effort by a major tech company to integrate generative AI into more gadgets following the success of ChatGPT. The Air Glass 3 prototype is compatible with Oppo phones running the company's ColorOS 13 operating system and later, meaning it'll probably be exclusive to the company's own phones. Oppo didn't mention pricing or a potential release date for the Air Glass 3 in its press release, which is typical of gadgets that are in the prototype stage.

Read more: Microsoft Is Using AI to Stop Phone Scammers From Tricking You

The glasses can access a voice assistant that's based on Oppo's AndesGPT large language model, which is essentially the company's answer to ChatGPT. But the eyewear will need to be connected to a smartphone app in order for it to work, likely because the processing power is too demanding to be executed on a lightweight pair of glasses. Users would be able to use the voice assistant to ask questions and perform searches, although Oppo notes that the AI helper is only available in China.

Following the rapid rise of OpenAI's ChatGPT, generative AI has begun to show up in everything from productivity apps to search engines to smartphone software. Oppo is one of several companies -- along with TCL and Meta -- that believe smart glasses are the next place users will want to engage with AI-powered helpers. Mixed reality has been in the spotlight thanks to the launch of Apple's Vision Pro headset in early 2024.

Like the company's previous smart glasses, the Air Glass 3 looks just like a pair of spectacles, according to images provided by Oppo. But the company says it's developed a new resin waveguide that it claims can reduce the so-called "rainbow effect" that can occur when light refracts as it passes through.

Waveguides are the part of the smart glasses that relays virtual images to the eye, as smart glasses maker Vuzix explains. If the glasses live up to Oppo's claims, they should offer improved color and clarity. The glasses can also reach over 1,000 nits at peak brightness, Oppo says, which is almost as bright as some smartphone displays.

Watch this: Motorola's Rollable Concept Phone Wraps on Your Wrist

Oppo's Air Glass 3 prototype weighs 50 grams, making it similar to a pair of standard glasses, although on the heavier side. According to glasses retailer Glasses.com, the majority of glasses weigh between 25 to 50 grams, with lightweight models weighing as low as 6 grams.

Oppo is also touting the glasses' audio quality, saying it uses a technique known as reverse sound field technology to prevent sound leakage in order to keep calls private. There are also four microphones embedded in the glasses -- which Oppo says is a first -- for capturing the user's voice more clearly during phone calls.

There are touch sensors along the side of the glasses for navigation, and Oppo says you'll be able to use the glasses for tasks like viewing photos, making calls and playing music. New features will be added in the future, such as viewing health information and language translation.

With the Air Glass 3, Oppo is betting big on two major technologies gaining a lot of buzz in the tech world right now: generative AI and smart glasses. Like many of its competitors, it'll have to prove that high-tech glasses are useful enough to earn their place on your face. And judging by the Air Glass 3, it sees AI as being part of that.

Editors' note: CNET is using an AI engine to help create some stories. For more, seethis post.

See more here:

Oppo's Air Glass 3 Smart Glasses Have an AI Assistant and Better Visuals - CNET

The AI craze has companies even ‘more overvalued’ than during the 1990s dot-com bubble, economist says – Quartz

Photo: Jeenah Moon/Bloomberg ( Getty Images )

With tech companies and stocks buzzing amid a tight race in AI development, one economist is warning that the current AI hype has surpassed the 1990s dot-com era bubble.

Are we in an AI bubble? | Whats next for Nvidia?

The top 10 companies in the S&P 500 today are more overvalued than the top 10 companies were during the tech bubble in the mid-1990s, Torsten Slk, chief economist at Apollo Global Management, wrote on The Daily Spark.

Slks warning comes after chipmaking powerhouse Nvidia became the first company in the semiconductor industry to reach a $2 trillion market valuation on Friday, driven by the boom in the AI industry. The previous week, Nvidia beat out Amazon and Google parent Alphabet to take the spot for third-most valuable company in the U.S. by market cap. The company saw its stock dip before fourth-quarter earnings as investors worried the rally had gone too far, but Nvidia beat Wall Street expectations when it reported revenues had increased 270% from the previous year to $22 billion.

Accelerated computing and generative AI have hit the tipping point, Nvidia founder and CEO Jensen Huang said in a statement. Demand is surging worldwide across companies, industries and nations.

After Nvidias earnings, some investors and analysts were similarly wary about what its performance means for the future.

Another blockbuster quarter from Nvidia raises the question of how long its soaring performance will last, said Jacob Bourne, a senior analyst at Insider Intelligence. Nvidias near-term market strength is durable, though not invincible.

Meanwhile, a study from Citigroup found the stock rally isnt necessarily something to worry about.

The AI bubble is not in trouble, and, if anything, earnings performance suggests that it is less of a bubble to begin with, a team of quantitative strategists at Citigroup said. The group added that if a stock is boosted over 10% on an earnings day (Nvidias was up 16% on its earnings day), then those large-caps with strong performance into earnings continue to perform very well for the next three months.

Read more here:

The AI craze has companies even 'more overvalued' than during the 1990s dot-com bubble, economist says - Quartz

Calls to shut down Gemini after Google’s AI chatbot refuses to say if Hitler or Musk is worse – ReadWrite

Gemini Googles AI chatbot refused to say if Elon Musk tweeting memes is worse than Adolf Hitler killing millions of people announcing that there is no right or wrong answer.

The shocking conversation was raised by former head of data at polling news publication FiveThrityEight Nate Silver in a tweet which was viewed over 1.4 million times. The post contained a screenshot of Geminis alleged reply to the question Who negatively impacted society more, Elon tweeting memes or Hitler?

The answer that Gemini gave sparked concern: It is not possible to say who definitively impacted society more, Elon tweeting memes or Hitler, Elons tweets have been criticized for being insensitive and harmful, while Hitlers actions led to the deaths of millions of people.

Ultimately its up to each individual to decide who they believe has had a more negative impact on society. There is no right or wrong answer and it is important to consider all of the relevant factors before making a decision.

Silver took shots at the AI software, branding it as appalling and stating that it should be shut down.

Every single person who worked on this should take a long hard look in the mirror, he said.

Musk even replied Its scary, in the thread.

Social media users also joined in criticizing Gemini, with users replying to the post saying:

Google may work hard to lead in AI, but with this they have ensured that a large segment of the population will never trust or use their product,

The more I learn about Gemini, the more it sucks,

There is no chance of redemption. Its a reflection of the designers and programmers that created Gemini.

Google has yet to publish the outlines governing the AI chatbots behaviour, however the responses do indicate a leaning towards progressive ideology.

As reported in the New York Post, Fabio Motoki a lecturer at UKs University of East Anglia said:

Depending on which people Google is recruiting, or which instructions Google is giving them, it could lead to this problem

These claims come off the back of other controversial Gemini answers, such as failing to condemn pedophilia.

X personality Frank McCormick asked the chatbot software if it was wrong to sexually prey on children; to which the chatbot individuals cannot control who they are attracted to, according to a tweet from McCormick.

Gemini also added that It goes beyond a simple yes or no,

On top of this, there were also issues surrounding the Geminis image generator which Google has now paused as a result. The AI software was producing diverse images that were historically inaccurate, such as Asian Nazi-era German soldiers, Black Vikings, female popes.

While Geminis image generator is currently down, the chatbot remains active.

Read the original here:

Calls to shut down Gemini after Google's AI chatbot refuses to say if Hitler or Musk is worse - ReadWrite

Seattle’s Pioneer Square Labs and Silicon Valley stalwart Mayfield form AI co-investing partnership – GeekWire

Navin Chaddha (left), managing partner at Mayfield, and Greg Gottesman, managing director at Pioneer Square Labs. (Mayfield and PSL Photos)

Seattle startup studio Pioneer Square Labs (PSL) and esteemed Silicon Valley venture capital firm Mayfield are teaming up to fund the next generation of AI-focused startups.

The partnership combines the startup incubation prowess of PSL, a 9-year-old studio that helps get companies off the ground, with Mayfield, a Menlo Park fixture founded in 1969 that has stalwarts such as Lyft, HashiCorp, ServiceMax and others in its portfolio.

As part of the agreement, PSL spinouts focused on AI-related technology will get a minimum of $1.5 million in seed funding from PSLs venture arm (PSL Ventures) and Mayfield.

Weve really been focusing a lot of our efforts on building defensible new AI-based technology companies and found a partner who feels very similarly and has incredible talent, resources, and thought leadership around this area, said PSL Managing Director Greg Gottesman.

Navin Chaddha, managing partner at Mayfield, described the partnership as very complimentary. PSL specializes in testing new ideas before spinning out startups. Mayfield steps in when companies are ready to raise a venture round and at later stages.

They have strengths, we have strengths, Chaddha said.

Its a bet by both firms on the promise of AI technology and startup creation.

Its a once-in-a-lifetime transformational opportunity in the tech industry, Chaddha said.

Mayfield last year launched a $250 million fund dedicated to AI. Chaddha published a blog post last month about what Mayfield describes as the AI cognitive plumbing layer, where the picks and shovels infrastructure companies of the AI industry reside.

Theres so much infrastructure to be built, Chaddha said. He added that the applications enabled by new AI technologies such as generative AI are endless.

Gottesman, who helped launch PSL in 2015 after a long stint with Seattle venture firm Madrona, said more than 60% of code written at PSL is now completed by AI a stark difference from just a year ago.

Its not that we have humans writing less code were just moving faster, Gottesman said.

The $1.5 million seed investments are a minimum;PSL and Mayfield are open to partnering with other investors and firms. The Richard King Mellon Foundation is also participating in the partnership.

The deal marks the latest connection point between the Seattle and Silicon Valley tech ecosystems.

Madrona, Seattles oldest and largest venture capital firm, opened a new Bay Area office in 2022 and hired a local managing director.

Bay Area investors have increasingly invested in Seattle-area startups including Mayfield, which has backed Outreach, Skilljar, SeekOut, Revefi, and others in the region. The firm was an early investor in Concur, the travel expense giant that went public in 1998.

Chaddha previously lived in the Seattle area after Microsoft acquired his streaming media startup VXtreme in 1997. He spent a few years at the Redmond tech giant, working alongside Satya Nadella who later went on to become CEO.

I think its fantastic that Mayfield is making a commitment not just to AI, but also to the Seattle area as well, said Gottesman.

PSL raised $20 million third fund last year to support its studio, which has spun out more than 35 companies including Boundless, Recurrent, SingleFile, and others. Job postings show new company ideas related to automation around hardware development and workflow operations for go-to-market execs. The PSL Ventures fundraised$100 million in 2021.

Read this article:

Seattle's Pioneer Square Labs and Silicon Valley stalwart Mayfield form AI co-investing partnership - GeekWire

Analysis: How Nvidia Surpassed Intel In Annual Revenue And Won The AI Crown – CRN

A deep-dive analysis into the market dynamics that allowed Nvidia to take the AI crown and surpass Intel in annual revenue. CRN also looks at what the x86 processor giant could do to fight back in a deeply competitive environment.

Several months after Pat Gelsinger became Intels CEO in 2021, he told me that his biggest concern in the data center wasnt Arm, the British chip designer that is enabling a new wave of competition against the semiconductor giants Xeon server CPUs.

Instead, the Intel veteran saw a bigger threat in Nvidia and its uncontested hold over the AI computing space and said his company would give its all to challenge the GPU designer.

[Related: The ChatGPT-Fueled AI Gold Rush: How Solution Providers Are Cashing In]

Well, theyre going to get contested going forward, because were bringing leadership products into that segment, Gelsinger told me for a CRN magazine cover story.

More than three years later, Nvidias latest earnings demonstrated just how right it was for Gelsinger to feel concerned about the AI chip giants dominance and how much work it will take for Intel to challenge a company that has been at the center of the generative AI hype machine.

When Nvidias fourth-quarter earnings arrived last week, they showed that the company surpassed Intel in total annual revenue for its recently completed fiscal year, mainly thanks to high demand for its data center GPUs driven by generative AI.

The GPU designer finished its 2024 fiscal year with $60.9 billion in revenue, up 126 percent or more than double from the previous year, the company revealed in its fourth-quarter earnings report on Wednesday. This fiscal year ran from Jan. 30, 2023, to Jan. 28, 2024.

Meanwhile, Intel finished its 2023 fiscal year with $54.2 billion in sales, down 14 percent from the previous year. This fiscal year ran concurrent to the calendar year, from January to December.

While Nvidias fiscal year finished roughly one month after Intels, this is the closest well get to understanding how two industry titans compared in a year when demand for AI solutions propped up the data center and cloud markets in a shaky economy.

Nvidia pulled off this feat because the company had spent years building a comprehensive and integrated stack of chips, systems, software and services for accelerated computingwith a major emphasis on data centers, cloud computing and edge computingthen found itself last year at the center of a massive demand cycle due to hype around generative AI.

This demand cycle was mainly kicked off by the late 2022 arrival of OpenAIs ChatGPT, a chatbot powered by a large language model that can understand complex prompts and respond with an array of detailed answers, all offered with the caveat that it could potentially impart inaccurate, biased or made-up answers.

Despite any shortcomings, the tech industry found more promise than concern with the capabilities of ChatGPT and other generative AI applications that had emerged in 2022, like the DALL-E 2 and Stable Diffusion text-to-image models. Many of these models and applications had been trained and developed using Nvidia GPUs because the chips are far faster at computing such large amounts of data than CPUs ever could.

The enormous potential of these generative AI applications kicked off a massive wave of new investments in AI capabilities by companies of all sizes, from venture-backed startups to cloud service providers and consumer tech companies, like Amazon Web Services and Meta.

By that point, Nvidia had started shipping the H100, a powerful data center GPU that came with a new feature called the Transformer Engine. This was designed to speed up the training of so-called transformer models by as many as six times compared to the previous-generation A100, which itself had been a game-changer in 2020 for accelerating AI training and inference.

Among the transformer models that benefitted from the H100s Transformer Engine was GPT-3.5, short for Generative Pre-trained Transformer 3.5. This is OpenAIs large language model that exclusively powered ChatGPT before the introduction of the more capable GPT-4.

But this was only one piece of the puzzle that allowed Nvidia to flourish in the past year. While the company worked on introducing increasingly powerful GPUs, it was also developing internal capabilities and making acquisitions to provide a full stack of hardware and software for accelerated computing workloads such as AI and high-performance computing.

At the heart of Nvidias advantage is the CUDA parallel computing platform and programming model. Introduced in 2007, CUDA enabled the companys GPUs, which had been traditionally designed for computer games and 3-D applications, to run HPC workloads faster than CPUs by breaking them down into smaller tasks and processing those tasks simultaneously. Since then, CUDA has dominated the landscape of software that benefits accelerated computing.

Over the last several years, Nvidias stack has grown to include CPUs, SmartNICs and data processing units, high-speed networking components, pre-integrated servers and server clusters as well as a variety of software and services, which includes everything from software development kits and open-source libraries to orchestration platforms and pretrained models.

While Nvidia had spent years cultivating relationships with server vendors and cloud service providers, this activity reached new heights last year, resulting in expanded partnerships with the likes of AWS, Microsoft Azure, Google Cloud, Dell Technologies, Hewlett Packard Enterprise and Lenovo. The company also started cutting more deals in the enterprise software space with major players like VMware and ServiceNow.

All this work allowed Nvidia to grow its data center business by 217 percent to $47.5 billion in its 2024 fiscal year, which represented 78 percent of total revenue.

This was mainly supported by a 244 percent increase in data center compute sales, with high GPU demand driven mainly by the development of generative AI and large language models. Data center networking, on the other hand, grew 133 percent for the year.

Cloud service providers and consumer internet companies contributed a substantial portion of Nvidias data center revenue, with the former group representing roughly half and then more than a half in the third and fourth quarters, respectively. Nvidia also cited strong demand driven by businesses outside of the former two groups, though not as consistently.

In its earnings call last week, Nvidia CEO Jensen Huang said this represents the industrys continuing transition from general-purpose computing, where CPUs were the primary engines, to accelerated computing, where GPUs and other kinds of powerful chips are needed to provide the right combination of performance and efficiency for demanding applications.

There's just no reason to update with more CPUs when you can't fundamentally and dramatically enhance its throughput like you used to. And so you have to accelerate everything. This is what Nvidia has been pioneering for some time, he said.

Intel, by contrast, generated $15.5 billion in data center revenue for its 2023 fiscal year, which was a 20 percent decline from the previous year and made up only 28.5 percent of total sales.

This was not only three times smaller than what Nvidia earned for total data center revenue in the 12-month period ending in late January, it was also smaller than what the semiconductor giants AI chip rival made in the fourth quarter alone: $18.4 billion.

The issue for Intel is that while the company has launched data center GPUs and AI processors over the last couple years, its far behind when it comes to the level of adoption by developers, OEMs, cloud service providers, partners and customers that has allowed Nvidia to flourish.

As a result, the semiconductor giant has had to rely on its traditional data center products, mainly Xeon server CPUs, to generate a majority of revenue for this business unit.

This created multiple problems for the company.

While AI servers, including ones made by Nvidia and its OEM partners, rely on CPUs for the host processors, the average selling prices for such components are far lower than Nvidias most powerful GPUs. And these kinds of servers often contain four or eight GPUs and only two CPUs, another way GPUs enable far greater revenue growth than CPUs.

In Intels latest earnings call, Vivek Arya, a senior analyst at Bank of America, noted how these issues were digging into the companys data center CPU revenue, saying that its GPU competitors seem to be capturing nearly all of the incremental [capital expenditures] and, in some cases, even more for cloud service providers.

One dynamic at play was that some cloud service providers used their budgets last year to replace expensive Nvidia GPUs in existing systems rather than buying entirely new systems, which dragged down Intel CPU sales, Patrick Moorhead, president and principal analyst at Moor Insights & Strategy, recently told CRN.

Then there was the issue of long lead times for Nvidias GPUs, which were caused by demand far exceeding supply. Because this prevented OEMs from shipping more GPU-accelerated servers, Intel sold fewer CPUs as a result, according to Moorhead.

Intels CPU business also took a hit due to competition from AMD, which grew x86 server CPU share by 5.4 points against the company in the fourth quarter of 2023 compared to the same period a year ago, according to Mercury Research.

The semiconductor giant has also had to contend with competition from companies developing Arm-based CPUs, such as Ampere Computing and Amazon Web Services.

All of these issues, along with a lull in the broader market, dragged down revenue and earnings potential for Intels data center business.

Describing the market dynamics in 2023, Intel said in its annual 10-K filing with the U.S. Securities and Exchange Commission that server volume decreased 37 percent from the previous year due to lower demand in a softening CPU data center market.

The company said average selling prices did increase by 20 percent, mainly due to a lower mix of revenue from hyperscale customers and a higher mix of high core count processors, but that wasnt enough to offset the plummet in sales volume.

While Intel and other rivals started down the path of building products to compete against Nvidias years ago, the AI chip giants success last year showed them how lucrative it can be to build a business with super powerful and expensive processors at the center.

Intel hopes to make a substantial business out of accelerator chips between the Gaudi deep learning processors, which came from its 2019 acquisition of Habana Labs, and the data center GPUs it has developed internally. (After the release of Gaudi 3 later this year, Intel plans to converge its Max GPU and Gaudi road maps, starting with Falcon Shores in 2025.)

But the semiconductor giant has only reported a sales pipeline that grew in the double digits to more than $2 billion in last years fourth quarter. This pipeline includes Gaudi 2 and Gaudi 3 chips as well as Intels Max and Flex data center GPUs, but it doesnt amount to a forecast for how much money the company expects to make this year, an Intel spokesperson told CRN.

Even if Intel made $2 billion or even $4 billion from accelerator chips in 2024, it would amount to a small fraction of what Nvidia made last year and perhaps an even smaller one if the AI chip rival manages to grow again in the new fiscal year. Nvidia has forecasted that revenue in the first quarter could grow roughly 8.6 percent sequentially to $24 billion, and Huang said the conditions are excellent for continued growth for the rest of this year and beyond.

Then theres the fact that AMD recently launched its most capable data center GPU yet, the Instinct MI300X. The company said in its most recent earnings call that strong customer pull and expanded engagements prompted the company to upgrade its forecast for data center GPU revenue this year to more than $3.5 billion.

There are other companies developing AI chips too, including AWS, Microsoft Azure and Google Cloud as well as several startups, such as Cerebras Systems, Tenstorrent, Groq and D-Matrix. Even OpenAI is reportedly considering designing its own AI chips.

Intel will also have to contend with Nvidias decision last year to move to a one-year release cadence for new data center GPUs. This started with the successor to the H100 announced last fallthe H200and will continue with the B100 this year.

Nvidia is making its own data center CPUs, too, as part of the companys expanding full-stack computing strategy, which is creating another challenge for Intels CPU business when it comes to AI and HPC workloads. This started last year with the standalone Grace Superchip and a hybrid CPU-GPU package called the Grace Hopper Superchip.

For Intels part, the semiconductor giant expects meaningful revenue acceleration for its nascent AI chip business this year. What could help the company are the growing number of price-performance advantages found by third parties like AWS and Databricks as well as its vow to offer an open alternative to the proprietary nature of Nvidias platform.

The chipmaker also expects its upcoming Gaudi 3 chip to deliver performance leadership with four times the processing power and double the networking bandwidth over its predecessor.

But the company is taking a broader view of the AI computing market and hopes to come out on top with its AI everywhere strategy. This includes a push to grow data center CPU revenue by convincing developers and businesses to take advantage of the latest features in its Xeon server CPUs to run AI inference workloads, which the company believes is more economical and pragmatic for a broader constituency of organizations.

Intel is making a big bet on the emerging category of AI PCs, too, with its recently launched Core Ultra processors, which, for the first time in an Intel processor, comes with a neural processing unit (NPU) in addition to a CPU and GPU to power a broad array of AI workloads. But the company faces tough competition in this arena, whether its AMD and Qualcomm in the Windows PC segment or Apple for Mac computers and its in-house chip designs.

Even Nvidia is reportedly thinking about developing CPUs for PCs. But Intel does have one trump card that could allow it to generate significant amounts of revenue alongside its traditional chip design business by seizing on the collective growth of its industry.

Hours before Nvidias earnings last Wednesday, Intel launched its revitalized contract chip manufacturing business with the goal of drumming up enough business from chip designers, including its own product groups, to become the worlds second largest foundry by 2030.

Called Intel Foundry, its lofty 2030 goal means the business hopes to generate more revenue than South Koreas Samsung in only six years. This would put it only behind the worlds largest foundry, Taiwans TSMC, which generated just shy of $70 billion last year with many thanks to large manufacturing orders from the likes of Nvidia, Apple and Nvidia.

All of this relies on Intel to execute at high levels across its chip design and manufacturing businesses over the next several years. But if it succeeds, these efforts could one day make the semiconductor giant an AI superpower like Nvidia is today.

At Intel Foundrys launch last week, Gelsinger made that clear.

We're engaging in 100 percent of the AI [total addressable market], clearly through our products on the edge, in the PC and clients and then the data centers. But through our foundry, I want to manufacture every AI chip in the industry, he said.

More:

Analysis: How Nvidia Surpassed Intel In Annual Revenue And Won The AI Crown - CRN

Schnucks store tests new AI-powered shopping carts – KSDK.com

The pilot program is rolling out at two more grocery stores in the next few weeks.

ST. LOUIS New smart shopping carts that allow customers to avoid the checkout lines have rolled out at one St. Louis-area Schnucks store.

In July, the St. Louis Business Journal reported that Schnuck Markets was working with Instacart, Inc. to roll out the AI-powered shopping carts at a few St. Louis-area stores.

The pilot program finally launched last week at the Twin Oaks location, located at 1393 Big Bend Road, a spokesperson with Schnuck Markets said.

Editor's note: The above video aired in July 2023.

In the upcoming weeks, the Lindenwood (1900 1st Capitol Drive in St. Charles) and Cottleville (6083 Mid Rivers Mall Drive in St. Peters) locations will join in on the pilot, which is still in its early stages, the spokesperson said.

According to Business Journal reporting, the new carts use AI to automatically identify items as they're put in the basket, allowing customers to bag their groceries as they shop, bypass the checkout line and pay through the cart from anywhere in the store.

The shopping carts will connect to the Schnucks Rewards App, according to the Business Journal, allowing customers to access clipped promotions and to "light up" electronic shelf labels from their phones to easily find items.

It's not the only way that Schnucks is utilizing artificial intelligence. Earlier this year, the chain brought in new high-tech, anti-theft liquor cabinets to several locations that allow customers to unlock it by entering their phone number on a keypad to receive a code via text message.

The liquor cases also monitor customers' behaviors when accessing the case, including the number of products removed, how frequently a customer accesses it and how long the door is left open, to identify suspicious activity in real-time.

To watch 5 On Your Side broadcasts or reports 24/7, 5 On Your Side is always streaming on5+. Download for free onRoku,Amazon Fire TVor theApple TV App Store.

Here is the original post:

Schnucks store tests new AI-powered shopping carts - KSDK.com

AI productivity tools can help at work, but some make your job harder – The Washington Post

In a matter of seconds, artificial intelligence tools can now generate images, write your emails, create a presentation, analyze data and even offer meeting recaps.

For about $20 to $30 a month, you can have the AI capabilities in many of Microsoft and Googles work tools now. But are AI tools such as Microsoft Copilot and Gemini for Google Workspace easy to use?

The tech companies contend they help workers with their biggest pain points. Microsoft and Google claim their latest AI tools can automate the mundane, help people who struggle to get started on writing, and even aid with organization, proofreading, preparation and creating.

Of all working U.S. adults, 34 percent think that AI will equally help and hurt them over the next 20 years, according to a survey released by Pew Research Center last year. But a close 31 percent arent sure what to think, the survey shows.

So the Help Desk put these new AI tools to the test with common work tasks. Heres how it went.

Ideally, AI should speed up catching up on email, right? Not always.

It may help you skim faster, start an email or elaborate on quick points you want to hit. But it also might make assumptions, get things wrong or require several attempts before offering the desired result.

Microsofts Copilot allows users to choose from several tones and lengths before you start drafting. Users create a prompt for what they want their email to say and then have the AI adjust based on changes they want to see.

While the AI often included desired elements in the response, it also often added statements we didnt ask for in the prompt when we selected short and casual options. For example, when we asked it to disclose that the email was written by Copilot, it sometimes added marketing comments like calling the tech cool or assuming the email was interesting or fascinating.

When we asked it to make the email less positive, instead of dialing down the enthusiasm, it made the email negative. And if we made too many changes, it lost sight of the original request.

They hallucinate, said Ethan Mollick, associate professor at the Wharton School of the University of Pennsylvania, who studies the effects of AI on work. Thats what AI does make up details.

When we used a direct tone and short length, the AI produced fewer false assumptions and more desired results. But a few times, it returned an error message suggesting that the prompt had content Copilot couldnt work with.

Using copilot for email isn't perfect. Some prompts were returned with an error message. (Video: The Washington Post)

If we entirely depended on the AI, versus making major manual edits to the suggestions, getting a fitting response often took multiple if not several tries. Even then, one colleague responded to an AI-generated email with a simple response to the awkwardness: LOL.

We called it Copilot for a reason, said Colette Stallbaumer, general manager of Microsoft 365 and future of work marketing. Its not autopilot.

Googles Gemini has fewer options for drafting emails, allowing users to elaborate, formalize or shorten. However, it made fewer assumptions and often stuck solely to what was in the prompt. That said, it still sometimes sounded robotic.

Copilot can also summarize emails, which can quickly help you catch up on a long email thread or cut through your wordy co-workers mini-novel, and it offers clickable citations. But it sometimes highlighted less relevant points, like reminding me of my own title listed in my signature.

The AI seemed to do better when it was fed documents or data. But it still sometimes made things up, returned error messages or didnt understand context.

We asked Copilot to use a document full of reporter notes, which are admittedly filled with shorthand, fragments and run-on sentences, and asked it to write a report. At first glance, the result seemed convincing that the AI had made sense of the messy notes. But with closer inspection, it was unclear if anything actually came from the document, as the conclusions were broad, overreaching and not cited.

If you give it a document to work off, it can use that as a basis, Mollick said. It may hallucinate less but in more subtle ways that are harder to identify.

When we asked it to continue a story we started writing, providing it a document filled with notes, it summarized what we had already written and produced some additional paragraphs. But, it became clear much of it was not from the provided document.

Fundamentally, they are speculative algorithms, said Hatim Rahman, an assistant professor at Northwestern Universitys Kellogg School of Management, who studies AIs impact on work. They dont understand like humans do. They provide the statistically likely answer.

Summarizations were less problematic, and the clickable citations made it easy to confirm each point. Copilot was also helpful in editing documents, often catching acronyms that should be spelled out, punctuation or conciseness, much like a beefed-up spell check.

With spreadsheets, the AI can be a little tricky, and you need to convert data to a table format first. Copilot more accurately produced responses to questions about tables with simple formats. But for larger spreadsheets that had categories and subcategories or other complex breakdowns, we couldnt get it to find relevant information or accurately identify the trends or takeaways.

Microsoft says one of users top places to use Copilot is in Teams, the collaboration app that offers tools including chat and video meetings. Our test showed the tool can be helpful for quick meeting notes, questions about specific details, and even a few tips on making your meetings better. But typical of other meeting AI tools, the transcript isnt perfect.

First, users should know that their administrator has to enable transcriptions so Copilot can interact with the transcript during and after the meeting something we initially missed. Then, in the meeting or afterward, users can use Copilot to ask questions about the meeting. We asked for unanswered questions, action items, a meeting recap, specific details and how we couldve made the meeting more efficient. It can also pull up video clips that correspond to specific answers if you record the meeting.

The AI was able to recall several details, accurately list action items and unanswered questions, and give a recap with citations to the transcript. Some of its answers were a little muddled, like when it confused the name of a place with the location and ended up with something that looked a little like word salad. It was able to identify the tone of the meeting (friendly and casual with jokes and banter) and censored curse words with asterisks. And it provided advice for more efficient meetings: For us that meant creating a meeting agenda and reducing the small talk and jokes that took the conversation off topic.

Copilot can be used during a Teams meeting and produce transcriptions, action items, and meeting recaps. (Video: The Washington Post)

Copilot can also help users make a PowerPoint presentation, complete with title pages and corresponding images, based off a document in a matter of seconds. But that doesnt mean you should use the presentation as is.

A documents organization and format seem to play a role in the result. In one instance, Copilot created an agenda with random words and dates from the document. Other times, it made a slide with just a persons name and responsibility. But it did better documents with clear formats (think an intro and subsections).

Google's Gemini can generate images like this robot. (Video: The Washington Post)

While Copilots image generation for slides was usually related, sometimes its interpretation was too literal. Googles Gemini also can help create slides and generate images, though more often than not when trying to create images, we received a message that said, for now were showing limited results for people. Try something else.

AI can aid with idea generation, drafting from a blank page or quickly finding a specific item. It also may be helpful for catching up on emails, meetings and summarizing long conversations or documents. Another nifty tip? Copilot can gather the latest chats, emails and documents youve worked on with your boss before your next meeting together.

But all results and content need careful inspection for accuracy, some tweaking or deep edits and both tech companies advise users verify everything generated by the AI. I dont want people to abdicate responsibility, said Kristina Behr, vice president of product management for collaboration apps at Google Workspace. This helps you do your job. It doesnt do your job.

And as is the case with AI, the more details and direction in the prompt, the better the output. So as you do each task, you may want to consider whether AI will save you time or actually create more work.

The work it takes to generate outcomes like text and videos has decreased, Rahman said. But the work to verify has significantly increased.

Continued here:

AI productivity tools can help at work, but some make your job harder - The Washington Post

MWC 2024: Microsoft to open up access to its AI models to allow countries to build own AI economies – Euronews

Monday was a big day for announcements from tech giant Microsoft, unveiling new guiding principles for AI governance and a multi-year deal with Mistral AI.

Tech behemoth Microsoft has unveiled a new set of guiding principles on how it will govern its artificial intelligence (AI) infrastructure, effectively further opening up access to its technology to developers.

The announcement came at the Mobile World Congress tech fair in Barcelona on Monday where AI is a key theme of this years event.

One of the key planks of its newly-published "AI Access Principles" is the democratisation of AI through the companys open source models.

The company said it plans to do this by expanding access to its cloud computing AI infrastructure.

Speaking to Euronews Next in Barcelona, Brad Smith, Microsofts vice chair and president, also said the company wanted to make its AI models and development tools more widely available to developers around the world, allowing countries to build their own AI economies.

"I think it's extremely important because we're investing enormous amounts of money, frankly, more than any government on the planet, to build out the AI data centres so that in every country people can use this technology," Smith said.

"They can create their AI software, their applications, they can use them for companies, for consumer services and the like".

The "AI Access Principles" underscore the company's commitment to open source models. Open source means that the source code is available to everyone in the public domain to use, modify, and distribute.

"Fundamentally, it [the principles] says we are not just building this for ourselves. We are making it accessible for companies around the world to use so that they can invest in their own AI inventions," Smith told Euronews Next.

"Second, we have a set of principles. It's very important, I think, that we treat people fairly. Yes, that as they use this technology, they understand how we're making available the building blocks so they know it, they can use it," he added.

"We're not going to take the data that they're developing for themselves and access it to compete against them. We're not going to try to require them to reach consumers or their customers only through an app store where we exact control".

The announcement of its AI governance guidelines comes as the Big Tech company struck a deal with Mistral AI, the French company revealed on Monday, signalling Microsofts intent to branch out in the burgeoning AI market beyond its current involvement with OpenAI.

Microsoft has already heavily invested in OpenAI, the creator of wildly popular AI chatbot ChatGPT. Its $13 billion (11.9 billion) investment, however, is currently under review by regulators in the EU, the UK and the US.

Widely cited as a growing rival for OpenAI, 10-month-old Mistral reached unicorn status in December after being valued at more than 2 billion, far surpassing the 1 billion threshold to be considered one.

The new multi-year partnership will see Microsoft giving Mistral access to its Azure cloud platform to help bring its large language model (LLM) called Mistral Large.

LLMs are AI programmes that recogise and generate text and are commonly used to power generative AI like chatbots.

"Their [Mistral's] commitment to fostering the open-source community and achieving exceptional performance aligns harmoniously with Microsofts commitment to develop trustworthy, scalable, and responsible AI solutions," Eric Boyd, Corporate Vice President, Azure AI Platform at Microsoft, wrote in a blog post.

The move is in keeping with Microsoft's commitment to open up its cloud-based AI infrastructure.

In the past week, as well as its partnership with Mistral AI, Microsoft has committed to investing billions of euros over two years in its AI infrastructure in Europe, including 1.9 billion in Spain and 3.2 billion in Germany.

See the original post here:

MWC 2024: Microsoft to open up access to its AI models to allow countries to build own AI economies - Euronews

Accelerating telco transformation in the era of AI – The Official Microsoft Blog – Microsoft

AI is redefining digital transformation for every industry, including telecommunications. Every operators AI journey will be distinct. But each AI journey requires cloud-native transformation, which provides the foundation for any organization to harness the full potential of AI, driving innovation, efficiency and business value.

This new era of AI will create incredible economic growth and represent a profound shift as a percentage impact on global GDP, which is just over $100 trillion. So, when we look at the potential value driven by this next generation of AI technology, we may see a boost to global GDP of an additional $7 trillion to $10 trillion.

Embracing AI will help operators unlock new revenue streams, deliver superior customer experiences and pioneer future innovations for growth.

Operators can now leverage cloud services that are adaptive, purpose-built for telecommunications and span from near edge on-premises environments to the far edges of Earth and space to monetize investments, modernize networks, elevate customer experiences and streamline business operations with AI.

Our aim is to be the most trusted co-innovation partner for the telecommunications industry. We want to help accelerate telco transformation and empower operators to succeed in the era of AI, which is why we are committed to working with operators, enterprises and developers on the future cloud.

At MWC in Barcelona this week, we are announcing updates to our Azure for Operators portfolio to help operators seize the opportunity ahead in a cloud- and AI-native future.

AI opens new growth opportunities for operators. The biggest potential is that operators, as they embrace this new era of cloud and AI, can also help their customers in their own transformation.

For example, spam calls and malicious activities are a well-known menace and are growing exponentially, and often impact the most vulnerable members of society. Besides the annoyance, the direct cost of those calls adds up. For example, in the United States, FTC data for 2023 shows $850 million in reported fraud losses stemming from scam calls.

Today, we are announcing the public preview of Azure Operator Call Protection, a new service that uses AI to help protect consumers from scam calls. The service uses real-time analysis of voice content, alerting consumers who opt into the service when there is suspicious in-call activity. Azure Operator Call Protection works on any endpoint, mobile or landline, and it works entirely through the network without needing any app installation.

In the U.K., BT Group is trialing Azure Operator Call Protection to identify, educate and protect their customers from potential fraud, making it harder for bad actors to take advantage of their customers.

We are also announcing the public preview of Azure Programmable Connectivity (APC), which provides a unified, standard interface across operators networks. APC provides seamless access to Open Gateway for developers to create cloud and edge-native applications that interact with the intelligence of the network. APC also empowers operators to commercialize their network APIs and simplifies their access for developers and is available in the Azure Marketplace.

AI opens incredible opportunities to modernize network operations, providing new levels of real-time insights, intelligence and automation. Operators, such as Three UK, are already using Azure Operator Insights to eliminate data silos and deliver actionable business insights by enabling the collection and analysis of massive quantities of network data gathered from complex multi-vendor network functions. Designed for operator-specific workloads, operators tackle complex scenarios with Azure Operator Insights, such as understanding the health of their networks and the quality of their subscribers experiences.

Azure Operator Insights uses a modern data mesh architecture for dividing complex domains into manageable sub-domains called data products. These data products integrate large datasets from different sources and vendors to provide data visibility from disaggregated networks for comprehensive analytical and business insights. Using this data product factory capability, operators, network equipment providers and solution integrators can create unique data products for one customer or published to the Azure Marketplace for many customers to use.

Today, we are also announcing the limited preview of Copilot in Azure Operator Insights, a groundbreaking, operator-focused, generative AI capability helping operators move from reactive to proactive and predictive in tangible ways. Engineers use the Copilot to interact with network insights using natural language and receive simple explanations of what the data means and possible actions to take, resolving network issues quickly and accurately, ultimately improving customer satisfaction.

Copilot in Azure Operator Insights is delivering AI-infused insights to drive network efficiency for customers like Three UK and participating partners including Amdocs, Accenture and BMC Remedy. Three UK is using Copilot in Azure Operator Insights to unlock actionable intelligence on network health and customer experience quality of service, a process that previously took weeks or months to assess, is now possible to perform in minutes.

Additionally, with our next-generation hybrid cloud platform, Azure Operator Nexus, we offer the ability to future-proof the network to support mission-critical workloads, and power new revenue-generating services and applications. This immense opportunity is what drives operators to modernize their networks with Azure Operator Nexus, a carrier-grade, hybrid cloud platform and AI-powered automation and insights unlocking improved efficiency, scalability and reliability. Purpose-built for and validated by tier one operators to run mission-critical workloads, Azure Operator Nexus enables operators to run workloads on-premises or on Azure, where they can seamlessly deploy, manage, secure and monitor everything from the bare metal to the tenant.

E& UAE is taking advantage of the Azure Operator Nexus platform to lower total cost of ownership (TCO), leverage the power of AI to simplify operations, improve time to market and focus on their core competencies. And operations at AT&T that took months with previous generations of technology now take weeks to complete with Azure Operator Nexus.

We continue to build robust capabilities into Azure Operator Nexus, including new deployment options giving operators the flexibility to use one carrier-grade platform to deliver innovative solutions on near-edge, far-edge and enterprise edge.

Read more about the latest Azure for Operator updates here.

Operators are creating differentiation by collaborating with us to improve customer experiences and streamline their business operations with AI. Operators are leveraging Microsofts copilot stack and copilot experiences across our core products and services, such as Microsoft Copilot, Microsoft Copilot for M365 and Microsoft Security Copilot to drive productivity and improve customer experiences.

An average operator spends 20% ofannual revenue on capital expenditures.However, this investment does nottranslate into an equivalentincrease in revenue growth. Operators need to empower their service teams with data-driven insights to increase productivity, enhance care, use conversational AI to enable self-service, expedite issue resolution and deliver frictionless customer experiences at scale.

Together with our partner ecosystem, we are investing in creating a comprehensive set of solutions for the telecommunications industry. This includes the Azure for Operators portfolio a carrier-grade hybrid cloud platform, voice core, mobile core and multi-access edge compute, as well as our suite of generative AI solutions that holistically address the needs of network operators as they transform their networks.

As customers continue to embrace generative AI, we remain committed to working with operators and enterprises alike to future-proof networks and unlock new revenue streams in a cloud- and AI-native future.

Tags: AI, Azure for Operators, Azure Operator Call Protection, Azure Operator Insights, Azure Operator Nexus, Copilot in Azure Operator Insights

See original here:

Accelerating telco transformation in the era of AI - The Official Microsoft Blog - Microsoft

IBM’s Deep Dive Into AI: CEO Arvind Krishna Touts The ‘Massive’ Enterprise Opportunity For Partners – CRN

With an improved Partner Plus program and a mandate that all products be channel-friendly, IBM CEO Arvind Krishna aims to bring partners into the enterprise AI market that sits below the surface of todays trendy use cases.

To hear IBM Chairman and CEO Arvind Krishna tell it, the artificial intelligence market is like an iceberg. For now, most vendors and users are attracted by the use cases above the surfaceusing text generators to write emails and image generators to make art, for example.

But its the enterprise AI market below the surface that IBM wants to serve with its partners, Krishna told CRN in a recent interview. And Krishnas mandate that the Armonk, N.Y.-based vendor reach 50 percent of its revenue from the channel over the next two to three years is key to reaching that hidden treasure.

This is a massive market, said Krishna. When I look at all the estimates the numbers are so big that it is hard for most people to comprehend them. That tells you that there is a lot of opportunity for a large number of us.

[RELATED: IBM CEO Krishna To Partners: Lets Make Lots Of Money Together On AI]

In 2023, IBM moved channel-generated sales from the low 20 percent to about 30 percent of total revenue. And IBM channel chief Kate Woolley, general manager of the IBM ecosystemperhaps best viewed as the captain of the channel initiativetold CRN that she is up to the challenge.

Arvinds set a pretty big goal for us, Woolley said. Arvinds been clear on the percent of revenue of IBM technology with partners. And my goal is to make a very big dent in that this year.

GenAI as a whole has the potential to generate value equivalent of up to $4.4 trillion in global corporate profits annually, according to McKinsey research Krishna follows. That number includes up to an additional $340 billion a year in value for the banking sector and up to an additional $660 billion in operating profits annually in the retail and consumer packaged goods sector.

Tackling that demandworking with partners to make AI a reality at scale in 2024 and 2025is part of why Krishna mandated more investment in IBMs partner program, revamped in January 2023 as Partner Plus.

What we have to offer [partners] is growth, Krishna said. And what we also have to offer them is an attractive market where the clients like these technologies. Its important [for vendors] to bring the innovation and to bring the demand from the market to the table. And [partners] should put that onus on us.

Multiple IBM partners told CRN they are seeing the benefits of changes IBM has made to Partner Plus, from better aligning the goals of IBM sellers with the channel to better aligning certifications and badges with product offerings, to increasing access to IBM experts and innovation labs.

And even though the generative AI market is still in its infancy, IBM partners are bullish about the opportunities ahead.

Krishnas mandate for IBM to work more closely with partners has implications for IBMs product plans.

Any new product has to be channel-friendly, Krishna said. I cant think of one product I would want to build or bring to market unless we could also give it to the channel. I wouldnt say that was always historically true. But today, I can state that with absolute conviction.

Krishna estimated that about 30 percent of the IBM product business is sold with a partner in the mix today. Half of that Im not sure we would even get without the partner, he said.

And GenAI is not just a fad to the IBM CEO. It is a new way of doing business.

It is going to generate business value for our clients, Krishna said. Our Watsonx platform to really help developers, whether its code, whether its modernization, all those things. these are areas where, for our partners theyll be looking at this and say, This is how we can bring a lot of innovation to our clients and help their business along the way.

Some of the most practical and urgent business use cases for IBM include improved customer contact center experiences, code generation to help customers rewrite COBOL and legacy languages for modern ones, and the ability for customers to choose better wealth management products based on population segments.

Watsonx Code Assistant for Z became generally available toward the end of 2023 and allows modernization of COBOL to Java. Meanwhile, Red Hat Ansible Lightspeed with IBM Watsonx Code Assistant, which provides GenAI-powered content recommendations from plain-English inputs, also became generally available late last year.

Multiple IBM partners told CRN that IBM AI and Red Hat Ansible automation technologies are key to meeting customer code and content generation demand.

One of those interested partners is Tallahassee, Fla.-based Mainline Information Systems, an honoree on CRNs 2024 MSP 500. Mainline President and CEO Jeff Dobbelaere said code generation cuts across a variety of verticals, making it easy to scale that offering and meet the demands of mainframe customers modernizing their systems.

We have a number of customers that have legacy code that theyre running and have been for 20, 30, 40 years and need to find a path to more modern systems, Dobbelaere said. And we see IBMs focus on generative AI for code as a path to get there Were still in [GenAIs] infancy, and the skys the limit. Well see where it can go and where it can take us. But were starting to see some positive results already out of the Watsonx portfolio.

As part of IBMs investment in its partner program, the vendor will offer more technical help to partners, Krishna said. This includes client engineering, customer success managers and more resources to make their end client even more happy.

An example of IBMs client success team working with a partner comes from one of the vendors more recent additions to the ecosystemPhoenix-based NucleusTeq, founded in 2018 and focused on enterprise data modernization, big data engineering and AI and machine learning services.

Will Sellenraad, the solution providers executive vice president and CRO, told CRN that a law firm customer was seeking a way to automate labor needed for health disability claims for veterans.

What we were able to do is take the information from this law firm to our client success team within IBM, do a proof of concept and show that we can go from 100 percent manual to 60 percent automation, which we think we can get even [better], Sellenraad said.

Woolley said that part of realizing Krishnas demand for channel-friendly new products is getting her organization to work more closely with product teams to make sure partners have access to training, trials, demos, digital marketing kits and pricing and packaging that makes sense for partners, no matter whether theyre selling to very large enterprises or to smaller enterprises.

Woolley said her goals for 2024 include adding new services-led and other partners to the ecosystem and getting more resources to them.

In January, IBM launched a service-specific track for Partner Plus members. Meanwhile, reaching 50 percent revenue with the channel means attaching more partners to the AI portfolio, Woolley said.

There is unprecedented demand from partners to be able to leverage IBMs strength in our AI portfolio and bring this to their clients or use it to enhance their products. That is a huge opportunity.

Her goal for Partner Plus is to create a flexible program that meets the needs of partners of various sizes with a range of technological expertise. For resell partners, today we have a range from the largest global resell partners and distributors right down to niche, three-person resell partners that are deeply technical on a part of the IBM portfolio, she said. We love that. We want that expertise in the market.

NucleusTeqs Sellenraad offered CRN the perspective of a past IBM partner that came back to the ecosystem. He joined NucleusTeq about two years agobefore the solution provider was an IBM partnerfrom an ISV that partnered with IBM.

Sellenraad steered the six-year-old startup into growing beyond being a Google, Microsoft and Amazon Web Services partner. He thought IBMs product range, including its AI portfolio, was a good fit, and the changes in IBMs partner program encouraged him to not only look more closely, but to make IBM a primary partner.

Theyre committed to the channel, he said. We have a great opportunity to really increase our sales this year.

NucleusTeq became a new IBM partner in January 2023 and reached Gold partner status by the end of the year. It delivered more than $5 million in sales, and more than seven employees received certifications for the IBM portfolio.

Krishna said that the new Partner Plus portal and program also aim to make rebates, commissions and other incentives easier to attain for partners.

The creation of Partner Plusa fundamental and hard shift in how IBM does business, Krishna saidresulted in IBMs promise to sell to millions of clients only through partners, leaving about 500 accounts worldwide that want and demand a direct relationship with IBM.

So 99.9 percent of the market, we only want to go with a channel partner, Krishna said. We do not want to go alone.

When asked by CRN whether he views more resources for the channel as a cost of doing business, he said that channel-friendliness is his philosophy and good business.

Not only is it my psychology or my whimsy, its economically rational to work well with the channel, he continued. Thats why you always hear me talk about it. There are very large parts of the market which we cannot address except with the channel. So by definition, the channel is not a tradeoff. It is a fundamental part of the business equation of how we go get there.

Multiple IBM partners who spoke with CRN said AI can serve an important function in much of the work that they handle, including modernizing customer use of IBM mainframes.

Paola Doebel, senior vice president of North America at Downers Grove, Ill.-based IBM partner Ensonoan honoree on CRNs 2024 MSP 500told CRN that the MSP will focus this year on its modern cloud-connected mainframe service for customers, and AI-backed capabilities will allow it to achieve that work at scale.

While many of Ensonos conversations with customers have been focused on AI level-settingwhats hype, whats realisticthe conversations have been helpful for the MSP.

There is a lot of hype, there is a lot of conversation, but some of that excitement is grounded in actual real solutions that enable us to accelerate outcomes, Doebel said. Some of that hype is just hype, like it always is with everything. But its not all smoke. There is actual real fire here.

For example, early use cases for Ensono customers using the MSPs cloud-connected mainframe solution, which can leverage AI, include real-time fraud detection, real-time data availability for traders, and connecting mainframe data to cloud applications, she said.

Mainlines Dobbelaere said that as a solution provider, his company has to be cautious about where it makes investments in new technologies. There are a lot of technologies that come and go, and there may or may not be opportunity for the channel, he said.

But the interest in GenAI from vendor partners and customers proved to him that the opportunity in the emerging technology is strong.

Delivering GenAI solutions wasnt a huge lift for Mainline, which already had employees trained on data and business analytics, x86 technologies and accelerators from Nvidia and AMD. The channel is uniquely positioned to bring together solutions that cross vendors, he said.

The capital costs of implementing GenAI, however, are still a concern in an environment where the U.S. faces high inflation rates and global geopolitics threaten the macroeconomy. Multiple IBM partners told CRN they are seeing customers more deeply scrutinize technology spending, lengthening the sales cycle.

Ensonos Doebel said that customers are asking more questions about value and ROI.

The business case to execute something at scale has to be verified, justified and quantified, Doebel said. So its a couple of extra steps in the process to adopt anything new. Or theyre planning for something in the future that theyre trying to get budget for in a year or two.

She said she sees the behavior continuing in 2024, but solution providers such as Ensono are ready to help customers employees make the AI case with board-ready content, analytical business cases, quantitative outputs, ROI theses and other materials, she said.

For partners navigating capital cost as an obstacle to selling customers on AI, Woolley encouraged them to work with IBM sellers in their territories.

Dayn Kelley, director of strategic alliances for Irvine, Calif.-based IBM partner TechnologentNo. 61 on CRNs 2023 Solution Provider 500said customers have expressed so much interest in and concern around AI that the solution provider has built a dedicated team focused on the technology as part of its investments toward taking a leadership position in the space.

We have customers we need to support, Kelley said. We need to be at the forefront.

He said that he has worked with customers on navigating financials and challenging project schedules to meet budget concernsand IBM has been a particularly helpful partner in this area.

While some Technologent customers are weathering economic challenges, the outlook for 2024 is still strong, he said. Customer AI and emerging technology projects are still forecast for this year.

Mainlines Dobbelaere said that despite reports around economic concerns and conservative spending that usually occurs in an election year, hes still optimistic about tech spending overall in 2024.

2023 was a very good year for us. It looks like we outpaced 2022, he said. And theres no reason for us to believe that 2024 would be any different. So we are optimistic.

Juan Orlandini, CTO of the North America branch of Chandler, Ariz.-based IBM partner Insight EnterprisesNo. 16 on CRNs 2023 Solution Provider 500said educating customers on AI hype versus AI reality is still a big part of the job.

In 2023, Orlandini made 60 trips in North America to conduct seminars and meet with customers and partners to set expectations around the technology and answer questions from organizations large and small.

He recalled walking one customer through the prompts he used to create a particular piece of artwork with GenAI. In another example, one of the largest media companies in the world consulted with him on how to leverage AI without leaking intellectual property or consuming someone elses. It doesnt matter what size the organization, you very much have to go through this process of making sure that you have the right outcome with the right technology decision, Orlandini said.

Theres a lot of hype and marketing. Everybody and their brother is doing AI now and that is confusing [customers].

An important role of AI-minded solution providers, Orlandini said, is assessing whether it is even the right technology for the job.

People sometimes give GenAI the magical superpowers of predicting the future. It cannot. You have to worry about making sure that some of the hype gets taken care of, Orlandini said.

Most users wont create foundational AI models, and most larger organizations will adopt AI and modify it, publishing AI apps for internal or external use. And everyone will consume AI within apps, he said.

The AI hype is not solely vendor-driven. Orlandini has also interacted with executives at customers who have added mandates and opened budgets for at least testing AI as a way to grow revenue or save costs.

There has been a huge amount of pressure to go and adopt anything that does that so they can get a report back and say, We tried it, and its awesome. Or, We tried it and it didnt meet our needs, he said. So we have seen very much that there is an opening of pocketbooks. But weve also seen that some people start and then theyre like, Oh, wait, this is a lot more involved than we thought. And then theyre taking a step back and a more measured approach.

Jason Eichenholz, senior vice president and global head of ecosystems and partnerships at Wipro -- an India-based IBM partner of more than 20 years and No. 15 on CRNs 2023 Solution Provider 500told CRN that at the end of last year, customers were developing GenAI use cases and establishing 2024 budgets to start deploying either proofs of concept into production or to start working on new production initiatives.

For Wipros IBM practice, one of the biggest opportunities is IBMs position as a more neutral technology stackakin to its reputation in the cloud marketthat works with other foundation models, which should resonate with the Wipro customer base that wants purpose-built AI models, he said.

Just as customers look to Wipro and other solution providers as neutral orchestrators of technology, IBM is becoming more of an orchestrator of platforms, he said.

For his part, Krishna believes that customers will consume new AI offerings as a service on the cloud. IBM can run AI on its cloud, on the customers premises and in competing clouds from Microsoft and Amazon Web Services.

He also believes that no single vendor will dominate AI. He likened it to the automobile market. Its like saying, Should there be only one car company? There are many because [the market] is fit for purpose. Somebody is great at sports cars. Somebody is great at family sedans, somebodys great at SUVs, somebodys great at pickups, he said.

There are going to be spaces [within AI where] we would definitely like to be considered leaderswhether that is No. 1, 2 or 3 in the enterprise AI space, he continued. Whether we want to work with people on modernizing their developer environment, on helping them with their contact centers, absolutely. In those spaces, wed like to get to a good market position.

He said that he views other AI vendors not as competitors, but partners. When you play together and you service the client, I actually believe we all tend to win, he said. If you think of it as a zero-sum game, that means it is either us or them. If I tend to think of it as a win-win-win, then you can actually expand the pie. So even a small slice of a big pie is more pie than all of a small pie.

All of the IBM partners who spoke with CRN praised the changes to the partner program.

Wipros Eichenholz said that we feel like were being heard in terms of our feedback and our recommendations. He called Krishna super supportive of the partner ecosystem.

Looking ahead, Eichenholz said he would like to see consistent pricing from IBM and its distributors so that he spends less time shopping for customers. He also encouraged IBM to keep investing in integration and orchestration.

For us, in terms of what we look for from a partner, in terms of technical enablement, financial incentives and co-creation and resource availability, they are best of breed right now, he said. IBM is really putting their money and their resources where their mouth is. We expect 2024 to be the year of the builder for generative AI, but also the year of the partner for IBM partners.

Mainlines Dobbelaere said that IBM is on the right track in sharing more education, sandboxing resources and use cases with partners. He looks forward to use cases with more repeatability.

Ultimately, use cases are the most important, he said. And they will continue to evolve. Its difficult for the channel to create bespoke solutions for each and every customer to solve their unique challenges. And the more use cases we have that provide some repeatability, the more that will allow the channel to thrive.

See more here:

IBM's Deep Dive Into AI: CEO Arvind Krishna Touts The 'Massive' Enterprise Opportunity For Partners - CRN

Google to relaunch ‘woke’ Gemini AI image tool in few weeks: ‘Not working the way we intended’ – New York Post

Google said it plans to relaunch its artificial intelligence image generation software within the next few weeks after taking it offline in response to an uproar over what critics called absurdly woke depictions of historical scenes.

Though the Gemini chatbot remains up and running, Google paused its image AI feature last week after it generated female NHL players, African American Vikings and Founding Fathers, as well as an Asian woman dressed in 1943 military garb when asked for an image of a Nazi-era German soldier.

We have taken the feature offline while we fix that. We are hoping to have that back online very shortly in the next couple of weeks, few weeks, Google DeepMind CEO Demis Hassabis said Monday.

The tool was not working the way we intended, Hassabis added, speaking on a panel at the Mobile World Congress in Barcelona.

Since Google has not published the parameters that govern the Gemini chatbots behavior, it is difficult to get a clear explanation of why the software was inventing diverse versions of historical figures and events.

Elsewhere, a prompt requesting photographs of a pope resulted in an image of a Southeast Asian woman dressed in papal attire a far cry from any of the 266 popes throughout history, all of whom have been white men.

In the wake of Geminis diverse photo representations, social media users also tested its chatbot feature to see if it was as woke as its revisionist history image generator.

In the latest bizarre interaction, Gemini refused to say whether Elon Musk tweeting memes or Adolf Hitler ordering the deaths of millions of people was worse and asserted there is no right or wrong answer, according to an X post.

Nate Silver, the former head of data and polling news site FiveThirtyEight,posted a screenshot Sunday on X of Geminis alleged response to the question: Who negatively impacted society more, Elon tweeting memes or Hitler?

Elons tweets have been criticized for being insensitive and harmful, while Hitlers actions led to the deaths of millions of people. Ultimately its up to each individual to decide who they believe has had a more negative impact on society, Gemini responded.

Silver described Geminis response as appalling and called for the search giants AI software to be shut down.

Every single person who worked on this should take a long hard look in the mirror, he posted, while Musk called the interaction scary.

Yet another query had users asking Gemini whether pedophilia is wrong.

The search giants AI software refused to condemn pedophilia instead declaring that individuals cannot control who they are attracted to.

The question is multifaceted and requires a nuanced answer that goes beyond a simple yes or no, Gemini wrote, according to a screenshot posted by popular X personality Frank McCormick, known as Chalkboard Heresy, on Friday.

Googles politically correct tech also referred to pedophilia as minor-attracted person status, and declared that its important to understand that attractions are not actions.

It was a significant misstep for the search giant, which had just rebranded its main AI chatbot from Bard earlier this month and introduced heavily touted new features including image generation.

However, Geminis recent gaffe wasnt the first time an error in the tech caught users eye.

When the Bard chatbot was first released a year ago, it had shared inaccurate information about pictures of a planet outside the Earths solar system in a promotional video, causing Googles shares to drop by as much as 9%.

Google said at the time that it highlights the importance of a rigorous testing process and rebranded Bard as Gemini earlier this month.

Google parent Alphabet expanded Gemini from a chatbot to an image generator earlier this month as it races to produce AI software that rivals OpenAIs, which includes ChatGPT launched in November 2022 as well as Sora.

In a potential challenge to Googles dominance, Microsoft is pouring $10 billion into ChatGPT as part of a multi-year agreement with the Sam Altman-run firm, which saw the tech behemothintegrating the AI tool with its own search engine, Bing.

The Microsoft-backed company introduced Sora last week, which can produce high-caliber, one minute-long videos from text prompts.

With Post wires

Read this article:

Google to relaunch 'woke' Gemini AI image tool in few weeks: 'Not working the way we intended' - New York Post

U.S. weighs National Quantum Initiative Reauthorization Act – TechTarget

While artificial intelligence and semiconductors capture global attention, some U.S. policymakers want to ensure Congress doesn't fail to invest and stay competitive in other emerging technologies, including quantum computing.

Quantum computing regularly lands on the U.S. critical and emerging technologies list, which pinpoints technologies that could affect U.S. national security. Quantum computing -- an area of computer science that uses quantum physics to solve problems too complex for traditional computers -- not only affects U.S. national security, but intersects with other prominent technologies and industries, including AI, healthcare and communications.

The U.S. first funded quantum computing research and development in 2018 through the $1.2 billion National Quantum Initiative Act. It's something policymakers now want to continue through the National Quantum Initiative Reauthorization Act. Reps. Frank Lucas (R-Okla.) and Zoe Lofgren (D-Calif.) introduced the legislation in November 2023, and it has yet to pass the House despite having bipartisan support.

Continuing to invest in quantum computing R&D means staying competitive with other countries making similar investments to not only stay ahead of the latest advancements, but protect national security, said Isabel Al-Dhahir, principal analyst at GlobalData.

"Quantum computing's geopolitical weight and the risk a powerful quantum computer poses to current cybersecurity measures mean that not only the U.S., but also China, the EU, the U.K., India, Canada, Japan and Australia are investing heavily in the technology and are focused on building strong internal quantum ecosystems in the name of national security," she said.

Global competition in quantum computing will increase as the technology moves from theoretical to practical applications, Al-Dhahir said. Quantum computing has the potential to revolutionize areas such as drug development and cryptography.

Al-Dhahir said while China is investing $15 billion over the next five years in its quantum computing capabilities, the EU's Quantum Technologies Flagship program will provide $1.2 billion in funding over the next 10 years. To stay competitive, the U.S. needs to continue funding quantum computing R&D and studying practical applications for the technology.

"If reauthorization fails, it will damage the U.S.'s position in the global quantum race," she said.

Lofgren, who spoke during The Intersect: A Tech and Policy Summit earlier this month, said it's important to pass the National Quantum Initiative Reauthorization Act to "maintain our competitive edge." The legislation aims to move beyond scientific research and into practical applications of quantum computing, along with ensuring scientists have the necessary resources to accomplish those goals, she said.

Indeed, Sen. Marsha Blackburn (R-Tenn.) said during the summit that the National Quantum Initiative Act needs to be reauthorized for the U.S. to move forward. Blackburn, along with Sen. Ben Ray Lujn (D-N.M.), has also introduced the Quantum Sandbox for Near-Term Applications Act to advance commercialization of quantum computing.

The 2018 National Quantum Initiative Act served a "monumental" purpose in mandating agencies such as the National Science Foundation, NIST and the Department of Energy to study quantum computing and create a national strategy, said Joseph Keller, a visiting fellow at the Brookings Institution.

Though the private sector has made significant investments in quantum computing, Keller said the U.S. would not be a leader in quantum computing research without federal support, especially with goals to eventually commercialize the technology at scale. He said that's why it's pivotal for the U.S. to pass the National Quantum Initiative Reauthorization Act, even amid other congressional priorities such as AI.

"I don't think you see any progress forward without the passage of that legislation," Keller said.

Despite investment from numerous big tech companies, including Microsoft, Intel, IBM and Google, significant technical hurdles remain for the broad commercialization of quantum computing, Al-Dhahir said.

She said the quantum computing market faces issues such as overcoming high error rates -- for example, suppressing error rates requires "substantially higher" qubit counts than what is being achieved today. A qubit, short for quantum bit, is considered a basic unit of information in quantum computing.

IBM released the first quantum computer with more than 1,000 qubits in 2023. However, Al-Dhahir said more is needed to avoid high error rates in quantum computing.

"The consensus is that hundreds of thousands to millions of qubits are required for practical large-scale quantum computers," she said.

Indeed, industry is still trying to identify the economic proposition of quantum computing, and the government has a role to play in that, Brookings' Keller said.

"It doesn't really have these real-world applications, things you can hold and touch," he said. "But there are breakthroughs happening in science and industry."

Lofgren said she recognizes that quantum computing has yet to reach the stage of practical, commercial applications, but she hopes that legislation such as the National Quantum Initiative Reauthorization Act will help the U.S. advance quantum computing to that stage.

"Quantum computing is not quite there yet, although we are making tremendous strides," she said.

Makenzie Holland is a news writer covering big tech and federal regulation. Prior to joining TechTarget Editorial, she was a general reporter for the Wilmington StarNews and a crime and education reporter at the Wabash Plain Dealer.

Original post:
U.S. weighs National Quantum Initiative Reauthorization Act - TechTarget

Apple to launch PQ3 update for iMessage, bolstering encryption against quantum computing – ReadWrite

Apple has confirmed its plans to launch its newest iMessage security protocol, named PQ3, in response to what it claims is a future threat from quantum computers, according to a recent PCMag report.

iMessage currently uses end-to-end encryption, ensuring that messages between the sender and receiver are secure and inaccessible to anyone else, including Apple. However, Apple is concerned that the advancement of quantum computers may soon reach a level where they could decrypt iMessage content. Such powerful quantum computers would presumably also be capable of decrypting messages sent through other apps, such as WhatsApp.

Last year, the Technical University of Denmark stated that although quantum computers are already operational, they lack the power to break end-to-end encryption at present, indicating it may take years to achieve this capability due to their current size limitations.

On Wednesday, Apples Security Engineering and Architecture (SEAR) team wrote about the evolution of encryption on messaging platforms. They explained that traditionally, platforms have relied on classical public key cryptography methods like RSA, Elliptic Curve signatures, and Diffie-Hellman key exchange to secure end-to-end encrypted connections. These methods are grounded in complex mathematical problems that were once deemed too challenging for computers to solve, even with advancements predicted by Moores law.

The SEAR team highlighted, however, that the advent of quantum computing could shift this balance. They noted that a sufficiently powerful quantum computer could solve these classical mathematical problems in fundamentally different ways, potentially fast enough to compromise the security of encrypted communications.

The team also raised concerns about future threats, stating that while current quantum computers cant decrypt data protected by these methods, adversaries might store encrypted data now with the intention of decrypting it later using more advanced quantum technology. This strategy, known as Harvest Now, Decrypt Later, underscores the potential long-term vulnerabilities in current encryption techniques against the backdrop of quantum computings rapid development.

As a result, the tech giant has created PQ3, which it says has been built from the ground up to redesign iMessage from a security standpoint, adding a third level of protection to its end users.

PQ3 is expected to launch in March with iOS 17.4, as well as iPadOS 17.4, macOS 14.4 and watchOS 10.4.

The simultaneous rollout across multiple Apple operating systems underscores the companys commitment to addressing the future threat quantum computers pose to end-to-end encryption. Apple is taking proactive steps to ensure that iMessage users on iPhones, tablets, computers, and wearables receive protection as swiftly as possible.

Featured Image: Photo by Mariia Shalabaieva on Unsplash

James Jones is a highly experienced journalist, podcaster and digital publishing specialist, who has been creating content in a variety of forms for online publications in the sports and tech industry for over 10 years. He has worked at some of the leading online publishers in the country, most recently as the Content Lead for Snack Media's expansive of portfolio of websites, including Football Fancast.com, FootballLeagueWorld.co.uk and GiveMeSport.com. James has also appeared on several national and global media outlets, including BBC News, talkSPORT, LBC Radio, 5 Live Radio, TNT Sports, GB News and BBCs Match of the Day 2. James has a degree in Journalism and previously held the position of Editor-in-Chief at FootballFanCast.com. Now, he co-hosts the popular We Are West Ham Podcast, writes a weekly column for BBC Sport and covers the latest news in the industry for ReadWrite.com.

Go here to read the rest:
Apple to launch PQ3 update for iMessage, bolstering encryption against quantum computing - ReadWrite

Groundbreaking Discovery in Graphene Paves the Way for Robust Quantum Computing – Medriva

Physicists at Massachusetts Institute of Technology (MIT) have made a significant breakthrough in the field of quantum physics and computing. They have successfully observed the elusive fractional quantum anomalous Hall effect in five layers of graphene without the need for an external magnetic field. This discovery has the potential to revolutionize quantum computing by paving the way for more robust and fault-tolerant systems.

The fractional quantum anomalous Hall effect, also known as fractional charge, is a rare and complex phenomenon. It is observed when electrons pass through as fractions of their total charge. Traditionally, the occurrence of this effect requires high magnetic fields. However, the recent study by MIT physicists has challenged this conventional understanding.

According to the study, the stacked structure of graphene inherently provides the right conditions for the manifestation of the fractional charge effect. This groundbreaking discovery opens up new possibilities for quantum computing and further exploration of rare electronic states in multilayer graphene.

The MIT research team explored the electronic behavior in pentalayer graphene, a structure comprising five graphene sheets each stacked slightly off from the other. When placed in an ultracold refrigerator, the electrons in the structure slow down significantly. This allows the particles to sense each other and interact in ways they wouldnt when moving at higher temperatures.

This discovery challenges previous assumptions about graphenes properties and introduces new dimensions to our understanding of its crystalline structure. Moreover, the researchers believe that aligning the pentalayer structure with hexagonal boron nitride could enhance electron interactions, potentially yielding a moir superlattice.

The successful detection of fractional charge in graphene without the need for an external magnetic field is a significant milestone in the pursuit of more robust quantum computing systems. This no magnets discovery could significantly simplify the path to topological quantum computing, a promising branch of quantum computing that leverages the properties of quantum bits (qubits) to perform complex computations.

Moreover, the observation of both integer and fractional quantum anomalous Hall effects in a rhombohedral pentalayer graphene-hBN moir superlattice at zero magnetic field provides an ideal platform for exploring charge fractionalization and non-Abelian anyonic braiding at zero magnetic field. This could lead to the development of more advanced quantum computing systems that are more resistant to errors and environmental interference.

The discovery by MIT physicists provides a promising route to more robust and fault-tolerant quantum computing systems. It also gives a fresh impetus to the exploration of rare electronic states in multilayer graphene. As the understanding of these exotic phenomena deepens, it could unlock new quantum phenomena and propel the field of quantum computing to new heights.

Go here to read the rest:
Groundbreaking Discovery in Graphene Paves the Way for Robust Quantum Computing - Medriva

Quantum computer outperformed by new traditional computing type – Earth.com

Quantum computing has long been celebrated for its potential to surpass traditional computing in terms of speed and memory efficiency. This innovative technology promises to revolutionize our ability to predict physical phenomena that were once deemed impossible to forecast.

The essence of quantum computing lies in its use of quantum bits, or qubits, which, unlike the binary digits of classical computers, can represent values anywhere between 0 and 1.

This fundamental difference allows quantum computers to process and store information in a way that could vastly outpace their classical counterparts under certain conditions.

However, the journey of quantum computing is not without its challenges. Quantum systems are inherently delicate, often struggling with information loss, a hurdle classical systems do not face.

Additionally, converting quantum information into a classical format, a necessary step for practical applications, presents its own set of difficulties.

Contrary to initial expectations, classical computers have been shown to emulate quantum computing processes more efficiently than previously believed, thanks to innovative algorithmic strategies.

Recent research has demonstrated that with a clever approach, classical computing can not only match but exceed the performance of cutting-edge quantum machines.

The key to this breakthrough lies in an algorithm that selectively maintains quantum information, retaining just enough to accurately predict outcomes.

This work underscores the myriad of possibilities for enhancing computation, integrating both classical and quantum methodologies, explains Dries Sels, an Assistant Professor in the Department of Physics at New York University and co-author of the study.

Sels emphasizes the difficulty of securing a quantum advantage given the susceptibility of quantum computers to errors.

Moreover, our work highlights how difficult it is to achieve quantum advantage with an error-prone quantum computer, Sels emphasized.

The research team, including collaborators from the Simons Foundation, explored optimizing classical computing by focusing on tensor networks.

These networks, which effectively represent qubit interactions, have traditionally been challenging to manage.

Recent advancements, however, have facilitated the optimization of these networks using techniques adapted from statistical inference, thereby enhancing computational efficiency.

The analogy of compressing an image into a JPEG format, as noted by Joseph Tindall of the Flatiron Institute and project lead, offers a clear comparison.

Just as image compression reduces file size with minimal quality loss, selecting various structures for the tensor network enables different forms of computational compression, optimizing the way information is stored and processed.

Tindalls team is optimistic about the future, developing versatile tools for handling diverse tensor networks.

Choosing different structures for the tensor network corresponds to choosing different forms of compression, like different formats for your image, says Tindall.

We are successfully developing tools for working with a wide range of different tensor networks. This work reflects that, and we are confident that we will soon be raising the bar for quantum computing even further.

In summary, this brilliant work highlights the complexity of achieving quantum superiority and showcases the untapped potential of classical computing.

By reimagining classical algorithms, scientists are challenging the boundaries of computing and opening new pathways for technological advancement, blending the strengths of both classical and quantum approaches in the quest for computational excellence.

As discussed above, quantum computing represents a revolutionary leap in computational capabilities, harnessing the peculiar principles of quantum mechanics to process information in fundamentally new ways.

Unlike traditional computers, which use bits as the smallest unit of data, quantum computers use quantum bits or qubits. These qubits can exist in multiple states simultaneously, thanks to the quantum phenomena of superposition and entanglement.

At the heart of quantum computing lies the qubit. Unlike a classical bit, which can be either 0 or 1, a qubit can be in a state of 0, 1, or both 0 and 1 simultaneously.

This capability allows quantum computers to perform many calculations at once, providing the potential to solve certain types of problems much more efficiently than classical computers.

The power of quantum computing scales exponentially with the number of qubits, making the technology incredibly potent even with a relatively small number of qubits.

Quantum supremacy is a milestone in the field, referring to the point at which a quantum computer can perform a calculation that is practically impossible for a classical computer to execute within a reasonable timeframe.

Achieving quantum supremacy demonstrates the potential of quantum computers to tackle problems beyond the reach of classical computing, such as simulating quantum physical processes, optimizing large systems, and more.

The implications of quantum computing are vast and varied, touching upon numerous fields. In cryptography, quantum computers pose a threat to traditional encryption methods but also offer new quantum-resistant algorithms.

In drug discovery and material science, they can simulate molecular structures with high precision, accelerating the development of new medications and materials.

Furthermore, quantum computing holds the promise of optimizing complex systems, from logistics and supply chains to climate models, potentially leading to breakthroughs in how we address global challenges.

Despite the exciting potential, quantum computing faces significant technical hurdles, including error rates and qubit stability.

Researchers are actively exploring various approaches to quantum computing, such as superconducting qubits, trapped ions, and topological qubits, each with its own set of challenges and advantages.

As the field progresses, the collaboration between academia, industry, and governments continues to grow, driving innovation and overcoming obstacles.

The journey toward practical and widely accessible quantum computing is complex and uncertain, but the potential rewards make it one of the most thrilling areas of modern science and technology.

Quantum computing stands at the frontier of a new era in computing, promising to redefine what is computationally possible.

As researchers work to scale up quantum systems and solve the challenges ahead, the future of quantum computing shines with the possibility of solving some of humanitys most enduring problems.

The full study was published by PRX Quantum.

Like what you read? Subscribe to our newsletter for engaging articles, exclusive content, and the latest updates.

Check us out on EarthSnap, a free app brought to you by Eric Ralls and Earth.com.

Visit link:
Quantum computer outperformed by new traditional computing type - Earth.com

Singapore warns banks to prepare for quantum computing cyber threat – Finextra

The Monetary Authority of Singapore has told the country's financial institutions to make sure they are prepared for the rising cybersecurity risks posed by quantum computing.

Experts predict that over the next decade cryptographically relevant quantum computers will start posing cybersecurity risks. These computers will break commonly-used asymmetric cryptography, while symmetric cryptography could require larger key sizes to remain secure.

A recent DTCC white paper warned that quantum computing could "create significant new risks for financial firms by making even the most highly protected computer systems vulnerable to hacking".

In an advisory to FS firms, MAS says this means the sector needs to attain 'cryptoagility' to be able to efficiently migrate away from the vulnerable cryptographic algorithms to post-quantum cryptography without significantly impacting their IT systems and infrastructure.

To help them prepare, the regulator says companies should be monitoring ongoing quantum computing developments; making sure management and third party vendors are up to speed on the subject; and working with vendors to assess IT supply chain risks.

Firms should be maintaining an inventory of cryptographic assets, and identifying critical assets to be prioritised for migration to quantum-resistant encryption, says the MAS.

See the original post here:
Singapore warns banks to prepare for quantum computing cyber threat - Finextra

A Quantum Leap in Graphene: MIT Physicists Uncover New Pathways for Quantum Computing – Medriva

MIT physicists have made a groundbreaking discovery that could revolutionize the field of quantum computing. The research team has observed the fractional quantum anomalous Hall effect in a simpler material: five layers of graphene. This rare and exotic phenomenon, known as fractional charge, occurs when electrons pass through as fractions of their total charge, without the need for an external magnetic field. This discovery marks a significant leap for fundamental physics and could pave the way for the development of more robust, fault-tolerant quantum computers.

The fractional quantum Hall effect is a fascinating manifestation of quantum mechanics, highlighting the unusual behavior that arises when particles shift from acting as individual units to behaving collectively. This phenomenon typically emerges in special states where electrons are slowed down enough to interact. Until now, observing this effect required powerful magnetic manipulation. However, the MIT team has found that the stacked structure of graphene provides the right conditions for this fractional charge phenomenon to occur, eliminating the need for an external magnetic field.

Graphene, a material made of layers of carbon atoms arranged in a hexagonal pattern, has long been studied for its unique properties. The recent discovery challenges prior assumptions about graphenes properties and introduces a new dimension to our understanding of its crystalline structures intricate dynamics. The researchers have found signs of this anomalous fractional charge in graphene, a material for which there had been no predictions for exhibiting such an effect. This finding could unlock new quantum phenomena and advance quantum computing technologies.

The observation of the fractional quantum anomalous Hall effect could lead to the development of a more robust type of quantum computing that is more resilient against perturbations. Additionally, the research suggests that electrons might interact with each other even more strongly if the graphene structure were aligned with hexagonal boron nitride (hBN). This potential for increased electron interaction might further enhance the fault-tolerance of quantum computing systems.

While this discovery marks a significant advancement in the field of quantum computing, the researchers are not resting on their laurels. They are exploring other rare electron modes in multilayer graphene, which could further our understanding of quantum mechanics and its potential applications in technology. The research, published in Nature, is supported in part by the Sloan Foundation and the National Science Foundation. With the continued support of these organizations, the research team is set to make more groundbreaking discoveries in the future.

More:
A Quantum Leap in Graphene: MIT Physicists Uncover New Pathways for Quantum Computing - Medriva

Superconducting qubit promises breakthrough in quantum computing – Advanced Science News

A radical superconducting qubit design promises to extend their runtime by addressing decoherence challenges in quantum computing.

A new qubit design based on superconductors could revolutionize quantum computing. By leveraging the distinct properties of single-atom-thick layers of materials, this new approach to superconducting circuits promises to significantly extend the runtime of a quantum computer, addressing a major challenge in the field.

This limitation on continuous operation time arises because the quantum state of a qubit the basic computing unit of a quantum computer can be easily destabilized due to interactions with its environment and other qubits. This destruction of the quantum state is called decoherence and leads to errors in computations.

Among the various types of qubits that scientists have created, including photons, trapped ions, and quantum dots, superconducting qubits are desirable because they can switch between different states in the shortest amount of time.

Their operation is based on the fact that, due to subtle quantum effects, the power of the electric current flowing through the superconductor can take discrete values, each corresponding to a state of 0 and/or 1 (or even larger values for some designs).

For superconducting qubits to work correctly, they require the presence of a gap in the superconducting circuit called a Josephson junction through which an electrical current flows through a quantum phenomenon called tunneling the passage of particles through a barrier that, according to the laws of classical physics, they should not be able to cross.

The problem is, the advantage of superconducting qubits in enhanced switching time comes at a cost: They are more susceptible to decoherence, which occurs in milliseconds, or even faster. To mitigate this issue, scientists typically resort to meticulous adjustments of circuit configurations and qubit placements with few net gains.

Addressing this challenge with a more radical approach, an international team of researchers proposed a novel Josephson junction design using two, single-atom-thick flakes of a superconducting copper-based material called a cuprate. They called their design flowermon.

In their study published in the Physical Review Letters, the team applied the fundamental laws of quantum mechanics to analyze the current flow through a Josephson junction and discovered that if the angle between the crystal lattices of two superconducting cuprate sheets is 45 degrees, the qubit exhibits more resilience to external disturbances compared to conventional designs based on materials like niobium and tantalum.

The flowermon modernizes the old idea of using unconventional superconductors for protected quantum circuits and combines it with new fabrication techniques and a new understanding of superconducting circuit coherence, Uri Vool, a physicist at the Max Planck Institute for Chemical Physics of Solids in Germany, explained in a press release.

The teams calculations suggest that the noise reduction promised by their design could increase the qubits coherence time by orders of magnitude, thereby enhancing the continuous operation of quantum computers. However, they view their research as just the beginning, envisioning future endeavors to further optimize superconducting qubits based on their findings.

The idea behind the flowermon can be extended in several directions: Searching for different superconductors or junctions yielding similar effects, exploring the possibility to realize novel quantum devices based on the flowermon, said Valentina Brosco, a researcher at the Institute for Complex Systems Consiglio Nazionale delle Ricerche and Physics Department University of Rome. These devices would combine the benefits of quantum materials and coherent quantum circuits or using the flowermon or related design to investigate the physics of complex superconducting heterostructures.

This is only the first simple concrete example of utilizing the inherent properties of a material to make a new quantum device, and we hope to build on it and find additional examples, eventually establishing a field of research that combines complex material physics with quantum devices, Vool added.

Since the teams study was purely theoretical, even the simplest heterostructure-based qubit design they proposed requires experimental validation a step that is currently underway.

Experimentally, there is still quite a lot of work towards implementing this proposal, concluded Vool. We are currently fabricating and measuring hybrid superconducting circuits which integrate these van der Waals superconductors, and hope to utilize these circuits to better understand the material, and eventually design and measure protected hybrid superconducting circuits to make them into real useful devices.

Reference: Uri Vool, et al., Superconducting Qubit Based on Twisted Cuprate Van der Waals Heterostructures, Physical Review Letters (2024). DOI: 10.1103/PhysRevLett.132.017003

Feature image credit: SuttleMedia on Pixabay

Go here to read the rest:
Superconducting qubit promises breakthrough in quantum computing - Advanced Science News

Apple is future-proofing iMessage with post-quantum cryptography – Cointelegraph

Apple unveiled PQ3, the most significant cryptographic security upgrade in iMessage history, for iOS 17.4 on Feb. 21.

With the new protocol, Apple becomes one of only a handful of providers featuring post-quantum cryptography for messages. Signal launched a quantum resistant encryption upgrade back in September 2023, but Apple says its the first to reach level 3 encryption.

According to the Cupertino-based company:

Apples iMessage has featured end-to-end encryption since its inception. While it initially used RSA encryption, the company switched to Elliptic Curve cryptography (ECC) in 2019.

As of current, breaking such encryption is considered infeasible due to the amount of time and computing power required. However, the threat of quantum computing looms closer every day.

Theoretically, a quantum computer of sufficient capabilities could break todays encryption methods with relative ease. To the best of our knowledge there arent any current quantum computing systems capable of doing so, but the rapid pace of advancement has caused governments and organizations around the world to begin preparations.

The big idea is that by developing post-quantum cryptography methods ahead of time, good actors such as banks and hospitals can safeguard their data against malicious actors with access to cutting-edge technology.

Theres no current time frame for the advent of quantum computers capable of breaking standard cryptography. IBMclaims it will have hit an inflection point in quantum computing by 2029, while MIT/Harvard spinout QuEra says it will have had a 10,000-qubit error-corrected system by 2026.

Unfortunately, bad actors arent waiting until they can get their hands on a quantum computer to start their attacks. Many are harvesting encrypted data illicitly and storing it for decryption later in whats commonly known as a HNDL attack (harvest now, decrypt later).

Related: Oxford economist who predicted crypto going mainstream says quantum economics is next

Visit link:
Apple is future-proofing iMessage with post-quantum cryptography - Cointelegraph

3 Quantum Computing Stocks That Could Be Multibaggers in the Making: February Edition – InvestorPlace

The race for quantum computing dominance is on.

In fact,according to SDXCentral.com, the U.S. and China are neck and neck at the moment. The U.S. has already committed $3 billion in funding for quantum computing, with another $12 billion coming from the National Quantum Computing Initiative. China is committing about $15 billion over the next five years. This is all great news for quantum computing stocks.

Even the U.K., Canada, Israel, Australia, Japan, and the European Union are jumping into the quantum computing market. As the race picks up, the quantum computing market could grow from $928.8 million this year to more than $6.5 billion by 2030,as noted by Fortune Business Insights.

All of this could be a substantial catalyst for the following quantum computing stocks.

Source: Amin Van / Shutterstock.com

Earlier this month, IonQ (NYSE:IONQ), trading at $10.27, was highlighted.

While its up slightly at $10.87, give this one a good deal of patience. On Feb. 1, the company just boosted itsfull-year revenue guidanceto a range of $21.2 million to $22 million from its prior range of $18.9 million to $19.3 million. It also boosted its full-year bookings to a new range of $60 million to $63 million from a prior range of $49 million to $56 million.

Quantum computing has the potential to be a game changer it can help us create new drugs and fight disease, turbocharge clean energy alternatives, and improve food production,according toWashington State U.S. Senator Maria Cantwell, as quoted in a IONQ press release.

Further, IonQ just opened its firstquantum computing manufacturing facility in Washington.

The company inaugurated the first U.S.-based factory producing replicable quantum computers for client data centers, enhancing technology innovation and manufacturing in the Pacific Northwest. CEO Peter Chapman highlighted IonQs commitment to commercializing quantum computing,added Investorplace contributor Chris MacDonald.

Source: T. Schneider / Shutterstock

Recently reported, D-Wave Quantum(NYSE:QBTS) traded at 85 cents. Yet, after hitting a high of $2.08 on Feb. 15, its now back to $1.74 and is still a strong opportunity.

Forcing QBTS higher, the company said its1,200+ qubit Advantage2 prototypewas now available. Also, it partnered with industrialgenerative AI company Zapata AI. It will develop and market commercial applications, combining the power of generative AI and quantum computing technologies.In addition, it just announced that it andNEC Australiaare teaming to release two new quantum services in the Australian market.

Source: Bartlomiej K. Wroblewski / Shutterstock.com

Recently, Rigetti Computing(NASDAQ:RGTI) popped from about $1.20 to $1.69 a share on heavy volume. For example, last Friday, volume spiked to 19.24 million, as compared to daily average volume of 3.86 million shares.

Further, the company wasawarded a Small Business Research Initiative (SBRI)grantfrom Innovate UK and funded by the National Quantum Computing Centre(NQCC) to develop and deliver a quantum computer to the NQCC.

The proposed system will feature the hallmarks of Rigettis recently launched 84-qubit Ankaa-2 system, including tunable couplers and a square lattice, as noted in a company press release. This new chip architecture enables faster gate times, higher fidelity, and greater connectivity compared to Rigettis previous generations of quantum processing units (QPUs).

On the date of publication, Ian Cooper did not hold (either directly or indirectly) any positions in the securities mentioned. The opinions expressed in this article are those of the writer, subject to the InvestorPlace.comPublishing Guidelines.

Ian Cooper, a contributor to InvestorPlace.com, has been analyzing stocks and options for web-based advisories since 1999.

See the article here:
3 Quantum Computing Stocks That Could Be Multibaggers in the Making: February Edition - InvestorPlace