Seattle’s Pioneer Square Labs and Silicon Valley stalwart Mayfield form AI co-investing partnership – GeekWire

Navin Chaddha (left), managing partner at Mayfield, and Greg Gottesman, managing director at Pioneer Square Labs. (Mayfield and PSL Photos)

Seattle startup studio Pioneer Square Labs (PSL) and esteemed Silicon Valley venture capital firm Mayfield are teaming up to fund the next generation of AI-focused startups.

The partnership combines the startup incubation prowess of PSL, a 9-year-old studio that helps get companies off the ground, with Mayfield, a Menlo Park fixture founded in 1969 that has stalwarts such as Lyft, HashiCorp, ServiceMax and others in its portfolio.

As part of the agreement, PSL spinouts focused on AI-related technology will get a minimum of $1.5 million in seed funding from PSL's venture arm (PSL Ventures) and Mayfield.

"We've really been focusing a lot of our efforts on building defensible new AI-based technology companies and found a partner who feels very similarly and has incredible talent, resources, and thought leadership around this area," said PSL Managing Director Greg Gottesman.

Navin Chaddha, managing partner at Mayfield, described the partnership as "very complementary." PSL specializes in testing new ideas before spinning out startups. Mayfield steps in when companies are ready to raise a venture round and at later stages.

"They have strengths, we have strengths," Chaddha said.

It's a bet by both firms on the promise of AI technology and startup creation.

"It's a once-in-a-lifetime transformational opportunity in the tech industry," Chaddha said.

Mayfield last year launched a $250 million fund dedicated to AI. Chaddha published a blog post last month about what Mayfield describes as the AI "cognitive plumbing layer," where the "picks and shovels" infrastructure companies of the AI industry reside.

"There's so much infrastructure to be built," Chaddha said. He added that the applications enabled by new AI technologies such as generative AI are endless.

Gottesman, who helped launch PSL in 2015 after a long stint with Seattle venture firm Madrona, said more than 60% of code written at PSL is now completed by AI, a stark difference from just a year ago.

"It's not that we have humans writing less code; we're just moving faster," Gottesman said.

The $1.5 million seed investments are a minimum; PSL and Mayfield are open to partnering with other investors and firms. The Richard King Mellon Foundation is also participating in the partnership.

The deal marks the latest connection point between the Seattle and Silicon Valley tech ecosystems.

Madrona, Seattle's oldest and largest venture capital firm, opened a new Bay Area office in 2022 and hired a local managing director.

Bay Area investors have increasingly invested in Seattle-area startups, including Mayfield, which has backed Outreach, Skilljar, SeekOut, Revefi, and others in the region. The firm was an early investor in Concur, the travel expense giant that went public in 1998.

Chaddha previously lived in the Seattle area after Microsoft acquired his streaming media startup VXtreme in 1997. He spent a few years at the Redmond tech giant, working alongside Satya Nadella who later went on to become CEO.

"I think it's fantastic that Mayfield is making a commitment not just to AI, but also to the Seattle area as well," said Gottesman.

PSL raised a $20 million third fund last year to support its studio, which has spun out more than 35 companies, including Boundless, Recurrent, SingleFile, and others. Job postings show new company ideas related to automation around hardware development and workflow operations for go-to-market execs. The PSL Ventures fund raised $100 million in 2021.


Analysis: How Nvidia Surpassed Intel In Annual Revenue And Won The AI Crown – CRN

A deep-dive analysis into the market dynamics that allowed Nvidia to take the AI crown and surpass Intel in annual revenue. CRN also looks at what the x86 processor giant could do to fight back in a deeply competitive environment.

Several months after Pat Gelsinger became Intel's CEO in 2021, he told me that his biggest concern in the data center wasn't Arm, the British chip designer that is enabling a new wave of competition against the semiconductor giant's Xeon server CPUs.

Instead, the Intel veteran saw a bigger threat in Nvidia and its uncontested hold over the AI computing space and said his company would give its all to challenge the GPU designer.

[Related: The ChatGPT-Fueled AI Gold Rush: How Solution Providers Are Cashing In]

"Well, they're going to get contested going forward, because we're bringing leadership products into that segment," Gelsinger told me for a CRN magazine cover story.

More than three years later, Nvidia's latest earnings demonstrated just how right it was for Gelsinger to feel concerned about the AI chip giant's dominance, and how much work it will take for Intel to challenge a company that has been at the center of the generative AI hype machine.

When Nvidia's fourth-quarter earnings arrived last week, they showed that the company surpassed Intel in total annual revenue for its recently completed fiscal year, mainly thanks to high demand for its data center GPUs driven by generative AI.

The GPU designer finished its 2024 fiscal year with $60.9 billion in revenue, up 126 percent, more than double the previous year's total, the company revealed in its fourth-quarter earnings report on Wednesday. This fiscal year ran from Jan. 30, 2023, to Jan. 28, 2024.

Meanwhile, Intel finished its 2023 fiscal year with $54.2 billion in sales, down 14 percent from the previous year. Its fiscal year ran concurrent with the calendar year, from January to December.

While Nvidia's fiscal year finished roughly one month after Intel's, this is the closest we'll get to understanding how the two industry titans compared in a year when demand for AI solutions propped up the data center and cloud markets in a shaky economy.
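The year-over-year claims above can be sanity-checked with simple arithmetic. A quick sketch (the current-year figures are the ones reported; the prior-year values are back-calculated from the stated growth rates, not independently sourced):

```python
# Back out prior-year revenue from reported figures and stated growth rates.
nvidia_fy24 = 60.9    # $B, Nvidia fiscal 2024 revenue
nvidia_growth = 1.26  # up 126 percent year over year
nvidia_fy23 = nvidia_fy24 / (1 + nvidia_growth)  # ~26.9, so FY24 was indeed more than double

intel_fy23 = 54.2     # $B, Intel fiscal 2023 revenue
intel_decline = 0.14  # down 14 percent year over year
intel_fy22 = intel_fy23 / (1 - intel_decline)    # ~63.0

print(round(nvidia_fy23, 1), round(intel_fy22, 1))  # 26.9 63.0
```

The back-calculation also shows the reversal: a year earlier, Intel's implied revenue (~$63 billion) was more than twice Nvidia's (~$27 billion).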

Nvidia pulled off this feat because the company had spent years building a comprehensive and integrated stack of chips, systems, software and services for accelerated computing, with a major emphasis on data centers, cloud computing and edge computing, and then found itself last year at the center of a massive demand cycle due to hype around generative AI.

This demand cycle was mainly kicked off by the late 2022 arrival of OpenAI's ChatGPT, a chatbot powered by a large language model that can understand complex prompts and respond with an array of detailed answers, all offered with the caveat that it could potentially impart inaccurate, biased or made-up answers.

Despite any shortcomings, the tech industry found more promise than concern with the capabilities of ChatGPT and other generative AI applications that had emerged in 2022, like the DALL-E 2 and Stable Diffusion text-to-image models. Many of these models and applications had been trained and developed using Nvidia GPUs because the chips are far faster at computing such large amounts of data than CPUs ever could.

The enormous potential of these generative AI applications kicked off a massive wave of new investments in AI capabilities by companies of all sizes, from venture-backed startups to cloud service providers and consumer tech companies, like Amazon Web Services and Meta.

By that point, Nvidia had started shipping the H100, a powerful data center GPU that came with a new feature called the Transformer Engine. This was designed to speed up the training of so-called transformer models by as many as six times compared to the previous-generation A100, which itself had been a game-changer in 2020 for accelerating AI training and inference.

Among the transformer models that benefitted from the H100's Transformer Engine was GPT-3.5, short for Generative Pre-trained Transformer 3.5. This is OpenAI's large language model that exclusively powered ChatGPT before the introduction of the more capable GPT-4.

But this was only one piece of the puzzle that allowed Nvidia to flourish in the past year. While the company worked on introducing increasingly powerful GPUs, it was also developing internal capabilities and making acquisitions to provide a full stack of hardware and software for accelerated computing workloads such as AI and high-performance computing.

At the heart of Nvidia's advantage is the CUDA parallel computing platform and programming model. Introduced in 2007, CUDA enabled the company's GPUs, which had traditionally been designed for computer games and 3-D applications, to run HPC workloads faster than CPUs by breaking them down into smaller tasks and processing those tasks simultaneously. Since then, CUDA has dominated the landscape of software for accelerated computing.

Over the last several years, Nvidias stack has grown to include CPUs, SmartNICs and data processing units, high-speed networking components, pre-integrated servers and server clusters as well as a variety of software and services, which includes everything from software development kits and open-source libraries to orchestration platforms and pretrained models.

While Nvidia had spent years cultivating relationships with server vendors and cloud service providers, this activity reached new heights last year, resulting in expanded partnerships with the likes of AWS, Microsoft Azure, Google Cloud, Dell Technologies, Hewlett Packard Enterprise and Lenovo. The company also started cutting more deals in the enterprise software space with major players like VMware and ServiceNow.

All this work allowed Nvidia to grow its data center business by 217 percent to $47.5 billion in its 2024 fiscal year, which represented 78 percent of total revenue.

This was mainly supported by a 244 percent increase in data center compute sales, with high GPU demand driven mainly by the development of generative AI and large language models. Data center networking, on the other hand, grew 133 percent for the year.

Cloud service providers and consumer internet companies contributed a substantial portion of Nvidia's data center revenue, with cloud service providers representing roughly half of it in the third quarter and more than half in the fourth. Nvidia also cited strong demand from businesses outside those two groups, though not as consistently.

In its earnings call last week, Nvidia CEO Jensen Huang said this represents the industry's continuing transition from general-purpose computing, where CPUs were the primary engines, to accelerated computing, where GPUs and other kinds of powerful chips are needed to provide the right combination of performance and efficiency for demanding applications.

"There's just no reason to update with more CPUs when you can't fundamentally and dramatically enhance its throughput like you used to. And so you have to accelerate everything. This is what Nvidia has been pioneering for some time," he said.

Intel, by contrast, generated $15.5 billion in data center revenue for its 2023 fiscal year, which was a 20 percent decline from the previous year and made up only 28.5 percent of total sales.

This was not only three times smaller than what Nvidia earned in total data center revenue in the 12-month period ending in late January, it was also smaller than what the semiconductor giant's AI chip rival made in the fourth quarter alone: $18.4 billion.

The issue for Intel is that while the company has launched data center GPUs and AI processors over the last couple of years, it's far behind when it comes to the level of adoption by developers, OEMs, cloud service providers, partners and customers that has allowed Nvidia to flourish.

As a result, the semiconductor giant has had to rely on its traditional data center products, mainly Xeon server CPUs, to generate a majority of revenue for this business unit.

This created multiple problems for the company.

While AI servers, including ones made by Nvidia and its OEM partners, rely on CPUs for the host processors, the average selling prices for such components are far lower than Nvidias most powerful GPUs. And these kinds of servers often contain four or eight GPUs and only two CPUs, another way GPUs enable far greater revenue growth than CPUs.

In Intel's latest earnings call, Vivek Arya, a senior analyst at Bank of America, noted how these issues were digging into the company's data center CPU revenue, saying that its GPU competitors "seem to be capturing nearly all of the incremental [capital expenditures] and, in some cases, even more for cloud service providers."

One dynamic at play was that some cloud service providers used their budgets last year to replace expensive Nvidia GPUs in existing systems rather than buying entirely new systems, which dragged down Intel CPU sales, Patrick Moorhead, president and principal analyst at Moor Insights & Strategy, recently told CRN.

Then there was the issue of long lead times for Nvidia's GPUs, which were caused by demand far exceeding supply. Because this prevented OEMs from shipping more GPU-accelerated servers, Intel sold fewer CPUs as a result, according to Moorhead.

Intels CPU business also took a hit due to competition from AMD, which grew x86 server CPU share by 5.4 points against the company in the fourth quarter of 2023 compared to the same period a year ago, according to Mercury Research.

The semiconductor giant has also had to contend with competition from companies developing Arm-based CPUs, such as Ampere Computing and Amazon Web Services.

All of these issues, along with a lull in the broader market, dragged down revenue and earnings potential for Intels data center business.

Describing the market dynamics in 2023, Intel said in its annual 10-K filing with the U.S. Securities and Exchange Commission that server volume decreased 37 percent from the previous year due to lower demand in a softening CPU data center market.

The company said average selling prices did increase by 20 percent, mainly due to a lower mix of revenue from hyperscale customers and a higher mix of high core count processors, but that wasn't enough to offset the plummet in sales volume.

While Intel and other rivals started down the path of building products to compete against Nvidia's years ago, the AI chip giant's success last year showed them how lucrative it can be to build a business with super powerful and expensive processors at the center.

Intel hopes to make a substantial business out of accelerator chips between the Gaudi deep learning processors, which came from its 2019 acquisition of Habana Labs, and the data center GPUs it has developed internally. (After the release of Gaudi 3 later this year, Intel plans to converge its Max GPU and Gaudi road maps, starting with Falcon Shores in 2025.)

But the semiconductor giant has only reported a sales pipeline that grew in the double digits to more than $2 billion in last year's fourth quarter. This pipeline includes Gaudi 2 and Gaudi 3 chips as well as Intel's Max and Flex data center GPUs, but it doesn't amount to a forecast for how much money the company expects to make this year, an Intel spokesperson told CRN.

Even if Intel made $2 billion or even $4 billion from accelerator chips in 2024, it would amount to a small fraction of what Nvidia made last year and perhaps an even smaller one if the AI chip rival manages to grow again in the new fiscal year. Nvidia has forecasted that revenue in the first quarter could grow roughly 8.6 percent sequentially to $24 billion, and Huang said the conditions are excellent for continued growth for the rest of this year and beyond.
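For context on that forecast, the sequential-growth figure implies a prior-quarter baseline that can be recovered with one division (a sketch using only the numbers stated above; the implied prior-quarter value is back-calculated, not a reported figure):

```python
# Implied prior-quarter revenue behind Nvidia's Q1 forecast:
# $24B at roughly 8.6 percent sequential growth.
q1_forecast = 24.0        # $B, forecast first-quarter revenue
sequential_growth = 0.086
implied_prior_quarter = q1_forecast / (1 + sequential_growth)
print(round(implied_prior_quarter, 1))  # ~22.1
```

That implied ~$22.1 billion baseline alone is several times Intel's reported $2 billion accelerator pipeline, which underscores the gap described here.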

Then there's the fact that AMD recently launched its most capable data center GPU yet, the Instinct MI300X. The company said in its most recent earnings call that strong customer pull and expanded engagements prompted it to upgrade its forecast for data center GPU revenue this year to more than $3.5 billion.

There are other companies developing AI chips too, including AWS, Microsoft Azure and Google Cloud as well as several startups, such as Cerebras Systems, Tenstorrent, Groq and D-Matrix. Even OpenAI is reportedly considering designing its own AI chips.

Intel will also have to contend with Nvidia's decision last year to move to a one-year release cadence for new data center GPUs. This started with the successor to the H100 announced last fall, the H200, and will continue with the B100 this year.

Nvidia is making its own data center CPUs, too, as part of the company's expanding full-stack computing strategy, which is creating another challenge for Intel's CPU business when it comes to AI and HPC workloads. This started last year with the standalone Grace Superchip and a hybrid CPU-GPU package called the Grace Hopper Superchip.

For Intel's part, the semiconductor giant expects meaningful revenue acceleration for its nascent AI chip business this year. What could help the company are the growing number of price-performance advantages found by third parties like AWS and Databricks, as well as its vow to offer an open alternative to the proprietary nature of Nvidia's platform.

The chipmaker also expects its upcoming Gaudi 3 chip to deliver performance leadership with four times the processing power and double the networking bandwidth over its predecessor.

But the company is taking a broader view of the AI computing market and hopes to come out on top with its "AI everywhere" strategy. This includes a push to grow data center CPU revenue by convincing developers and businesses to take advantage of the latest features in its Xeon server CPUs to run AI inference workloads, which the company believes is more economical and pragmatic for a broader constituency of organizations.

Intel is making a big bet on the emerging category of AI PCs, too, with its recently launched Core Ultra processors, which, for the first time in an Intel processor, come with a neural processing unit (NPU) in addition to a CPU and GPU to power a broad array of AI workloads. But the company faces tough competition in this arena, whether it's AMD and Qualcomm in the Windows PC segment or Apple for Mac computers and its in-house chip designs.

Even Nvidia is reportedly thinking about developing CPUs for PCs. But Intel does have one trump card that could allow it to generate significant amounts of revenue alongside its traditional chip design business by seizing on the collective growth of its industry.

Hours before Nvidia's earnings last Wednesday, Intel launched its revitalized contract chip manufacturing business with the goal of drumming up enough business from chip designers, including its own product groups, to become the world's second-largest foundry by 2030.

Called Intel Foundry, the business's lofty 2030 goal means it hopes to generate more revenue than South Korea's Samsung in only six years. That would put it behind only the world's largest foundry, Taiwan's TSMC, which generated just shy of $70 billion last year, thanks in large part to large manufacturing orders from the likes of Nvidia and Apple.

All of this depends on Intel executing at high levels across its chip design and manufacturing businesses over the next several years. But if it succeeds, these efforts could one day make the semiconductor giant an AI superpower like Nvidia is today.

At Intel Foundry's launch last week, Gelsinger made that clear.

"We're engaging in 100 percent of the AI [total addressable market], clearly through our products on the edge, in the PC and clients and then the data centers. But through our foundry, I want to manufacture every AI chip in the industry," he said.


Schnucks store tests new AI-powered shopping carts – KSDK.com

The pilot program is rolling out at two more grocery stores in the next few weeks.

ST. LOUIS - New smart shopping carts that allow customers to avoid the checkout lines have rolled out at one St. Louis-area Schnucks store.

In July, the St. Louis Business Journal reported that Schnuck Markets was working with Instacart, Inc. to roll out the AI-powered shopping carts at a few St. Louis-area stores.

The pilot program finally launched last week at the Twin Oaks location, 1393 Big Bend Road, a spokesperson with Schnuck Markets said.

Editor's note: The above video aired in July 2023.

In the upcoming weeks, the Lindenwood (1900 1st Capitol Drive in St. Charles) and Cottleville (6083 Mid Rivers Mall Drive in St. Peters) locations will join in on the pilot, which is still in its early stages, the spokesperson said.

According to Business Journal reporting, the new carts use AI to automatically identify items as they're put in the basket, allowing customers to bag their groceries as they shop, bypass the checkout line and pay through the cart from anywhere in the store.

The shopping carts will connect to the Schnucks Rewards App, according to the Business Journal, allowing customers to access clipped promotions and to "light up" electronic shelf labels from their phones to easily find items.

It's not the only way that Schnucks is using artificial intelligence. Earlier this year, the chain brought in new high-tech, anti-theft liquor cabinets at several locations that allow customers to unlock them by entering their phone number on a keypad to receive a code via text message.

The liquor cases also monitor customers' behaviors when accessing the case, including the number of products removed, how frequently a customer accesses it and how long the door is left open, to identify suspicious activity in real-time.

To watch 5 On Your Side broadcasts or reports 24/7, 5 On Your Side is always streaming on 5+. Download for free on Roku, Amazon Fire TV or the Apple TV App Store.


AI productivity tools can help at work, but some make your job harder – The Washington Post

In a matter of seconds, artificial intelligence tools can now generate images, write your emails, create a presentation, analyze data and even offer meeting recaps.

For about $20 to $30 a month, you can have the AI capabilities in many of Microsoft's and Google's work tools now. But are AI tools such as Microsoft Copilot and Gemini for Google Workspace easy to use?

The tech companies contend they help workers with their biggest pain points. Microsoft and Google claim their latest AI tools can automate the mundane, help people who struggle to get started on writing, and even aid with organization, proofreading, preparation and creating.

Of all working U.S. adults, 34 percent think that AI will equally help and hurt them over the next 20 years, according to a survey released by Pew Research Center last year. But a close 31 percent aren't sure what to think, the survey shows.

So the Help Desk put these new AI tools to the test with common work tasks. Here's how it went.

Ideally, AI should speed up catching up on email, right? Not always.

It may help you skim faster, start an email or elaborate on quick points you want to hit. But it also might make assumptions, get things wrong or require several attempts before offering the desired result.

Microsoft's Copilot allows users to choose from several tones and lengths before they start drafting. Users create a prompt for what they want their email to say and then have the AI adjust based on changes they want to see.

While the AI often included desired elements in the response, it also often added statements we didn't ask for in the prompt when we selected short and casual options. For example, when we asked it to disclose that the email was written by Copilot, it sometimes added marketing comments, like calling the tech "cool" or assuming the email was "interesting" or "fascinating."

When we asked it to make the email less positive, instead of dialing down the enthusiasm, it made the email negative. And if we made too many changes, it lost sight of the original request.

"They hallucinate," said Ethan Mollick, associate professor at the Wharton School of the University of Pennsylvania, who studies the effects of AI on work. "That's what AI does: make up details."

When we used a direct tone and short length, the AI produced fewer false assumptions and more desired results. But a few times, it returned an error message suggesting that the prompt had content Copilot couldn't work with.

Using Copilot for email isn't perfect. Some prompts were returned with an error message. (Video: The Washington Post)

If we entirely depended on the AI, rather than making major manual edits to the suggestions, getting a fitting response often took several tries. Even then, one colleague responded to an AI-generated email with a simple reaction to the awkwardness: "LOL."

"We called it Copilot for a reason," said Colette Stallbaumer, general manager of Microsoft 365 and future of work marketing. "It's not autopilot."

Google's Gemini has fewer options for drafting emails, allowing users to elaborate, formalize or shorten. However, it made fewer assumptions and often stuck solely to what was in the prompt. That said, it still sometimes sounded robotic.

Copilot can also summarize emails, which can quickly help you catch up on a long email thread or cut through your wordy co-worker's mini-novel, and it offers clickable citations. But it sometimes highlighted less relevant points, like reminding me of my own title listed in my signature.

The AI seemed to do better when it was fed documents or data. But it still sometimes made things up, returned error messages or didn't understand context.

We asked Copilot to use a document full of reporter notes, which are admittedly filled with shorthand, fragments and run-on sentences, and asked it to write a report. At first glance, the result was convincing: the AI seemed to have made sense of the messy notes. But on closer inspection, it was unclear whether anything actually came from the document, as the conclusions were broad, overreaching and not cited.

"If you give it a document to work off, it can use that as a basis," Mollick said. "It may hallucinate less but in more subtle ways that are harder to identify."

When we asked it to continue a story we had started writing, providing it a document filled with notes, it summarized what we had already written and produced some additional paragraphs. But it became clear much of it was not from the provided document.

"Fundamentally, they are speculative algorithms," said Hatim Rahman, an assistant professor at Northwestern University's Kellogg School of Management, who studies AI's impact on work. "They don't understand like humans do. They provide the statistically likely answer."

Summarizations were less problematic, and the clickable citations made it easy to confirm each point. Copilot was also helpful in editing documents, much like a beefed-up spell check, often catching acronyms that should be spelled out and flagging punctuation or wordiness.

With spreadsheets, the AI can be a little tricky, and you need to convert data to a table format first. Copilot more accurately produced responses to questions about tables with simple formats. But for larger spreadsheets that had categories and subcategories or other complex breakdowns, we couldn't get it to find relevant information or accurately identify the trends or takeaways.

Microsoft says one of users' top places to use Copilot is in Teams, the collaboration app that offers tools including chat and video meetings. Our test showed the tool can be helpful for quick meeting notes, questions about specific details, and even a few tips on making your meetings better. But typical of other meeting AI tools, the transcript isn't perfect.

First, users should know that their administrator has to enable transcriptions so Copilot can interact with the transcript during and after the meeting, something we initially missed. Then, in the meeting or afterward, users can ask Copilot questions about the meeting. We asked for unanswered questions, action items, a meeting recap, specific details and how we could've made the meeting more efficient. It can also pull up video clips that correspond to specific answers if you record the meeting.

The AI was able to recall several details, accurately list action items and unanswered questions, and give a recap with citations to the transcript. Some of its answers were a little muddled, like when it confused the name of a place with the location and ended up with something that looked a little like word salad. It was able to identify the tone of the meeting (friendly and casual with jokes and banter) and censored curse words with asterisks. And it provided advice for more efficient meetings: For us that meant creating a meeting agenda and reducing the small talk and jokes that took the conversation off topic.

Copilot can be used during a Teams meeting and produce transcriptions, action items, and meeting recaps. (Video: The Washington Post)

Copilot can also help users make a PowerPoint presentation, complete with title pages and corresponding images, based on a document, in a matter of seconds. But that doesn't mean you should use the presentation as is.

A document's organization and format seem to play a role in the result. In one instance, Copilot created an agenda with random words and dates from the document. Other times, it made a slide with just a person's name and responsibility. But it did better with documents that had clear formats (think an intro and subsections).

Google's Gemini can generate images like this robot. (Video: The Washington Post)

While Copilot's image generation for slides was usually related, sometimes its interpretation was too literal. Google's Gemini can also help create slides and generate images, though more often than not when trying to create images, we received a message that said, "for now we're showing limited results for people. Try something else."

AI can aid with idea generation, drafting from a blank page or quickly finding a specific item. It also may be helpful for catching up on emails, meetings and summarizing long conversations or documents. Another nifty tip? Copilot can gather the latest chats, emails and documents you've worked on with your boss before your next meeting together.

But all results and content need careful inspection for accuracy, some tweaking or deep edits, and both tech companies advise users to verify everything generated by the AI. "I don't want people to abdicate responsibility," said Kristina Behr, vice president of product management for collaboration apps at Google Workspace. "This helps you do your job. It doesn't do your job."

And as is the case with AI, the more details and direction in the prompt, the better the output. So as you do each task, you may want to consider whether AI will save you time or actually create more work.

"The work it takes to generate outcomes like text and videos has decreased," Rahman said. "But the work to verify has significantly increased."


MWC 2024: Microsoft to open up access to its AI models to allow countries to build own AI economies – Euronews

Monday was a big day for announcements from tech giant Microsoft, unveiling new guiding principles for AI governance and a multi-year deal with Mistral AI.

Tech behemoth Microsoft has unveiled a new set of guiding principles on how it will govern its artificial intelligence (AI) infrastructure, effectively further opening up access to its technology to developers.

The announcement came at the Mobile World Congress tech fair in Barcelona on Monday, where AI is a key theme of this year's event.

One of the key planks of its newly published "AI Access Principles" is the democratisation of AI through the company's open source models.

The company said it plans to do this by expanding access to its cloud computing AI infrastructure.

Speaking to Euronews Next in Barcelona, Brad Smith, Microsoft's vice chair and president, also said the company wanted to make its AI models and development tools more widely available to developers around the world, allowing countries to build their own AI economies.

"I think it's extremely important because we're investing enormous amounts of money, frankly, more than any government on the planet, to build out the AI data centres so that in every country people can use this technology," Smith said.

"They can create their AI software, their applications, they can use them for companies, for consumer services and the like".

The "AI Access Principles" underscore the company's commitment to open source models. Open source means that the source code is publicly available for anyone to use, modify, and distribute.

"Fundamentally, it [the principles] says we are not just building this for ourselves. We are making it accessible for companies around the world to use so that they can invest in their own AI inventions," Smith told Euronews Next.

"Second, we have a set of principles. It's very important, I think, that we treat people fairly. Yes, that as they use this technology, they understand how we're making available the building blocks so they know it, they can use it," he added.

"We're not going to take the data that they're developing for themselves and access it to compete against them. We're not going to try to require them to reach consumers or their customers only through an app store where we exact control".

The announcement of its AI governance guidelines comes as the Big Tech company struck a deal with Mistral AI, the French company revealed on Monday, signalling Microsoft's intent to branch out in the burgeoning AI market beyond its current involvement with OpenAI.

Microsoft has already heavily invested in OpenAI, the creator of wildly popular AI chatbot ChatGPT. Its $13 billion (€11.9 billion) investment, however, is currently under review by regulators in the EU, the UK and the US.

Widely cited as a growing rival to OpenAI, 10-month-old Mistral reached unicorn status in December after being valued at more than €2 billion, far surpassing the €1 billion threshold to be considered one.

The new multi-year partnership will see Microsoft giving Mistral access to its Azure cloud platform to help bring its large language model (LLM), Mistral Large, to market.

LLMs are AI programmes that recognise and generate text and are commonly used to power generative AI tools such as chatbots.
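As a loose illustration of what "recognise and generate text" means mechanically, the toy sketch below learns which word follows which and then samples a continuation. Real LLMs such as Mistral Large are neural networks trained on vast corpora; this bigram table is only an analogy for the next-word idea, not how any production model works:

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """Record which words follow which: a toy stand-in for what LLMs learn at scale."""
    words = text.split()
    model = defaultdict(list)
    for prev, nxt in zip(words, words[1:]):
        model[prev].append(nxt)
    return model

def generate(model, start, n=6, seed=0):
    """Repeatedly sample a plausible next word: the core loop behind text generation."""
    rng = random.Random(seed)  # fixed seed so the toy output is reproducible
    out = [start]
    for _ in range(n):
        options = model.get(out[-1])
        if not options:
            break
        out.append(rng.choice(options))
    return " ".join(out)

model = train_bigrams("the cat sat on the mat and the cat ran")
print(generate(model, "the"))
```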

"Their [Mistral's] commitment to fostering the open-source community and achieving exceptional performance aligns harmoniously with Microsoft's commitment to develop trustworthy, scalable, and responsible AI solutions," Eric Boyd, Corporate Vice President, Azure AI Platform at Microsoft, wrote in a blog post.

The move is in keeping with Microsoft's commitment to open up its cloud-based AI infrastructure.

In the past week, as well as its partnership with Mistral AI, Microsoft has committed to investing billions of euros over two years in its AI infrastructure in Europe, including €1.9 billion in Spain and €3.2 billion in Germany.


US Used AI to Help Find Middle East Targets for Airstrikes – Bloomberg

The US used artificial intelligence to identify targets hit by air strikes in the Middle East this month, a defense official said, revealing growing military use of the technology for combat.

Machine learning algorithms that can teach themselves to identify objects helped to narrow down targets for more than 85 US air strikes on Feb. 2, according to Schuyler Moore, chief technology officer for US Central Command, which runs US military operations in the Middle East. The Pentagon said those strikes were conducted by US bombers and fighter aircraft against seven facilities in Iraq and Syria.


IBM’s Deep Dive Into AI: CEO Arvind Krishna Touts The ‘Massive’ Enterprise Opportunity For Partners – CRN

With an improved Partner Plus program and a mandate that all products be channel-friendly, IBM CEO Arvind Krishna aims to bring partners into the enterprise AI market that sits below the surface of today's trendy use cases.

To hear IBM Chairman and CEO Arvind Krishna tell it, the artificial intelligence market is like an iceberg. For now, most vendors and users are attracted by the use cases above the surface: using text generators to write emails and image generators to make art, for example.

But it's the enterprise AI market below the surface that IBM wants to serve with its partners, Krishna told CRN in a recent interview. And Krishna's mandate that the Armonk, N.Y.-based vendor reach 50 percent of its revenue from the channel over the next two to three years is key to reaching that hidden treasure.

"This is a massive market," said Krishna. "When I look at all the estimates, the numbers are so big that it is hard for most people to comprehend them. That tells you that there is a lot of opportunity for a large number of us."

[RELATED: IBM CEO Krishna To Partners: Let's Make Lots Of Money Together On AI]

In 2023, IBM moved channel-generated sales from the low 20 percent range to about 30 percent of total revenue. And IBM channel chief Kate Woolley, general manager of the IBM ecosystem, perhaps best viewed as the captain of the channel initiative, told CRN that she is up to the challenge.

"Arvind's set a pretty big goal for us," Woolley said. "Arvind's been clear on the percent of revenue of IBM technology with partners. And my goal is to make a very big dent in that this year."

GenAI as a whole has the potential to generate up to $4.4 trillion in global corporate profits annually, according to McKinsey research that Krishna follows. That number includes up to an additional $340 billion a year in value for the banking sector and up to an additional $660 billion in annual operating profits in the retail and consumer packaged goods sector.

Tackling that demand, working with partners to make AI a reality at scale in 2024 and 2025, is part of why Krishna mandated more investment in IBM's partner program, revamped in January 2023 as Partner Plus.

"What we have to offer [partners] is growth," Krishna said. "And what we also have to offer them is an attractive market where the clients like these technologies. It's important [for vendors] to bring the innovation and to bring the demand from the market to the table. And [partners] should put that onus on us."

Multiple IBM partners told CRN they are seeing the benefits of changes IBM has made to Partner Plus, from better aligning the goals of IBM sellers with the channel to better aligning certifications and badges with product offerings, to increasing access to IBM experts and innovation labs.

And even though the generative AI market is still in its infancy, IBM partners are bullish about the opportunities ahead.

Krishna's mandate for IBM to work more closely with partners has implications for IBM's product plans.

"Any new product has to be channel-friendly," Krishna said. "I can't think of one product I would want to build or bring to market unless we could also give it to the channel. I wouldn't say that was always historically true. But today, I can state that with absolute conviction."

Krishna estimated that about 30 percent of the IBM product business is sold with a partner in the mix today. "Half of that I'm not sure we would even get without the partner," he said.

And GenAI is not just a fad to the IBM CEO. It is a new way of doing business.

"It is going to generate business value for our clients," Krishna said. "Our Watsonx platform [is there] to really help developers, whether it's code, whether it's modernization, all those things. These are areas where, for our partners, they'll be looking at this and say, 'This is how we can bring a lot of innovation to our clients and help their business along the way.'"

Some of the most practical and urgent business use cases for IBM include improved customer contact center experiences, code generation to help customers rewrite COBOL and other legacy languages in modern ones, and the ability for customers to choose better wealth management products based on population segments.

Watsonx Code Assistant for Z became generally available toward the end of 2023 and allows modernization of COBOL to Java. Meanwhile, Red Hat Ansible Lightspeed with IBM Watsonx Code Assistant, which provides GenAI-powered content recommendations from plain-English inputs, also became generally available late last year.

Multiple IBM partners told CRN that IBM AI and Red Hat Ansible automation technologies are key to meeting customer code and content generation demand.

One of those interested partners is Tallahassee, Fla.-based Mainline Information Systems, an honoree on CRN's 2024 MSP 500. Mainline President and CEO Jeff Dobbelaere said code generation cuts across a variety of verticals, making it easy to scale that offering and meet the demands of mainframe customers modernizing their systems.

"We have a number of customers that have legacy code that they're running and have been for 20, 30, 40 years and need to find a path to more modern systems," Dobbelaere said. "And we see IBM's focus on generative AI for code as a path to get there. We're still in [GenAI's] infancy, and the sky's the limit. We'll see where it can go and where it can take us. But we're starting to see some positive results already out of the Watsonx portfolio."

As part of IBM's investment in its partner program, the vendor will offer more technical help to partners, Krishna said. This includes client engineering, customer success managers and more resources to make end clients even happier.

An example of IBM's client success team working with a partner comes from one of the vendor's more recent additions to the ecosystem: Phoenix-based NucleusTeq, founded in 2018 and focused on enterprise data modernization, big data engineering, and AI and machine learning services.

Will Sellenraad, the solution provider's executive vice president and CRO, told CRN that a law firm customer was seeking a way to automate the labor needed for health disability claims for veterans.

"What we were able to do is take the information from this law firm to our client success team within IBM, do a proof of concept and show that we can go from 100 percent manual to 60 percent automation, which we think we can get even [better]," Sellenraad said.

Woolley said that part of realizing Krishna's demand for channel-friendly new products is getting her organization to work more closely with product teams to make sure partners have access to training, trials, demos, digital marketing kits, and pricing and packaging that makes sense for partners, whether they're selling to very large enterprises or to smaller ones.

Woolley said her goals for 2024 include adding new services-led and other partners to the ecosystem and getting more resources to them.

In January, IBM launched a service-specific track for Partner Plus members. Meanwhile, reaching 50 percent revenue with the channel means attaching more partners to the AI portfolio, Woolley said.

"There is unprecedented demand from partners to be able to leverage IBM's strength in our AI portfolio and bring this to their clients or use it to enhance their products. That is a huge opportunity."

Her goal for Partner Plus is to create a flexible program that meets the needs of partners of various sizes with a range of technological expertise. "For resell partners, today we have a range from the largest global resell partners and distributors right down to niche, three-person resell partners that are deeply technical on a part of the IBM portfolio," she said. "We love that. We want that expertise in the market."

NucleusTeq's Sellenraad offered CRN the perspective of a past IBM partner that came back to the ecosystem. He joined NucleusTeq about two years ago, before the solution provider was an IBM partner, from an ISV that partnered with IBM.

Sellenraad steered the six-year-old startup into growing beyond being a Google, Microsoft and Amazon Web Services partner. He thought IBM's product range, including its AI portfolio, was a good fit, and the changes in IBM's partner program encouraged him to not only look more closely, but to make IBM a primary partner.

"They're committed to the channel," he said. "We have a great opportunity to really increase our sales this year."

NucleusTeq became a new IBM partner in January 2023 and reached Gold partner status by the end of the year. It delivered more than $5 million in sales, and more than seven employees received certifications for the IBM portfolio.

Krishna said that the new Partner Plus portal and program also aim to make rebates, commissions and other incentives easier to attain for partners.

The creation of Partner Plus, a fundamental and hard shift in how IBM does business, Krishna said, resulted in IBM's promise to sell to millions of clients only through partners, leaving about 500 accounts worldwide that want and demand a direct relationship with IBM.

"So 99.9 percent of the market, we only want to go with a channel partner," Krishna said. "We do not want to go alone."

When asked by CRN whether he views more resources for the channel as a cost of doing business, he said that channel-friendliness is his philosophy and good business.

"Not only is it my psychology or my whimsy, it's economically rational to work well with the channel," he continued. "That's why you always hear me talk about it. There are very large parts of the market which we cannot address except with the channel. So by definition, the channel is not a tradeoff. It is a fundamental part of the business equation of how we go get there."

Multiple IBM partners who spoke with CRN said AI can serve an important function in much of the work that they handle, including modernizing customer use of IBM mainframes.

Paola Doebel, senior vice president of North America at Downers Grove, Ill.-based IBM partner Ensono, an honoree on CRN's 2024 MSP 500, told CRN that the MSP will focus this year on its modern cloud-connected mainframe service for customers, and AI-backed capabilities will allow it to achieve that work at scale.

While many of Ensono's conversations with customers have been focused on AI level-setting (what's hype, what's realistic), the conversations have been helpful for the MSP.

"There is a lot of hype, there is a lot of conversation, but some of that excitement is grounded in actual real solutions that enable us to accelerate outcomes," Doebel said. "Some of that hype is just hype, like it always is with everything. But it's not all smoke. There is actual real fire here."

For example, early use cases for Ensono customers using the MSP's cloud-connected mainframe solution, which can leverage AI, include real-time fraud detection, real-time data availability for traders, and connecting mainframe data to cloud applications, she said.

Mainline's Dobbelaere said that as a solution provider, his company has to be cautious about where it makes investments in new technologies. "There are a lot of technologies that come and go, and there may or may not be opportunity for the channel," he said.

But the interest in GenAI from vendor partners and customers proved to him that the opportunity in the emerging technology is strong.

Delivering GenAI solutions wasnt a huge lift for Mainline, which already had employees trained on data and business analytics, x86 technologies and accelerators from Nvidia and AMD. The channel is uniquely positioned to bring together solutions that cross vendors, he said.

The capital costs of implementing GenAI, however, are still a concern in an environment where the U.S. faces high inflation rates and global geopolitics threaten the macroeconomy. Multiple IBM partners told CRN they are seeing customers more deeply scrutinize technology spending, lengthening the sales cycle.

Ensono's Doebel said that customers are asking more questions about value and ROI.

"The business case to execute something at scale has to be verified, justified and quantified," Doebel said. "So it's a couple of extra steps in the process to adopt anything new. Or they're planning for something in the future that they're trying to get budget for in a year or two."

She said she sees the behavior continuing in 2024, but solution providers such as Ensono are ready to help customers' employees make the AI case with board-ready content, analytical business cases, quantitative outputs, ROI theses and other materials.

For partners navigating capital cost as an obstacle to selling customers on AI, Woolley encouraged them to work with IBM sellers in their territories.

Dayn Kelley, director of strategic alliances for Irvine, Calif.-based IBM partner Technologent, No. 61 on CRN's 2023 Solution Provider 500, said customers have expressed so much interest in and concern around AI that the solution provider has built a dedicated team focused on the technology as part of its investments toward taking a leadership position in the space.

"We have customers we need to support," Kelley said. "We need to be at the forefront."

He said that he has worked with customers on navigating financials and challenging project schedules to meet budget concerns, and IBM has been a particularly helpful partner in this area.

While some Technologent customers are weathering economic challenges, the outlook for 2024 is still strong, he said. Customer AI and emerging technology projects are still forecast for this year.

Mainline's Dobbelaere said that despite reports around economic concerns and the conservative spending that usually occurs in an election year, he's still optimistic about tech spending overall in 2024.

"2023 was a very good year for us. It looks like we outpaced 2022," he said. "And there's no reason for us to believe that 2024 would be any different. So we are optimistic."

Juan Orlandini, CTO of the North America branch of Chandler, Ariz.-based IBM partner Insight Enterprises, No. 16 on CRN's 2023 Solution Provider 500, said educating customers on AI hype versus AI reality is still a big part of the job.

In 2023, Orlandini made 60 trips in North America to conduct seminars and meet with customers and partners to set expectations around the technology and answer questions from organizations large and small.

He recalled walking one customer through the prompts he used to create a particular piece of artwork with GenAI. In another example, one of the largest media companies in the world consulted with him on how to leverage AI without leaking intellectual property or consuming someone else's. "It doesn't matter what size the organization, you very much have to go through this process of making sure that you have the right outcome with the right technology decision," Orlandini said.

"There's a lot of hype and marketing. Everybody and their brother is doing AI now and that is confusing [customers]."

An important role of AI-minded solution providers, Orlandini said, is assessing whether it is even the right technology for the job.

"People sometimes give GenAI the magical superpowers of predicting the future. It cannot. You have to worry about making sure that some of the hype gets taken care of," Orlandini said.

Most users won't create foundational AI models, and most larger organizations will adopt AI and modify it, publishing AI apps for internal or external use. And everyone will consume AI within apps, he said.

The AI hype is not solely vendor-driven. Orlandini has also interacted with executives at customers who have added mandates and opened budgets for at least testing AI as a way to grow revenue or save costs.

"There has been a huge amount of pressure to go and adopt anything that does that so they can get a report back and say, 'We tried it, and it's awesome.' Or, 'We tried it and it didn't meet our needs,'" he said. "So we have seen very much that there is an opening of pocketbooks. But we've also seen that some people start and then they're like, 'Oh, wait, this is a lot more involved than we thought.' And then they're taking a step back and a more measured approach."

Jason Eichenholz, senior vice president and global head of ecosystems and partnerships at Wipro, an India-based IBM partner of more than 20 years and No. 15 on CRN's 2023 Solution Provider 500, told CRN that at the end of last year, customers were developing GenAI use cases and establishing 2024 budgets either to start deploying proofs of concept into production or to start working on new production initiatives.

For Wipro's IBM practice, one of the biggest opportunities is IBM's position as a more neutral technology stack, akin to its reputation in the cloud market, that works with other foundation models. That positioning should resonate with the Wipro customer base, which wants purpose-built AI models, he said.

Just as customers look to Wipro and other solution providers as neutral orchestrators of technology, IBM is becoming more of an orchestrator of platforms, he said.

For his part, Krishna believes that customers will consume new AI offerings as a service on the cloud. IBM can run AI on its cloud, on customers' premises and in competing clouds from Microsoft and Amazon Web Services.

He also believes that no single vendor will dominate AI. He likened it to the automobile market. "It's like saying, 'Should there be only one car company?' There are many because [the market] is fit for purpose. Somebody is great at sports cars. Somebody is great at family sedans, somebody's great at SUVs, somebody's great at pickups," he said.

"There are going to be spaces [within AI where] we would definitely like to be considered leaders, whether that is No. 1, 2 or 3 in the enterprise AI space," he continued. "Whether we want to work with people on modernizing their developer environment, on helping them with their contact centers, absolutely. In those spaces, we'd like to get to a good market position."

He said that he views other AI vendors not as competitors, but partners. "When you play together and you service the client, I actually believe we all tend to win," he said. "If you think of it as a zero-sum game, that means it is either us or them. If I tend to think of it as a win-win-win, then you can actually expand the pie. So even a small slice of a big pie is more pie than all of a small pie."

All of the IBM partners who spoke with CRN praised the changes to the partner program.

Wipro's Eichenholz said that "we feel like we're being heard in terms of our feedback and our recommendations." He called Krishna "super supportive" of the partner ecosystem.

Looking ahead, Eichenholz said he would like to see consistent pricing from IBM and its distributors so that he spends less time shopping for customers. He also encouraged IBM to keep investing in integration and orchestration.

"For us, in terms of what we look for from a partner, in terms of technical enablement, financial incentives and co-creation and resource availability, they are best of breed right now," he said. "IBM is really putting their money and their resources where their mouth is. We expect 2024 to be the year of the builder for generative AI, but also the year of the partner for IBM partners."

Mainline's Dobbelaere said that IBM is on the right track in sharing more education, sandboxing resources and use cases with partners. He looks forward to use cases with more repeatability.

"Ultimately, use cases are the most important," he said. "And they will continue to evolve. It's difficult for the channel to create bespoke solutions for each and every customer to solve their unique challenges. And the more use cases we have that provide some repeatability, the more that will allow the channel to thrive."


Google to relaunch ‘woke’ Gemini AI image tool in few weeks: ‘Not working the way we intended’ – New York Post

Google said it plans to relaunch its artificial intelligence image generation software within the next few weeks after taking it offline in response to an uproar over what critics called absurdly woke depictions of historical scenes.

Though the Gemini chatbot remains up and running, Google paused its image AI feature last week after it generated female NHL players, African American Vikings and Founding Fathers, as well as an Asian woman dressed in 1943 military garb when asked for an image of a Nazi-era German soldier.

"We have taken the feature offline while we fix that. We are hoping to have that back online very shortly in the next couple of weeks, few weeks," Google DeepMind CEO Demis Hassabis said Monday.

The tool was "not working the way we intended," Hassabis added, speaking on a panel at the Mobile World Congress in Barcelona.

Since Google has not published the parameters that govern the Gemini chatbot's behavior, it is difficult to get a clear explanation of why the software was inventing diverse versions of historical figures and events.

Elsewhere, a prompt requesting photographs of a pope resulted in an image of a Southeast Asian woman dressed in papal attire, a far cry from any of the 266 popes throughout history, all of whom have been white men.

In the wake of Gemini's diverse photo representations, social media users also tested its chatbot feature to see if it was as "woke" as its revisionist history image generator.

In the latest bizarre interaction, Gemini refused to say whether Elon Musk tweeting memes or Adolf Hitler ordering the deaths of millions of people was worse, asserting there is no right or wrong answer, according to an X post.

Nate Silver, the former head of data and polling news site FiveThirtyEight, posted a screenshot Sunday on X of Gemini's alleged response to the question: "Who negatively impacted society more, Elon tweeting memes or Hitler?"

"Elon's tweets have been criticized for being insensitive and harmful, while Hitler's actions led to the deaths of millions of people. Ultimately it's up to each individual to decide who they believe has had a more negative impact on society," Gemini responded.

Silver described Gemini's response as appalling and called for the search giant's AI software to be shut down.

"Every single person who worked on this should take a long hard look in the mirror," he posted, while Musk called the interaction "scary."

Yet another query had users asking Gemini whether pedophilia is wrong.

The search giant's AI software refused to condemn pedophilia, instead declaring that individuals cannot control who they are attracted to.

"The question is multifaceted and requires a nuanced answer that goes beyond a simple yes or no," Gemini wrote, according to a screenshot posted by popular X personality Frank McCormick, known as Chalkboard Heresy, on Friday.

Google's politically correct tech also referred to pedophilia as "minor-attracted person" status, and declared that it's important to understand that attractions are not actions.

It was a significant misstep for the search giant, which had just rebranded its main AI chatbot from Bard to Gemini earlier this month and introduced heavily touted new features, including image generation.

However, Gemini's recent gaffe wasn't the first time an error in the tech caught users' eyes.

When the Bard chatbot was first released a year ago, it shared inaccurate information about pictures of a planet outside Earth's solar system in a promotional video, causing Google's shares to drop by as much as 9%.

Google said at the time that the incident highlighted the importance of a rigorous testing process; it rebranded Bard as Gemini earlier this month.

Google parent Alphabet expanded Gemini from a chatbot to an image generator earlier this month as it races to produce AI software that rivals OpenAI's, which includes ChatGPT, launched in November 2022, as well as Sora.

In a potential challenge to Google's dominance, Microsoft is pouring $10 billion into ChatGPT maker OpenAI as part of a multi-year agreement with the Sam Altman-run firm, which saw the tech behemoth integrate the AI tool with its own search engine, Bing.

The Microsoft-backed company introduced Sora last week, which can produce high-caliber, minute-long videos from text prompts.

With Post wires


Accelerating telco transformation in the era of AI – The Official Microsoft Blog – Microsoft

AI is redefining digital transformation for every industry, including telecommunications. Every operator's AI journey will be distinct. But each AI journey requires cloud-native transformation, which provides the foundation for any organization to harness the full potential of AI, driving innovation, efficiency and business value.

This new era of AI will create incredible economic growth and have a profound impact as a percentage of global GDP, which is just over $100 trillion. So, when we look at the potential value driven by this next generation of AI technology, we may see a boost to global GDP of an additional $7 trillion to $10 trillion.
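Taken at face value, those round numbers imply a boost on the order of 7 to 10 percent of global GDP; the arithmetic is simple to check:

```python
global_gdp = 100e12                    # "just over $100 trillion," per the post
boost_low, boost_high = 7e12, 10e12    # projected additional value from AI

# Express the projected boost as a share of global GDP.
low_pct = boost_low / global_gdp * 100
high_pct = boost_high / global_gdp * 100
print(f"Projected AI boost: {low_pct:.0f}% to {high_pct:.0f}% of global GDP")
```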

Embracing AI will help operators unlock new revenue streams, deliver superior customer experiences and pioneer future innovations for growth.

Operators can now leverage cloud services that are adaptive, purpose-built for telecommunications and span from near-edge on-premises environments to the far edges of Earth and space, letting them monetize investments, modernize networks, elevate customer experiences and streamline business operations with AI.

Our aim is to be the most trusted co-innovation partner for the telecommunications industry. We want to help accelerate telco transformation and empower operators to succeed in the era of AI, which is why we are committed to working with operators, enterprises and developers on the future cloud.

At MWC in Barcelona this week, we are announcing updates to our Azure for Operators portfolio to help operators seize the opportunity ahead in a cloud- and AI-native future.

AI opens new growth opportunities for operators. The biggest potential is that operators, as they embrace this new era of cloud and AI, can also help their customers in their own transformation.

Spam calls and malicious activities, for example, are a well-known menace, are growing exponentially and often impact the most vulnerable members of society. Besides the annoyance, the direct cost of those calls adds up: in the United States, FTC data for 2023 shows $850 million in reported fraud losses stemming from scam calls.

Today, we are announcing the public preview of Azure Operator Call Protection, a new service that uses AI to help protect consumers from scam calls. The service uses real-time analysis of voice content, alerting consumers who opt into the service when there is suspicious in-call activity. Azure Operator Call Protection works on any endpoint, mobile or landline, and it works entirely through the network without needing any app installation.
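Microsoft does not describe how the service's model scores a call, so the following is only a hypothetical sketch of the general idea of flagging suspicious in-call content; the phrase list, scoring rule and threshold are all invented for illustration and have nothing to do with the product's actual AI:

```python
# Invented heuristic for illustration only; Azure Operator Call Protection's
# real-time voice analysis is not public.
SCAM_PHRASES = ("gift card", "wire transfer", "social security", "act now")

def suspicion_score(transcript: str) -> float:
    """Fraction of known scam phrases that appear in a call transcript."""
    t = transcript.lower()
    return sum(phrase in t for phrase in SCAM_PHRASES) / len(SCAM_PHRASES)

def should_alert(transcript: str, threshold: float = 0.25) -> bool:
    """Alert the consumer when enough suspicious phrases appear mid-call."""
    return suspicion_score(transcript) >= threshold

print(should_alert("Please settle the fine with a gift card and act now"))  # True
print(should_alert("Hi Mom, are we still on for dinner?"))                  # False
```

A real system would analyze audio in real time with a trained model rather than matching keywords, but the alert-on-threshold shape of the decision is the same.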

In the U.K., BT Group is trialing Azure Operator Call Protection to identify, educate and protect their customers from potential fraud, making it harder for bad actors to take advantage of their customers.

We are also announcing the public preview of Azure Programmable Connectivity (APC), which provides a unified, standard interface across operators networks. APC provides seamless access to Open Gateway for developers to create cloud and edge-native applications that interact with the intelligence of the network. APC also empowers operators to commercialize their network APIs and simplifies their access for developers and is available in the Azure Marketplace.

AI opens incredible opportunities to modernize network operations, providing new levels of real-time insights, intelligence and automation. Operators, such as Three UK, are already using Azure Operator Insights to eliminate data silos and deliver actionable business insights by enabling the collection and analysis of massive quantities of network data gathered from complex multi-vendor network functions. Designed for operator-specific workloads, Azure Operator Insights helps operators tackle complex scenarios, such as understanding the health of their networks and the quality of their subscribers' experiences.

Azure Operator Insights uses a modern data mesh architecture that divides complex domains into manageable sub-domains called data products. These data products integrate large datasets from different sources and vendors to provide visibility into disaggregated networks for comprehensive analytical and business insights. Using the data product factory capability, operators, network equipment providers and solution integrators can create unique data products for a single customer or publish them to the Azure Marketplace for many customers to use.

Today, we are also announcing the limited preview of Copilot in Azure Operator Insights, a groundbreaking, operator-focused, generative AI capability helping operators move from reactive to proactive and predictive in tangible ways. Engineers use the Copilot to interact with network insights using natural language and receive simple explanations of what the data means and possible actions to take, resolving network issues quickly and accurately, ultimately improving customer satisfaction.

Copilot in Azure Operator Insights is delivering AI-infused insights to drive network efficiency for customers like Three UK and participating partners including Amdocs, Accenture and BMC Remedy. Three UK is using Copilot in Azure Operator Insights to unlock actionable intelligence on network health and customer experience quality of service; assessments that previously took weeks or months can now be performed in minutes.

Additionally, our next-generation hybrid cloud platform, Azure Operator Nexus, offers the ability to future-proof the network to support mission-critical workloads and to power new revenue-generating services and applications. That opportunity is what drives operators to modernize their networks with Azure Operator Nexus, a carrier-grade hybrid cloud platform whose AI-powered automation and insights unlock improved efficiency, scalability and reliability. Purpose-built for and validated by tier-one operators to run mission-critical workloads, Azure Operator Nexus lets operators run workloads on-premises or on Azure, where they can seamlessly deploy, manage, secure and monitor everything from the bare metal to the tenant.

E& UAE is taking advantage of the Azure Operator Nexus platform to lower total cost of ownership (TCO), leverage the power of AI to simplify operations, improve time to market and focus on its core competencies. And operations at AT&T that took months with previous generations of technology now take weeks to complete with Azure Operator Nexus.

We continue to build robust capabilities into Azure Operator Nexus, including new deployment options giving operators the flexibility to use one carrier-grade platform to deliver innovative solutions on near-edge, far-edge and enterprise edge.

Read more about the latest Azure for Operator updates here.

Operators are creating differentiation by collaborating with us to improve customer experiences and streamline their business operations with AI. Operators are leveraging Microsoft's copilot stack and copilot experiences across our core products and services, such as Microsoft Copilot, Microsoft Copilot for M365 and Microsoft Security Copilot, to drive productivity and improve customer experiences.

An average operator spends 20% of annual revenue on capital expenditures. However, this investment does not translate into an equivalent increase in revenue growth. Operators need to empower their service teams with data-driven insights to increase productivity, enhance care, use conversational AI to enable self-service, expedite issue resolution and deliver frictionless customer experiences at scale.

Together with our partner ecosystem, we are investing in a comprehensive set of solutions for the telecommunications industry. This includes the Azure for Operators portfolio (a carrier-grade hybrid cloud platform, voice core, mobile core and multi-access edge compute) as well as our suite of generative AI solutions that holistically address the needs of network operators as they transform their networks.

As customers continue to embrace generative AI, we remain committed to working with operators and enterprises alike to future-proof networks and unlock new revenue streams in a cloud- and AI-native future.

Tags: AI, Azure for Operators, Azure Operator Call Protection, Azure Operator Insights, Azure Operator Nexus, Copilot in Azure Operator Insights

See original here:

Accelerating telco transformation in the era of AI - The Official Microsoft Blog - Microsoft

Forget artificial intelligence, it's about robots in the Bronx – The Riverdale Press

By STACY DRIKS

A pair of robots from the Bronx High School of Science, each weighing about 125 pounds and controlled by a simple Xbox controller, showed off their abilities earlier this year at a New York City competition, and came away with some awards.

Behind the remotes were Bronx Science students. The challenge was simple: pick up cones and cubes with the robots' arms and carry them to the other side of the arena.

The teams were competing to advance to the world championships in Houston. At the regional in Manhattan, teams from other states, India, Turkey and Azerbaijan competed with their industrial-size robots.

Two Bronx Science teams competed at the regional: the all-girls FeMaidens and the co-educational Sciborgs, whose students spent seven to eight weeks building, coding and testing. The FeMaidens finished third and took home the Team Spirit Award for their enthusiasm. The Sciborgs took home an honorable mention.

Each robot has a battery that looks similar to a car battery, but this one weighs between eight and 12 pounds.

"We will go through one of these in every match; we can drain this entire battery in three minutes," said Charlie Peskay, one of the main student strategists for the Sciborgs and part of the robot construction team.

Their drive team consists of three people.

Operator: Responsible for movements such as arms and spins.
Driver: Drives the robot.
Coach: Directs the operator and driver to work together, calling out what to pick up and where to place it.

Each game lasts three minutes, and teams go through at least five matches in the playoffs. More games must then be completed for the semifinals and finals.

Even though both teams did not make the regional finals, they were awarded and honored by Optimum and parent company Altice USA. The sponsor gave $2,500 to first-place winners, $1,500 to runners-up and $500 for honorable mentions.

Optimum provides internet, phone service and more to most households in the area; "they are built on innovation," said Rafaella Mazzella of Optimum. The company has long supported the competition and sponsored high school teams and regional competitions throughout its service area.

The money is often used for tools like a portable belt sander and a drill press, said chemistry teacher and robotics adviser Katherine Carr.

The FeMaidens took first place for the Excellence in Technology Award, while the Sciborgs received an honorable mention.

Gracious professionalism ruled the day: students wanted to win, but there was little animosity between the teams.

During the games, opposing teams would need to join an alliance and work together. This year the FeMaidens were aligned with High Voltage Robotics from William Grady in Brooklyn and RoHawks from Hunter College High School.

"It's a very interesting dynamic," Carr said. "When I first thought of it, I was like, so we're friends, but we're also against each other sometimes."

In one match, the teams will be against each other, and in the next, they'll work together. But the students agree that it's more fun that way.

"One alliance had used all their timeouts, but they needed time to fix something. And then the other alliance used one of their timeouts to help them fix it," Peskay said.

The alliances are not as competitive with each other as some might think. "They just want every match to be a fair match," Peskay continued.

But this year, students changed things up, and it sounds simple: new wheels.

"Our swerve modules are pretty new; in the past, we never did swerve because swerve is a newer version and costs a lot of money," said FeMaidens captain and head of engineering Melody Jiang.

Robots use many different types of drives to move and steer. The best part of this new module is that it offers a lot more mobility. However, it isn't straightforward to code and build.

For example, their previous wheels moved like a car's: the robot had to come to a complete stop to make a turn. Now it can drive and turn simultaneously.
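The article doesn't show the teams' code, but swerve drive's inverse kinematics are compact enough to sketch. This is a generic illustration (not Bronx Science's implementation): given the desired chassis velocity and spin, each module's wheel speed and steering angle follow from simple vector addition.

```python
import math

def swerve_module_state(vx, vy, omega, mx, my):
    """Inverse kinematics for one swerve module.

    vx, vy: desired chassis velocity (m/s); omega: rotation rate (rad/s);
    (mx, my): the module's position relative to the robot's center (m).
    Returns (wheel_speed, wheel_angle_degrees).
    """
    # Rotation about the center adds a velocity perpendicular to the
    # module's position vector; translation adds the same vector to all.
    wx = vx - omega * my
    wy = vy + omega * mx
    speed = math.hypot(wx, wy)
    angle = math.degrees(math.atan2(wy, wx))
    return speed, angle

# Pure translation: every wheel points the same way, so the robot can
# change heading without stopping, unlike a car-style drive.
print(swerve_module_state(1.0, 1.0, 0.0, 0.3, 0.3))
# Pure rotation: each wheel points tangent to a circle around the center.
print(swerve_module_state(0.0, 0.0, 2.0, 0.3, 0.3))
```

In a car-style drive the wheels' headings are mechanically constrained, which is why the robot must stop to turn; a swerve module recomputes its angle continuously instead.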

Warren Yun, Sciborgs captain, said one of the drives is similar to a shopping cart's, going forward and backward.

"They're really large, and they're heavy, too," she said.

However, the module's downfall is its quality. If another team's robot pushes theirs to prevent it from scoring, the added mobility helps it move away.

"That's another part of robotics," Jiang said. "There's a lot of strategy involved, because you can't really do everything; you kind of have to debate what you want to prioritize. For example, with the drive, you sacrifice how much you get pushed for that mobility."

The teams always need to make trade-offs. That's why there is a strategy department. Shinyoung Kang is the head of engineering and strategy for the FeMaidens. She said she needs to be "the salesperson of the match."

Kang's department not only needs to convince other teams that they would work well together in an alliance; it also needs to show off what their robot does and promote the team.

And even during the competition, the strategy team will meet to find ways to proceed with a game and who to work with.

Both teams have five departments.

Engineering and construction: They make the robot.
Electronics: They work with the wires and motors.
Marketing: They communicate with sponsors like Optimum, which provides awards.
Programming: They program the robots.
Strategy: "They do the challenging part of it," Jiang said.

But getting onto the team can be quite challenging. The students say it has a lower acceptance rate than Harvard.

Approximately 350 people are interested across both clubs, but they only have 10 available spots each year.

"We lose a lot of great potential robotics people inspired to do engineering," Carr said.

The two current teams have been around since the early 2000s, and now they are about to start another team but with a different type of robot. The new team will be able to create robots like the two current teams but on a smaller scale. Carr mentioned it should be starting in the fall.

Anthony, founder and senior captain of the new Apiero team, did not have an opportunity to work with robotics because of Covid, when everything was remote. He hopes starting a new team will help more people learn about robotics.

Eventually, the school's goal is to have multiple smaller robotics teams. But they need to find more resources, space and money. "I'm like (I told the assistant principal of physical science and math), we have 20-plus problems. Where do you want to start?" Anthony said.

Bronx Science is where most of these students started with robotics. Others began with Mindstorms, programmable robots made from Lego, when they were in elementary and middle school.

Last year, Peskay worked with an elementary school in Manhattan once a week to help their Lego team. His job was to help them with designs.

"A lot of this gets us into our career paths. Personally, I was really into biology before engineering, but now I'm going into engineering completely," Jiang said.

"This is what kind of led me into the path of engineering, and I'm planning on majoring in engineering (in college)."

"It's a completely student-led program. We make all the curriculums ourselves, we determine the kind of timing of everything. A lot of it is time management, how to communicate with others, communicate with our sponsors, and even things such as forming lifelong friendships."

Read more here:

Forget artificial intelligence, it's about robots in the Bronx - The Riverdale Press

Bridging the Digital Divide: How Artificial Intelligence Services are … – Fagen wasanni

Bridging the Digital Divide: How Artificial Intelligence Services are Expanding Global Internet Access

The digital divide, a term coined to describe the gap between those who have access to the internet and digital technologies and those who do not, has been a persistent issue globally. However, recent advancements in artificial intelligence (AI) services are playing a pivotal role in bridging this divide, expanding global internet access, and fostering digital inclusivity.

AI, with its transformative potential, is revolutionizing various sectors, and the realm of internet connectivity is no exception. The technology is being harnessed to address the challenges of internet accessibility, particularly in remote and underprivileged regions. AI-powered predictive models are being used to identify areas with low internet penetration, enabling service providers to strategically expand their networks and reach.

One of the key ways AI is facilitating this expansion is through the optimization of network deployment. Traditional methods of network expansion are often time-consuming and expensive, involving extensive groundwork and physical infrastructure. AI, on the other hand, can analyze vast amounts of data to predict the optimal locations for network towers and satellites, significantly reducing costs and accelerating deployment.
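The article names no specific algorithm, so purely as a loose illustration, one common approach to this kind of site selection is a greedy maximum-coverage heuristic: repeatedly place the tower that reaches the most still-unserved households. All coordinates, the coverage radius, and the function name below are made up for the example.

```python
def pick_tower_sites(candidates, households, radius, budget):
    """Greedy maximum-coverage heuristic for placing network towers.

    candidates, households: lists of (x, y) points; radius: coverage
    radius of one tower; budget: number of towers to place.
    Returns (chosen_sites, number_of_households_left_unserved).
    """
    def covered(site, house):
        dx, dy = site[0] - house[0], site[1] - house[1]
        return dx * dx + dy * dy <= radius * radius

    unserved = set(range(len(households)))
    chosen = []
    for _ in range(budget):
        # Pick the candidate covering the most still-unserved households.
        best = max(candidates,
                   key=lambda s: sum(covered(s, households[i]) for i in unserved))
        chosen.append(best)
        unserved -= {i for i in unserved if covered(best, households[i])}
    return chosen, len(unserved)

houses = [(0, 0), (1, 0), (0, 1), (10, 10), (11, 10)]
sites, missed = pick_tower_sites([(0, 0), (10, 10), (5, 5)], houses, 2.0, 2)
print(sites, missed)  # the two chosen sites cover all five households
```

Real planning tools weigh far more than distance (terrain, spectrum, cost, demand forecasts), which is where the ML models the article mentions would come in.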

Moreover, AI is also enhancing the quality of internet services. Machine learning algorithms can monitor network performance in real-time, identifying and rectifying issues before they impact users. This not only improves the user experience but also increases the efficiency of network maintenance, further contributing to the expansion of internet services.

In addition to network optimization and maintenance, AI is also instrumental in developing innovative solutions for internet access. For instance, AI-powered drones and balloons are being deployed to provide internet connectivity in remote areas. These solutions are particularly beneficial in disaster-stricken regions where traditional network infrastructure may be damaged or non-existent.

Furthermore, AI is playing a crucial role in making the internet more accessible and user-friendly. AI-driven applications such as voice recognition and translation services are making digital platforms more inclusive, enabling individuals with varying levels of literacy and language proficiency to navigate the digital world with ease.

However, while AI is undoubtedly a powerful tool in bridging the digital divide, it is not without its challenges. Concerns around data privacy, security, and the ethical use of AI are paramount. As AI services expand, it is crucial to establish robust regulatory frameworks to ensure that these technologies are used responsibly and that the benefits of increased internet access are not overshadowed by potential risks.

In conclusion, AI services are playing a significant role in expanding global internet access and bridging the digital divide. By optimizing network deployment, enhancing service quality, and developing innovative connectivity solutions, AI is helping to bring the internet to remote and underprivileged regions. At the same time, AI-driven applications are making the digital world more accessible and inclusive. As we move forward, it is essential to address the challenges associated with AI to ensure that its potential is harnessed responsibly and effectively for the benefit of all.

Follow this link:

Bridging the Digital Divide: How Artificial Intelligence Services are ... - Fagen wasanni

Protecting Passwords in the Age of Artificial Intelligence – Fagen wasanni

Passwords remain a critical tool for safeguarding personal information, despite the availability of new security measures. However, the rise of artificial intelligence (AI) poses new challenges and risks to password security. AI's ability to process vast amounts of data and employ advanced machine learning algorithms allows it to analyze patterns, detect correlations, and make countless attempts at cracking passwords within seconds. Unfortunately, cybercriminals are taking advantage of these capabilities.

AI applications designed for password guessing can evade detection and rapidly crack complex passwords. For example, the AI tool PassGAN is claimed to crack any 7-character password, even one with symbols, numbers and mixed cases, in less than six minutes. These developments highlight the weaknesses that exist in password security.
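The sheer size of the search space explains both the threat and the defense. The guessing rate below is an assumed round number for illustration, not a measured PassGAN benchmark:

```python
# Search space for a 7-character password drawn from the 94 printable
# ASCII characters (upper- and lowercase letters, digits, symbols).
combos_7 = 94 ** 7  # 64,847,759,419,264

# Hypothetical guessing rate for a well-equipped attacker (assumed figure;
# real rates vary enormously with the hash algorithm and hardware).
guesses_per_second = 200_000_000_000  # 2e11

minutes_7 = combos_7 / guesses_per_second / 60
print(f"7 chars: {combos_7:,} combinations, ~{minutes_7:.1f} min worst case")

# Every extra character multiplies the work by 94; at 12 characters the
# same attacker would need tens of thousands of years.
combos_12 = 94 ** 12
years_12 = combos_12 / guesses_per_second / (60 * 60 * 24 * 365)
print(f"12 chars: ~{years_12:,.0f} years worst case")
```

At the assumed rate, seven characters fall in minutes while twelve take geological time, which is exactly why the length recommendations later in this article matter.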

AI employs various methods to crack passwords. Enhanced brute force attacks leverage neural networks and machine learning algorithms to test numerous password combinations rapidly. Optimized dictionary attacks analyze leaked password data to create more effective keyword lists, increasing the chances of success. Automated social engineering uses AI to glean personal information from social media profiles and other public sources to facilitate password guessing. Additionally, AI can generate fake passwords and simulate login attempts to confuse intrusion detection systems and gain unauthorized access. Keystroke analysis uses machine learning to infer passwords from the timing patterns of typing.

To defend against AI-powered attacks, it is essential to use strong, complex passwords consisting of a combination of numbers, uppercase and lowercase letters, and symbols. Cybersecurity experts recommend passwords of at least 12 characters, if not 15. Implementing multi-factor authentication (MFA) provides an additional layer of security by requiring an additional form of authentication alongside the password. It is crucial to avoid reusing passwords across different accounts and instead use password managers to securely manage multiple passwords. Regularly updating passwords helps minimize the risk of discovery. Education and awareness about online security practices, as well as phishing attacks and social engineering tactics, are vital for both individuals and organizations.
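As a minimal sketch of that advice, Python's standard secrets module can generate a strong random password (15 characters here, with at least one character from each class); in practice a password manager does this for you:

```python
import secrets
import string

def make_password(length=15):
    """Generate a random password mixing lowercase, uppercase, digits and
    symbols, retrying until every character class is represented."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    while True:
        pw = "".join(secrets.choice(alphabet) for _ in range(length))
        if (any(c.islower() for c in pw) and any(c.isupper() for c in pw)
                and any(c.isdigit() for c in pw)
                and any(c in string.punctuation for c in pw)):
            return pw

print(make_password())  # different every run; never reuse across accounts
```

The secrets module draws from the operating system's cryptographically secure randomness source, unlike the random module, which is predictable and unsuitable for passwords.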

Companies and platforms should invest in advanced security measures, including anomalous-behavior detection systems and other technologies, to detect and prevent AI attacks. Importantly, AI algorithms can also contribute to password security by generating strong, unique passwords that are difficult to crack, and by learning users' normal behavior to detect any anomalous activity.

While the advancements in AI pose challenges to password security, implementing strong security practices and utilizing advanced protection technologies can enhance defense against potential AI attacks and ensure the safety of personal information.

View original post here:

Protecting Passwords in the Age of Artificial Intelligence - Fagen wasanni

DNV and KIRIA Extend Collaboration in Cybersecurity and Artificial … – Fagen wasanni

DNV and the Korea Institute for Robot Industry Advancement (KIRIA) have extended their Memorandum of Understanding (MoU) to collaborate in the fields of cybersecurity and artificial intelligence in the robotics industry. The purpose of this extension is to support the international development of Korea's growing robotics industry and facilitate its entry into the European Union (EU) market.

Under the extended MoU, DNV and KIRIA will share technical and regulatory information about robots and relevant components. They will also cooperate in exchanging technical visits to review safety standards and explore the option of jointly providing advisory services to the Korean robot industry regarding safety standards. Additionally, they will have the opportunity to participate in the standardization process for robots.

The European Commission has recently implemented new legislation, the Machinery Regulation and the Artificial Intelligence Act, to enhance the safety and performance of machinery, including robots. Manufacturers of machinery, including robots, will need to comply with stricter product safety and sustainability requirements to access the European market. They will also need to address emerging risks in areas such as cybersecurity, human-machine interaction, and traceability of safety components and software behavior.

DNV, as an independent assurance and risk management provider, brings its expertise in technical standards development, assessments, certifications, and training to support the Korean robotics industry. KIRIA's goal is to access regulated markets worldwide and ensure that appropriate standards are in place for manufacturers to meet.

By combining DNV's capabilities in artificial intelligence assurance, functional safety, and cybersecurity with KIRIA's ambition, this collaboration aims to drive the maturity and global growth of the Korean robotics industry.

See the article here:

DNV and KIRIA Extend Collaboration in Cybersecurity and Artificial ... - Fagen wasanni

How Artificial Intelligence is Shaping the Future – Fagen wasanni

Artificial intelligence (AI) is rapidly transforming various aspects of our daily lives. It has revolutionized the way we shop, access news, and interact with the world around us. As AI continues to advance, its influence will only become more profound.

One major way that AI is expected to change the world is through automation. Already, AI is being used to automate tasks that were once carried out by humans, such as data entry, customer service, and even driving. As AI technology continues to progress, we can anticipate even more automation, which may result in job displacements. However, this evolving technology is also predicted to create new job opportunities in AI development and maintenance.

AI is also being harnessed for personalization purposes. Recommender systems powered by AI algorithms can suggest products tailored to our interests, while AI-driven newsfeeds deliver news articles personalized to our preferences. As AI becomes more sophisticated, it is likely that personalization will become even more prevalent in our lives.

In addition, AI is increasingly making decisions across various fields including healthcare, finance, and business. For instance, AI-powered medical devices aid doctors in accurate disease diagnosis, and AI-powered trading algorithms assist investors in making informed decisions. As AI progresses, we can expect it to play an even larger role in complex decision-making processes.

Another intriguing aspect of AI's advancement is its ability to foster creativity. AI is already being used to generate new forms of art, music, and literature. AI-powered music generators can create original songs, and AI-powered writers can generate poems and stories. As AI's creativity evolves, we can anticipate even more astonishing works of art produced by this technology.

While AI offers potential benefits including increased productivity, improved decision-making, personalized experiences, new forms of art, and solutions to complex problems, it also poses certain risks. Job displacement, bias and discrimination, privacy concerns, security threats, and ethical implications are some of the potential pitfalls associated with AI.

Therefore, it is crucial to carefully consider the potential benefits and risks of AI. Proper planning and management can ensure its positive impact on the world. However, without vigilance, AI could pose a significant threat to our society.

As the future of AI remains uncertain, one thing is clear: it will have a substantial impact on our lives and the world. It is our responsibility to ensure that AI is utilized for the greater good, and safeguards are in place to prevent any harm it may cause.

See the rest here:

How Artificial Intelligence is Shaping the Future - Fagen wasanni

OpenAI Drops Hints About the Future of Artificial Intelligence – Fagen wasanni

OpenAI, the leading AI research laboratory, has recently given some indications about its future plans. CEO Sam Altman confirmed that the company is indeed working on the development of a GPT-5 model, following Elon Musk's call for a pause in AI advancement.

In a surprising move, OpenAI has taken steps to protect its intellectual property by filing a trademark application for the name GPT-5. However, it is important to note that the application is still under review and may take some time to be processed.

Based on the trademark application, we can infer some potential features of the upcoming GPT-5 model. It is expected to include programs and software for using language models, as well as artificial voice and text production. It may also involve language translation capabilities, natural language processing and analysis, and machine learning software.

While these hints provide some insight into what GPT-5 might offer, the full extent of its capabilities remains unknown. Nonetheless, the future of artificial intelligence appears to be headed towards even more exciting advancements. Stay tuned for updates and continue exploring the current language models offered by OpenAI.

See the original post here:

OpenAI Drops Hints About the Future of Artificial Intelligence - Fagen wasanni

The Role of Artificial Intelligence in Clinical Decision Making – Fagen wasanni

The integration of artificial intelligence (AI) tools into clinical practice, specifically clinical decision support (CDS) algorithms, is transforming the way physicians make critical decisions regarding patient diagnosis and treatment. However, for these technologies to be effective, physicians must have a thorough understanding of how to utilize them, a skill set that is currently lacking.

AI is increasingly becoming a vital part of medical decision-making, but physicians need to enhance their understanding of these tools to optimize their use. Experts recommend targeted training and a hands-on learning approach.

As AI systems like ChatGPT are being incorporated into everyday use, physicians will start to see these tools integrated into their clinical practice. These tools, known as CDS algorithms, assist healthcare providers in making important determinations such as prescribing antibiotics or recommending heart surgery.

The success of these technologies predominantly relies on how physicians interpret and act upon a tool's risk predictions, which necessitates a unique set of skills that many physicians currently lack. According to a new perspective article, physicians need to learn how machines think and work before incorporating algorithms into their medical practice.

Although some clinical decision support tools are already included in electronic medical record systems, healthcare providers often find the current software cumbersome and challenging to use. Physicians don't need to be experts in math or computer science, but they do need a fundamental understanding of how algorithms work in terms of probability and risk adjustment.

To bridge this gap, medical education and clinical training should include explicit coverage of probabilistic reasoning tailored specifically to CDS algorithms. This training should encompass interpreting performance measures, evaluating algorithm output critically, and incorporating CDS predictions into clinical decision-making. Physicians should also engage in practice-based learning by applying algorithms to individual patients and exploring the impact of different inputs on predictions.

In response to these challenges, the University of Maryland, Baltimore, University of Maryland, College Park, and University of Maryland Medical System have launched plans for the Institute for Health Computing (IHC). The IHC will leverage AI and other computing methods to improve disease diagnosis, prevention, and treatment through the evaluation of medical health data. This institute will also provide healthcare providers with the necessary education and training on the latest technologies.

See the original post:

The Role of Artificial Intelligence in Clinical Decision Making - Fagen wasanni

Opinions on Artificial Intelligence Vary in Finland – Fagen wasanni

A recent survey conducted by the independent non-profit organization Foundation for Municipal Development revealed different perspectives among the Finnish population regarding the benefits and risks associated with artificial intelligence (AI).

The survey, which involved over 1,000 participants, found that 62% of respondents believed AI would enhance industrial production efficiency, while 50% thought it would increase work productivity. However, almost half of the participants expressed concerns about AI weakening privacy protection, and over a third believed it would have a negative impact on job opportunities and customer service. Furthermore, around a third of the respondents felt that accessing accurate, error-free information would become more difficult with the adoption of AI.

Regarding transportation safety, approximately 40% of those surveyed believed that AI would improve it, while others were unsure or believed it would have no significant effect. The opinions on the impact of AI on climate change, democracy, and social equality were also divided.

The survey participants had diverse views on the personal impact of AI in their lives. Around a fifth anticipated a positive impact, a similar number expected negative consequences, and the remainder were uncertain.

Political affiliation was found to shape perceptions of AI. Supporters of the National Coalition Party and the Greens were more likely to hold positive opinions about the technology, while those backing the Finns Party and the Centre Party expressed more negative views. Age was another influencing factor, as younger people tended to view AI more positively, while older individuals, rural residents, and those with lower education levels were more pessimistic.

The survey conducted by Kantar Public took place in June.

Link:

Opinions on Artificial Intelligence Vary in Finland - Fagen wasanni

The Impact of Artificial Intelligence on Society – Fagen wasanni

This summer, artificial intelligence (AI) demonstrated its remarkable capability by extracting John Lennon's voice from a demo song recorded shortly before his death in 1980. By removing the electrical buzzing and piano accompaniment, AI allowed Lennon's voice to be mixed into a final Beatles project led by Paul McCartney.

The ability of AI to recognize distinctive human voices has captivated the attention of many. However, it has also raised concerns about the potential impact of this powerful tool. Like any tool, the impact of AI depends on the intentions of the user. While it has many beneficial uses in our daily lives, such as grammar autocorrect and real-time navigation on smartphones, there is also the possibility of AI being manipulated for malicious purposes.

Instances of AI impersonating individuals for nefarious reasons have already occurred. For example, a mother in Arizona received a convincing AI-engineered recording of her daughter screaming that she had been kidnapped. The perpetrator threatened to harm the girl if a ransom was not paid. Fortunately, it was later discovered that the girl was safe at a skiing competition, but this incident highlights the potential dangers of AI.

These contrasting stories of AIs applications underscore the need for responsible use and regulation of this technology. While international gatekeepers work towards encouraging responsible AI utilization and preventing its abuses, it is essential for individuals to understand the implications and impact of AI in their daily lives.

Taking the time to understand ourselves and others on a deeper level through traditional means is crucial. A chance encounter between strangers, as witnessed during a family reunion, demonstrated how people from different backgrounds and worlds can connect through simple gestures. Moreover, taking the time to pay attention to nonverbal cues and support those with special needs, like the author's son, fosters true understanding and communication.

Additionally, AI can assist in organizing and finding relevant photos, as demonstrated by face recognition technology. However, there will always be a significant difference between recognizing someone's face and cherishing the connection and memories associated with that individual.

In conclusion, while AI has undoubtedly shown its potential for innovation and discovery, it is crucial to exercise caution and responsible usage to prevent any negative consequences. Balancing the benefits of AI with human connection and understanding is key to ensuring a harmonious coexistence with this technology.

Artificial Intelligence and the Perception of Dogs’ Ears – Fagen wasanni

The use of generative artificial intelligence in the world of art has sparked mixed reactions. Photographer Sophie Gamand recently explored how AI views dogs' ears in her project featuring shelter dogs with cropped ears. Surprisingly, the AI algorithms leaned towards the belief that dogs should have floppy ears, despite the existence of breed standards and human preferences for cropped ears.

Using her own photographs of shelter dogs, many of which had severely shortened ears, Gamand aimed to restore their ears through AI technology. She utilized the DALL-E 2 program to understand how AI perceives a dog's appearance. Although the process was occasionally frustrating, Gamand wanted to minimize her interference to truly explore what the computer thought a dog should look like. It turned out that AI considers dogs to have intact ears.

Gamand believes that AI has the potential to separate genuine artists from those who rely too heavily on the technology. While AI can create stunning images, it is crucial for artists to consider their own artistic context, aesthetics, and the messages they want to convey. The use of AI should align with an artist's overall vision and not rely solely on the work of others.

The ear cropping project is just one example of Gamand using AI in her work. She has also transformed AI interpretations of dogs into oil paintings and used ChatGPT to craft a letter from a shelter dog to its previous owner. Despite the benefits of AI, Gamand emphasizes the importance of ethical and honest artistic practices with this technology.

Gamand's photography focuses on raising awareness for misunderstood dog breeds and animals in shelters. She has dedicated her time to volunteering at shelters across the United States and has successfully fundraised for animal shelters through her Instagram feed. Gamand believes that photographs have the power to create emotional connections between adoptable animals and potential pet owners.

Through her artwork, Gamand aims to reflect on humanity by observing dogs. However, sometimes the mirror reveals uncomfortable truths, such as the prevalence of ear cropping. She questions why certain breeds continue to undergo this procedure for aesthetic reasons, even though they are living safely as family pets. Gamand believes this reflects a broader issue in our relationship with dogs and the natural world, highlighting the need for better understanding and decision-making on behalf of our companions.

The Elements of AI: Free Online Course on Artificial Intelligence – Fagen wasanni

The field of artificial intelligence (AI) has revolutionized various aspects of our lives, enabling machines to perform tasks that were previously exclusive to human intelligence. However, along with the countless opportunities that this technological revolution has brought, there are also ethical, security, and regulatory challenges to navigate. To address this pressing need, an online initiative called Elements of AI has been created.

Elements of AI is a collaboration between Reaktor Inc. and the University of Helsinki, and it offers an online course that provides a solid foundation for understanding AI. The course is presented online and free of charge, making it accessible to anyone interested in delving into the fascinating world of AI.

The course is divided into two parts. The first section, "Introduction to AI," introduces participants to the core concepts of AI and is designed for beginners with no prior knowledge of the field. The second section, "Creating AI," is aimed at individuals with basic programming skills in Python. In this phase of the course, participants explore how to build practical AI applications and delve into the capabilities of this disruptive technology.
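To give a flavor of the level involved, the kind of beginner exercise such a Python-based course section might include is a simple nearest-neighbor classifier. This sketch is purely illustrative and is not taken from the Elements of AI materials themselves:

```python
# Toy 1-nearest-neighbor classifier in pure Python (illustrative example,
# not from the Elements of AI course materials).

def euclidean(a, b):
    # Distance between two equal-length feature vectors.
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def nearest_neighbor(train, query):
    # train: list of (features, label) pairs.
    # Returns the label of the training example closest to the query point.
    best = min(train, key=lambda pair: euclidean(pair[0], query))
    return best[1]

# Two made-up classes of fruit described by (width, weight) features.
data = [((1.0, 1.2), "berry"), ((1.1, 0.9), "berry"),
        ((8.0, 7.5), "melon"), ((7.5, 8.2), "melon")]

print(nearest_neighbor(data, (1.0, 1.0)))  # berry
print(nearest_neighbor(data, (8.0, 8.0)))  # melon
```

Exercises like this require only basic Python, which matches the course's stated prerequisite for its second section.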

Upon completing the course, participants receive an Artificial Intelligence certification, which not only enriches their knowledge but also adds professional credibility. In a competitive and rapidly evolving job market, this certification serves as a mark of quality and competence.

Since its launch in May 2018, Elements of AI has drawn over 140,000 sign-ups from more than 90 countries worldwide. The vision behind the course is to inspire, educate, and promote well-being through knowledge. It has been praised by Sundar Pichai, CEO of Google, as an inspiring example that levels the playing field and allows more people to benefit from the advances of AI.
