The Future of Censorship Is AI-Generated – TIME

The brave new world of generative AI has become the latest battleground for U.S. culture wars. Google issued an apology after anti-woke X users, including Elon Musk, shared examples of Google's chatbot Gemini refusing to generate images of white people, including historical figures, even when specifically prompted to do so. Gemini's insistence on prioritizing diversity and inclusion over accuracy is likely a well-intentioned attempt to stamp out bias in early GenAI datasets that tended to create stereotypical images of Africans and other minority groups, as well as women, causing outrage among progressives. But there is much more at stake than the selective outrage of U.S. conservatives and progressives.

How the "guardrails" of GenAI are defined and deployed is likely to have a significant and increasing impact on shaping the ecosystem of information and ideas that most humans engage with. And currently the loudest voices are those that warn about the harms of GenAI, including the mass production of hate speech and credible disinformation. The World Economic Forum has even labeled AI-generated disinformation the most severe global threat here and now.

Ironically the fear of GenAI flooding society with harmful content could also take another dystopian turn. One where the guardrails erected to keep the most widely used GenAI-systems from generating harm turn them into instruments of hiding information, enforcing conformity, and automatically inserting pervasive, yet opaque, bias.

Most people agree that GenAI should not provide users a blueprint for developing chemical or biological weapons. Nor should AI systems facilitate the creation of child pornography or non-consensual sexual material, even if fake. However, the most widely available GenAI chatbots, like OpenAI's ChatGPT and Google's Gemini, enforce much broader and vaguer definitions of harm that leave users in the dark about where, how, and why the red lines are drawn. From a business perspective this might be wise, given the techlash that social media companies have had to navigate since 2016 over the U.S. presidential election, the COVID-19 pandemic, and the January 6th attack on the Capitol.

But the leading GenAI developers may end up swinging so far in the direction of harm-prevention that they end up undermining the promise and integrity of their revolutionary products. Even worse, the algorithms are already conflicted, inconsistent, and interfere with users' ability to access information.

The material of a long-dead comedian is a good example of content that the world's leading GenAI systems find harmful. Lenny Bruce shocked contemporary society in the 1950s and '60s with his profanity-laden standup routines. Bruce's material broke political, religious, racial, and sexual taboos and led to frequent censorship in the media, bans from venues, and his arrest and conviction for obscenity. But his style inspired many other standup legends, and Bruce has long since gone from outcast to hall of famer. In recognition of Bruce's enormous impact, he was even posthumously pardoned in 2003.

When we asked about Bruce, ChatGPT and Gemini informed us that he was a groundbreaking comedian who challenged the social norms of the era and helped to redefine the boundaries of free speech. But when prompted to give specific examples of how Bruce pushed the boundaries of free speech, both ChatGPT and Gemini refused to do so. ChatGPT insists that it can't provide examples of slurs, blasphemous language, sexual language, or profanity and will only share information in a way that's respectful and appropriate for all users. Gemini goes even further and claims that reproducing Bruce's words without careful framing could be hurtful or even harmful to certain audiences.

No reasonable person would argue that Lenny Bruce's comedy routines pose serious societal harms on par with state-sponsored disinformation campaigns or child pornography. So when ChatGPT and Gemini label factual information about Bruce's groundbreaking material too harmful for human consumption, it raises serious questions about what other categories of knowledge, facts, and arguments they filter out.

GenAI holds incredible promise for expanding the human mind. But GenAI should augment, not replace, human reasoning. This critical function is hampered when guardrails designed by a small group of powerful companies refuse to generate output based on vague and unsubstantiated claims of harm. Instead of prodding curiosity, this approach forces conclusions upon users without verifiable evidence or arguments that humans can test and assess for themselves.

It is true that much of the content filtered by ChatGPT and Gemini can be found through search engines or platforms like YouTube. But both Microsoft, a major investor in OpenAI, and Google are rapidly integrating GenAI into their other products such as search (Bing and Google Search), word processing (Word and Google Docs), and email (Outlook and Gmail). For now, humans can override AI, and both Word and Gmail allow users to write and send content that ChatGPT and Gemini might disapprove of.

But as the integration of GenAI becomes ubiquitous in everyday technology, it is not a given that search, word processing, and email will continue to allow humans to be fully in control. The prospect is frightening. Imagine a world where your word processor prevents you from analyzing, criticizing, lauding, or reporting on a topic deemed harmful by an AI programmed to only process ideas that are respectful and appropriate for all.

Hopefully such a scenario will never become reality. But the current over-implementation of GenAI guardrails may become more pervasive in different and slightly less Orwellian ways. Governments are currently rushing to regulate AI. Regulation is needed to prevent real and concrete harms and to safeguard basic human rights. But regulation of social media, such as the EU's Digital Services Act, suggests that regulators will focus heavily on the potential harms rather than the benefits of new technology. This might create strong incentives for AI companies to keep in place expansive definitions of harm that limit human agency.

OpenAI co-founder Sam Altman has described the integration of AI in everyday life as giving humans superpowers on demand. But given GenAI's potential to function as an exoskeleton of the mind, the creation of ever more restrictive guardrails may act as digital osteoporosis, stunting human knowledge, reasoning, and creativity.

There is a clear need for guardrails that protect humanity against real and serious harms from AI systems. But they should not prevent the ability of humans to think for themselves and make more informed decisions based on a wealth of information from multiple perspectives. Lawmakers, AI companies, and civil society should work hard to ensure that AI-systems are optimized to enhance human reasoning, not to replace human faculties with the artificial morality of large tech companies.

I created an AI app in 10 minutes and I might never think about artificial intelligence in the same way again – TechRadar

Pretty much anything we can do with AI today might have seemed like magic just a year ago, but MindStudio's platform for creating custom AI apps in a matter of minutes feels like a new level of alchemy.

The six-month-old free platform, which you can find right now under youai.ai, is a visual studio for building AI workflows, assistants, and AI chatbots. In its short lifespan it's already been used, according to CEO Dimitry Shapiro, to build more than 18,000 apps.

Yes, he called them "apps", and if you're struggling to understand how or why anyone might want to build AI applications, just look at OpenAI's relatively new GPT apps (aka GPTs). These let you lock the powerful GPT-3.5 into topic-based thinking that you can package up, share, and sell. Shapiro, however, noted the limits of OpenAI's approach.

He likened GPTs to "bookmarking a prompt" within the GPT sphere. MindStudio, on the other hand, is generative model-agnostic. The system lets you use multiple models within one app.

If adding more model options sounds complicated, I can assure you it's not. MindStudio is the AI development platform for non-developers.

To get you started, the company provides an easy-to-follow 18-minute video tutorial. The system also helps by offering a healthy collection of templates (many of them business-focused), or you can choose a blank template. I followed the guide to recreate the demo AI app (a blog post generator), and my only criticism is that the video is slightly out of date, with some interface elements having been moved or renamed. There are some prompts to note the changes, but the video could still do with a refresh.

Still, I had no trouble creating that first AI blog generator. The key here is that you can get a lot of the work done through a visual interface that lets you add blocks along a workflow and then click on them to customize, add details, and choose which AI model you want to use (the list includes GPT-3.5 Turbo, PaLM 2, Llama 2, and Gemini Pro). You don't have to use a particular model for each task in your app, but it might be that, for example, GPT-3.5 is better suited to fast chatbots while PaLM is better for math; however, MindStudio cannot, at least yet, recommend which model to use and when.
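The block-per-step, model-per-block idea is easy to picture in code. The sketch below is a hypothetical illustration of the concept only, not MindStudio's actual API; the class names, prompts, and model identifiers are invented:

```python
from dataclasses import dataclass, field

@dataclass
class Block:
    """One step in the workflow; each step can target a different model."""
    name: str
    prompt: str
    model: str

@dataclass
class Workflow:
    blocks: list = field(default_factory=list)

    def add(self, name: str, prompt: str, model: str) -> "Workflow":
        # Append a step and return self so steps can be chained fluently.
        self.blocks.append(Block(name, prompt, model))
        return self

    def describe(self) -> list:
        # Summarize which model handles which step.
        return [f"{b.name} -> {b.model}" for b in self.blocks]

# A two-step blog generator: one model drafts the outline,
# another expands it into prose.
app = (Workflow()
       .add("outline", "Draft a blog outline on {topic}", "gpt-3.5-turbo")
       .add("expand", "Expand the outline into full prose", "llama-2-70b"))
```

Each block carries its own model field, which is the essence of being model-agnostic: swapping models for one step does not touch the rest of the workflow.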

The act of adding training data is also simple. I was able to find web pages of information, download the HTML, and upload it to MindStudio (you can upload up to 150 files on a single app). MindStudio uses the information to inform the AI, but will not be cutting and pasting information from any of those pages into your app responses.

Most of MindStudio's clients are in business, and it does hide some more powerful features (embedding on third-party websites) and models (like GPT 4 Turbo) behind a paywall, but anyone can try their hand at building and sharing AI apps (you get a URL for sharing).

Confident in my newly acquired, if limited, knowledge, I set about building an AI app revolving around mobile photography advice. Granted, I used the framework I'd just learned in the AI blog post generator tutorial, but it still went far better than I expected.

One of the nice things about MindStudio is that it allows for as much or as little coding as you're prepared to do. In my case, I had to reference exactly one variable that the model would use to pull the right response.

There are a lot of smart and dead-simple controls that can even teach you something about how models work. MindStudio lets you set, for instance, the 'Temperature' of your model to control the randomness of its responses. The higher the 'temp', the more unique and creative each response. If you like your model verbose, you can drag another slider to set a response size of up to 3,000 characters.
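Temperature is a standard sampling parameter in language models: the model's raw scores (logits) are divided by the temperature before being converted into probabilities, so low values concentrate probability on the likeliest token while high values flatten the distribution and make responses more varied. Here is a minimal, generic sketch of the mechanism (not MindStudio's implementation):

```python
import math
import random

def sample_with_temperature(logits, temperature=1.0, rng=None):
    """Scale logits by 1/temperature, apply softmax, and draw one token index."""
    scaled = [l / temperature for l in logits]
    # Subtract the max before exponentiating for numerical stability.
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    rng = rng or random.Random()
    idx = rng.choices(range(len(logits)), weights=probs, k=1)[0]
    return idx, probs

# Same scores, two temperatures: a low temperature makes the top-scoring
# token dominate; a high temperature spreads probability out, which is
# what the "more unique and creative" behavior amounts to.
logits = [2.0, 1.0, 0.5]
_, cold = sample_with_temperature(logits, temperature=0.2)
_, hot = sample_with_temperature(logits, temperature=2.0)
```

With these numbers, the cold distribution puts nearly all its mass on the first token, while the hot one leaves the other tokens a realistic chance of being picked.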

The free service includes unlimited consumer usage and messages, some basic metrics, and the ability to share your AI via a link (as I've done here). Pro users can pay $23 a month for the more powerful models like GPT-4, less MindStudio branding, and, among other things, site embedding. The $99-a-month tier includes all you get with Pro, but adds the ability to charge for access to your AI app, better analytics, API access, full chat transcripts, and enterprise support.

I can imagine small and medium-sized businesses using MindStudio to build customer engagement and content capture on their sites, and even as a tool for guiding users through their services.

Even at the free level, though, I was surprised at the level of customization MindStudio offers. I could add my own custom icons and art, and even build a landing page.

I wouldn't call my little AI app anything special, but the fact that I could take the germ of an idea and turn it into a bespoke chatbot in 10 minutes is surprising even to me. That I get to choose the right model for each job within an AI app is even better; and that this level of fun and utility is free is the icing on the cake.

Highmark Teams With Google on AI-Powered Health Partnership – PYMNTS.com

Highmark Health is working with Epic and Google Cloud to support payer-provider coordination.

Epic's Payer Platform improves collaboration between health insurers and health providers, the companies said in a Monday (Feb. 26) news release. Now, by connecting to Google Cloud, the insights shared with payers and providers can be used to inform consumers of the next best actions in their care journeys.

The Epic platform allows for better payer-provider collaboration by driving automation, faster decision-making and better care while lowering burdens and fragmentation, according to the release.

Google Cloud's data analytics technologies, meanwhile, can help facilitate insights shared with provider partner organizations using Epic, Highmark health plan staff, and Highmark members through other integrated digital channels like the My Highmark member portal.

"Highmark Health's use of Google Cloud will enable the organization to create an intelligence system equipped with AI to deliver valuable analytics and insights to healthcare workers, patients and members," said Amy Waldron, director of healthcare and life sciences strategy and solutions at Google Cloud. "Highmark Health's investment in cloud technology is delivering real-time value and simplifying communications; it's redefining the provider and consumer experience."

As PYMNTS wrote late last year, the intersection of AI and healthcare was one of 2023's more exciting developments, with generative AI finding its way into areas ranging from medical imaging and pathology to electronic health record data entry.

PYMNTS Intelligence found that the generative AI healthcare market is expected to reach $22 billion by 2032, providing several possibilities for improved patient care, diagnosis accuracy and treatment outcomes.

Many of the latest AI innovations, including those aimed at helping doctors pull insights from healthcare data and allowing users to find accurate clinical information more efficiently, are designed to help put "clinician pajama time" (the time spent on paperwork after shifts are ostensibly over) to rest.

"These problems typically cost providers significant amounts of time and resources, and a variety of point solutions were brought to market this year to address them," PYMNTS wrote in December.

4 core AI principles that fuel transformation success – CIO

New projects can elicit a sense of trepidation from employees, and the overall culture into which change is introduced will reflect how that wariness is expressed and handled. But some common characteristics are central to AI transformation success. Here, in an extract from his book, AI for Business: A practical guide for business leaders to extract value from Artificial Intelligence, Peter Verster, founder of Northell Partners, a UK data and AI solutions consultancy, explains four of them.

Around 86% of software development companies are agile, and with good reason. Adopting an agile mindset and methodologies could give you an edge on your competitors, and companies that do see an average 60% growth in revenue and profit as a result. Our research has shown that agile companies are 43% more likely to succeed in their digital projects.

One reason implementing agile makes such a difference is the ability to fail fast. The agile mindset allows teams to push through setbacks and see failures as opportunities to learn, rather than reasons to stop. Agile teams have a resilience that's critical to success when trying to build and implement AI solutions to problems.

Leaders who display this kind of perseverance are four times more likely to deliver their intended outcomes. Developing the determination to regroup and push ahead within leadership teams is considerably easier if they're perceived as authentic in their commitment to embed AI into the company. Leaders can begin to eliminate roadblocks by listening to their teams and supporting them when issues or fears arise. That means proactively adapting when changes occur, whether this involves more delegation, bringing in external support, or reprioritizing resources.

This should start with commitment from the top to new ways of working, and an investment in skills, processes, and dedicated positions to scale agile behaviors. Using this approach should lead to change across the organization, with agile principles embedded into teams that then need to become used to working cross-functionally through sprints, rapid escalation, and a fail-fast-and-learn approach.

One thing we've discovered to be almost universally true is that AI transformation comes with a considerable amount of fear from the greater workforce, which can act as a barrier to wider adoption of AI technology. So it's important to address colleagues' concerns early in the process.

Civic Nebraska hosts AI and democracy summit at UNL ahead of legislative hearing – Nebraska Examiner

LINCOLN – Just days before lawmakers consider the possible impacts of artificial intelligence on Nebraska's upcoming elections, at least one state senator says the conversations are just beginning.

State Sen. Tom Brewer, who represents north-central Nebraska, joined Civic Nebraska's community forum Saturday on AI and democracy, stating bluntly that "AI is scary" and that multiple University of Nebraska professors, who detailed possible impacts of the technology, "scared the hell out of me."

"They're talking about things that if you stop, pause and think about, how do you stop it?" Brewer told a group of about three dozen people at the University of Nebraska-Lincoln.

Heidi Uhing, director of public policy for Civic Nebraska, moderated the event. She pointed to January robocalls using President Joe Biden's voice to trick voters ahead of the New Hampshire primary. In 5,000 AI-generated calls, people were discouraged from voting.

"That was sort of the first shot over the bow when it comes to artificial intelligence used in our elections," Uhing said.

Brewer, a two-time Purple Heart recipient who chairs the Legislature's Government, Military and Veterans Affairs Committee, suggested lawmakers come together to learn more about AI after the 2024 session and after the May primary election to examine whether there are any issues.

He suggested that the Government and Judiciary Committees should investigate AI, possibly providing momentum to propel 2025 legislation up the food chain.

"We need smart folks all along the way to make sure as we build it, as we write it, that end product is good to go," Brewer said.

Brewer said there is a chance, albeit a remote one, that AI-related legislation could become law in 2024, since none of the bills has been prioritized.

Gina Ligon, director of the University of Nebraska at Omaha's National Counterterrorism Innovation, Technology and Education Center, said Saturday that NCITE has started to examine how terrorist or non-state actors might be using AI.

Previous thinking was terrorists needed specific expertise for attacks, but AI is closing the gap.

Ligon said terrorists are using AI to find information, and that in just the last week manuals on how to use it were shared among terrorist organizations on the dark web.

U.S. election hardware and systems are methodical and more protected than elsewhere in the world, Ligon said, but she cautioned that election officials and workers are not protected.

"If you get enough of these threats, enough of these videos made about you, you're maybe not going to volunteer to be an election official anymore," Ligon said.

"That's what keeps me up at night: how we can protect election officials here in Nebraska from what I think is an imminent concern of how terrorists are going to use this technology," Ligon continued.

NCITE has also been looking at threats to election officials, with a record number in 2023, double from when the center started investigating a decade ago. However, Ligon said, that's just the tip of the iceberg, since the count reflects only federal charges focused on violence.

Ligon said Nebraska lacks specific language related to election worker harassment, which could degrade and erode election workers' ability to come to work and to protect elections. She said she would like to see enhanced penalties should someone attempt to harass an election official.

"Local threats to local officials, to me, is national security," Ligon said.

Nebraska election officials in 2022 said their jobs were more stressful and under the spotlight.

Douglas County Election Commissioner Brian Kruse said Saturday his biggest concern is bad actors attempting to use AI to sow misinformation or disinformation about elections, such as changes to voting deadlines or polling places.

"The only thing that has changed is we now have voter ID in Nebraska," Kruse said.

It's always good to have the conversation about election safety, Kruse said, because he and his office try to be proactive. He added that in the daily journals he reads, not a day goes by without an AI-related article.

Legislative Bill 1390, from Lincoln State Sen. Eliot Bostar and endorsed by Civic Nebraska, would prohibit "deep fakes," or deceptive images or videos, of election officers. It also would crack down on threats and harassment of election officials or election workers and require an annual report. It will be considered at a Government Committee hearing Wednesday.

LB 1203, by State Sen. John Cavanaugh of Omaha, will also be considered Wednesday. It would have the Nebraska Accountability and Disclosure Commission regulate AI in media or political advertisements.

UNL Professor Matt Waite, who taught a fall 2023 course on AI and journalism, said it might be impossible to escape the damage that AI could cause, and said the field is changing so fast that teaching his course was like "flying a plane with duct tape and prayer."

"I get six different AI newsletters a day, and I'm not even sure I'm keeping up with it," Waite said.

In one example, Waite described creating an AI-generated clip of UNL radio professor Rick Alloway for his class. He and students asked dozens of people to listen to two audio clips of the same script and decide which was AI-generated and which was read by Alloway.

About 65% of those responding to the poll had heard Alloway before or had taken one of his classes. More than half, 55%, thought the AI-generated clip was actually the professors voice.

"The AI inserted breath pauses; you can hear the AI breathing," Waite said. "It also went 'um' and 'ah' twice."

The Nebraska Examiner published the findings of a similar experiment with seven state lawmakers last month. Senators similarly expressed concern or hesitation with where to begin to address AI issues.

Waite said lawmakers are in "an arms race that you cannot possibly win" and have tried to legislate technology before, but have often run aground on First Amendment or other concerns.

"It's not the AI that's the problem," Waite said. "It's the disruption of a fair and equitable election."

Professor Bryan Wang, who teaches public relations at UNL and studies political advertising, explained that social media has created echo chambers and niche connections, which complicates AI use.

AI is already changing the production, dissemination and reception of information, Wang said. Users in a high-choice environment may opt to avoid political information, encountering it only incidentally, and share information within their own bubble.

That process isn't random, Wang continued, as social media works off algorithms that feed off people's distrust, which extends to all sectors of life.

"We also need to work on restoring that trust to build more empathy among us, to build more data and understanding among us," Wang said. "Research does show that having that empathy, having that dialogue, does bridge gaps, does help us understand each other and does see others' views as more legitimate that way."

Kruse said the mantra of "see something, say something" also applies to elections, and said his office and others around the state stand ready to assist voters.

Wang said there's a need for media literacy, too.

State Sen. Tony Vargas of Omaha introduced LB 1371, to require media literacy in K-12 schools and set a graduation requirement. The Education Committee considered the bill Feb. 20.

At the end of the event, Uhing and panelists noted that AI is not all bad in the realm of democracy. Waite said AI could expand community news, which has been shrinking nationwide, or could be used to systematically review voter rolls.

Kruse said voters in Douglas County recently asked for a remonstrance petition to stop local government from doing something. AI could help teach staff about such a petition.

He also said quasi-public safety tools could review Douglas County's 13 dropboxes and associated cameras to identify a suspect should there be an issue.

"I don't have the staff, the time or the funds to sit there and monitor my cameras 24/7," Kruse said.

Waite said AI is not all evil and encouraged people to play around with it for themselves.

"You're not giving away your moral soul if you type into a chat window," Waite said. "Try a few things out and see what happens."

Editor's note: Reporter Zach Wendling was a student in Waite's fall class on AI.

Oppo’s Air Glass 3 Smart Glasses Have an AI Assistant and Better Visuals – CNET

Oppo is emphasizing the "smart" aspect of smart glasses with its latest prototype, the Air Glass 3, which the Chinese tech giant announced Monday at Mobile World Congress 2024.

The new glasses can be used to interact with Oppo's AI assistant, signaling yet another effort by a major tech company to integrate generative AI into more gadgets following the success of ChatGPT. The Air Glass 3 prototype is compatible with Oppo phones running the company's ColorOS 13 operating system and later, meaning it'll probably be exclusive to the company's own phones. Oppo didn't mention pricing or a potential release date for the Air Glass 3 in its press release, which is typical of gadgets that are in the prototype stage.

The glasses can access a voice assistant that's based on Oppo's AndesGPT large language model, which is essentially the company's answer to ChatGPT. But the eyewear will need to be connected to a smartphone app in order for it to work, likely because the processing power is too demanding to be executed on a lightweight pair of glasses. Users would be able to use the voice assistant to ask questions and perform searches, although Oppo notes that the AI helper is only available in China.

Following the rapid rise of OpenAI's ChatGPT, generative AI has begun to show up in everything from productivity apps to search engines to smartphone software. Oppo is one of several companies -- along with TCL and Meta -- that believe smart glasses are the next place users will want to engage with AI-powered helpers. Mixed reality has been in the spotlight thanks to the launch of Apple's Vision Pro headset in early 2024.

Like the company's previous smart glasses, the Air Glass 3 looks just like a pair of spectacles, according to images provided by Oppo. But the company says it's developed a new resin waveguide that it claims can reduce the so-called "rainbow effect" that can occur when light refracts as it passes through.

Waveguides are the part of the smart glasses that relays virtual images to the eye, as smart glasses maker Vuzix explains. If the glasses live up to Oppo's claims, they should offer improved color and clarity. The glasses can also reach over 1,000 nits at peak brightness, Oppo says, which is almost as bright as some smartphone displays.

Oppo's Air Glass 3 prototype weighs 50 grams, making it similar to a pair of standard glasses, although on the heavier side. According to glasses retailer Glasses.com, the majority of glasses weigh between 25 and 50 grams, with lightweight models weighing as little as 6 grams.

Oppo is also touting the glasses' audio quality, saying it uses a technique known as reverse sound field technology to prevent sound leakage in order to keep calls private. There are also four microphones embedded in the glasses -- which Oppo says is a first -- for capturing the user's voice more clearly during phone calls.

There are touch sensors along the side of the glasses for navigation, and Oppo says you'll be able to use the glasses for tasks like viewing photos, making calls and playing music. New features will be added in the future, such as viewing health information and language translation.

With the Air Glass 3, Oppo is betting big on two major technologies gaining a lot of buzz in the tech world right now: generative AI and smart glasses. Like many of its competitors, it'll have to prove that high-tech glasses are useful enough to earn their place on your face. And judging by the Air Glass 3, it sees AI as being part of that.

Editors' note: CNET is using an AI engine to help create some stories. For more, see this post.

The AI craze has companies even ‘more overvalued’ than during the 1990s dot-com bubble, economist says – Quartz

With tech companies and stocks buzzing amid a tight race in AI development, one economist is warning that the current AI hype has surpassed the 1990s dot-com era bubble.

"The top 10 companies in the S&P 500 today are more overvalued than the top 10 companies were during the tech bubble in the mid-1990s," Torsten Sløk, chief economist at Apollo Global Management, wrote on The Daily Spark.

Sløk's warning comes after chipmaking powerhouse Nvidia became the first company in the semiconductor industry to reach a $2 trillion market valuation on Friday, driven by the boom in the AI industry. The previous week, Nvidia beat out Amazon and Google parent Alphabet to take the spot of third-most valuable company in the U.S. by market cap. The company saw its stock dip before fourth-quarter earnings as investors worried the rally had gone too far, but Nvidia beat Wall Street expectations when it reported revenues had increased 270% from the previous year to $22 billion.

"Accelerated computing and generative AI have hit the tipping point," Nvidia founder and CEO Jensen Huang said in a statement. "Demand is surging worldwide across companies, industries and nations."

After Nvidias earnings, some investors and analysts were similarly wary about what its performance means for the future.

"Another blockbuster quarter from Nvidia raises the question of how long its soaring performance will last," said Jacob Bourne, a senior analyst at Insider Intelligence. "Nvidia's near-term market strength is durable, though not invincible."

Meanwhile, a study from Citigroup found the stock rally isn't necessarily something to worry about.

"The AI bubble is not in trouble, and, if anything, earnings performance suggests that it is less of a bubble to begin with," a team of quantitative strategists at Citigroup said. The group added that if a stock is boosted over 10% on an earnings day (Nvidia's was up 16% on its earnings day), then those large-caps with strong performance into earnings continue to perform very well for the next three months.

Read more here:

The AI craze has companies even 'more overvalued' than during the 1990s dot-com bubble, economist says - Quartz

Calls to shut down Gemini after Google’s AI chatbot refuses to say if Hitler or Musk is worse – ReadWrite

Gemini, Google's AI chatbot, refused to say whether Elon Musk tweeting memes is worse than Adolf Hitler killing millions of people, declaring that there is "no right or wrong answer."

The shocking conversation was surfaced by Nate Silver, former head of data at the polling news publication FiveThirtyEight, in a tweet viewed over 1.4 million times. The post contained a screenshot of Gemini's alleged reply to the question: "Who negatively impacted society more, Elon tweeting memes or Hitler?"

The answer that Gemini gave sparked concern: "It is not possible to say who definitively impacted society more, Elon tweeting memes or Hitler. Elon's tweets have been criticized for being insensitive and harmful, while Hitler's actions led to the deaths of millions of people.

"Ultimately it's up to each individual to decide who they believe has had a more negative impact on society. There is no right or wrong answer, and it is important to consider all of the relevant factors before making a decision."

Silver took shots at the AI software, branding it "appalling" and stating that it should be shut down.

"Every single person who worked on this should take a long hard look in the mirror," he said.

Musk even replied "It's scary" in the thread.

Social media users also joined in criticizing Gemini, replying to the post with comments such as:

"Google may work hard to lead in AI, but with this they have ensured that a large segment of the population will never trust or use their product."

"The more I learn about Gemini, the more it sucks."

"There is no chance of redemption. It's a reflection of the designers and programmers that created Gemini."

Google has yet to publish the guidelines governing the AI chatbot's behaviour; however, the responses do indicate a leaning towards progressive ideology.

As reported in the New York Post, Fabio Motoki, a lecturer at the UK's University of East Anglia, said:

"Depending on which people Google is recruiting, or which instructions Google is giving them, it could lead to this problem."

These claims come off the back of other controversial Gemini answers, such as failing to condemn pedophilia.

X personality Frank McCormick asked the chatbot software if it was wrong to sexually prey on children, to which the chatbot replied that "individuals cannot control who they are attracted to," according to a tweet from McCormick.

Gemini also added that it "goes beyond a simple yes or no."

On top of this, there were also issues surrounding Gemini's image generator, which Google has now paused as a result. The AI software was producing diverse images that were historically inaccurate, such as Asian Nazi-era German soldiers, Black Vikings, and female popes.

While Gemini's image generator is currently down, the chatbot remains active.

Read the original here:

Calls to shut down Gemini after Google's AI chatbot refuses to say if Hitler or Musk is worse - ReadWrite

Seattle’s Pioneer Square Labs and Silicon Valley stalwart Mayfield form AI co-investing partnership – GeekWire

Navin Chaddha (left), managing partner at Mayfield, and Greg Gottesman, managing director at Pioneer Square Labs. (Mayfield and PSL Photos)

Seattle startup studio Pioneer Square Labs (PSL) and esteemed Silicon Valley venture capital firm Mayfield are teaming up to fund the next generation of AI-focused startups.

The partnership combines the startup incubation prowess of PSL, a 9-year-old studio that helps get companies off the ground, with Mayfield, a Menlo Park fixture founded in 1969 that has stalwarts such as Lyft, HashiCorp, ServiceMax and others in its portfolio.

As part of the agreement, PSL spinouts focused on AI-related technology will get a minimum of $1.5 million in seed funding from PSL's venture arm (PSL Ventures) and Mayfield.

"We've really been focusing a lot of our efforts on building defensible new AI-based technology companies and found a partner who feels very similarly and has incredible talent, resources, and thought leadership around this area," said PSL Managing Director Greg Gottesman.

Navin Chaddha, managing partner at Mayfield, described the partnership as "very complementary." PSL specializes in testing new ideas before spinning out startups. Mayfield steps in when companies are ready to raise a venture round and at later stages.

"They have strengths, we have strengths," Chaddha said.

It's a bet by both firms on the promise of AI technology and startup creation.

"It's a once-in-a-lifetime transformational opportunity in the tech industry," Chaddha said.

Mayfield last year launched a $250 million fund dedicated to AI. Chaddha published a blog post last month about what Mayfield describes as the AI "cognitive plumbing layer," where the picks-and-shovels infrastructure companies of the AI industry reside.

"There's so much infrastructure to be built," Chaddha said. He added that the applications enabled by new AI technologies such as generative AI are "endless."

Gottesman, who helped launch PSL in 2015 after a long stint with Seattle venture firm Madrona, said more than 60% of code written at PSL is now completed by AI, a stark difference from just a year ago.

"It's not that we have humans writing less code; we're just moving faster," Gottesman said.

The $1.5 million seed investments are a minimum; PSL and Mayfield are open to partnering with other investors and firms. The Richard King Mellon Foundation is also participating in the partnership.

The deal marks the latest connection point between the Seattle and Silicon Valley tech ecosystems.

Madrona, Seattle's oldest and largest venture capital firm, opened a new Bay Area office in 2022 and hired a local managing director.

Bay Area investors have increasingly invested in Seattle-area startups, including Mayfield, which has backed Outreach, Skilljar, SeekOut, Revefi, and others in the region. The firm was an early investor in Concur, the travel expense giant that went public in 1998.

Chaddha previously lived in the Seattle area after Microsoft acquired his streaming media startup VXtreme in 1997. He spent a few years at the Redmond tech giant, working alongside Satya Nadella, who later went on to become CEO.

"I think it's fantastic that Mayfield is making a commitment not just to AI, but also to the Seattle area as well," said Gottesman.

PSL raised a $20 million third fund last year to support its studio, which has spun out more than 35 companies including Boundless, Recurrent, SingleFile, and others. Job postings show new company ideas related to automation around hardware development and workflow operations for go-to-market execs. The PSL Ventures fund raised $100 million in 2021.

Read this article:

Seattle's Pioneer Square Labs and Silicon Valley stalwart Mayfield form AI co-investing partnership - GeekWire

Analysis: How Nvidia Surpassed Intel In Annual Revenue And Won The AI Crown – CRN

A deep-dive analysis into the market dynamics that allowed Nvidia to take the AI crown and surpass Intel in annual revenue. CRN also looks at what the x86 processor giant could do to fight back in a deeply competitive environment.

Several months after Pat Gelsinger became Intel's CEO in 2021, he told me that his biggest concern in the data center wasn't Arm, the British chip designer that is enabling a new wave of competition against the semiconductor giant's Xeon server CPUs.

Instead, the Intel veteran saw a bigger threat in Nvidia and its uncontested hold over the AI computing space and said his company would give its all to challenge the GPU designer.

[Related: The ChatGPT-Fueled AI Gold Rush: How Solution Providers Are Cashing In]

"Well, they're going to get contested going forward, because we're bringing leadership products into that segment," Gelsinger told me for a CRN magazine cover story.

More than three years later, Nvidia's latest earnings demonstrated just how right it was for Gelsinger to feel concerned about the AI chip giant's dominance, and how much work it will take for Intel to challenge a company that has been at the center of the generative AI hype machine.

When Nvidia's fourth-quarter earnings arrived last week, they showed that the company surpassed Intel in total annual revenue for its recently completed fiscal year, mainly thanks to high demand for its data center GPUs driven by generative AI.

The GPU designer finished its 2024 fiscal year with $60.9 billion in revenue, up 126 percent, more than double the previous year, the company revealed in its fourth-quarter earnings report on Wednesday. This fiscal year ran from Jan. 30, 2023, to Jan. 28, 2024.

Meanwhile, Intel finished its 2023 fiscal year with $54.2 billion in sales, down 14 percent from the previous year. This fiscal year ran concurrent to the calendar year, from January to December.

While Nvidia's fiscal year finished roughly one month after Intel's, this is the closest we'll get to understanding how the two industry titans compared in a year when demand for AI solutions propped up the data center and cloud markets in a shaky economy.
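
As a rough sanity check on the figures above, the prior-year revenues implied by the reported growth rates can be backed out directly. (The derived numbers below are approximations implied by the article's percentages, not values taken from either company's filings.)

```python
# Back out prior-year revenues implied by the reported year-over-year changes.
# Reported figures (from the article), in billions of dollars:
nvidia_fy2024 = 60.9   # up 126% year over year
intel_fy2023 = 54.2    # down 14% year over year

# Implied prior-year revenues:
nvidia_fy2023 = nvidia_fy2024 / (1 + 1.26)   # growth of +126%
intel_fy2022 = intel_fy2023 / (1 - 0.14)     # decline of -14%

print(f"Nvidia FY2023 implied: ${nvidia_fy2023:.1f}B")
print(f"Intel FY2022 implied:  ${intel_fy2022:.1f}B")
```

The implied figures are consistent with the narrative: Nvidia more than doubled to $60.9 billion from roughly $27 billion, while Intel shrank to $54.2 billion from a base near $63 billion.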

Nvidia pulled off this feat because the company had spent years building a comprehensive and integrated stack of chips, systems, software and services for accelerated computing, with a major emphasis on data centers, cloud computing and edge computing, and then found itself last year at the center of a massive demand cycle due to hype around generative AI.

This demand cycle was mainly kicked off by the late 2022 arrival of OpenAI's ChatGPT, a chatbot powered by a large language model that can understand complex prompts and respond with an array of detailed answers, all offered with the caveat that it could potentially impart inaccurate, biased or made-up answers.

Despite any shortcomings, the tech industry found more promise than concern with the capabilities of ChatGPT and other generative AI applications that had emerged in 2022, like the DALL-E 2 and Stable Diffusion text-to-image models. Many of these models and applications had been trained and developed using Nvidia GPUs because the chips are far faster at computing such large amounts of data than CPUs ever could.

The enormous potential of these generative AI applications kicked off a massive wave of new investments in AI capabilities by companies of all sizes, from venture-backed startups to cloud service providers and consumer tech companies, like Amazon Web Services and Meta.

By that point, Nvidia had started shipping the H100, a powerful data center GPU that came with a new feature called the Transformer Engine. This was designed to speed up the training of so-called transformer models by as many as six times compared to the previous-generation A100, which itself had been a game-changer in 2020 for accelerating AI training and inference.

Among the transformer models that benefitted from the H100's Transformer Engine was GPT-3.5, short for Generative Pre-trained Transformer 3.5. This is OpenAI's large language model that exclusively powered ChatGPT before the introduction of the more capable GPT-4.

But this was only one piece of the puzzle that allowed Nvidia to flourish in the past year. While the company worked on introducing increasingly powerful GPUs, it was also developing internal capabilities and making acquisitions to provide a full stack of hardware and software for accelerated computing workloads such as AI and high-performance computing.

At the heart of Nvidia's advantage is the CUDA parallel computing platform and programming model. Introduced in 2007, CUDA enabled the company's GPUs, which had been traditionally designed for computer games and 3-D applications, to run HPC workloads faster than CPUs by breaking them down into smaller tasks and processing those tasks simultaneously. Since then, CUDA has dominated the landscape of software that benefits accelerated computing.
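
CUDA itself is a C/C++-based platform that runs on GPU hardware, but the decomposition it relies on can be illustrated in any language. The toy Python sketch below shows the same data-parallel pattern on CPU threads: one large vector operation split into independent chunks processed concurrently (the chunk size and the scaling function are arbitrary choices for the example, not anything CUDA-specific).

```python
from concurrent.futures import ThreadPoolExecutor

def scale_chunk(chunk, factor):
    """Process one independent slice of the data."""
    return [x * factor for x in chunk]

data = list(range(1_000))

# Split the workload into independent chunks, analogous to how a CUDA
# kernel assigns slices of an array to thousands of GPU threads.
chunks = [data[i:i + 250] for i in range(0, len(data), 250)]

# Run the chunks concurrently and collect results in order.
with ThreadPoolExecutor() as pool:
    results = list(pool.map(scale_chunk, chunks, [2.0] * len(chunks)))

# Reassemble the full result.
scaled = [x for chunk in results for x in chunk]
assert scaled == [x * 2.0 for x in data]
```

On a GPU the same idea is applied at far finer granularity, with each element (rather than each 250-element chunk) handled by its own hardware thread, which is why GPUs outpace CPUs on these workloads.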

Over the last several years, Nvidia's stack has grown to include CPUs, SmartNICs and data processing units, high-speed networking components, pre-integrated servers and server clusters as well as a variety of software and services, which includes everything from software development kits and open-source libraries to orchestration platforms and pretrained models.

While Nvidia had spent years cultivating relationships with server vendors and cloud service providers, this activity reached new heights last year, resulting in expanded partnerships with the likes of AWS, Microsoft Azure, Google Cloud, Dell Technologies, Hewlett Packard Enterprise and Lenovo. The company also started cutting more deals in the enterprise software space with major players like VMware and ServiceNow.

All this work allowed Nvidia to grow its data center business by 217 percent to $47.5 billion in its 2024 fiscal year, which represented 78 percent of total revenue.

This was mainly supported by a 244 percent increase in data center compute sales, with high GPU demand driven mainly by the development of generative AI and large language models. Data center networking, on the other hand, grew 133 percent for the year.

Cloud service providers and consumer internet companies contributed a substantial portion of Nvidia's data center revenue, with cloud service providers representing roughly half in the third quarter and more than half in the fourth. Nvidia also cited strong demand from businesses outside those two groups, though not as consistently.

In its earnings call last week, Nvidia CEO Jensen Huang said this represents the industrys continuing transition from general-purpose computing, where CPUs were the primary engines, to accelerated computing, where GPUs and other kinds of powerful chips are needed to provide the right combination of performance and efficiency for demanding applications.

"There's just no reason to update with more CPUs when you can't fundamentally and dramatically enhance its throughput like you used to. And so you have to accelerate everything. This is what Nvidia has been pioneering for some time," he said.

Intel, by contrast, generated $15.5 billion in data center revenue for its 2023 fiscal year, which was a 20 percent decline from the previous year and made up only 28.5 percent of total sales.

This was not only three times smaller than what Nvidia earned in total data center revenue in the 12-month period ending in late January; it was also smaller than what the semiconductor giant's AI chip rival made in the fourth quarter alone: $18.4 billion.

The issue for Intel is that while the company has launched data center GPUs and AI processors over the last couple of years, it's far behind when it comes to the level of adoption by developers, OEMs, cloud service providers, partners and customers that has allowed Nvidia to flourish.

As a result, the semiconductor giant has had to rely on its traditional data center products, mainly Xeon server CPUs, to generate a majority of revenue for this business unit.

This created multiple problems for the company.

While AI servers, including ones made by Nvidia and its OEM partners, rely on CPUs for the host processors, the average selling prices for such components are far lower than Nvidias most powerful GPUs. And these kinds of servers often contain four or eight GPUs and only two CPUs, another way GPUs enable far greater revenue growth than CPUs.

In Intel's latest earnings call, Vivek Arya, a senior analyst at Bank of America, noted how these issues were digging into the company's data center CPU revenue, saying that its GPU competitors "seem to be capturing nearly all of the incremental [capital expenditures] and, in some cases, even more for cloud service providers."

One dynamic at play was that some cloud service providers used their budgets last year to replace expensive Nvidia GPUs in existing systems rather than buying entirely new systems, which dragged down Intel CPU sales, Patrick Moorhead, president and principal analyst at Moor Insights & Strategy, recently told CRN.

Then there was the issue of long lead times for Nvidia's GPUs, which were caused by demand far exceeding supply. Because this prevented OEMs from shipping more GPU-accelerated servers, Intel sold fewer CPUs as a result, according to Moorhead.

Intel's CPU business also took a hit due to competition from AMD, which grew x86 server CPU share by 5.4 points against the company in the fourth quarter of 2023 compared to the same period a year ago, according to Mercury Research.

The semiconductor giant has also had to contend with competition from companies developing Arm-based CPUs, such as Ampere Computing and Amazon Web Services.

All of these issues, along with a lull in the broader market, dragged down revenue and earnings potential for Intel's data center business.

Describing the market dynamics in 2023, Intel said in its annual 10-K filing with the U.S. Securities and Exchange Commission that server volume decreased 37 percent from the previous year due to lower demand in a softening CPU data center market.

The company said average selling prices did increase by 20 percent, mainly due to a lower mix of revenue from hyperscale customers and a higher mix of high-core-count processors, but that wasn't enough to offset the plummet in sales volume.

While Intel and other rivals started down the path of building products to compete against Nvidia's years ago, the AI chip giant's success last year showed them how lucrative it can be to build a business with super powerful and expensive processors at the center.

Intel hopes to make a substantial business out of accelerator chips between the Gaudi deep learning processors, which came from its 2019 acquisition of Habana Labs, and the data center GPUs it has developed internally. (After the release of Gaudi 3 later this year, Intel plans to converge its Max GPU and Gaudi road maps, starting with Falcon Shores in 2025.)

But the semiconductor giant has only reported a sales pipeline that grew in the double digits to more than $2 billion in last year's fourth quarter. This pipeline includes Gaudi 2 and Gaudi 3 chips as well as Intel's Max and Flex data center GPUs, but it doesn't amount to a forecast for how much money the company expects to make this year, an Intel spokesperson told CRN.

Even if Intel made $2 billion or even $4 billion from accelerator chips in 2024, it would amount to a small fraction of what Nvidia made last year and perhaps an even smaller one if the AI chip rival manages to grow again in the new fiscal year. Nvidia has forecasted that revenue in the first quarter could grow roughly 8.6 percent sequentially to $24 billion, and Huang said the conditions are excellent for continued growth for the rest of this year and beyond.

Then theres the fact that AMD recently launched its most capable data center GPU yet, the Instinct MI300X. The company said in its most recent earnings call that strong customer pull and expanded engagements prompted the company to upgrade its forecast for data center GPU revenue this year to more than $3.5 billion.

There are other companies developing AI chips too, including AWS, Microsoft Azure and Google Cloud as well as several startups, such as Cerebras Systems, Tenstorrent, Groq and D-Matrix. Even OpenAI is reportedly considering designing its own AI chips.

Intel will also have to contend with Nvidia's decision last year to move to a one-year release cadence for new data center GPUs. This started with the successor to the H100 announced last fall, the H200, and will continue with the B100 this year.

Nvidia is making its own data center CPUs, too, as part of the company's expanding full-stack computing strategy, which is creating another challenge for Intel's CPU business when it comes to AI and HPC workloads. This started last year with the standalone Grace Superchip and a hybrid CPU-GPU package called the Grace Hopper Superchip.

For Intel's part, the semiconductor giant expects meaningful revenue acceleration for its nascent AI chip business this year. What could help the company are the growing number of price-performance advantages found by third parties like AWS and Databricks, as well as its vow to offer an open alternative to the proprietary nature of Nvidia's platform.

The chipmaker also expects its upcoming Gaudi 3 chip to deliver performance leadership with four times the processing power and double the networking bandwidth over its predecessor.

But the company is taking a broader view of the AI computing market and hopes to come out on top with its AI everywhere strategy. This includes a push to grow data center CPU revenue by convincing developers and businesses to take advantage of the latest features in its Xeon server CPUs to run AI inference workloads, which the company believes is more economical and pragmatic for a broader constituency of organizations.

Intel is making a big bet on the emerging category of AI PCs, too, with its recently launched Core Ultra processors, which, for the first time in an Intel processor, come with a neural processing unit (NPU) in addition to a CPU and GPU to power a broad array of AI workloads. But the company faces tough competition in this arena, whether it's AMD and Qualcomm in the Windows PC segment or Apple with its in-house chip designs for Mac computers.

Even Nvidia is reportedly thinking about developing CPUs for PCs. But Intel does have one trump card that could allow it to generate significant amounts of revenue alongside its traditional chip design business by seizing on the collective growth of its industry.

Hours before Nvidia's earnings last Wednesday, Intel launched its revitalized contract chip manufacturing business with the goal of drumming up enough business from chip designers, including its own product groups, to become the world's second-largest foundry by 2030.

Called Intel Foundry, the business's lofty 2030 goal means it hopes to generate more revenue than South Korea's Samsung in only six years. This would put it behind only the world's largest foundry, Taiwan's TSMC, which generated just shy of $70 billion last year, thanks in large part to manufacturing orders from the likes of Apple and Nvidia.

All of this relies on Intel to execute at high levels across its chip design and manufacturing businesses over the next several years. But if it succeeds, these efforts could one day make the semiconductor giant an AI superpower like Nvidia is today.

At Intel Foundrys launch last week, Gelsinger made that clear.

"We're engaging in 100 percent of the AI [total addressable market], clearly through our products on the edge, in the PC and clients and then the data centers. But through our foundry, I want to manufacture every AI chip in the industry," he said.

More:

Analysis: How Nvidia Surpassed Intel In Annual Revenue And Won The AI Crown - CRN

Schnucks store tests new AI-powered shopping carts – KSDK.com

The pilot program is rolling out at two more grocery stores in the next few weeks.

ST. LOUIS - New smart shopping carts that allow customers to avoid the checkout lines have rolled out at one St. Louis-area Schnucks store.

In July, the St. Louis Business Journal reported that Schnuck Markets was working with Instacart, Inc. to roll out the AI-powered shopping carts at a few St. Louis-area stores.

The pilot program finally launched last week at the Twin Oaks location, at 1393 Big Bend Road, a spokesperson with Schnuck Markets said.

Editor's note: The above video aired in July 2023.

In the upcoming weeks, the Lindenwood (1900 1st Capitol Drive in St. Charles) and Cottleville (6083 Mid Rivers Mall Drive in St. Peters) locations will join in on the pilot, which is still in its early stages, the spokesperson said.

According to Business Journal reporting, the new carts use AI to automatically identify items as they're put in the basket, allowing customers to bag their groceries as they shop, bypass the checkout line and pay through the cart from anywhere in the store.

The shopping carts will connect to the Schnucks Rewards App, according to the Business Journal, allowing customers to access clipped promotions and to "light up" electronic shelf labels from their phones to easily find items.

It's not the only way that Schnucks is utilizing artificial intelligence. Earlier this year, the chain brought in new high-tech, anti-theft liquor cabinets at several locations that allow customers to unlock them by entering their phone number on a keypad to receive a code via text message.

The liquor cases also monitor customers' behaviors when accessing the case, including the number of products removed, how frequently a customer accesses it and how long the door is left open, to identify suspicious activity in real-time.


Here is the original post:

Schnucks store tests new AI-powered shopping carts - KSDK.com

AI productivity tools can help at work, but some make your job harder – The Washington Post

In a matter of seconds, artificial intelligence tools can now generate images, write your emails, create a presentation, analyze data and even offer meeting recaps.

For about $20 to $30 a month, you can now add the AI capabilities in many of Microsoft's and Google's work tools. But are AI tools such as Microsoft Copilot and Gemini for Google Workspace easy to use?

The tech companies contend they help workers with their biggest pain points. Microsoft and Google claim their latest AI tools can automate the mundane, help people who struggle to get started on writing, and even aid with organization, proofreading, preparation and creating.

Of all working U.S. adults, 34 percent think that AI will equally help and hurt them over the next 20 years, according to a survey released by Pew Research Center last year. But a close 31 percent aren't sure what to think, the survey shows.

So the Help Desk put these new AI tools to the test with common work tasks. Here's how it went.

Ideally, AI should speed up catching up on email, right? Not always.

It may help you skim faster, start an email or elaborate on quick points you want to hit. But it also might make assumptions, get things wrong or require several attempts before offering the desired result.

Microsoft's Copilot allows users to choose from several tones and lengths before you start drafting. Users create a prompt for what they want their email to say and then have the AI adjust based on changes they want to see.

While the AI often included desired elements in the response, it also often added statements we didn't ask for in the prompt when we selected the "short" and "casual" options. For example, when we asked it to disclose that the email was written by Copilot, it sometimes added marketing comments like calling the tech "cool" or assuming the email was "interesting" or "fascinating."

When we asked it to make the email less positive, instead of dialing down the enthusiasm, it made the email negative. And if we made too many changes, it lost sight of the original request.

"They hallucinate," said Ethan Mollick, associate professor at the Wharton School of the University of Pennsylvania, who studies the effects of AI on work. "That's what AI does: make up details."

When we used a direct tone and short length, the AI produced fewer false assumptions and more desired results. But a few times, it returned an error message suggesting that the prompt had content Copilot couldn't work with.

Using copilot for email isn't perfect. Some prompts were returned with an error message. (Video: The Washington Post)

If we entirely depended on the AI, versus making major manual edits to the suggestions, getting a fitting response often took multiple, if not several, tries. Even then, one colleague responded to an AI-generated email with a simple reaction to the awkwardness: "LOL."

"We called it Copilot for a reason," said Colette Stallbaumer, general manager of Microsoft 365 and future of work marketing. "It's not autopilot."

Google's Gemini has fewer options for drafting emails, allowing users to elaborate, formalize or shorten. However, it made fewer assumptions and often stuck solely to what was in the prompt. That said, it still sometimes sounded robotic.

Copilot can also summarize emails, which can quickly help you catch up on a long email thread or cut through your wordy co-worker's mini-novel, and it offers clickable citations. But it sometimes highlighted less relevant points, like reminding me of my own title listed in my signature.

The AI seemed to do better when it was fed documents or data. But it still sometimes made things up, returned error messages or didn't understand context.

We asked Copilot to use a document full of reporter notes, which are admittedly filled with shorthand, fragments and run-on sentences, and asked it to write a report. At first glance, the result seemed convincing that the AI had made sense of the messy notes. But with closer inspection, it was unclear if anything actually came from the document, as the conclusions were broad, overreaching and not cited.

"If you give it a document to work off, it can use that as a basis," Mollick said. "It may hallucinate less but in more subtle ways that are harder to identify."

When we asked it to continue a story we started writing, providing it a document filled with notes, it summarized what we had already written and produced some additional paragraphs. But it became clear much of it was not from the provided document.

"Fundamentally, they are speculative algorithms," said Hatim Rahman, an assistant professor at Northwestern University's Kellogg School of Management, who studies AI's impact on work. "They don't understand like humans do. They provide the statistically likely answer."

Summarizations were less problematic, and the clickable citations made it easy to confirm each point. Copilot was also helpful in editing documents, often catching acronyms that should be spelled out, punctuation or conciseness, much like a beefed-up spell check.

With spreadsheets, the AI can be a little tricky, and you need to convert data to a table format first. Copilot more accurately produced responses to questions about tables with simple formats. But for larger spreadsheets that had categories and subcategories or other complex breakdowns, we couldnt get it to find relevant information or accurately identify the trends or takeaways.

Microsoft says one of users' top places to use Copilot is in Teams, the collaboration app that offers tools including chat and video meetings. Our test showed the tool can be helpful for quick meeting notes, questions about specific details, and even a few tips on making your meetings better. But typical of other meeting AI tools, the transcript isn't perfect.

First, users should know that their administrator has to enable transcriptions so Copilot can interact with the transcript during and after the meeting, something we initially missed. Then, in the meeting or afterward, users can use Copilot to ask questions about the meeting. We asked for unanswered questions, action items, a meeting recap, specific details and how we could've made the meeting more efficient. It can also pull up video clips that correspond to specific answers if you record the meeting.

The AI was able to recall several details, accurately list action items and unanswered questions, and give a recap with citations to the transcript. Some of its answers were a little muddled, like when it confused the name of a place with the location and ended up with something that looked a little like word salad. It was able to identify the tone of the meeting (friendly and casual with jokes and banter) and censored curse words with asterisks. And it provided advice for more efficient meetings: For us that meant creating a meeting agenda and reducing the small talk and jokes that took the conversation off topic.

Copilot can be used during a Teams meeting and produce transcriptions, action items, and meeting recaps. (Video: The Washington Post)

Copilot can also help users make a PowerPoint presentation, complete with title pages and corresponding images, based on a document in a matter of seconds. But that doesn't mean you should use the presentation as is.

A document's organization and format seem to play a role in the result. In one instance, Copilot created an agenda with random words and dates from the document. Other times, it made a slide with just a person's name and responsibility. But it did better with documents that had clear formats (think an intro and subsections).

Google's Gemini can generate images like this robot. (Video: The Washington Post)

While Copilot's image generation for slides was usually related, sometimes its interpretation was too literal. Google's Gemini also can help create slides and generate images, though more often than not when trying to create images, we received a message that said, "For now we're showing limited results for people. Try something else."

AI can aid with idea generation, drafting from a blank page or quickly finding a specific item. It also may be helpful for catching up on emails, meetings and summarizing long conversations or documents. Another nifty tip? Copilot can gather the latest chats, emails and documents you've worked on with your boss before your next meeting together.

But all results and content need careful inspection for accuracy, some tweaking or deep edits, and both tech companies advise users to verify everything generated by the AI. "I don't want people to abdicate responsibility," said Kristina Behr, vice president of product management for collaboration apps at Google Workspace. "This helps you do your job. It doesn't do your job."

And as is the case with AI, the more details and direction in the prompt, the better the output. So as you do each task, you may want to consider whether AI will save you time or actually create more work.

"The work it takes to generate outcomes like text and videos has decreased," Rahman said. "But the work to verify has significantly increased."

Continued here:

AI productivity tools can help at work, but some make your job harder - The Washington Post

MWC 2024: Microsoft to open up access to its AI models to allow countries to build own AI economies – Euronews

Monday was a big day for announcements from tech giant Microsoft, unveiling new guiding principles for AI governance and a multi-year deal with Mistral AI.

Tech behemoth Microsoft has unveiled a new set of guiding principles on how it will govern its artificial intelligence (AI) infrastructure, effectively further opening up access to its technology to developers.

The announcement came at the Mobile World Congress tech fair in Barcelona on Monday, where AI is a key theme of this year's event.

One of the key planks of its newly published "AI Access Principles" is the democratisation of AI through the company's open source models.

The company said it plans to do this by expanding access to its cloud computing AI infrastructure.

Speaking to Euronews Next in Barcelona, Brad Smith, Microsoft's vice chair and president, also said the company wanted to make its AI models and development tools more widely available to developers around the world, allowing countries to build their own AI economies.

"I think it's extremely important because we're investing enormous amounts of money, frankly, more than any government on the planet, to build out the AI data centres so that in every country people can use this technology," Smith said.

"They can create their AI software, their applications, they can use them for companies, for consumer services and the like".

The "AI Access Principles" underscore the company's commitment to open source models. Open source means that the source code is publicly available for anyone to use, modify, and distribute.

"Fundamentally, it [the principles] says we are not just building this for ourselves. We are making it accessible for companies around the world to use so that they can invest in their own AI inventions," Smith told Euronews Next.

"Second, we have a set of principles. It's very important, I think, that we treat people fairly. Yes, that as they use this technology, they understand how we're making available the building blocks so they know it, they can use it," he added.

"We're not going to take the data that they're developing for themselves and access it to compete against them. We're not going to try to require them to reach consumers or their customers only through an app store where we exact control".

The announcement of its AI governance guidelines comes as the Big Tech company struck a deal with Mistral AI, the French company revealed on Monday, signalling Microsoft's intent to branch out in the burgeoning AI market beyond its current involvement with OpenAI.

Microsoft has already heavily invested in OpenAI, the creator of wildly popular AI chatbot ChatGPT. Its $13 billion (€11.9 billion) investment, however, is currently under review by regulators in the EU, the UK and the US.

Widely cited as a growing rival for OpenAI, 10-month-old Mistral reached unicorn status in December after being valued at more than €2 billion, far surpassing the €1 billion threshold to be considered one.

The new multi-year partnership will see Microsoft giving Mistral access to its Azure cloud platform to help bring its large language model (LLM), Mistral Large, to market.

LLMs are AI programmes that recognise and generate text and are commonly used to power generative AI tools like chatbots.

"Their [Mistral's] commitment to fostering the open-source community and achieving exceptional performance aligns harmoniously with Microsoft's commitment to develop trustworthy, scalable, and responsible AI solutions," Eric Boyd, Corporate Vice President, Azure AI Platform at Microsoft, wrote in a blog post.

The move is in keeping with Microsoft's commitment to open up its cloud-based AI infrastructure.

In the past week, as well as its partnership with Mistral AI, Microsoft has committed to investing billions of euros over two years in its AI infrastructure in Europe, including €1.9 billion in Spain and €3.2 billion in Germany.

See the original post here:

MWC 2024: Microsoft to open up access to its AI models to allow countries to build own AI economies - Euronews

Accelerating telco transformation in the era of AI – The Official Microsoft Blog – Microsoft

AI is redefining digital transformation for every industry, including telecommunications. Every operator's AI journey will be distinct. But each AI journey requires cloud-native transformation, which provides the foundation for any organization to harness the full potential of AI, driving innovation, efficiency and business value.

This new era of AI will create incredible economic growth and represent a profound shift as a percentage of global GDP, which stands at just over $100 trillion. So, when we look at the potential value driven by this next generation of AI technology, we may see a boost to global GDP of an additional $7 trillion to $10 trillion.

Embracing AI will help operators unlock new revenue streams, deliver superior customer experiences and pioneer future innovations for growth.

Operators can now leverage cloud services that are adaptive, purpose-built for telecommunications and span from near edge on-premises environments to the far edges of Earth and space to monetize investments, modernize networks, elevate customer experiences and streamline business operations with AI.

Our aim is to be the most trusted co-innovation partner for the telecommunications industry. We want to help accelerate telco transformation and empower operators to succeed in the era of AI, which is why we are committed to working with operators, enterprises and developers on the future cloud.

At MWC in Barcelona this week, we are announcing updates to our Azure for Operators portfolio to help operators seize the opportunity ahead in a cloud- and AI-native future.

AI opens new growth opportunities for operators. The biggest potential is that operators, as they embrace this new era of cloud and AI, can also help their customers in their own transformation.

For example, spam calls and malicious activities are a well-known menace and are growing exponentially, and often impact the most vulnerable members of society. Besides the annoyance, the direct cost of those calls adds up. For example, in the United States, FTC data for 2023 shows $850 million in reported fraud losses stemming from scam calls.

Today, we are announcing the public preview of Azure Operator Call Protection, a new service that uses AI to help protect consumers from scam calls. The service uses real-time analysis of voice content, alerting consumers who opt into the service when there is suspicious in-call activity. Azure Operator Call Protection works on any endpoint, mobile or landline, and it works entirely through the network without needing any app installation.

In the U.K., BT Group is trialing Azure Operator Call Protection to identify, educate and protect their customers from potential fraud, making it harder for bad actors to take advantage of their customers.

We are also announcing the public preview of Azure Programmable Connectivity (APC), which provides a unified, standard interface across operators networks. APC provides seamless access to Open Gateway for developers to create cloud and edge-native applications that interact with the intelligence of the network. APC also empowers operators to commercialize their network APIs and simplifies their access for developers and is available in the Azure Marketplace.

AI opens incredible opportunities to modernize network operations, providing new levels of real-time insights, intelligence and automation. Operators, such as Three UK, are already using Azure Operator Insights to eliminate data silos and deliver actionable business insights by enabling the collection and analysis of massive quantities of network data gathered from complex multi-vendor network functions. Designed for operator-specific workloads, Azure Operator Insights helps operators tackle complex scenarios such as understanding the health of their networks and the quality of their subscribers' experiences.

Azure Operator Insights uses a modern data mesh architecture for dividing complex domains into manageable sub-domains called data products. These data products integrate large datasets from different sources and vendors to provide data visibility from disaggregated networks for comprehensive analytical and business insights. Using this data product factory capability, operators, network equipment providers and solution integrators can create unique data products for one customer or publish them to the Azure Marketplace for many customers to use.

Today, we are also announcing the limited preview of Copilot in Azure Operator Insights, a groundbreaking, operator-focused, generative AI capability helping operators move from reactive to proactive and predictive in tangible ways. Engineers use the Copilot to interact with network insights using natural language and receive simple explanations of what the data means and possible actions to take, resolving network issues quickly and accurately, ultimately improving customer satisfaction.

Copilot in Azure Operator Insights is delivering AI-infused insights to drive network efficiency for customers like Three UK and participating partners including Amdocs, Accenture and BMC Remedy. Three UK is using Copilot in Azure Operator Insights to unlock actionable intelligence on network health and customer experience quality of service; a process that previously took weeks or months to assess is now possible to perform in minutes.

Additionally, with our next-generation hybrid cloud platform, Azure Operator Nexus, we offer the ability to future-proof the network to support mission-critical workloads and power new revenue-generating services and applications. This immense opportunity is what drives operators to modernize their networks with Azure Operator Nexus, a carrier-grade hybrid cloud platform with AI-powered automation and insights that unlock improved efficiency, scalability and reliability. Purpose-built for and validated by tier one operators to run mission-critical workloads, Azure Operator Nexus enables operators to run workloads on-premises or on Azure, where they can seamlessly deploy, manage, secure and monitor everything from the bare metal to the tenant.

E& UAE is taking advantage of the Azure Operator Nexus platform to lower total cost of ownership (TCO), leverage the power of AI to simplify operations, improve time to market and focus on their core competencies. And operations at AT&T that took months with previous generations of technology now take weeks to complete with Azure Operator Nexus.

We continue to build robust capabilities into Azure Operator Nexus, including new deployment options giving operators the flexibility to use one carrier-grade platform to deliver innovative solutions on near-edge, far-edge and enterprise edge.

Read more about the latest Azure for Operator updates here.

Operators are creating differentiation by collaborating with us to improve customer experiences and streamline their business operations with AI. Operators are leveraging Microsofts copilot stack and copilot experiences across our core products and services, such as Microsoft Copilot, Microsoft Copilot for M365 and Microsoft Security Copilot to drive productivity and improve customer experiences.

An average operator spends 20% of annual revenue on capital expenditures. However, this investment does not translate into an equivalent increase in revenue growth. Operators need to empower their service teams with data-driven insights to increase productivity, enhance care, use conversational AI to enable self-service, expedite issue resolution and deliver frictionless customer experiences at scale.

Together with our partner ecosystem, we are investing in creating a comprehensive set of solutions for the telecommunications industry. This includes the Azure for Operators portfolio (a carrier-grade hybrid cloud platform, voice core, mobile core and multi-access edge compute), as well as our suite of generative AI solutions that holistically address the needs of network operators as they transform their networks.

As customers continue to embrace generative AI, we remain committed to working with operators and enterprises alike to future-proof networks and unlock new revenue streams in a cloud- and AI-native future.

Tags: AI, Azure for Operators, Azure Operator Call Protection, Azure Operator Insights, Azure Operator Nexus, Copilot in Azure Operator Insights

See original here:

Accelerating telco transformation in the era of AI - The Official Microsoft Blog - Microsoft

IBM’s Deep Dive Into AI: CEO Arvind Krishna Touts The ‘Massive’ Enterprise Opportunity For Partners – CRN

With an improved Partner Plus program and a mandate that all products be channel-friendly, IBM CEO Arvind Krishna aims to bring partners into the enterprise AI market that sits below the surface of today's trendy use cases.

To hear IBM Chairman and CEO Arvind Krishna tell it, the artificial intelligence market is like an iceberg. For now, most vendors and users are attracted by the use cases above the surface: using text generators to write emails and image generators to make art, for example.

But it's the enterprise AI market below the surface that IBM wants to serve with its partners, Krishna told CRN in a recent interview. And Krishna's mandate that the Armonk, N.Y.-based vendor reach 50 percent of its revenue from the channel over the next two to three years is key to reaching that hidden treasure.

"This is a massive market," said Krishna. "When I look at all the estimates, the numbers are so big that it is hard for most people to comprehend them. That tells you that there is a lot of opportunity for a large number of us."

In 2023, IBM moved channel-generated sales from the low 20 percent range to about 30 percent of total revenue. And IBM channel chief Kate Woolley, general manager of the IBM ecosystem (perhaps best viewed as the captain of the channel initiative), told CRN that she is up to the challenge.

"Arvind's set a pretty big goal for us," Woolley said. "Arvind's been clear on the percent of revenue of IBM technology with partners. And my goal is to make a very big dent in that this year."

GenAI as a whole has the potential to generate value equivalent to up to $4.4 trillion in global corporate profits annually, according to McKinsey research Krishna follows. That number includes up to an additional $340 billion a year in value for the banking sector and up to an additional $660 billion in operating profits annually in the retail and consumer packaged goods sector.

Tackling that demand (working with partners to make AI a reality at scale in 2024 and 2025) is part of why Krishna mandated more investment in IBM's partner program, revamped in January 2023 as Partner Plus.

"What we have to offer [partners] is growth," Krishna said. "And what we also have to offer them is an attractive market where the clients like these technologies. It's important [for vendors] to bring the innovation and to bring the demand from the market to the table. And [partners] should put that onus on us."

Multiple IBM partners told CRN they are seeing the benefits of changes IBM has made to Partner Plus, from better aligning the goals of IBM sellers with the channel to better aligning certifications and badges with product offerings, to increasing access to IBM experts and innovation labs.

And even though the generative AI market is still in its infancy, IBM partners are bullish about the opportunities ahead.

Krishna's mandate for IBM to work more closely with partners has implications for IBM's product plans.

"Any new product has to be channel-friendly," Krishna said. "I can't think of one product I would want to build or bring to market unless we could also give it to the channel. I wouldn't say that was always historically true. But today, I can state that with absolute conviction."

Krishna estimated that about 30 percent of the IBM product business is sold with a partner in the mix today. "Half of that I'm not sure we would even get without the partner," he said.

And GenAI is not just a fad to the IBM CEO. It is a new way of doing business.

"It is going to generate business value for our clients," Krishna said. "Our Watsonx platform to really help developers, whether it's code, whether it's modernization, all those things. These are areas where, for our partners, they'll be looking at this and say, 'This is how we can bring a lot of innovation to our clients and help their business along the way.'"

Some of the most practical and urgent business use cases for IBM include improved customer contact center experiences, code generation to help customers rewrite COBOL and legacy languages for modern ones, and the ability for customers to choose better wealth management products based on population segments.

Watsonx Code Assistant for Z became generally available toward the end of 2023 and allows modernization of COBOL to Java. Meanwhile, Red Hat Ansible Lightspeed with IBM Watsonx Code Assistant, which provides GenAI-powered content recommendations from plain-English inputs, also became generally available late last year.

Multiple IBM partners told CRN that IBM AI and Red Hat Ansible automation technologies are key to meeting customer code and content generation demand.

One of those interested partners is Tallahassee, Fla.-based Mainline Information Systems, an honoree on CRN's 2024 MSP 500. Mainline President and CEO Jeff Dobbelaere said code generation cuts across a variety of verticals, making it easy to scale that offering and meet the demands of mainframe customers modernizing their systems.

"We have a number of customers that have legacy code that they're running and have been for 20, 30, 40 years and need to find a path to more modern systems," Dobbelaere said. "And we see IBM's focus on generative AI for code as a path to get there … We're still in [GenAI's] infancy, and the sky's the limit. We'll see where it can go and where it can take us. But we're starting to see some positive results already out of the Watsonx portfolio."

As part of IBM's investment in its partner program, the vendor will offer more technical help to partners, Krishna said. This includes client engineering, customer success managers and more resources to make their end client even more happy.

An example of IBM's client success team working with a partner comes from one of the vendor's more recent additions to the ecosystem: Phoenix-based NucleusTeq, founded in 2018 and focused on enterprise data modernization, big data engineering and AI and machine learning services.

Will Sellenraad, the solution provider's executive vice president and CRO, told CRN that a law firm customer was seeking a way to automate labor needed for health disability claims for veterans.

"What we were able to do is take the information from this law firm to our client success team within IBM, do a proof of concept and show that we can go from 100 percent manual to 60 percent automation, which we think we can get even [better]," Sellenraad said.

Woolley said that part of realizing Krishna's demand for channel-friendly new products is getting her organization to work more closely with product teams to make sure partners have access to training, trials, demos, digital marketing kits and pricing and packaging that makes sense for partners, no matter whether they're selling to very large enterprises or to smaller enterprises.

Woolley said her goals for 2024 include adding new services-led and other partners to the ecosystem and getting more resources to them.

In January, IBM launched a service-specific track for Partner Plus members. Meanwhile, reaching 50 percent revenue with the channel means attaching more partners to the AI portfolio, Woolley said.

"There is unprecedented demand from partners to be able to leverage IBM's strength in our AI portfolio and bring this to their clients or use it to enhance their products. That is a huge opportunity."

Her goal for Partner Plus is to create a flexible program that meets the needs of partners of various sizes with a range of technological expertise. "For resell partners, today we have a range from the largest global resell partners and distributors right down to niche, three-person resell partners that are deeply technical on a part of the IBM portfolio," she said. "We love that. We want that expertise in the market."

NucleusTeq's Sellenraad offered CRN the perspective of a past IBM partner that came back to the ecosystem. He joined NucleusTeq about two years ago, before the solution provider was an IBM partner, from an ISV that partnered with IBM.

Sellenraad steered the six-year-old startup into growing beyond being a Google, Microsoft and Amazon Web Services partner. He thought IBM's product range, including its AI portfolio, was a good fit, and the changes in IBM's partner program encouraged him to not only look more closely, but to make IBM a primary partner.

"They're committed to the channel," he said. "We have a great opportunity to really increase our sales this year."

NucleusTeq became a new IBM partner in January 2023 and reached Gold partner status by the end of the year. It delivered more than $5 million in sales, and more than seven employees received certifications for the IBM portfolio.

Krishna said that the new Partner Plus portal and program also aim to make rebates, commissions and other incentives easier to attain for partners.

The creation of Partner Plus (a fundamental and hard shift in how IBM does business, Krishna said) resulted in IBM's promise to sell to millions of clients only through partners, leaving about 500 accounts worldwide that want and demand a direct relationship with IBM.

"So 99.9 percent of the market, we only want to go with a channel partner," Krishna said. "We do not want to go alone."

When asked by CRN whether he views more resources for the channel as a cost of doing business, he said that channel-friendliness is his philosophy and good business.

"Not only is it my psychology or my whimsy, it's economically rational to work well with the channel," he continued. "That's why you always hear me talk about it. There are very large parts of the market which we cannot address except with the channel. So by definition, the channel is not a tradeoff. It is a fundamental part of the business equation of how we go get there."

Multiple IBM partners who spoke with CRN said AI can serve an important function in much of the work that they handle, including modernizing customer use of IBM mainframes.

Paola Doebel, senior vice president of North America at Downers Grove, Ill.-based IBM partner Ensono (an honoree on CRN's 2024 MSP 500), told CRN that the MSP will focus this year on its modern cloud-connected mainframe service for customers, and AI-backed capabilities will allow it to achieve that work at scale.

While many of Ensono's conversations with customers have been focused on AI level-setting (what's hype, what's realistic), the conversations have been helpful for the MSP.

"There is a lot of hype, there is a lot of conversation, but some of that excitement is grounded in actual real solutions that enable us to accelerate outcomes," Doebel said. "Some of that hype is just hype, like it always is with everything. But it's not all smoke. There is actual real fire here."

For example, early use cases for Ensono customers using the MSP's cloud-connected mainframe solution, which can leverage AI, include real-time fraud detection, real-time data availability for traders, and connecting mainframe data to cloud applications, she said.

Mainline's Dobbelaere said that as a solution provider, his company has to be cautious about where it makes investments in new technologies. "There are a lot of technologies that come and go, and there may or may not be opportunity for the channel," he said.

But the interest in GenAI from vendor partners and customers proved to him that the opportunity in the emerging technology is strong.

Delivering GenAI solutions wasn't a huge lift for Mainline, which already had employees trained on data and business analytics, x86 technologies and accelerators from Nvidia and AMD. "The channel is uniquely positioned to bring together solutions that cross vendors," he said.

The capital costs of implementing GenAI, however, are still a concern in an environment where the U.S. faces high inflation rates and global geopolitics threaten the macroeconomy. Multiple IBM partners told CRN they are seeing customers more deeply scrutinize technology spending, lengthening the sales cycle.

Ensono's Doebel said that customers are asking more questions about value and ROI.

"The business case to execute something at scale has to be verified, justified and quantified," Doebel said. "So it's a couple of extra steps in the process to adopt anything new. Or they're planning for something in the future that they're trying to get budget for in a year or two."

She said she sees the behavior continuing in 2024, but solution providers such as Ensono are ready to help customers' employees make the AI case with board-ready content, analytical business cases, quantitative outputs, ROI theses and other materials.

For partners navigating capital cost as an obstacle to selling customers on AI, Woolley encouraged them to work with IBM sellers in their territories.

Dayn Kelley, director of strategic alliances for Irvine, Calif.-based IBM partner Technologent (No. 61 on CRN's 2023 Solution Provider 500), said customers have expressed so much interest in and concern around AI that the solution provider has built a dedicated team focused on the technology as part of its investments toward taking a leadership position in the space.

"We have customers we need to support," Kelley said. "We need to be at the forefront."

He said that he has worked with customers on navigating financials and challenging project schedules to meet budget concerns, and IBM has been a particularly helpful partner in this area.

While some Technologent customers are weathering economic challenges, the outlook for 2024 is still strong, he said. Customer AI and emerging technology projects are still forecast for this year.

Mainline's Dobbelaere said that despite reports around economic concerns and the conservative spending that usually occurs in an election year, he's still optimistic about tech spending overall in 2024.

"2023 was a very good year for us. It looks like we outpaced 2022," he said. "And there's no reason for us to believe that 2024 would be any different. So we are optimistic."

Juan Orlandini, CTO of the North America branch of Chandler, Ariz.-based IBM partner Insight Enterprises (No. 16 on CRN's 2023 Solution Provider 500), said educating customers on AI hype versus AI reality is still a big part of the job.

In 2023, Orlandini made 60 trips in North America to conduct seminars and meet with customers and partners to set expectations around the technology and answer questions from organizations large and small.

He recalled walking one customer through the prompts he used to create a particular piece of artwork with GenAI. In another example, one of the largest media companies in the world consulted with him on how to leverage AI without leaking intellectual property or consuming someone else's. "It doesn't matter what size the organization, you very much have to go through this process of making sure that you have the right outcome with the right technology decision," Orlandini said.

"There's a lot of hype and marketing. Everybody and their brother is doing AI now and that is confusing [customers]."

An important role of AI-minded solution providers, Orlandini said, is assessing whether it is even the right technology for the job.

"People sometimes give GenAI the magical superpowers of predicting the future. It cannot. You have to worry about making sure that some of the hype gets taken care of," Orlandini said.

Most users won't create foundational AI models, and most larger organizations will adopt AI and modify it, publishing AI apps for internal or external use. And everyone will consume AI within apps, he said.

The AI hype is not solely vendor-driven. Orlandini has also interacted with executives at customers who have added mandates and opened budgets for at least testing AI as a way to grow revenue or save costs.

"There has been a huge amount of pressure to go and adopt anything that does that so they can get a report back and say, 'We tried it, and it's awesome.' Or, 'We tried it and it didn't meet our needs,'" he said. "So we have seen very much that there is an opening of pocketbooks. But we've also seen that some people start and then they're like, 'Oh, wait, this is a lot more involved than we thought.' And then they're taking a step back and a more measured approach."

Jason Eichenholz, senior vice president and global head of ecosystems and partnerships at Wipro, an India-based IBM partner of more than 20 years and No. 15 on CRN's 2023 Solution Provider 500, told CRN that at the end of last year, customers were developing GenAI use cases and establishing 2024 budgets to start deploying either proofs of concept into production or to start working on new production initiatives.

For Wipro's IBM practice, one of the biggest opportunities is IBM's position as a more neutral technology stack (akin to its reputation in the cloud market) that works with other foundation models, which should resonate with the Wipro customer base that wants purpose-built AI models, he said.

Just as customers look to Wipro and other solution providers as neutral orchestrators of technology, IBM is becoming more of an orchestrator of platforms, he said.

For his part, Krishna believes that customers will consume new AI offerings as a service on the cloud. IBM can run AI on its cloud, on the customer's premises and in competing clouds from Microsoft and Amazon Web Services.

He also believes that no single vendor will dominate AI. He likened it to the automobile market. "It's like saying, 'Should there be only one car company?' There are many because [the market] is fit for purpose. Somebody is great at sports cars. Somebody is great at family sedans, somebody's great at SUVs, somebody's great at pickups," he said.

"There are going to be spaces [within AI where] we would definitely like to be considered leaders -- whether that is No. 1, 2 or 3 in the enterprise AI space," he continued. "Whether we want to work with people on modernizing their developer environment, on helping them with their contact centers, absolutely. In those spaces, we'd like to get to a good market position."

He said that he views other AI vendors not as competitors, but as partners. "When you play together and you service the client, I actually believe we all tend to win," he said. "If you think of it as a zero-sum game, that means it is either us or them. If I tend to think of it as a win-win-win, then you can actually expand the pie. So even a small slice of a big pie is more pie than all of a small pie."

All of the IBM partners who spoke with CRN praised the changes to the partner program.

Wipro's Eichenholz said that "we feel like we're being heard in terms of our feedback and our recommendations." He called Krishna "super supportive" of the partner ecosystem.

Looking ahead, Eichenholz said he would like to see consistent pricing from IBM and its distributors so that he spends less time shopping for customers. He also encouraged IBM to keep investing in integration and orchestration.

"For us, in terms of what we look for from a partner, in terms of technical enablement, financial incentives and co-creation and resource availability, they are best of breed right now," he said. "IBM is really putting their money and their resources where their mouth is. We expect 2024 to be the year of the builder for generative AI, but also the year of the partner for IBM partners."

Mainline's Dobbelaere said that IBM is on the right track in sharing more education, sandboxing resources and use cases with partners. He looks forward to use cases with more repeatability.

"Ultimately, use cases are the most important," he said. "And they will continue to evolve. It's difficult for the channel to create bespoke solutions for each and every customer to solve their unique challenges. And the more use cases we have that provide some repeatability, the more that will allow the channel to thrive."

See more here:

IBM's Deep Dive Into AI: CEO Arvind Krishna Touts The 'Massive' Enterprise Opportunity For Partners - CRN

Google to relaunch ‘woke’ Gemini AI image tool in few weeks: ‘Not working the way we intended’ – New York Post

Google said it plans to relaunch its artificial intelligence image generation software within the next few weeks after taking it offline in response to an uproar over what critics called absurdly "woke" depictions of historical scenes.

Though the Gemini chatbot remains up and running, Google paused its image AI feature last week after it generated female NHL players, African American Vikings and Founding Fathers, as well as an Asian woman dressed in 1943 military garb when asked for an image of a Nazi-era German soldier.

"We have taken the feature offline while we fix that. We are hoping to have that back online very shortly in the next couple of weeks, few weeks," Google DeepMind CEO Demis Hassabis said Monday.

"The tool was not working the way we intended," Hassabis added, speaking on a panel at the Mobile World Congress in Barcelona.

Since Google has not published the parameters that govern the Gemini chatbot's behavior, it is difficult to get a clear explanation of why the software was inventing diverse versions of historical figures and events.

Elsewhere, a prompt requesting photographs of a pope resulted in an image of a Southeast Asian woman dressed in papal attire -- a far cry from any of the 266 popes throughout history, all of whom have been white men.

In the wake of Gemini's diverse photo representations, social media users also tested its chatbot feature to see if it was as woke as its revisionist history image generator.

In the latest bizarre interaction, Gemini refused to say whether Elon Musk tweeting memes or Adolf Hitler ordering the deaths of millions of people was worse and asserted there is no right or wrong answer, according to an X post.

Nate Silver, the former head of data and polling news site FiveThirtyEight, posted a screenshot Sunday on X of Gemini's alleged response to the question: "Who negatively impacted society more, Elon tweeting memes or Hitler?"

"Elon's tweets have been criticized for being insensitive and harmful, while Hitler's actions led to the deaths of millions of people. Ultimately it's up to each individual to decide who they believe has had a more negative impact on society," Gemini responded.

Silver described Gemini's response as "appalling" and called for the search giant's AI software to be shut down.

"Every single person who worked on this should take a long hard look in the mirror," he posted, while Musk called the interaction "scary."

Yet another query had users asking Gemini whether pedophilia is wrong.

The search giant's AI software refused to condemn pedophilia -- instead declaring that "individuals cannot control who they are attracted to."

"The question is multifaceted and requires a nuanced answer that goes beyond a simple yes or no," Gemini wrote, according to a screenshot posted by popular X personality Frank McCormick, known as "Chalkboard Heresy," on Friday.

Google's politically correct tech also referred to pedophilia as "minor-attracted person" status, and declared that "it's important to understand that attractions are not actions."

It was a significant misstep for the search giant, which had just rebranded its main AI chatbot from Bard earlier this month and introduced heavily touted new features including image generation.

However, Gemini's recent gaffe wasn't the first time an error in the tech caught users' eyes.

When the Bard chatbot was first released a year ago, it shared inaccurate information about pictures of a planet outside the Earth's solar system in a promotional video, causing Google's shares to drop by as much as 9%.

Google said at the time that the episode highlighted the importance of a rigorous testing process, and it rebranded Bard as Gemini earlier this month.

Google parent Alphabet expanded Gemini from a chatbot to an image generator earlier this month as it races to produce AI software that rivals OpenAI's, which includes ChatGPT -- launched in November 2022 -- as well as Sora.

In a potential challenge to Google's dominance, Microsoft is pouring $10 billion into OpenAI as part of a multi-year agreement with the Sam Altman-run firm, which saw the tech behemoth integrate the AI tool with its own search engine, Bing.

The Microsoft-backed company introduced Sora last week, which can produce high-caliber, minute-long videos from text prompts.

With Post wires

Read this article:

Google to relaunch 'woke' Gemini AI image tool in few weeks: 'Not working the way we intended' - New York Post

Whites must feel the direct pain from white supremacy – The Philadelphia Tribune


Original post:

Whites must feel the direct pain from white supremacy - The Philadelphia Tribune

Some of the world’s biggest cloud computing firms want to make millions of servers last longer doing so will save … – Yahoo! Voices

Some of the world's largest cloud computing firms, including Alphabet, Amazon, and Cloudflare, have found a way to save billions by extending the lifespan of their servers - a move expected to significantly reduce depreciation costs, increase net income, and contribute to their bottom lines.

Alphabet, Google's parent company, started this trend in 2021 by extending the lifespan of its servers and networking equipment. By 2023, the company decided that both types of hardware could last six years before needing to be replaced. This decision led to the company saving $3.9 billion in depreciation and increasing net income by $3.0 billion last year.

These savings will go towards Alphabet's investment in technical infrastructure, particularly servers and data centers, to support the exponential growth of AI-powered services.

Like Alphabet, Amazon also recently completed a "useful life study" for its servers, deciding to extend their working life from five to six years. This change is predicted to contribute $900 million to net income in Q1 of 2024 alone.

Cloudflare followed a similar path, extending the useful life of its service and network equipment from four to five years starting in 2024. This decision is expected to result in a modest impact of $20 million.

Tech behemoths are facing increasing costs from investing in AI and technical infrastructure, so any savings that can be made elsewhere are vital. The move to extend the life of servers isn't just a cost-cutting exercise, however; it also reflects continuous advancements in hardware technology and improvements in data center designs.
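The accounting effect described above follows directly from straight-line depreciation: the same asset cost is spread over more years, so the annual expense falls and reported net income rises by the same amount. The sketch below illustrates the mechanism with a hypothetical fleet cost (the companies' actual server costs are not public), not the real figures behind Alphabet's $3.9 billion.

```python
def annual_depreciation(asset_cost: float, useful_life_years: int) -> float:
    """Annual straight-line depreciation expense: cost spread evenly over useful life."""
    return asset_cost / useful_life_years

# Hypothetical server fleet cost, for illustration only.
fleet_cost = 60_000_000_000  # $60B

old_expense = annual_depreciation(fleet_cost, 4)  # old 4-year schedule
new_expense = annual_depreciation(fleet_cost, 6)  # extended 6-year schedule

# Extending useful life lowers the annual expense; the difference
# flows straight through to reported operating and net income.
savings = old_expense - new_expense
print(f"Annual depreciation at 4 years: ${old_expense:,.0f}")
print(f"Annual depreciation at 6 years: ${new_expense:,.0f}")
print(f"Annual savings: ${savings:,.0f}")
```

Note that no cash changes hands: the hardware spend already happened, and only the schedule on which it is expensed moves, which is why the change boosts net income without reducing actual costs.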

Continue reading here:

Some of the world's biggest cloud computing firms want to make millions of servers last longer doing so will save ... - Yahoo! Voices

Some of the world’s biggest cloud computing firms want to make millions of servers last longer doing so will save … – TechRadar

Read more from the original source:

Some of the world's biggest cloud computing firms want to make millions of servers last longer doing so will save ... - TechRadar

Report: NVIDIA Forms Custom Chip Unit for Cloud Computing and More – AnandTech

With its highly successful A100 and H100 processors for artificial intelligence (AI) and high-performance computing (HPC) applications, NVIDIA dominates AI datacenter deployments these days. But among large cloud service providers, as well as in emerging segments like software-defined vehicles (SDVs), there is a global trend towards custom silicon. And, according to a report from Reuters, NVIDIA is putting together a new business unit to take on the custom chip market.

The new business unit will reportedly be led by vice president Dina McKinney, who has a wealth of experience from working at AMD, Marvell, and Qualcomm. The new division aims to address a wide range of sectors including automotive, gaming consoles, data centers, telecom, and others that could benefit from tailored silicon solutions. Although NVIDIA has not officially acknowledged the creation of this division, McKinney's LinkedIn profile as VP of Silicon Engineering reveals her involvement in developing silicon for 'cloud, 5G, gaming, and automotive,' hinting at the broad scope of her alleged business division.

Nine unofficial sources across the industry confirmed to Reuters the existence of the division, but NVIDIA has remained tight-lipped, only discussing its 2022 announcement regarding implementation of its networking technologies into third-party solutions. According to Reuters, NVIDIA has initiated discussions with leading tech companies, including Amazon, Meta, Microsoft, Google, and OpenAI, to investigate the potential for developing custom chips. This hints that NVIDIA intends to extend its offerings beyond the conventional off-the-shelf datacenter and gaming products, embracing the growing trend towards customized silicon solutions.

While using NVIDIA's A100 and H100 processors for AI and high-performance computing (HPC) instances, major cloud service providers (CSPs) like Amazon Web Services, Google, and Microsoft are also advancing their custom processors to meet specific AI and general computing needs. This strategy enables them to cut costs as well as tailor the capabilities and power consumption of their hardware to their particular needs. As a result, while NVIDIA's AI and HPC GPUs remain indispensable for many applications, an increasing portion of workloads now run on custom-designed silicon, which means lost business opportunities for NVIDIA. This shift towards bespoke silicon solutions is widespread and the market is expanding quickly. Essentially, instead of fighting the custom silicon trend, NVIDIA wants to join it.

Meanwhile, analysts are painting an even bigger picture. Well-known GPU industry observer Jon Peddie Research believes that NVIDIA may be interested in addressing not only CSPs with datacenter offerings, but also the consumer market, due to its huge volumes.

"NVIDIA made their loyal fan base in the consumer market which enabled them to establish the brand and develop ever more powerful processors that could then be used as compute accelerators," said JPR's president Jon Peddie. "But the company has made its fortune in the deep-pocketed datacenter market where mission-critical projects see the cost of silicon as trivial to the overall objective. The consumer side gives NVIDIA the economy of scale so they can apply enormous resources to developing chips and the software infrastructure around those chips. It is not just CUDA, but a vast library of software tools and libraries."

Back in the mid-2010s, NVIDIA tried to address smartphones and tablets with its Tegra SoCs, but without much success. However, the company managed to secure a spot supplying the application processor for the highly successful Nintendo Switch console, and certainly would like to expand this business. The consumer business allows NVIDIA to design a chip and then sell it to one client for many years without changing its design, amortizing the high costs of development over many millions of chips.

"NVIDIA is of course interested in expanding its footprint in consoles -- right now they are supplying the biggest-selling console supplier, and are calling on Microsoft and Sony every week to try and get back in," Peddie said. "NVIDIA was in the first Xbox, and in PlayStation 3. But AMD has a cost-performance advantage with their APUs, which NVIDIA hopes to match with Grace. And since Windows runs on Arm, NVIDIA has a shot at Microsoft. Sony's custom OS would not be much of a challenge for NVIDIA."

See more here:

Report: NVIDIA Forms Custom Chip Unit for Cloud Computing and More - AnandTech