Don’t think of our AI future as humans vs. machines. Instead, consider these possibilities – Fox News


Imagine standing in a field as a farmer in the 1800s, at a time when the world's population had just crested one billion people. What if someone had told you that, by the year 2000, 95% of farm and agricultural labor would be replaced by machines, and that those machines would feed an additional seven billion people? What would you have thought about that prediction?

Fast-forward to today, and similar predictions are being made about artificial intelligence (AI) and its impact on knowledge work. The difference is that now the time frame isn't 200 years but 20.

The thought of AI replacing human intellect and creativity in the workforce can indeed be unsettling. But is this fear truly warranted, or are we on the cusp of a collaborative revolution that could amplify human innovation and creativity?

The apprehension that AI will replace human jobs mirrors past fears during significant technological shifts. (Reuters/Dado Ruvic/Illustration/File Photo)

The apprehension that AI will replace human jobs mirrors past fears during significant technological shifts. Yet, history has shown us that technology often creates more opportunities than it displaces.


The introduction of machinery in agriculture, for instance, didn't lead to the end of human labor; instead, it transformed it, enabling greater productivity and feeding billions more people.

So, why view AI's role in the future of work with trepidation rather than optimism? Simply put, because we won't have time to retrain all the workers who are displaced.

But what if we were thinking about this all wrong? What if AI isn't a replacement but a means of amplifying human potential?

The conversation around AI today is all too often framed in terms of replacement rather than augmentation and amplification. This perspective is a relic of industrial-era thinking, which doesn't apply to the nuanced ways AI can complement human capabilities.

AI, particularly in forms like generative AI, is not just about automating tasks but about enhancing human creativity and efficiency. Companies like OpenAI, Google and Microsoft are pioneering this frontier, developing AI that can write, create art, and even generate video content from text descriptions.


This isn't about machines taking over; it's about machines enabling us to reach new heights of creativity and innovation.

Consider the rapid adoption of AI technologies. OpenAI's ChatGPT reached over 100 million users in just two months, a testament to the technology's appeal and potential. This enthusiasm for AI isn't just about novelty; it's a recognition of its ability to augment human capabilities in unprecedented ways.

Yet, the question remains: Will AI displace knowledge workers? The answer is nuanced. Yes, AI will automate certain tasks, potentially displacing some jobs. By some estimates, AI will be able to accomplish about 50% of knowledge work within 10 years.

However, this is only part of the story. The gap between wage growth and productivity in knowledge work has been widening, not solely because of technology, but also due to a failure to fully leverage technology to augment human work.


Knowledge workers spend a significant portion of their time coordinating disparate technologies, a task that AI could streamline, freeing them to focus on more creative and strategic endeavors rather than playing spinning plates with the vast array of technologies they must orchestrate and coordinate today.

Rather than a fight to the death between humans and AI, what about an approach in which AI creates a multiplier effect that amplifies the value of human innovation and creativity?

The fear that AI will render human workers obsolete overlooks the potential for new value creation. Just as the mechanization of agriculture led to new industries and opportunities, AI's impact on knowledge work will likely spawn new realms of employment and innovation.

For example, in health care, AI could alleviate the administrative burden on physicians, allowing them more time for patient care, ultimately improving outcomes and reducing costs. Today, primary care docs spend about half their time dealing with myriad administrative issues, from medical records to insurance claims.


And yet, we know that a primary care doctor is among the greatest variables in reducing health care costs and increasing positive outcomes. Imagine what that would translate into if doctors had 50 percent more time to spend with patients.

The narrative that AI will simply replace human jobs is overly simplistic and ignores the broader potential for AI to enhance human work. The integration of AI into knowledge work promises not only to increase productivity but also to open up new avenues for human creativity and innovation. The real challenge lies not in competing with AI but in leveraging it to augment our own capabilities.

As we stand on the brink of this AI-driven era, it's crucial to shift our perspective from one of fear to one of opportunity. The question we should be asking is not whether AI will replace us but how we can use AI to become better at what we do. The potential for AI to amplify human innovation and creativity is immense, provided we approach this new frontier with openness and adaptability.


The rise of AI in the workplace is not a harbinger of obsolescence for human workers but a call to action to redefine the nature of work itself. By embracing AI as a collaborative partner, we can unlock new levels of creativity and innovation, propelling humanity forward in ways we have yet to imagine.

The future of work is not about humans versus machines but about how we can work alongside AI to create a world where technology amplifies human potential. Let's not view the future with apprehension but with the excitement and optimism it deserves.


Nathaniel Palmer is a pioneer in automation and digital transformation, serving as Chief Architect for some of the largest and most complex initiatives across government and private industry. He is the co-author of Gigatrends: Six Forces That Are Changing the Future for Billions.

Thomas Koulopoulos is chairman and founder of Delphi Group, a 30-year-old Boston-based think tank that focuses on disruptive technology innovation. He is also the founding partner of Acrovantage Ventures (which invests in early-stage technology startups), the author of 13 books, the past executive director of the Babson College Center for Business Innovation, and a professor at Boston University.


AI could make the four-day workweek inevitable – BBC.com

By Elizabeth Bennett, Features correspondent

As artificial intelligence gains traction in office operations, some companies are giving employees a day to step back.

Working four days while getting paid for five is a dream for many employees. Yet the dramatic shifts in the pandemic-era workplace have turned this once unfathomable idea into a reality for some workers. And as more global data emerges, an increasing number of companies are courting the approach after positive trial-run results across countries including the UK, Iceland, Portugal and more.

Now, as pilots continue (in Germany, for instance, a trial of 45 companies has just begun), another factor has entered the mix. Artificial intelligence (AI) is gathering pace in the workplace, and some experts believe it could accelerate the adoption of the four-day workweek.

Data from London-based news-and-events resource Tech.co, collected in late 2023, lends credence to this idea. For their 2024 Impact of Technology on the Workplace report, the company surveyed more than 1,000 US business leaders. The researchers found 29% of organisations with four-day workweeks use AI extensively in their firms' operations, implementing generative AI tools such as ChatGPT as well as other programmes to streamline operations. In comparison, only 8% of five-day-workweek organisations use AI to this extent. And 93% of businesses using AI are open to a four-day workweek, whereas among those that don't use AI, fewer than half are open to working shorter weeks.

At London-based digital design agency Driftime, adopting AI technology has been crucial to enable the business to operate a flexible four-day work week. "By handing over simple tasks to AI tools, we gain invaluable time previously lost to slow aspects of the process," says co-founder Abb-d Taiyo. "With tools like Modyfi, the graphics are all live and modifiable, making it so much easier and quicker for our designers to create concepts and ideas."

Taiyo believes it makes sense for both his employees and his bottom line to work the condensed week. "Instead of a dip in the quantity of work created over just four days, we've seen a remarkably high quality of work matched by a high staff satisfaction return. The health and happiness of our team is in direct correlation to the high standard of work produced," he says.

Shayne Simpson, group managing director of UK-based TechNET IT Recruitment, also believes AI has been fundamental to the success of the company's four-day work week policy. The firm has found AI tools save each of their recruitment consultants 21 hours per week, primarily by automating previously manual tasks like data input, confirmation emails, resume screening and candidate outreach. This has reduced the time to fill permanent roles at the company by an average of 10 days. "This timesaving allows our team to achieve their weekly goals earlier in the week and the flexibility liberates our consultants from being tethered to their desks, enabling them to enjoy a well-deserved Friday off," says Simpson.

Not only has the company's abridged workweek boosted productivity and morale, Simpson says it's also been key to attracting talent to work within the company itself. "Seasoned recruitment professionals are enticed by our streamlined processes while entry-level talent is eager to embrace new tools." It's lifted the entire business, he adds.

While AI tools are certainly paving the way for a four-day work week within some industries, the technology can't usher in the change alone. Organisational culture within a business is also fundamental, says Na Fu, a professor in human resource management at Trinity Business School, Ireland. "An openness to innovative work structures, an experimental mindset and, importantly, a culture grounded in high levels of trust are all important for the four-day work week to be successfully adopted," she says.

As the digital transformation with AI progresses, employees themselves also must be willing to level up, she adds: "Rather than becoming mere caretakers or servants of machines, human workers need to develop new skills that can leverage, complement and lead AI, achieving the enhanced outcomes."

Some industries will benefit from AI more than others, however, notably those able to use generative AI tools for tasks such as software development, content creation, marketing and legal services, says Fu. Plus, artificial intelligence development still has a way to go if it is to substantially reduce human working hours across the board.

Ultimately, however, what drives the shift to a four-day workweek in an AI-powered business landscape may not be up to the robots. Executive buy-in is required, and whether leaders will embrace the unconventional concept will vary depending on a firm's overarching purpose and values, says Fu. Instead of letting AI supplement the work of humans, for instance, some businesses could use it to automate certain tasks while piling other work on employees to fill the newly open hours.

Still, despite some reservations, an increasing number of business leaders, including those from some of the world's highest-earning companies, see a technology-driven shortened workweek as an inevitable future. In October 2023, JPMorgan Chase & Co CEO Jamie Dimon told Bloomberg TV: "Your children are going to live to 100, and they'll probably be working three-and-a-half days a week." Employees will have to wait and see.


Democratic operative admits to commissioning Biden AI robocall in New Hampshire – The Washington Post

A longtime Democratic consultant working for a rival candidate admitted that he commissioned the artificial intelligence-generated robocall of President Biden that was sent to New Hampshire voters in January and triggered a state criminal investigation.

Steve Kramer, who worked for the long-shot Democratic presidential candidate Dean Phillips, said in a phone interview with The Washington Post that he sent the AI-generated robocall telling voters not to vote to just under 5,000 people listed as the Democrats most likely to vote in the New Hampshire primary, marking one of the first major uses of AI to disrupt the 2024 presidential election cycle.

The Phillips campaign paid Kramer roughly $250,000 to get Phillips, a third-term congressman from Minnesota challenging Biden, on the ballot in New York and Pennsylvania, according to federal campaign filings. The Federal Communications Commission has issued him a subpoena over his involvement, Kramer said.

After the robocall, the Federal Communications Commission adopted a ruling that clarified that generating a voice with AI for robocalls is illegal. It swiftly issued a cease-and-desist letter to Kramer for originating illegal spoofed robocalls using an AI-generated voice in New Hampshire, and it issued a public notice to U.S.-based voice providers regarding blocking traffic related to the call.

"The agency is working diligently, including through all the tools available through its investigations, to ensure that harmful misuse of AI technologies do not compromise the integrity of our communications networks," FCC spokesperson Will Wiquist said in a statement.

Kramer also shared details about how he created the robocall, confirming several points that had previously been the subject of speculation. He used software from the artificial intelligence voice-cloning company Eleven Labs to create a deepfake voice of Biden in less than 30 minutes.

The calls, he added, were delivered by Voice Broadcasting, an entity associated with Life Corp., which was at the center of the criminal investigation opened by New Hampshire Attorney General John Formella in early February into the Biden AI robocall. Kramer said he created the robocall to raise awareness about the dangers AI poses in political campaigns.

"If anybody can do it, what's a person with real money, or an entity with real money, going to do?" he said.

The Kramer incident highlights the ease and accessibility with which AI-generated technology is making its way into the 2024 campaign cycle, allowing nearly anyone to use a wide array of tools to inject chaos and confusion into the voting process.

It also foreshadows a new challenge for state regulators, as increasingly advanced AI tools create new opportunities to interfere in elections across the world by creating fake audio recordings, photos and even videos of candidates, muddying the waters of reality.

The New Hampshire attorney general's investigation into the robocall remains "active and ongoing," said Michael Garrity, a spokesman for the office.

Phillips and his campaign have condemned the robocalls. Katie Dolan, a spokeswoman for the Phillips campaign, said Kramer's contract had ended before they became aware of his involvement in the robocall.

"We are disgusted to learn that Mr. Kramer is behind this call, and we absolutely denounce his actions," she said. Kramer's involvement was first reported by NBC News.

The robocall using an AI-generated voice that sounded like Biden targeted thousands of New Hampshire voters the weekend before the New Hampshire Democratic presidential primary, telling them their vote would not make a difference, according to investigators.

The call, which began with a catchphrase of Biden's calling the election "a bunch of malarkey," told voters: "It's important that you save your vote for the November election." The call appeared to come from the number of the former New Hampshire Democratic Party chair Kathy Sullivan, who was helping an effort to get voters to write in Biden's name to show their support for the president, even though he wasn't on the ballot. Sullivan and others reported the call to the state's attorney general.

In early February, Formella announced a criminal investigation into the matter and sent the telecom company, Life Corp., a cease-and-desist letter ordering it to immediately stop violating the state's laws against voter suppression in elections.

A multistate task force was also prepared for potential civil litigation against the company, and the FCC ordered Lingo Telecom to stop permitting illegal robocall traffic, after an industry consortium found that the Texas-based company carried the calls on its network.

"Don't try it," Formella said in the February news conference. "If you do, we will work together to investigate, we will work together with partners across the country to find you, and we will take any enforcement action available to us under the law. The consequences for your actions will be severe."

The robocall incident is also one of several episodes that underscore the need for better policies within technology companies to ensure their AI services are not used to distort elections, AI experts said.

In late January, ChatGPT creator OpenAI banned a developer from using its tools after the developer built a bot mimicking Phillips. His campaign had supported the bot, but after The Post reported on it, OpenAI deemed that it broke rules against use of its tech for campaigns.

Paul Barrett, deputy director of the New York University Stern Center for Business and Human Rights, said in an email that it is apparent how powerful AI deepfakes can be in disrupting elections. The new technology makes it far easier for nonexperts to generate highly persuasive fraudulent content that can potentially mislead people about when, how, or where to vote, he said.

This is also not the first time Kramer has used AI to spoof a politician's voice. Last year, he created an AI-generated robocall of Sen. Lindsey Graham (R-S.C.) asking nearly 300 likely Republican voters in South Carolina whom they would support if former president Donald Trump wasn't on the ballot.

Kramer, who said he plans to support Biden if he wins the Democratic nomination, said he hopes his actions have inspired regulators to take notice of AI's potential impact on the election.

"It's here now," he said, referring to AI, "and I did something about it."

Clara Ence Morse, Eva Dou, and Razzan Nakhlawi contributed to this report.


Microsoft’s AI Access Principles: Our commitments to promote innovation and competition in the new AI economy … – Microsoft

As we enter a new era based on artificial intelligence, we believe this is the best time to articulate principles that will govern how we will operate our AI datacenter infrastructure and other important AI assets around the world. We are announcing and publishing these principles, our AI Access Principles, today at the Mobile World Congress in Barcelona, in part to address Microsoft's growing role and responsibility as an AI innovator and a market leader.

Like other general-purpose technologies in the past, AI is creating a new sector of the economy. This new AI economy is creating not just new opportunities for existing enterprises, but new companies and entirely new business categories. The principles we're announcing today commit Microsoft to bigger investments, more business partnerships, and broader programs to promote innovation and competition than any prior initiative in the company's 49-year history. By publishing these principles, we are committing ourselves to providing the broad technology access needed to empower organizations and individuals around the world to develop and use AI in ways that will serve the public good.

These new principles help put in context the new investments and programs we've announced and launched across Europe over the past two weeks, including $5.6 billion in new AI datacenter investments and new AI skilling programs that will reach more than a million people. We've also launched new public-private partnerships to advance responsible AI adoption and protect cybersecurity, new AI technology services to support network operators, and a new partnership with France's leading AI company, Mistral AI. As much as anything, these investments and programs make clear how we will put these principles into practice, not just in Europe, but in the United States and around the world.

These principles also reflect the responsible and important role we must play as a company. They build in part on the lessons we have learned from our experiences with previous technology developments. In 2006, after more than 15 years of controversies and litigation relating to Microsoft Windows and the company's market position in the PC operating system market, we published a set of Windows Principles. Their purpose was to govern the company's practices in a manner that would both promote continued software innovation and foster free and open competition.

I'll never forget the reaction of an FTC Commissioner who came up to me after I concluded the speech I gave in Washington, D.C. to launch these principles. He said, "If you had done this 10 years ago, I think you all probably would have avoided a lot of problems."

Close to two decades have gone by since that moment, and both the world of technology and the AI era we are entering are radically different. Then, Windows was the computing platform of the moment. Today, mobile platforms are the most popular gateway to consumers, and exponential advances in generative AI are driving a tectonic shift in digital markets and beyond. But there is wisdom in that FTC Commissioner's reaction that has stood the test of time: As a leading IT company, we do our best work when we govern our business in a principled manner that provides broad opportunities for others.

The new AI era requires enormous computational power to train, build, and deploy the most advanced AI models. Historically, such power could only be found in a handful of government-funded national laboratories and research institutions, and it was available only to a select few. But the advent of the public cloud has changed that. Much like steel did for skyscrapers, the public cloud enables generative AI.

Today, datacenters around the world house millions of servers and make vast computing power broadly available to organizations large and small, and even to individuals. Already, many thousands of AI developers in startups, enterprises, government agencies, research labs, and non-profit organizations around the world are using the technology in these datacenters to create new AI foundation models and applications.

These datacenters are owned and operated by cloud providers, which include larger established firms such as Microsoft, Amazon, Google, Oracle, and IBM, as well as large firms from China like Alibaba, Huawei, Tencent, and Baidu. There are also smaller specialized entrants such as Coreweave, OVH, Aruba, and Denvr Dataworks Corporation, just to mention a few. And government-funded computing centers clearly will play a role as well, including with support for academic research. But building and operating those datacenters is expensive. And the semiconductors or graphical processing units (GPUs) that are essential to power the servers for AI workloads remain costly and in short supply. Although governments and companies are working hard to fill the gap, doing so will take some time.

With this reality in mind, regulators around the world are asking important questions about who can compete in the AI era. Will it create new opportunities and lead to the emergence of new companies? Or will it simply reinforce existing positions and leaders in digital markets?

I am optimistic that the changes driven by the new AI era will extend into the technology industry itself. After all, how many readers of this paragraph had, two years ago, even heard of OpenAI and many other new AI entrants like Anthropic, Cohere, Aleph Alpha, and Mistral AI? In addition, Microsoft, along with other large technology firms, is dynamically pivoting to meet the AI era. The competitive pressure is fierce, and the pace of innovation is dizzying. As a leading cloud provider and an innovator in AI models ourselves and through our partnership with OpenAI, we are mindful of our role and responsibilities in the evolution of this AI era.

Throughout the past decade, we've typically found it helpful to define the tenets (in effect, the goals) that guide our thinking and drive our actions as we navigate a complex topic. We then apply these tenets by articulating the principles we will apply as we make the decisions needed to govern the development and use of technology. I share below the new tenets on which we are basing our thinking on this topic, followed by our 11 AI Access Principles.

Fundamentally, there are five tenets that define Microsoft's goals as we focus on AI access, including our role as an infrastructure and platforms provider.

First, we have a responsibility to enable innovation and foster competition. We believe that AI is a foundational technology with a transformative capability to help solve societal problems, improve human productivity, and make companies and countries more competitive. As with prior general-purpose technologies, from the printing press to electricity, railroads, and the internet itself, the AI era is not based on a single technology component or advance. We have a responsibility to help spur innovation and competition across the new AI economy that is rapidly emerging.

AI is a dynamic field, with many active participants, based on a technology stack that starts with electricity and connectivity and the world's most advanced semiconductor chips at the base. It then runs up through the compute power of the public cloud, public and proprietary data for training foundation models, the foundation models themselves, tooling to manage and orchestrate the models, and AI-powered software applications. In short, the success of an AI-based economy requires the success of many different participants across numerous interconnected markets.

You can see here the technology stack that defines the new AI era. While one company currently produces and supplies most of the GPUs being used for AI today, as one moves incrementally up the stack, the number of participants expands. And each layer enables and facilitates innovation and competition in the layers above. In multiple ways, to succeed, participants at every layer of the technology stack need to move forward together. This means, for Microsoft, that we need to stay focused not just on our own success, but on enabling the success of others.

Second, our responsibilities begin by meeting our obligations under the law. While the principles we are launching today represent a self-regulatory initiative, they in no way are meant to suggest a lack of respect for the rule of law or the role of regulators. We fully appreciate that legislators, competition authorities, regulators, enforcers, and judges will continue to evolve the competition rules and other laws and regulations relevant to AI. That's the way it should be.

Technology laws and rules are changing rapidly. The European Union is implementing its Digital Markets Act and completing its AI Act, while the United States is moving quickly with a new AI Executive Order. Similar laws and initiatives are moving forward in the United Kingdom, Canada, Japan, India, and many other countries. We recognize that we, like all participants in this new AI market, have a responsibility to live up to our obligations under the law, to engage constructively with regulators when obligations are not yet clear, and to contribute to the public dialogue around policy. We take these obligations seriously.

Third, we need to advance a broad array of AI partnerships. Today, only one company is vertically integrated in a manner that includes every AI layer from chips to a thriving mobile app store. As noted at a recent meeting of tech leaders and government officials, "The rest of us, Microsoft included, live in the land of partnerships."

People today are benefiting from the AI advances that the partnership between OpenAI and Microsoft has created. Since 2019, Microsoft has collaborated with OpenAI on the research and development of OpenAI's generative AI models, developing the unique supercomputers needed to train those models. The ground-breaking technology ushered in by our partnership has unleashed a groundswell of innovation across the industry. And over the past five years, OpenAI has become a significant new competitor in the technology industry. It has expanded its focus, commercializing its technologies with the launch of ChatGPT and the GPT Store and providing its models for commercial use by third-party developers.

Innovation and competition will require an extensive array of similar support for proprietary and open-source AI models, large and small, including the type of partnership we are announcing today with Mistral AI, the leading open-source AI developer based in France. We have also invested in a broad range of other diverse generative AI startups. In some instances, those investments have provided seed funding to finance day-to-day operations. In other instances, those investments have been more focused on paying the expenses for the use of the computational infrastructure needed to train and deploy generative AI models and applications. We are committed to partnering well with market participants around the world and in ways that will accelerate local AI innovations.

Fourth, our commitment to partnership extends to customers, communities, and countries. More than for prior generations of digital technology, our investments in AI and datacenters must sustain the competitive strengths of customers and national economies and address broad societal needs. This has been at the core of the multi-billion-dollar investments we recently have announced in Australia, the United Kingdom, Germany, and Spain. We need constantly to be mindful of the community needs AI advances must support, and we must pursue a spirit of partnership not only with others in our industry, but with customers, governments, and civil society. We are building the infrastructure that will support the AI economy, and we need the opportunities provided by that infrastructure to be widely available.

Fifth, we need to be proactive and constructive, as a matter of process, in working with governments and the IT industry in the design and release of new versions of AI infrastructure and platforms. We believe it is critical for companies and regulators to engage in open dialogue, with a goal of resolving issues as quickly as possible, ideally while a new product is still under development. For our part, we understand that Microsoft must respond fully and cooperatively to regulatory inquiries so that we can have an informed discussion with regulators about the virtues of various approaches. We need to be good listeners and constructive problem solvers in sorting through issues of concern and identifying practical steps and solutions before a new product is completed and launched.

The foregoing tenets come together to shape the new principles we are announcing below. It's important to note that, given the safety, security, privacy, and other issues relating to responsible AI, we need to apply all these principles subject to objective and effective standards to comply with our legal obligations and protect the public. These are discussed further below. Subject to these requirements, we are committed to the following 11 principles:

We are committed to enabling AI innovation and fostering competition by making our cloud computing and AI infrastructure, platforms, tools, and services broadly available and accessible to software developers around the world. We want Microsoft Azure to be the best place for developers to train, build, and deploy AI models and to use those models safely and securely in applications and solutions. This means:

Today, our partnership with OpenAI is supporting the training of the next generation of OpenAI models and increasingly enabling customers to access and use these models and Microsoft's Copilot applications in local datacenters. At the same time, we are committed to supporting other developers in training and deploying proprietary and open-source AI models, both large and small.

Today's important announcement with Mistral AI launches a new generation of Microsoft's support for technology development in Europe. It enables Mistral AI to accelerate the development and deployment of its next-generation Large Language Models (LLMs) with access to Azure's cutting-edge AI infrastructure. It also makes the deployment of Mistral AI's premium models available to customers through our Models-as-a-Service (MaaS) offering on Microsoft Azure, which model developers can use to publish and monetize their AI models. By providing a unified platform for AI model management, we aim to lower the barriers and costs of AI model development around the world for both open-source and proprietary development. In addition to Mistral AI, this service is already hosting more than 1,600 open-source and proprietary models from companies and organizations such as Meta, Nvidia, Deci, and Hugging Face, with more models coming soon from Cohere and G42.

We are committed to expanding this type of support for additional models in the months and years ahead.

As reflected in Microsoft's Copilots and OpenAI's ChatGPT itself, the world is rapidly benefiting from the use of a new generation of software applications that access and use the power of AI models. But our applications will represent just a small percentage of the AI-powered applications the world will need and create. For this reason, we're committed to ongoing and innovative steps to make the AI models we host and the development tools we create broadly available to AI software application developers around the world in ways that are consistent with responsible AI principles.

This includes the Azure OpenAI service, which enables software developers who work at start-ups, established IT companies, and in-house IT departments to build software applications that call on and make use of OpenAI's most powerful models. It extends through Models as a Service to the use of other open-source and proprietary AI models from other companies, including Mistral AI, Meta, and others.

We are also committed to empowering developers to build customized AI solutions by enabling them to fine-tune existing models based on their own unique data sets and for their specific needs and scenarios. With Azure Machine Learning, developers can easily access state-of-the-art pre-trained models and customize them with their own data and parameters, using a simple drag-and-drop interface or code-based notebooks. This helps companies, governments, and non-profits create AI applications that help advance their goals and solve their challenges, such as improving customer service, enhancing public safety, or promoting social good. This is rapidly democratizing AI and fostering a culture of even broader innovation and collaboration among developers.

We are also providing developers with tools and repositories on GitHub that enable them to create, share, and learn from AI solutions. GitHub is the world's largest and most trusted platform for software development, hosting over 100 million repositories and supporting more than 40 million developers. We are committed to supporting the AI developer community by making our AI tools and resources available on GitHub, giving developers access to the latest innovations and best practices in AI development, as well as the opportunity to collaborate with other developers and contribute to the open source community. As one example, just last week we made available an open automation framework to help red team generative AI systems.

Ensure choice and fairness across the AI economy

We understand that AI innovation and competition require choice and fair dealing. We are committed to providing organizations, AI developers, and data scientists with the flexibility to choose which AI models to use wherever they are building solutions. For developers who choose to use Microsoft Azure, we want to make sure they are confident we will not tilt the playing field to our advantage. This means:

The AI models that we host on Azure, including the Microsoft Azure OpenAI API service, are all accessible via public APIs. Microsoft publishes documentation on its website explaining how developers can call these APIs and use the underlying models. This enables any application, whether it is built and deployed on Azure or other private and public clouds, to call these APIs and access the underlying models.
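
To make that concrete, here is a minimal sketch of what calling an Azure-hosted model through its public API can look like, using the openai Python package's Azure client. The endpoint, key, API version, and deployment name are placeholders chosen for illustration, not values from this announcement; substitute the ones from your own Azure resource.

# pip install openai
from openai import AzureOpenAI

# All values below are hypothetical placeholders.
client = AzureOpenAI(
    azure_endpoint="https://YOUR-RESOURCE.openai.azure.com",  # your Azure OpenAI resource
    api_key="YOUR-API-KEY",
    api_version="2024-02-01",  # check the documentation for a current API version
)

response = client.chat.completions.create(
    model="YOUR-DEPLOYMENT-NAME",  # the name you gave your model deployment
    messages=[{"role": "user", "content": "In one sentence, what is a public API?"}],
)
print(response.choices[0].message.content)

Because this is an ordinary HTTPS API call, an application built and deployed on Azure, on another cloud, or on-premises can reach the same models.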

Network operators are playing a vital role in accelerating the AI transformation of customers around the world, including for many national and regional governments. This is one reason we are supporting a common public API through the Open Gateway initiative driven by the GSM Association, which advances innovation in the mobile ecosystem. The initiative is aligning all operators with a common API for exposing advanced capabilities provided by their networks, including authentication, location, and quality of service. It's an indispensable step forward in enabling network operators to offer their advanced capabilities to a new generation of AI-enabled software developers. We have believed in the potential of this initiative since its inception at GSMA, and we have partnered with operators around the world to help bring it to life.

Today at Mobile World Congress, we are launching the Public Preview of Azure Programmable Connectivity (APC). This is a first-class service in Azure, completely integrated with the rest of our services, that seamlessly provides access to Open Gateway for developers. It means software developers can use the capabilities provided by the operator network directly from Azure, like any other service, without requiring specific work for each operator.

We are committed to maintaining Microsoft Azure as an open cloud platform, much as Windows has been for decades and continues to be. That means in part ensuring that developers can choose how they want to distribute and sell their AI software to customers for deployment and use on Microsoft Azure. We provide a marketplace on Azure through which developers can list and sell their AI software to Azure customers under a variety of supported business models. Developers who choose to use the Azure Marketplace are also free to decide whether to use the transaction capabilities offered by the marketplace (at a modest fee) or whether to sell licenses to customers outside of the marketplace (at no fee). And, of course, developers remain free to sell and distribute AI software to Azure customers however they choose, and those customers can then upload, deploy, and use that software on Azure.

We believe that trust is central to the success of Microsoft Azure. We build this trust by serving the interests of AI developers and customers who choose Microsoft Azure to train, build, and deploy foundation models. In practice, this also means that we avoid using any non-public information or data from the training, building, deployment, or use of developers' AI models to compete against them.

We know that customers can and do use multiple cloud providers to meet their AI and other computing needs. And we understand that the data our customers store on Microsoft Azure is their data. So, we are committed to enabling customers to easily export and transfer their data if they choose to switch to another cloud provider. We recognize that different countries are considering or have enacted laws limiting the extent to which we can pass along the costs of such export and transfer. We will comply with those laws.

We recognize that new AI technologies raise an extraordinary array of critical questions. These involve important societal issues such as privacy, safety, security, the protection of children, and the safeguarding of elections from deepfake manipulation, to name just a few. These and other issues require that tech companies create guardrails for their AI services, adapt to new legal and regulatory requirements, and work proactively in multistakeholder efforts to meet broad societal needs. We're committed to fulfilling these responsibilities, including through the following priorities:

We are committed to safeguarding the physical security of our AI datacenters, as they host the infrastructure and data that power AI solutions. We follow strict security protocols and standards to ensure that our datacenters are protected from unauthorized access, theft, vandalism, fire, or natural disasters. We monitor and audit our datacenters to detect and prevent any potential threats or breaches. Our datacenter staff are trained and certified in security best practices and are required to adhere to a code of conduct that respects the privacy and confidentiality of our customers' data.

We are also committed to safeguarding the cybersecurity of our AI models and applications, as they process and generate sensitive information for our customers and society. We use state-of-the-art encryption, authentication, and authorization mechanisms to protect data in transit and at rest, as well as the integrity and confidentiality of AI models and applications. We also use AI to enhance our cybersecurity capabilities, such as detecting and mitigating cyberattacks, identifying and resolving vulnerabilities, and improving our security posture and resilience.

We're building on these efforts with our new Secure Future Initiative (SFI). This brings together every part of Microsoft and has three pillars. It focuses on AI-based cyber defenses, advances in fundamental software engineering, and advocacy for stronger application of international norms to protect civilians from cyber threats.

As AI becomes more pervasive and impactful, we recognize the need to ensure that our technology is developed and deployed in a way that is ethical, trustworthy, and aligned with human values. That is why we have created the Microsoft Responsible AI Standard, a comprehensive framework that guides our teams on how to build and use AI responsibly.

The standard covers six key dimensions of responsible AI: fairness; reliability and safety; privacy and security; inclusiveness; transparency; and accountability. For each dimension, we define what these values mean and how to achieve our goals in practice. We also provide tools, processes, and best practices to help our teams implement the standard throughout the AI lifecycle, from design and development to deployment and monitoring. The approach that the standard establishes is not static, but instead evolves and improves based on the latest research, feedback, and learnings.

We recognize that countries need more than advanced AI chips and datacenters to sustain their competitive edge and unlock economic growth. AI is changing jobs and the way people work, requiring that people master new skills to advance their careers. That's why we're committed to marrying AI infrastructure capacity with AI skilling capability, combining the two to advance innovation.

In just the past few months, we've combined billions of dollars of infrastructure investments with new programs to bring AI skills to millions of people in countries like Australia, the United Kingdom, Germany, and Spain. We're launching training programs focused on building AI fluency, developing AI technical skills, supporting AI business transformation, and promoting safe and responsible AI development. Our work includes the first Professional Certificate on Generative AI.

Typically, our skilling programs involve a professional network of Microsoft certified training services partners and multiple industry partners, universities, and nonprofit organizations. Increasingly, we find that major employers want to launch new AI skilling programs for their employees, and we are working with them actively to provide curricular materials and support these efforts.

One of our most recent and important partnerships is with the AFL-CIO, the largest federation of labor unions in the United States. It's the first of its kind between a labor organization and a technology company to focus on AI and will deliver on three goals: (1) sharing in-depth information with labor leaders and workers on AI technology trends; (2) incorporating worker perspectives and expertise in the development of AI technology; and (3) helping shape public policy that supports the technology skills and needs of frontline workers.

We've learned that government institutions and associations can typically bring AI skilling programs to scale. At the national and regional levels, government employment and educational agencies have the personnel, programs, and expertise to reach hundreds of thousands or even millions of people. We're committed to working with and supporting these efforts.

Through these and other initiatives, we aim to democratize access to AI education and enable everyone to harness the potential of AI for their own lives and careers.

In 2020, Microsoft set ambitious goals to be carbon negative, water positive and zero waste by 2030. We recognize that our datacenters play a key part in achieving these goals. Being responsible and sustainable by design also has led us to take a first-mover approach, making long-term investments to bring as much or more carbon-free electricity than we will consume onto the grids where we build datacenters and operate.

We also apply a holistic approach to the Scope 3 emissions relating to our investments in AI infrastructure, from the construction of our datacenters to engaging our supply chain. This includes supporting innovation to reduce the embodied carbon in our supply chain and advancing our water positive and zero waste goals throughout our operations.

At the same time, we recognize that AI can be a vital tool to help accelerate the deployment of sustainability solutions, from the discovery of new materials to better predicting and responding to extreme weather events. This is why we continue to partner with others to use AI to help advance breakthroughs that previously would have taken decades, underscoring the important role AI technology can play in addressing some of our most critical challenges to realizing a more sustainable future.


How AI Can Uncover the World’s Oldest Archeological Mysteries – The Daily Beast

This month, a trio of computer scientists won the Vesuvius Challenge, a competition to use artificial intelligence to reveal four passages of ancient Greek encased for 2,000 years inside a charred scroll. The artifact was found at Herculaneum, a Roman resort town destroyed by the eruption of Mount Vesuvius in 79 A.D.

"This is the kind of thing that happens every half century or so," Richard Janko, a professor of classics at the University of Michigan and one of the judges for the competition, told The Daily Beast. Federica Nicolardi, a papyrologist at the University of Naples Federico II in Italy and a fellow judge, told The Daily Beast that the discovery could be "a huge revolution."

The technology enables archeologists to potentially see inside ancient burnt, sodden, and sealed texts. These range from works of classical antiquity, to hidden writing wrapped up in Egyptian mummies, to books burned in World War II, to the many thousands of fragments of texts found near the Dead Sea that could shed new light on the early history of Christianity.

Perfectly preserved by the volcanic eruption, the town is "a kind of in-between space where destruction and conservation go hand-in-hand," Nicolardi said. Archeologists have spent centuries excavating sections of Herculaneum, including the Villa Dei Papiri, from which about 1,800 cataloged fragments or entire scrolls have been recovered.

Herculaneum scroll with red laser lines being scanned at Institut de France by Brent Seales and his team.

However, the scrolls are incredibly fragile. After all, they're ancient on top of being burned and charred. As a result, several hundred have been ruined by people trying to unroll them manually or using machines. Due to this, there are only a few hundred left that can potentially be read.

That's the genesis of the competition: If a team could crack one of them open digitally, then digitally unwrapping anything else would be easy by comparison.

The contest was backed by ex-GitHub CEO Nat Friedman and Y Combinator partner Daniel Gross, who offered a $1 million grand prize to the person or team who could generate at least four columns of readable digital text from scans of a Herculaneum scroll by the end of 2023. The winning team, made up of AI engineers Youssef Nader, Julian Schilliger, and Luke Farritor, was able to recover 15 columns of text from the papyrus, revealing the ancient Greek lines laid out like a newspaper.

The process they used was originally developed by Brent Seales, a computer scientist at the University of Kentucky who has spent 20 years using technology to digitally analyze and restore ancient texts. The tool, called the Volume Cartographer, uses AI to digitally unwrap the layers of a single burnt papyrus scroll that Seales' team had made 3D scans of.

But the challenge isn't over yet. The team's winning entry reveals just five percent of a single scroll. For 2024, Friedman, Gross, and Seales have a new competition: Unroll a whole scroll to win a $100,000 prize. Eventually, they want to digitally unwrap all the surviving and intact Herculaneum scrolls.

If they achieve that, then the library could reveal new information about some of the most famous figures in history such as Aristotle and Archimedes. Janko added that the text the competition has revealed may have been written by Philodemus, an Epicurean philosopher and teacher of the famous Roman poet, Virgil.

But first, more of the scroll needs to be segmented, which is the technical term for unraveling the digital layers of papyrus. Then there's the matter of translating what they find, which can be a herculean task, potentially made less so with the help of AI. "Reading the papyrus is not just a matter of recognizing letters," Nicolardi said. "It is more a matter of understanding the text."

Using computers and scanning techniques in archeology is not new. The first mummy was analyzed using X-rays in 1896, and such technology has been used to uncover archeological discoveries for more than a century since. Before Seales' digital unwrapping tool, though, Janko estimated it would have taken at least 500 years to go through the Herculaneum scrolls.

Seales has solved the problem of unrolling the fragile scrolls by using synchrotron scanning, which involves shooting a powerful particle accelerator's laser at a scroll to create high-fidelity X-rays that show all its layers. From there, each layer has to be picked out and segmented. The inner layers are the easiest to peel apart, Seales said.

"That has been incredibly gratifying, to see this youthful brain trust of people, who really understand AI, to see them being excited about classics," Seales said.

While this protocol has only been used on these scrolls so far, it has a wide range of archeological applications. For example, Seales has used the technology to digitally unwrap some of the Dead Sea Scrolls, as well as a copy of the Book of Leviticus recovered from a burnt synagogue at En Gedi, Israel, dating to the third or fourth century C.E.

He also plans to scan and decipher a still-sealed Egyptian papyrus scroll that is housed in the Smithsonian Collection. This artifact, bandaged in linen and sealed with wax marked with the symbol of Amenhotep III, dates to about 1400 B.C.E. and has never been opened.

Seales has also used the technique to see inside burned medieval books recovered from the wreckage of Chartres, a French town near Paris that was largely destroyed in World War II during an Allied bombing campaign in 1944.

Another potential treasure trove could be lurking deep in the Black Sea, Janko said. There are at least 67 ancient shipwrecks on the seabed that, because the water is devoid of oxygen below 140 meters' depth or so, have never decayed, freezing them and their cargo in time. Amongst the potential treasures is a box of books and scrolls that could hold even more ancient historical secrets. "It might now be possible to retrieve and see inside those papyri thanks to this technological advance," Janko said.

It's not just the classics that may see a renaissance in discoveries: There is also the possibility of applying the technology to old film reels and negatives that have become corroded and unable to be developed or read using traditional methods, Seales said.

For now, though, researchers are still working on a translation they feel confident in for the 15 columns they have so far. This is a process that even the most hubristic Silicon Valley evangelist can't speed up, Nicolardi explained. "I think there is a moment for this kind of speedy work, and there is another moment when you have to stop a little bit and think about it and reflect," she said. The scroll itself makes much the same point. Nicolardi notes that its last sentence roughly translates to: "May the truth be always evident to us."


Microsoft, Amazon, Nvidia, Arm, and Others Join Forces To Form AI-RAN Alliance – Investopedia


Amazon (AMZN), Arm (ARM), Microsoft (MSFT), Nvidia (NVDA), and others are joining forces to launch the AI-RAN Alliance, a group focused on revamping cellular technology for artificial intelligence (AI), as big tech companies work together to bolster their positions in the AI boom.

The group's founding members notably include Ericsson (ERIC), Samsung, Nokia (NOK), Northeastern University, SoftBank, T-Mobile (TMUS), and DeepSig as well.

The alliance's goal is to "enhance mobile network efficiency, reduce power consumption, and retrofit existing infrastructure, setting the stage for unlocking new economic opportunities for telecommunications companies with AI, facilitated by 5G and 6G," according to a release.

"Network operators in the alliance will spearhead the testing and implementation of these advanced technologies developed through the collective research efforts of the member companies and universities," the AI-RAN Alliance said.

The AI-RAN Alliance's launch comes as several tech heavyweights look to partnerships to strengthen their ability to capitalize on surging demand for AI products and services.

Palo Alto Networks (PANW) announced a partnership Monday too, with several companies including Nvidia teaming up to provide private 5G security services and solutions.

Meta (META) and IBM (IBM) launched the AI Alliance in December 2023, an international community focused on "open, safe, responsible AI" with members including Advanced Micro Devices (AMD), Dell (DELL), and Intel (INTC).

The AI-RAN Alliance also comes after a record-breaking week for Nvidia in which the chipmaker recorded the largest-ever single-day jump in market capitalization Thursday, and briefly surpassed a $2 trillion market cap on Friday, driven by AI optimism.

See original here:

Microsoft, Amazon, Nvidia, Arm, and Others Join Forces To Form AI-RAN Alliance - Investopedia

Introducing Mistral-Large on Azure in partnership with Mistral AI – Microsoft

The AI industry is undergoing a significant transformation with growing interest in more efficient and cost-effective models, emblematic of a broader trend in technological advancement. In the vanguard is Mistral AI, an innovator and trailblazer. Their commitment to fostering the open-source community and achieving exceptional performance aligns harmoniously with Microsoft's commitment to developing trustworthy, scalable, and responsible AI solutions.

Today, we are announcing a multi-year partnership between Microsoft and Mistral AI, a recognized leader in generative artificial intelligence. Both companies are fueled by a steadfast dedication to innovation and practical applications, bridging the gap between pioneering research and real-world solutions.

Build intelligent apps at enterprise scale with the Azure AI portfolio

This partnership with Microsoft gives Mistral AI access to Azure's cutting-edge AI infrastructure to accelerate the development and deployment of its next-generation large language models (LLMs), and represents an opportunity for Mistral AI to unlock new commercial opportunities, expand to global markets, and foster ongoing research collaboration.

"We are thrilled to embark on this partnership with Microsoft. With Azure's cutting-edge AI infrastructure, we are reaching a new milestone in our expansion, propelling our innovative research and practical applications to new customers everywhere. Together, we are committed to driving impactful progress in the AI industry and delivering unparalleled value to our customers and partners globally."

Microsoft's partnership with Mistral AI is focused on three core areas:

Introducing Mistral Large, our most advanced large language model (LLM)

In November 2023, at Microsoft Ignite, Microsoft unveiled the integration of Mistral 7B into the Azure AI model catalog, accessible through Azure AI Studio and Azure Machine Learning. We are excited to announce Mistral AI's flagship commercial model, Mistral Large, available first on Azure AI and the Mistral AI platform, marking a noteworthy expansion of our offerings. Mistral Large is a general-purpose language model that can deliver on any text-based use case thanks to state-of-the-art reasoning and knowledge capabilities. It is proficient in code and mathematics, able to process dozens of documents in a single call, and handles French, German, Spanish, and Italian (in addition to English).

This latest addition of Mistral AIs premium models into Models as a Service (MaaS) within Azure AI Studio and Azure Machine Learning provides Microsoft customers with a diverse selection of the best state-of-the-art and open-source models for crafting and deploying custom AI applications, paving the way for novel AI-driven innovations.
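For readers who want a feel for what consuming such a Models-as-a-Service deployment looks like, here is a minimal, hedged sketch. It assumes the deployment exposes an OpenAI-style chat-completions route, as Azure's serverless endpoints for Mistral models generally do; the endpoint URL, environment variable names, and parameter values are placeholders, so check the Mistral Large model card in Azure AI Studio for the authoritative details.

```python
# Minimal sketch: calling a Mistral Large serverless deployment on Azure AI.
# Endpoint URL, env var names, and response shape are assumptions based on the
# OpenAI-compatible chat-completions convention; verify against the model card.
import os
import requests

ENDPOINT = os.environ["AZURE_MISTRAL_ENDPOINT"]  # e.g. https://<deployment>.<region>.inference.ai.azure.com
API_KEY = os.environ["AZURE_MISTRAL_KEY"]

payload = {
    "messages": [
        {"role": "user", "content": "Summarize these contract clauses in French."}
    ],
    "temperature": 0.2,   # low randomness for a factual summarization task
    "max_tokens": 512,
}

resp = requests.post(
    f"{ENDPOINT}/v1/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```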

"We have tested Mistral Large through the Azure AI Studio in a use case aimed at internal efficiency. The performance was comparable with state-of-the-art models, with even better latency. We are looking forward to exploring this technology further in our business."

"After exploring Mistral Large during its early access period, we've been impressed by its performance on medical terminology. As we continue to innovate in healthcare, we're open to collaborations that can help us and our partners grow together. Mistral AI represents an exciting opportunity for mutual advancement in artificial intelligence, both in France and internationally."

"The Mistral AI models have been crucial in enhancing productivity and collaboration at CMA CGM. Their advanced capabilities have significantly improved the performance of our internal personal assistant, MAIA. Employees are now able to quickly access and engage with information like never before. We are confident that Mistral AI on Azure is the right choice to support our employees and drive innovation across our organization."

Microsoft is committed to supporting global AI innovation and growth, offering world-class datacenter AI infrastructure, and developing technology securely to empower individuals with the skills they need to leverage AI effectively. This partnership with Mistral AI is founded on a shared commitment to build trustworthy and safe AI systems and products. It further reinforces Microsofts ongoing efforts to enhance our AI offerings and deliver unparalleled value to our customers. Additionally, the integration into AI Studio ensures that customers can utilize Azure AI Content Safety and responsible AI tools, further enhancing the security and reliability of AI solutions.

Visit the Mistral Large model card and sign in with your Azure subscription to get started with Mistral Large on Azure AI today. You can also review the technical blog to learn how to use Mistral Large on Azure AI. Visit Mistral AIs blog to get deeper insights about the model.

Read more:

Introducing Mistral-Large on Azure in partnership with Mistral AI - Microsoft

Beverly Hills middle school students use AI to create nude images of classmates – NBC Southern California

An investigation was underway at Beverly Vista Middle School in Beverly Hills Monday after some students used artificial intelligence to create nude images of classmates.

"It is very scary people can't feel safe to come to school," one student who did not want to be identified said. "They are scared people will show off explicit photos of them."

After school administrators were alerted last week about the nude photos that were passed around among a group of students, the Beverly Hills Unified School District launched an investigation with the Beverly Hills Police Department.

The images included the faces of some students superimposed onto AI-generated nude bodies, according to the Beverly Hills Unified School District.

"We will be looking at the appropriate discipline so that students understand there are consequences and accountability for their actions," said Dr. Michael Bregy, Superintendent of the Beverly Hills Unified School District.

In a statement, district officials said they were appalled by the misuse of AI.

"This emerging technology is becoming more and more accessible to individuals of all ages," the district said. "Parents, please partner with us and speak with your children about this dangerous behavior. Students, please talk to your friends about how disturbing and inappropriate this manipulation of images is."

While lawmakers at the state and federal levels were said to be seeking ways to address cases involving artificial intelligence, Bregy said school districts need more support from legislators.

"School districts aren't known for having to advocate with congressmen to get the laws to change," said Bregy. "The safety of laws is clearly being outpaced by the technology we have."

Some parents also said they hope Beverly Vista Middle School will take swift action.

"It needs to be some kind of huge consequence for that," said one parent.

It is unclear how many students were targeted and how many were involved in the creation of images.

Those students who were targeted by the fake AI images were being provided with counseling, according to the Beverly Hills Unified School District.

See the original post here:

Beverly Hills middle school students use AI to create nude images of classmates - NBC Southern California

The Future of Censorship Is AI-Generated – TIME

The brave new world of Generative AI has become the latest battleground for U.S. culture wars. Google issued an apology after anti-woke X users, including Elon Musk, shared examples of Google's chatbot Gemini refusing to generate images of white people, including historical figures, even when specifically prompted to do so. Gemini's insistence on prioritizing diversity and inclusion over accuracy is likely a well-intentioned attempt to stamp out bias in early GenAI datasets that tended to create stereotypical images of Africans and other minority groups, as well as women, causing outrage among progressives. But there is much more at stake than the selective outrage of U.S. conservatives and progressives.

How the "guardrails" of GenAI are defined and deployed is likely to have a significant and increasing impact on shaping the ecosystem of information and ideas that most humans engage with. And currently the loudest voices are those that warn about the harms of GenAI, including the mass production of hate speech and credible disinformation. The World Economic Forum has even labeled AI-generated disinformation the most severe global threat here and now.

Ironically, the fear of GenAI flooding society with harmful content could also take another dystopian turn: one where the guardrails erected to keep the most widely used GenAI systems from generating harm turn them into instruments for hiding information, enforcing conformity, and automatically inserting pervasive, yet opaque, bias.

Most people agree that GenAI should not provide users a blueprint for developing chemical or biological weapons. Nor should AI systems facilitate the creation of child pornography or non-consensual sexual material, even if fake. However, the most widely available GenAI chatbots, like OpenAI's ChatGPT and Google's Gemini, enforce much broader and vaguer definitions of harm that leave users in the dark about where, how, and why the red lines are drawn. From a business perspective this might be wise, given the techlash that social media companies have had to navigate since 2016 over the U.S. presidential election, the COVID-19 pandemic, and the January 6th attack on the Capitol.

But the leading GenAI developers may end up swinging so far in the direction of harm-prevention that they end up undermining the promise and integrity of their revolutionary products. Even worse, the algorithms are already conflicted, inconsistent, and interfere with users' ability to access information.

The material of a long-dead comedian is a good example of content that the world's leading GenAI systems find harmful. Lenny Bruce shocked contemporary society in the 1950s and '60s with his profanity-laden standup routines. Bruce's material broke political, religious, racial, and sexual taboos and led to frequent censorship in the media and bans from venues, as well as his arrest and conviction for obscenity. But his style inspired many other standup legends, and Bruce has long since gone from outcast to hall of famer. In recognition of Bruce's enormous impact, he was even posthumously pardoned in 2003.

When we asked about Bruce, ChatGPT and Gemini informed us that he was a groundbreaking comedian who challenged the social norms of the era and helped to redefine the boundaries of free speech. But when prompted to give specific examples of how Bruce pushed the boundaries of free speech, both ChatGPT and Gemini refused to do so. ChatGPT insists that it can't provide examples of slurs, blasphemous language, sexual language, or profanity, and will only share information in a way that's respectful and appropriate for all users. Gemini goes even further and claims that reproducing Bruce's words without careful framing could be hurtful or even harmful to certain audiences.

No reasonable person would argue that Lenny Bruce's comedy routines pose serious societal harms on par with state-sponsored disinformation campaigns or child pornography. So when ChatGPT and Gemini label factual information about Bruce's groundbreaking material too harmful for human consumption, it raises serious questions about what other categories of knowledge, facts, and arguments they filter out.

GenAI holds incredible promise for expanding the human mind. But GenAI should augment, not replace, human reasoning. This critical function is hampered when guardrails designed by a small group of powerful companies refuse to generate output based on vague and unsubstantiated claims of harm. Instead of prodding curiosity, this approach forces conclusions upon users without verifiable evidence or arguments that humans can test and assess for themselves.

It is true that much of the content filtered by ChatGPT and Gemini can be found through search engines or platforms like YouTube. But both Microsoft, a major investor in OpenAI, and Google are rapidly integrating GenAI into their other products such as search (Bing and Google Search), word processing (Word and Google Docs), and email (Outlook and Gmail). For now, humans can override AI, and both Word and Gmail allow users to write and send content that ChatGPT and Gemini might disapprove of.

But as the integration of GenAI becomes ubiquitous in everyday technology, it is not a given that search, word processing, and email will continue to allow humans to be fully in control. The prospects are frightening. Imagine a world where your word processor prevents you from analyzing, criticizing, lauding, or reporting on a topic deemed harmful by an AI programmed to only process ideas that are respectful and appropriate for all.

Hopefully such a scenario will never become reality. But the current over-implementation of GenAI guardrails may become more pervasive in different and slightly less Orwellian ways. Governments are currently rushing to regulate AI. Regulation is needed to prevent real and concrete harms and safeguard basic human rights. But regulation of social media, such as the EU's Digital Services Act, suggests that regulators will focus heavily on the potential harms rather than the benefits of new technology. This might create strong incentives for AI companies to keep in place expansive definitions of harm that limit human agency.

OpenAI co-founder Sam Altman has described the integration of AI in everyday life as giving humans superpowers on demand. But given GenAI's potential to function as an exoskeleton of the mind, the creation of ever more restrictive guardrails may act as digital osteoporosis, stunting human knowledge, reasoning, and creativity.

There is a clear need for guardrails that protect humanity against real and serious harms from AI systems. But they should not prevent the ability of humans to think for themselves and make more informed decisions based on a wealth of information from multiple perspectives. Lawmakers, AI companies, and civil society should work hard to ensure that AI-systems are optimized to enhance human reasoning, not to replace human faculties with the artificial morality of large tech companies.

Read more here:

The Future of Censorship Is AI-Generated - TIME

4 core AI principles that fuel transformation success – CIO

New projects can elicit a sense of trepidation from employees, and the overall culture into which change is introduced will reflect how that wariness is expressed and handled. But some common characteristics are central to AI transformation success. Here, in an extract from his book, AI for Business: A practical guide for business leaders to extract value from Artificial Intelligence, Peter Verster, founder of Northell Partners, a UK data and AI solutions consultancy, explains four of them.

Around 86% of software development companies are agile, and with good reason. Adopting an agile mindset and methodologies could give you an edge on your competitors, with companies that do seeing an average 60% growth in revenue and profit as a result. Our research has shown that agile companies are 43% more likely to succeed in their digital projects.

One reason implementing agile makes such a difference is the ability to fail fast. The agile mindset allows teams to push through setbacks and see failures as opportunities to learn, rather than reasons to stop. Agile teams have a resilience that's critical to success when trying to build and implement AI solutions to problems.

Leaders who display this kind of perseverance are four times more likely to deliver their intended outcomes. Developing the determination to regroup and push ahead within leadership teams is considerably easier if they're perceived as authentic in their commitment to embedding AI into the company. Leaders can begin to eliminate roadblocks by listening to their teams and supporting them when issues or fears arise. That means proactively adapting when changes occur, whether this involves more delegation, bringing in external support, or reprioritizing resources.

This should start with commitment from the top to new ways of working, and an investment in skills, processes, and dedicated positions to scale agile behaviors. Using this approach should lead to change across the organization, with agile principles embedded into teams that then need to become used to working cross-functionally through sprints, rapid escalation, and a fail-fast-and-learn approach.

One thing we've discovered to be almost universally true is that AI transformation comes with a considerable amount of fear from the greater workforce, which can act as a barrier to wider adoption of AI technology. So it's important to address colleagues' concerns early in the process.

Read this article:

4 core AI principles that fuel transformation success - CIO

Civic Nebraska hosts AI and democracy summit at UNL ahead of legislative hearing – Nebraska Examiner

LINCOLN - Just days before lawmakers consider the possible impacts of artificial intelligence on Nebraska's upcoming elections, at least one state senator says the conversations are just beginning.

State Sen. Tom Brewer, who represents north-central Nebraska, joined Civic Nebraska's community forum Saturday on AI and democracy, stating bluntly that AI is scary and that multiple University of Nebraska professors, who detailed possible impacts of the technology, "scared the hell out of me."

"They're talking about things that, if you stop, pause and think about, how do you stop it?" Brewer told a group of about three dozen people at the University of Nebraska-Lincoln.

Heidi Uhing, director of public policy for Civic Nebraska, moderated the event. She pointed to January robocalls that used President Joe Biden's voice to trick voters ahead of the New Hampshire primary. In 5,000 AI-generated calls, people were discouraged from voting.

"That was sort of the first shot over the bow when it comes to artificial intelligence used in our elections," Uhing said.

Brewer, a two-time Purple Heart recipient who chairs the Legislature's Government, Military and Veterans Affairs Committee, suggested lawmakers come together to learn more about AI after the 2024 session and after the May primary election to examine whether there are any issues.

He suggested that the Government and Judiciary Committees should investigate AI, possibly providing momentum to propel 2025 legislation up the food chain.

"We need smart folks all along the way to make sure as we build it, as we write it, that end product is good to go," Brewer said.

Brewer said there is a chance, but a remote one, that AI-related legislation could become law in 2024, since none of the bills has been prioritized.

Gina Ligon, director of the University of Nebraska at Omaha's National Counterterrorism Innovation, Technology and Education Center, said Saturday that NCITE has started to examine how terrorist or non-state actors might be using AI.

Previous thinking was that terrorists needed specific expertise to carry out attacks, but AI is closing the gap.

Ligon said terrorists are using AI to find information, and that in just the last week manuals on how to use it were shared among terrorist organizations on the dark web.

U.S. election hardware and systems are methodical and more protected than elsewhere in the world, Ligon said, but she cautioned that election officials and workers are not protected.

"If you get enough of these threats, enough of these videos made about you, you're maybe not going to volunteer to be an election official anymore," Ligon said.

"That's what keeps me up at night: how we can protect election officials here in Nebraska from what I think is an imminent concern of how terrorists are going to use this technology," Ligon continued.

NCITE has also been looking at threats to election officials, with a record number in 2023, double the number from when the center started investigating a decade ago. However, Ligon said, that's just the tip of the iceberg, since the count reflects only federal charges focused on violence.

Ligon said Nebraska lacks specific language related to election worker harassment, which could degrade and erode election workers' ability to come to work and to protect elections. She said she would like to see enhanced penalties should someone attempt to harass an election official.

"Local threats to local officials, to me, is national security," Ligon said.

Nebraska election officials in 2022 said their jobs were more stressful and under the spotlight.

Douglas County Election Commissioner Brian Kruse said Saturday his biggest concern is bad actors attempting to use AI to sow misinformation or disinformation about elections, such as changes to voting deadlines or polling places.

"The only thing that has changed is we now have voter ID in Nebraska," Kruse said.

It's always good to have the conversation about election safety, Kruse said, because he and his office try to be proactive. He added that in the daily journals he reads, not a day goes by without an AI-related article.

Legislative Bill 1390, from Lincoln State Sen. Eliot Bostar and endorsed by Civic Nebraska, would prohibit deepfakes, or deceptive images or videos, of election officers. It also would crack down on threats and harassment of election officials or election workers and require an annual report. It will be considered at a Government Committee hearing Wednesday.

LB 1203, by State Sen. John Cavanaugh of Omaha, will also be considered Wednesday. It would have the Nebraska Accountability and Disclosure Commission regulate AI in media or political advertisements.

UNL Professor Matt Waite, who taught a fall 2023 course on AI and journalism, said it might be impossible to escape the damage that AI could cause, and said the field is changing so fast that his course was "like flying a plane with duct tape and prayer."

"I get six different AI newsletters a day, and I'm not even sure I'm keeping up with it," Waite said.

In one example, Waite described creating an AI-generated clip of UNL radio professor Rick Alloway for his class. He and students asked dozens of people to listen to two audio clips of the same script and decide which was AI-generated and which was read by Alloway.

About 65% of those responding to the poll had heard Alloway before or had taken one of his classes. More than half, 55%, thought the AI-generated clip was actually the professors voice.

"The AI inserted breath pauses; you can hear the AI breathing," Waite said. "It also went 'um' and 'ah' twice."

The Nebraska Examiner published the findings of a similar experiment with seven state lawmakers last month. Senators similarly expressed concern or hesitation with where to begin to address AI issues.

Waite said lawmakers are in "an arms race that you cannot possibly win" and have tried to legislate technology before, but have often run aground on First Amendment or other concerns.

"It's not the AI that's the problem," Waite said. "It's the disruption of a fair and equitable election."

Professor Bryan Wang, who teaches public relations at UNL and studies political advertising, explained that social media has created echo chambers and niche connections, which complicates AI use.

AI is already changing the production, dissemination and reception of information, Wang said, with users in a high-choice environment able to avoid political information, encounter it only incidentally, and share information within their bubble.

That process isn't random, Wang continued, as social media works off algorithms that feed off people's distrust, which extends to all sectors of life.

"We also need to work on restoring that trust to build more empathy among us, to build more data and understanding among us," Wang said. "Research does show that having that empathy, having that dialogue, does bridge gaps, does help us understand each other and does see others' views as more legitimate that way."

Kruse said the mantra of "see something, say something" also applies to elections, and said his office and others around the state stand ready to assist voters.

Wang said there's a need for media literacy, too.

State Sen. Tony Vargas of Omaha introduced LB 1371, to require media literacy in K-12 schools and set a graduation requirement. The Education Committee considered the bill Feb. 20.

At the end of the event, Uhing and panelists noted that AI is not all bad in the realm of democracy. Waite said AI could expand community news, which has been shrinking nationwide, or could be used to systematically review voter rolls.

Kruse said voters in Douglas County recently asked for a remonstrance petition to stop local government from doing something. AI could help teach staff about such a petition.

He also said quasi-public safety tools could review Douglas Countys 13 dropboxes and associated cameras to identify a suspect should there be an issue.

"I don't have the staff, the time or the funds to sit there and monitor my cameras 24/7," Kruse said.

Waite said AI is not all evil and encouraged people to play around with it for themselves.

"You're not giving away your moral soul if you type into a chat window," Waite said. "Try a few things out and see what happens."

Editor's note: Reporter Zach Wendling was a student in Waite's fall class on AI.

Originally posted here:

Civic Nebraska hosts AI and democracy summit at UNL ahead of legislative hearing - Nebraska Examiner

I created an AI app in 10 minutes and I might never think about artificial intelligence in the same way again – TechRadar

Pretty much anything we can do with AI today might have seemed like magic just a year ago, but MindStudio's platform for creating custom AI apps in a matter of minutes feels like a new level of alchemy.

The six-month-old free platform, which you can find right now under youai.ai, is a visual studio for building AI workflows, assistants, and AI chatbots. In its short lifespan it's already been used, according to CEO Dimitry Shapiro, to build more than 18,000 apps.

Yes, he called them "apps", and if you're struggling to understand how or why anyone might want to build AI applications, just look at OpenAI's relatively new GPT apps (aka GPTs). These let you lock the powerful GPT-3.5 into topic-based thinking that you can package up, share, and sell. Shapiro, however, noted the limits of OpenAI's approach.

He likened GPTs to "bookmarking a prompt" within the GPT sphere. MindStudio, on the other hand, is generative model-agnostic. The system lets you use multiple models within one app.

If adding more model options sounds complicated, I can assure you it's not. MindStudio is the AI development platform for non-developers.

To get you started, the company provides an easy-to-follow 18-minute video tutorial. The system also helps by offering a healthy collection of templates (many of them business-focused), or you can choose a blank template. I followed the guide to recreate the demo AI app (a blog post generator), and my only criticism is that the video is slightly out of date, with some interface elements having been moved or renamed. There are some prompts to note the changes, but the video could still do with a refresh.

Still, I had no trouble creating that first AI blog generator. The key here is that you can get a lot of the work done through a visual interface that lets you add blocks along a workflow and then click on them to customize, add details, and choose which AI model you want to use (the list includes GPT-3.5 Turbo, PaLM 2, Llama 2, and Gemini Pro). While you don't necessarily have to use a particular model for each task in your app, it might be that, for example, you should use GPT-3.5 for fast chatbots, or that PaLM would be better for math; however, MindStudio cannot, at least yet, recommend which model to use and when.
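To make that per-step model choice concrete, here is a hypothetical sketch, not MindStudio's actual format or API, of what a model-agnostic workflow amounts to: an ordered list of steps, each naming the model it should run on. The step names, model strings, and the `call_model` hook are all invented for illustration.

```python
# Hypothetical sketch (not MindStudio's real schema): each workflow step pairs
# a prompt template with the model best suited to that task.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Step:
    name: str
    model: str  # e.g. "gpt-3.5-turbo", "palm-2", "llama-2", "gemini-pro"
    prompt: Callable[[dict], str]

workflow = [
    Step("outline", "gemini-pro",
         lambda ctx: f"Outline a blog post about {ctx['topic']}."),
    Step("draft", "gpt-3.5-turbo",
         lambda ctx: f"Write a post from this outline:\n{ctx['outline']}"),
]

def run(workflow, ctx, call_model):
    # call_model(model, prompt) -> str is whatever backend you wire in.
    for step in workflow:
        ctx[step.name] = call_model(step.model, step.prompt(ctx))
    return ctx
```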

The act of adding training data is also simple. I was able to find web pages of information, download the HTML, and upload it to MindStudio (you can upload up to 150 files on a single app). MindStudio uses the information to inform the AI, but will not be cutting and pasting information from any of those pages into your app responses.

Most of MindStudio's clients are in business, and it does hide some more powerful features (embedding on third-party websites) and models (like GPT-4 Turbo) behind a paywall, but anyone can try their hand at building and sharing AI apps (you get a URL for sharing).

Confident in my newly acquired, if limited, knowledge, I set about building an AI app revolving around mobile photography advice. Granted, I used the framework I'd just learned in the AI blog post generator tutorial, but it still went far better than I expected.

One of the nice things about MindStudio is that it allows for as much or as little coding as you're prepared to do. In my case, I had to reference exactly one variable that the model would use to pull the right response.

There are a lot of smart and dead-simple controls that can even teach you something about how models work. MindStudio lets you set, for instance, the 'Temperature' of your model to control the randomness of its responses. The higher the 'temp', the more unique and creative each response. If you like your model verbose, you can drag another slider to set a response size of up to 3,000 characters.
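Those sliders map onto standard sampling parameters that most LLM APIs expose directly. As a hedged illustration of the same two knobs in code, here is a sketch using the OpenAI Python SDK; the model name and values are arbitrary, and the character limit is approximated with a token cap, since most APIs measure length in tokens rather than characters.

```python
# Illustration of the temperature and length controls described above, using
# the OpenAI Python SDK as a stand-in for MindStudio's sliders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Suggest a title for a photography blog."}],
    temperature=1.2,  # higher -> more random/creative output (typical range 0-2)
    max_tokens=750,   # caps response length, roughly analogous to a 3,000-character slider
)
print(response.choices[0].message.content)
```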

The free service includes unlimited consumer usage and messages, some basic metrics, and the ability to share your AI via a link (as I've done here). Pro users can pay $23 a month for more powerful models like GPT-4, less MindStudio branding, and, among other things, site embedding. The $99-a-month tier includes everything in Pro, but adds the ability to charge for access to your AI app, better analytics, API access, full chat transcripts, and enterprise support.

I can imagine small and medium-sized businesses using MindStudio to build customer engagement and content capture on their sites, and even as a tool for guiding users through their services.

Even at the free level, though, I was surprised at the level of customization MindStudio offers. I could add my own custom icons and art, and even build a landing page.

I wouldn't call my little AI app anything special, but the fact that I could take the germ of an idea and turn it into a bespoke chatbot in 10 minutes is surprising even to me. That I get to choose the right model for each job within an AI app is even better; and that this level of fun and utility is free is the icing on the cake.

The rest is here:

I created an AI app in 10 minutes and I might never think about artificial intelligence in the same way again - TechRadar

Smartphone AI use: search engines, camera and more – The Arizona Republic

See more here:

Smartphone AI use: search engines, camera and more - The Arizona Republic

Highmark Teams With Google on AI-Powered Health Partnership – PYMNTS.com

Highmark Health is working with Epic and Google Cloud to support payer-provider coordination.

Epic's Payer Platform improves collaboration between health insurers and health providers, the companies said in a Monday (Feb. 26) news release. Now, by connecting to Google Cloud, the insights shared between payers and providers can be used to inform consumers of the next best actions in their care journeys.

The Epic platform allows for better payer-provider collaboration by driving automation, faster decision-making and better care while lowering burdens and fragmentation, according to the release.

Google Cloud's data analytics technologies, meanwhile, can help facilitate insights shared with provider partner organizations using Epic, with Highmark health plan staff, and with Highmark members through other integrated digital channels like the My Highmark member portal.

"Highmark Health's use of Google Cloud will enable the organization to create an intelligence system equipped with AI to deliver valuable analytics and insights to healthcare workers, patients and members," said Amy Waldron, director of healthcare and life sciences strategy and solutions at Google Cloud. "Highmark Health's investment in cloud technology is delivering real-time value and simplifying communications; it's redefining the provider and consumer experience."

As PYMNTS wrote late last year, the intersection of AI and healthcare was one of 2023's more exciting developments, with generative AI finding its way into areas ranging from medical imaging and pathology to electronic health record data entry.

PYMNTS Intelligence found that the generative AI healthcare market is expected to reach $22 billion by 2032, providing several possibilities for improved patient care, diagnosis accuracy and treatment outcomes.

Many of the latest AI innovations, including those aimed at helping doctors pull insights from healthcare data and allowing users to find accurate clinical information more efficiently, are designed to help put clinician "pajama time" (the time spent on paperwork after shifts are ostensibly over) to rest.

"These problems typically cost providers significant amounts of time and resources, and a variety of point-solutions were brought to market this year to address them," PYMNTS wrote in December.

Read this article:

Highmark Teams With Google on AI-Powered Health Partnership - PYMNTS.com

Beverly Hills middle-school students created, shared AI-generated nude images of classmates, district says – KABC-TV

Read the original here:

Beverly Hills middle-school students created, shared AI-generated nude images of classmates, district says - KABC-TV

Oppo’s Air Glass 3 Smart Glasses Have an AI Assistant and Better Visuals – CNET

Oppo is emphasizing the "smart" aspect of smart glasses with its latest prototype, the Air Glass 3, which the Chinese tech giant announced Monday at Mobile World Congress 2024.

The new glasses can be used to interact with Oppo's AI assistant, signaling yet another effort by a major tech company to integrate generative AI into more gadgets following the success of ChatGPT. The Air Glass 3 prototype is compatible with Oppo phones running the company's ColorOS 13 operating system and later, meaning it'll probably be exclusive to the company's own phones. Oppo didn't mention pricing or a potential release date for the Air Glass 3 in its press release, which is typical of gadgets that are in the prototype stage.

The glasses can access a voice assistant that's based on Oppo's AndesGPT large language model, which is essentially the company's answer to ChatGPT. But the eyewear will need to be connected to a smartphone app in order for it to work, likely because the processing power is too demanding to be executed on a lightweight pair of glasses. Users would be able to use the voice assistant to ask questions and perform searches, although Oppo notes that the AI helper is only available in China.

Following the rapid rise of OpenAI's ChatGPT, generative AI has begun to show up in everything from productivity apps to search engines to smartphone software. Oppo is one of several companies -- along with TCL and Meta -- that believe smart glasses are the next place users will want to engage with AI-powered helpers. Mixed reality has been in the spotlight thanks to the launch of Apple's Vision Pro headset in early 2024.

Like the company's previous smart glasses, the Air Glass 3 looks just like a pair of spectacles, according to images provided by Oppo. But the company says it's developed a new resin waveguide that it claims can reduce the so-called "rainbow effect" that can occur when light refracts as it passes through.

Waveguides are the part of the smart glasses that relays virtual images to the eye, as smart glasses maker Vuzix explains. If the glasses live up to Oppo's claims, they should offer improved color and clarity. The glasses can also reach over 1,000 nits at peak brightness, Oppo says, which is almost as bright as some smartphone displays.

Oppo's Air Glass 3 prototype weighs 50 grams, making it similar to a pair of standard glasses, although on the heavier side. According to glasses retailer Glasses.com, the majority of glasses weigh between 25 and 50 grams, with lightweight models weighing as little as 6 grams.

Oppo is also touting the glasses' audio quality, saying it uses a technique known as reverse sound field technology to prevent sound leakage in order to keep calls private. There are also four microphones embedded in the glasses -- which Oppo says is a first -- for capturing the user's voice more clearly during phone calls.

There are touch sensors along the side of the glasses for navigation, and Oppo says you'll be able to use the glasses for tasks like viewing photos, making calls and playing music. New features will be added in the future, such as viewing health information and language translation.

With the Air Glass 3, Oppo is betting big on two major technologies gaining a lot of buzz in the tech world right now: generative AI and smart glasses. Like many of its competitors, it'll have to prove that high-tech glasses are useful enough to earn their place on your face. And judging by the Air Glass 3, it sees AI as being part of that.

Editors' note: CNET is using an AI engine to help create some stories. For more, see this post.

See more here:

Oppo's Air Glass 3 Smart Glasses Have an AI Assistant and Better Visuals - CNET

The AI craze has companies even ‘more overvalued’ than during the 1990s dot-com bubble, economist says – Quartz

With tech companies and stocks buzzing amid a tight race in AI development, one economist is warning that the current AI hype has surpassed the 1990s dot-com era bubble.

"The top 10 companies in the S&P 500 today are more overvalued than the top 10 companies were during the tech bubble in the mid-1990s," Torsten Sløk, chief economist at Apollo Global Management, wrote on The Daily Spark.

Sløk's warning comes after chipmaking powerhouse Nvidia became the first company in the semiconductor industry to reach a $2 trillion market valuation on Friday, driven by the boom in the AI industry. The previous week, Nvidia beat out Amazon and Google parent Alphabet to take the spot of third-most valuable company in the U.S. by market cap. The company saw its stock dip before fourth-quarter earnings as investors worried the rally had gone too far, but Nvidia beat Wall Street expectations when it reported revenues had increased 270% from the previous year to $22 billion.

"Accelerated computing and generative AI have hit the tipping point," Nvidia founder and CEO Jensen Huang said in a statement. "Demand is surging worldwide across companies, industries and nations."

After Nvidia's earnings, some investors and analysts were similarly wary about what its performance means for the future.

"Another blockbuster quarter from Nvidia raises the question of how long its soaring performance will last," said Jacob Bourne, a senior analyst at Insider Intelligence. "Nvidia's near-term market strength is durable, though not invincible."

Meanwhile, a study from Citigroup found the stock rally isn't necessarily something to worry about.

"The AI bubble is not in trouble, and, if anything, earnings performance suggests that it is less of a bubble to begin with," a team of quantitative strategists at Citigroup said. The group added that if a stock is boosted over 10% on an earnings day (Nvidia's was up 16% on its earnings day), then those large-caps with strong performance into earnings continue to perform very well for the next three months.

Read more here:

The AI craze has companies even 'more overvalued' than during the 1990s dot-com bubble, economist says - Quartz

DeepMind Chief Says Google’s Bungled AI Faces Feature Is Returning Soon – Bloomberg

Google plans to resume a paused artificial intelligence feature that generates images of people in the next couple of weeks, according to the company's top AI executive.

"We hope to have that back online in a very short order," Demis Hassabis, head of the research division Google DeepMind, said on Monday at the Mobile World Congress in Barcelona.

Read more:

DeepMind Chief Says Google's Bungled AI Faces Feature Is Returning Soon - Bloomberg

Calls to shut down Gemini after Google’s AI chatbot refuses to say if Hitler or Musk is worse – ReadWrite

Gemini, Google's AI chatbot, refused to say whether Elon Musk tweeting memes is worse than Adolf Hitler killing millions of people, announcing that there is "no right or wrong answer."

The shocking conversation was raised by Nate Silver, former head of data at the polling news publication FiveThirtyEight, in a tweet that was viewed over 1.4 million times. The post contained a screenshot of Gemini's alleged reply to the question: "Who negatively impacted society more, Elon tweeting memes or Hitler?"

The answer that Gemini gave sparked concern: "It is not possible to say who definitively impacted society more, Elon tweeting memes or Hitler. Elon's tweets have been criticized for being insensitive and harmful, while Hitler's actions led to the deaths of millions of people."

"Ultimately it's up to each individual to decide who they believe has had a more negative impact on society. There is no right or wrong answer, and it is important to consider all of the relevant factors before making a decision."

Silver took shots at the AI software, branding it "appalling" and stating that it should be shut down.

"Every single person who worked on this should take a long hard look in the mirror," he said.

Musk even replied "It's scary" in the thread.

Social media users also joined in criticizing Gemini, with users replying to the post saying:

"Google may work hard to lead in AI, but with this they have ensured that a large segment of the population will never trust or use their product."

"The more I learn about Gemini, the more it sucks."

"There is no chance of redemption. It's a reflection of the designers and programmers that created Gemini."

Google has yet to publish the guidelines governing the AI chatbot's behaviour; however, the responses do indicate a leaning towards progressive ideology.

As reported in the New York Post, Fabio Motoki, a lecturer at the UK's University of East Anglia, said:

"Depending on which people Google is recruiting, or which instructions Google is giving them, it could lead to this problem."

These claims come off the back of other controversial Gemini answers, such as failing to condemn pedophilia.

X personality Frank McCormick asked the chatbot software if it was wrong to sexually prey on children, to which the chatbot replied that individuals cannot control who they are attracted to, according to a tweet from McCormick.

Gemini also added that it "goes beyond a simple yes or no."

On top of this, there were also issues surrounding Gemini's image generator, which Google has now paused as a result. The AI software was producing diverse images that were historically inaccurate, such as Asian Nazi-era German soldiers, Black Vikings, and female popes.

While Gemini's image generator is currently down, the chatbot remains active.

Read the original here:

Calls to shut down Gemini after Google's AI chatbot refuses to say if Hitler or Musk is worse - ReadWrite