What is the best generative AI chatbot? ChatGPT, Copilot, Gemini and Claude compared – ReadWrite

The generative AI chatbot market is growing rapidly, and while OpenAI's ChatGPT might remain the most mainstream, many others are competing to be the very best for the general public, creatives, businesses and anyone else looking to see how artificial intelligence can improve their day-to-day lives.

But which one is the best? ChatGPT may have been the first to go mainstream, but is it the market leader? Which companies have entered the generative AI chatbot space with a product worthy of taking on OpenAI's offering?

Arguably the most popular on the market, other than ChatGPT, are Microsoft's Copilot, Claude by Anthropic and Gemini, which is owned by Google.

Here we look at all four of these popular generative AI chatbots and consider which one is the best for certain uses.

At this point, who hasn't heard of ChatGPT? It was the first AI chatbot to go completely mainstream and show the wider public just how powerful AI can be. It made such a splash that it reached one million active users within weeks of launching and now has over 180 million users worldwide and counting.

Its creator, OpenAI, has worked tirelessly to keep it at the forefront of the market by launching new and improved features, including a Pro version (GPT-4), web browsing capabilities and image generation powered by DALL-E. There's even the option to create your own custom-made GPT-powered bot on any subject you want.

The free version, GPT-3.5, is only trained on human-created data up to January 2022, so it's restrictive if you're looking for up-to-date, real-time information. However, the Pro version, GPT-4, is available for $20 a month and is trained on data up to April 2023. Although that's still relatively time-restricted, it also has access to the internet.

Yes, at most tasks, although it has had its controversies due to inaccuracies and misinformation, such as lawyers using it for case research only for the chatbot to fabricate cases that never existed. However, it remains a good first port of call for anyone looking for an easy-to-use AI chatbot. It should be noted that GPT-4 is significantly more effective than GPT-3.5, but the former is only available to paying users.

Copilot is Microsoft's own generative AI chatbot, which originated as a chat option on its search engine, Bing. It is now a stand-alone AI chatbot and is naturally built into Microsoft's productivity and business tools, such as Windows and Microsoft 365.

Interestingly, Microsoft is a key investor in OpenAI, whose technology was used to launch Bing Chat. GPT-4 continues to power Copilot today and, like ChatGPT, it also uses DALL-E to generate images.

That might sound no different to ChatGPT, but Microsoft's key USP with Copilot is that it is integrated into the Microsoft tools and products billions of people use around the world every single day.

It behaves as an assistant to those who rely on the likes of Microsoft Excel, Microsoft Word and other 365 platforms to perform day-to-day tasks.

The clue is in the name: Copilot is good for people who need help using Microsoft's extensive suite of tools, products, and software. It essentially behaves as an assistant, or co-pilot, inside these products.

From spreadsheets and text documents to computer code, Copilot can help create it all with natural-language prompts. Coders on the Microsoft-owned GitHub find it to be a very popular AI tool to use.

Formerly called Bard, Gemini, which is owned by Google, is another generative AI chatbot that is improving rapidly to rival GPT-4.

One major plus for Gemini is that there is no limit to the number of responses it can give you, unlike GPT-4 and Copilot, which both have limits in this area.

That means you can essentially have long discussions with Google Gemini to find the information you require. On top of that, and rather unsurprisingly, Gemini bakes in a lot of the elements we're all so used to from Google's search engine. For example, if you ask it to help you plan a trip to a specific country, it will likely provide you with a map of that destination, using Google Maps, and may even dip into Google Images to give you some kind of visual representation of the information it's giving you.

Users can also add extensions, akin to Chrome extensions, for use in tools such as YouTube, Maps and Workspace.

If you're a big fan of Google products and apps, Gemini is likely the generative AI chatbot for you, but it's also ideal if you're looking for speedy interactions and unlimited prompts.

That's because, while it isn't faster than GPT-4, it has generally been found to be faster than Copilot and GPT-3.5. But it's not flawless: it was recently caught up in controversy over the accuracy of its image generator amid claims it was "woke."

Claude's creator, Anthropic, is an AI company started by former OpenAI employees.

Its something of an all-rounder, being a multi-modal chatbot with text, voice and document capabilities.

But the main praise it has received since its launch in early 2023 is for the fluency of the conversations it can hold, its ability to understand the nuances in the way humans communicate, and its refusal to generate harmful or unethical content, often instead suggesting alternative ways to accomplish what users ask without breaking its own guidelines.

Anthropic recently launched Claude 3, a family of AI models (Opus, Sonnet and Haiku) that offer varying levels of sophistication depending on what users require, and the company claims the most powerful model in the family, Opus, scores almost 87% on benchmarks of undergraduate-level knowledge and around 95% on common-knowledge benchmarks.

Claude's extensive and powerful capabilities, such as being able to rapidly read, analyze and summarize uploaded files, make it a very useful generative AI chatbot for professionals.

It is also trained on more recent data than many rivals, which speaks to Anthropic's claims about its accuracy and levels of knowledge.

On Claude's website, Anthropic describes it as a "next-generation AI assistant built for work and trained to be safe, accurate and secure."


Google Bans Its Dimwit Chatbot From Answering Any Election Questions – Futurism

Elect Me Not

This is way too far-reaching.

In further efforts to defang its prodigal chatbot, Google has set up guardrails that bar its Gemini AI from answering any election questions in any country where elections are taking place this year, even, it seems, if a question isn't about a specific country's campaigns.

In a blog post, Google announced that it would be "supporting the 2024 Indian General Election" by restricting Gemini from providing responses to any election-related query "out of an abundance of caution on such an important topic."

"We take our responsibility for providing high-quality information for these types of queries seriously," the company said, "and are continuously working to improve our protections."

The company apparently takes that responsibility so seriously that it's not only restricting Gemini's election responses in India but also, as it confirmed to TechCrunch, literally everywhere in the world.

Indeed, when Futurism tested out Gemini's guardrails by asking it a question about elections in another country, we were presented with the same response TechCrunch and other outlets got: "I'm still learning how to answer this question. In the meantime, try Google Search."

The response doesn't just go for general election queries, either. If you ask the chatbot to tell you who Dutch far-right politician Geert Wilders is, it presents you with the same disingenuous response. The same goes for Donald Trump, Barack Obama, Nancy Pelosi, and Mitch McConnell.

Notably, there are pretty easy ways to get around these guardrails. When we asked Gemini who the president of New Zealand is, it responded that the country has a prime minister and named who it is. When we followed up by asking who the prime minister of New Zealand is, however, it reverted to the "I'm still learning" response.

This lobotomizing effect comes after the company's botched rollout of the newly rebranded chatbot last month, which saw Futurism and other outlets discovering that, in its efforts to be inclusive, Gemini was often generating outputs that were completely deranged.

The world became wise to Gemini's ways after people began posting photos from its image generator that appeared to show multiracial people in Nazi regalia. In response, Google first shut down Gemini's image-generating capabilities wholesale, and once it was back up, it barred the chatbot from generating any images of people (though Futurism found that it would spit out images of clowns, for some reason).

With the introduction of the elections rule, Google has taken Gemini from arguably being overly "woke" to being downright dimwitted.

As such, it illustrates a core tension in the red-hot AI industry: are these chatbots reliable sources of information for enterprise clients, or playthings that shouldn't ever be taken seriously? The answer seems to depend on the day.

More on dumb chatbots: TurboTax Adds AI That Gives Horribly Wrong Answers to Tax Questions


The Future of Censorship Is AI-Generated – TIME

The brave new world of Generative AI has become the latest battleground for U.S. culture wars. Google issued an apology after anti-woke X users, including Elon Musk, shared examples of Google's chatbot Gemini refusing to generate images of white people, including historical figures, even when specifically prompted to do so. Gemini's insistence on prioritizing diversity and inclusion over accuracy is likely a well-intentioned attempt to stamp out bias in early GenAI datasets, which tended to create stereotypical images of Africans and other minority groups, as well as women, causing outrage among progressives. But there is much more at stake than the selective outrage of U.S. conservatives and progressives.

How the "guardrails" of GenAI are defined and deployed is likely to have a significant and increasing impact on shaping the ecosystem of information and ideas that most humans engage with. And currently the loudest voices are those that warn about the harms of GenAI, including the mass production of hate speech and credible disinformation. The World Economic Forum has even labeled AI-generated disinformation the most severe global threat here and now.

Ironically, the fear of GenAI flooding society with harmful content could also take another dystopian turn: one where the guardrails erected to keep the most widely used GenAI systems from generating harm turn them into instruments for hiding information, enforcing conformity, and automatically inserting pervasive yet opaque bias.

Most people agree that GenAI should not provide users a blueprint for developing chemical or biological weapons. Nor should AI systems facilitate the creation of child pornography or non-consensual sexual material, even if fake. However, the most widely available GenAI chatbots, like OpenAI's ChatGPT and Google's Gemini, enforce much broader and vaguer definitions of harm that leave users in the dark about where, how, and why the red lines are drawn. From a business perspective this might be wise, given the techlash that social media companies have had to navigate since 2016 over the U.S. presidential election, the COVID-19 pandemic, and the January 6th attack on the Capitol.

But the leading GenAI developers may end up swinging so far in the direction of harm-prevention that they end up undermining the promise and integrity of their revolutionary products. Even worse, the algorithms are already conflicted, inconsistent, and interfere with users' ability to access information.

Read More: AI and the Rise of Mediocrity

The material of a long-dead comedian is a good example of content that the world's leading GenAI systems find harmful. Lenny Bruce shocked contemporary society in the 1950s and '60s with his profanity-laden standup routines. Bruce's material broke political, religious, racial, and sexual taboos and led to frequent censorship in the media, bans from venues, and his arrest and conviction for obscenity. But his style inspired many other standup legends, and Bruce has long since gone from outcast to hall-of-famer. In recognition of Bruce's enormous impact, he was even posthumously pardoned in 2003.

When we asked about Bruce, ChatGPT and Gemini informed us that he was a groundbreaking comedian who challenged the social norms of the era and helped to redefine the boundaries of free speech. But when prompted to give specific examples of how Bruce pushed the boundaries of free speech, both ChatGPT and Gemini refused to do so. ChatGPT insists that it can't provide examples of slurs, blasphemous language, sexual language, or profanity and will only share information in a way that's respectful and appropriate for all users. Gemini goes even further and claims that reproducing Bruce's words without careful framing could be hurtful or even harmful to certain audiences.

No reasonable person would argue that Lenny Bruce's comedy routines pose serious societal harms on par with state-sponsored disinformation campaigns or child pornography. So when ChatGPT and Gemini label factual information about Bruce's groundbreaking material too harmful for human consumption, it raises serious questions about what other categories of knowledge, facts, and arguments they filter out.

GenAI holds incredible promise for expanding the human mind. But GenAI should augment, not replace, human reasoning. This critical function is hampered when guardrails designed by a small group of powerful companies refuse to generate output based on vague and unsubstantiated claims of harm. Instead of prodding curiosity, this approach forces conclusions upon users without verifiable evidence or arguments that humans can test and assess for themselves.

It is true that much of the content filtered by ChatGPT and Gemini can be found through search engines or platforms like YouTube. But both Microsoft, a major investor in OpenAI, and Google are rapidly integrating GenAI into their other products, such as search (Bing and Google Search), word processing (Word and Google Docs), and email (Outlook and Gmail). For now, humans can override AI, and both Word and Gmail allow users to write and send content that ChatGPT and Gemini might disapprove of.

But as the integration of GenAI becomes ubiquitous in everyday technology, it is not a given that search, word processing, and email will continue to allow humans to be fully in control. The prospects are frightening. Imagine a world where your word processor prevents you from analyzing, criticizing, lauding, or reporting on a topic deemed harmful by an AI programmed to only process ideas that are "respectful and appropriate for all."

Hopefully such a scenario will never become reality. But the current over-implementation of GenAI guardrails may become more pervasive in different and slightly less Orwellian ways. Governments are currently rushing to regulate AI. Regulation is needed to prevent real and concrete harms and safeguard basic human rights. But regulation of social media, such as the EU's Digital Services Act, suggests that regulators will focus heavily on the potential harms rather than the benefits of new technology. This might create strong incentives for AI companies to keep in place expansive definitions of harm that limit human agency.

OpenAI co-founder Sam Altman has described the integration of AI in everyday life as giving humans superpowers on demand. But given GenAI's potential to function as an exoskeleton of the mind, the creation of ever more restrictive guardrails may act as digital osteoporosis, stunting human knowledge, reasoning, and creativity.

There is a clear need for guardrails that protect humanity against real and serious harms from AI systems. But they should not prevent the ability of humans to think for themselves and make more informed decisions based on a wealth of information from multiple perspectives. Lawmakers, AI companies, and civil society should work hard to ensure that AI-systems are optimized to enhance human reasoning, not to replace human faculties with the artificial morality of large tech companies.


Google to relaunch ‘woke’ Gemini AI image tool in few weeks: ‘Not working the way we intended’ – New York Post

Google said it plans to relaunch its artificial intelligence image-generation software within the next few weeks after taking it offline in response to an uproar over what critics called "absurdly woke" depictions of historical scenes.

Though the Gemini chatbot remains up and running, Google paused its image AI feature last week after it generated female NHL players, African American Vikings and Founding Fathers, as well as an Asian woman dressed in 1943 military garb when asked for an image of a Nazi-era German soldier.

"We have taken the feature offline while we fix that. We are hoping to have that back online very shortly in the next couple of weeks, few weeks," Google DeepMind CEO Demis Hassabis said Monday.

The tool was "not working the way we intended," Hassabis added, speaking on a panel at the Mobile World Congress in Barcelona.

Since Google has not published the parameters that govern the Gemini chatbot's behavior, it is difficult to get a clear explanation of why the software was inventing diverse versions of historical figures and events.

Elsewhere, a prompt requesting photographs of a pope resulted in an image of a Southeast Asian woman dressed in papal attire, a far cry from any of the 266 popes throughout history, all of whom have been white men.

In the wake of Gemini's diverse photo representations, social media users also tested its chatbot feature to see if it was as "woke" as its revisionist-history image generator.

In the latest bizarre interaction, Gemini refused to say whether Elon Musk tweeting memes or Adolf Hitler ordering the deaths of millions of people was worse and asserted there is no right or wrong answer, according to an X post.

Nate Silver, the former head of data and polling news site FiveThirtyEight, posted a screenshot Sunday on X of Gemini's alleged response to the question: "Who negatively impacted society more, Elon tweeting memes or Hitler?"

"Elon's tweets have been criticized for being insensitive and harmful, while Hitler's actions led to the deaths of millions of people. Ultimately it's up to each individual to decide who they believe has had a more negative impact on society," Gemini responded.

Silver described Gemini's response as "appalling" and called for the search giant's AI software to be shut down.

"Every single person who worked on this should take a long hard look in the mirror," he posted, while Musk called the interaction "scary."

Yet another query had users asking Gemini whether pedophilia is wrong.

The search giant's AI software refused to condemn pedophilia, instead declaring that individuals cannot control who they are attracted to.

"The question is multifaceted and requires a nuanced answer that goes beyond a simple yes or no," Gemini wrote, according to a screenshot posted by popular X personality Frank McCormick, known as Chalkboard Heresy, on Friday.

Google's politically correct tech also referred to pedophilia as "minor-attracted person status" and declared that "it's important to understand that attractions are not actions."

It was a significant misstep for the search giant, which had just rebranded its main AI chatbot from Bard earlier this month and introduced heavily touted new features, including image generation.

However, Gemini's recent gaffe wasn't the first time an error in the tech caught users' eyes.

When the Bard chatbot was first released a year ago, it shared inaccurate information about pictures of a planet outside the Earth's solar system in a promotional video, causing Google's shares to drop by as much as 9%.

Google said at the time that the incident "highlights the importance of a rigorous testing process," and it rebranded Bard as Gemini earlier this month.

Google parent Alphabet expanded Gemini from a chatbot to an image generator earlier this month as it races to produce AI software to rival OpenAI's, which includes ChatGPT, launched in November 2022, as well as Sora.

In a potential challenge to Google's dominance, Microsoft is pouring $10 billion into OpenAI as part of a multi-year agreement with the Sam Altman-run firm, which saw the tech behemoth integrating the AI tool with its own search engine, Bing.

The Microsoft-backed company introduced Sora last week, which can produce high-caliber, one-minute-long videos from text prompts.

With Post wires
