Americans’ use of ChatGPT is ticking up, but few trust its election information – Pew Research Center

It's been more than a year since ChatGPT's public debut set the tech world abuzz. And Americans' use of the chatbot is ticking up: 23% of U.S. adults say they have ever used it, according to a Pew Research Center survey conducted in February, up from 18% in July 2023.

The February survey also asked Americans about several ways they might use ChatGPT, including for workplace tasks, for learning and for fun. While growing shares of Americans are using the chatbot for these purposes, the public is more wary than not of what the chatbot might tell them about the 2024 U.S. presidential election. About four-in-ten adults have not too much or no trust in the election information that comes from ChatGPT. By comparison, just 2% have a great deal or quite a bit of trust.

Pew Research Center conducted this study to understand Americans' use of ChatGPT and their attitudes about the chatbot. For this analysis, we surveyed 10,133 U.S. adults from Feb. 7 to Feb. 11, 2024.

Everyone who took part in the survey is a member of the Center's American Trends Panel (ATP), an online survey panel that is recruited through national, random sampling of residential addresses. This way, nearly all U.S. adults have a chance of selection. The survey is weighted to be representative of the U.S. adult population by gender, race, ethnicity, partisan affiliation, education and other categories. Read more about the ATP's methodology.

Here are the questions used for this analysis, along with responses, and the survey methodology.

Below, we'll look more closely at these findings.

Most Americans still haven't used the chatbot, despite the uptick since our July 2023 survey on this topic. But some groups remain far more likely to have used it than others.

Differences by age

Adults under 30 stand out: 43% of these young adults have used ChatGPT, up 10 percentage points since last summer. Use of the chatbot is also up slightly among those ages 30 to 49 and 50 to 64. Still, these groups remain less likely than their younger peers to have used the technology. Just 6% of Americans 65 and up have used ChatGPT.

Differences by education

Highly educated adults are most likely to have used ChatGPT: 37% of those with a postgraduate or other advanced degree have done so, up 8 points since July 2023. This group is more likely to have used ChatGPT than those with a bachelor's degree only (29%), some college experience (23%) or a high school diploma or less (12%).

Since March 2023, we've also tracked three potential reasons Americans might use ChatGPT: for work, to learn something new or for entertainment.

The share of employed Americans who have used ChatGPT on the job increased from 8% in March 2023 to 20% in February 2024, including an 8-point increase since July.

Turning to U.S. adults overall, about one-in-five have used ChatGPT to learn something new (17%) or for entertainment (17%). These shares have increased from about one-in-ten in March 2023.

Differences by age

Use of ChatGPT for work, learning or entertainment has largely risen across age groups over the past year. Still, there are striking differences between these groups (those 18 to 29, 30 to 49, and 50 and older).

For example, about three-in-ten employed adults under 30 (31%) say they have used it for tasks at work, up 19 points from a year ago, with much of that increase happening since July. These younger workers are more likely than their older peers to have used ChatGPT in this way.

Adults under 30 also stand out in using the chatbot for learning. And when it comes to entertainment, those under 50 are more likely than older adults to use ChatGPT for this purpose.

Differences by education

A third of employed Americans with a postgraduate degree have used ChatGPT for work, compared with smaller shares of workers who have a bachelor's degree only (25%), some college (19%) or a high school diploma or less (8%).

Those shares have each roughly tripled since March 2023 for workers with a postgraduate degree, bachelors degree or some college. Among workers with a high school diploma or less, use is statistically unchanged from a year ago.

Using ChatGPT for other purposes also varies by education level, though the patterns are slightly different. For example, a quarter each of postgraduate and bachelor's degree holders have used ChatGPT for learning, compared with 16% of those with some college experience and 11% of those with a high school diploma or less education. Each of these shares is up from a year ago.

With more people using ChatGPT, we also wanted to understand whether Americans trust the information they get from it, particularly in the context of U.S. politics.

About four-in-ten Americans (38%) don't trust the information that comes from ChatGPT about the 2024 U.S. presidential election; that is, they say they have not too much trust (18%) or no trust at all (20%).

A mere 2% have a great deal or quite a bit of trust, while 10% have some trust.

Another 15% arent sure, while 34% have not heard of ChatGPT.

Distrust far outweighs trust regardless of political party. About four-in-ten Republicans and Democrats alike (including those who lean toward each party) have not too much or no trust at all in ChatGPT's election information.

Notably, however, very few Americans have actually used the chatbot to find information about the presidential election: Just 2% of adults say they have done so, including 2% of Democrats and Democratic-leaning independents and 1% of Republicans and GOP leaners.

These survey findings come amid growing national attention on chatbots and misinformation. Several tech companies have recently pledged to prevent the misuse of artificial intelligence, including chatbots, in this year's election. But recent reports suggest chatbots themselves may provide misleading answers to election-related questions.


Continue reading here:

Americans' use of ChatGPT is ticking up, but few trust its election information - Pew Research Center

ChatGPT Use Linked to Memory Loss, Procrastination in Students – Futurism


New research has found a worrying link between reliance on ChatGPT and memory loss and tanking grades in students, in an early but fascinating exploration of the swift impact that large language models have had on education.

As detailed in a new study published in the International Journal of Educational Technology in Higher Education, the researchers surveyed hundreds of university students, ranging from undergrads to doctoral candidates, over two phases, using self-reported evaluations. They were spurred on by witnessing more and more of their own students turn to ChatGPT.

"My interest in this topic stemmed from the growing prevalence of generative artificial intelligence in academia and its potential impact on students," study co-author Muhammad Abhas at the National University of Computer and Emerging Sciences in Pakistan told PsyPost. "For the last year, I observed an increasing, uncritical, reliance on generative AI tools among my students for various assignments and projects I assigned."

In the first phase, the researchers collected responses from 165 students who used an eight-item scale to report their degree of ChatGPT reliance. The items ranged from "I use ChatGPT for my course assignments" to "ChatGPT is part of my campus life."

To validate those results, they also conducted a more rigorous "time-lagged" second phase, in which they expanded their scope to nearly 500 students, who were surveyed three times at one- to two-week intervals.

Perhaps unsurprisingly, the researchers found that students under a heavy academic workload and "time pressure" were much more likely to use ChatGPT. They observed that those who relied on ChatGPT reported more procrastination, more memory loss, and a drop in GPA. And the reason why is quite simple: the chatbot, however good or bad its responses are, is making schoolwork too easy.

"Since ChatGPT can quickly respond to any questions asked by a user," the researchers wrote in the study, "students who excessively use ChatGPT may reduce their cognitive efforts to complete their academic tasks, resulting in poor memory."

There were a few curveballs, however.

"Contrary to expectations, students who were more sensitive to rewards were less likely to use generative AI," Abbas told PsyPost, suggesting that those seeking good grades avoided using the chatbot out of fear of getting caught.

It's possible that the relationship between ChatGPT usage and its negative effects is bidirectional, notes PsyPost. A student may turn to the chatbot because they already have bad grades, and not the other way around. It's also worth considering that the data was self-reported, which comes with its own biases.

That's not to exonerate AI, though. Based on these findings, we should be wary about ChatGPT's role in education.

"The average person should recognize the dark side of excessive generative AI usage," Abbas told Psypost. "While these tools offer convenience, they can also lead to negative consequences such as procrastination, memory loss, and compromised academic performance."

More on AI: Google's AI Search Caught Pushing Users to Download Malware

Read the original:

ChatGPT Use Linked to Memory Loss, Procrastination in Students - Futurism

Saving hours of work with AI: How ChatGPT became my virtual assistant for a data project – ZDNet


There's certainly been a lot of golly-wow, gee-whiz press about generative artificial intelligence (AI) over the past year or so. I'm certainly guilty of producing some of it myself. But tools like ChatGPT are also just that: tools. They can be used to help out with projects just like other productivity software.

Today, I'll walk you through a quick project where ChatGPT saved me a few hours of grunt work. While you're unlikely to need to do the same project, I'll share my thinking for the prompts, which may inspire you to use ChatGPT as a workhorse tool for some of your projects.

Also: 4 generative AI tools your enterprise can leverage to boost productivity

This is just the sort of project I would have assigned to a human assistant, back when I had human assistants. I'm telling you this fact because I structured the assignments for ChatGPT similarly to how I would have for someone working for me, back when I was sitting in a cubicle as a managerial cog of a giant corporation.

In a month or so, I'll post what I like to call a "stunt article." Stunt articles are projects I come up with that are fun and that I know readers will be interested in. The article I'm working on is a rundown of how much computer gear I can buy from Temu for under $100 total. I came in at $99.77.

Putting this article together involved looking on the Temu site for items to spotlight. For example, I found an iPad keyboard and mouse that cost about $6.

Also: Is Temu legit? What to know before you place an order

To stay under my $100 budget, I wanted to add all the Temu links to a spreadsheet, find each price, and then move things around until I got the exact total budget I wanted to spend.

The challenge was converting the Temu links into something useful. That's where ChatGPT came in.

The first thing I did was gather all my links. For each product, I copied the link from Temu and pasted it into a Notion page. When pasting a URL, Notion gives you the option to create bookmark blocks that not only contain links but also contain, crucially, product names. Here's a snapshot of that page:

As you can see, I've started selecting the blocks. Once you select all the blocks, you can copy them. I just pasted the entire set into a text editor, which looked like this:

The page looks ugly, but the result is useful.

Let's take a look at one of the data blocks. I switched my editor out of dark mode so it's easier for you to see the data elements in the block:

There are three key elements. The gold text shows the name of the product, surrounded by square brackets. The green text is the base URL of the product, surrounded by parentheses. A question mark separates the main page URL from all the random tracking data passed to the Temu page; I just wanted the main URL. The purple sections highlight the delimiters -- this is the data we're going to feed into ChatGPT.

I first fed ChatGPT this prompt:

Accept the following data and await further instructions.

Then I copied all the information from the text editor and pasted it into ChatGPT. At this point, ChatGPT knew to wait for more details.

The next step is where the meat of the project took place. I wanted ChatGPT to pull out the titles and the links, and leave the rest behind. Here's that prompt:

The data above consists of a series of blocks of data. At the beginning of each block is a section within [] brackets. For each block, designate this as TITLE.

Following the [] brackets is an open paren (followed by a web URL). For each block, extract that URL, but dispose of everything following the question mark, and also dispose of the question mark. Most URLs will then end in .html. We will designate this as URL.

For each block, display the TITLE followed by a carriage return, followed by the URL, followed by two newlines.

This process accomplished two things. It allowed me to name the data, so I could refer to it later. The process also allowed me to test whether ChatGPT understood the assignment.
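For readers who would rather script this step than prompt a chatbot, the same extraction can be done with a short regular expression. This is only a minimal sketch of the approach the prompts describe; the sample blocks and URLs below are hypothetical stand-ins for the Notion export, not the actual Temu data.

```python
import re

# Hypothetical stand-ins for the Notion-exported bookmark blocks described
# above: [product title](base URL?tracking junk).
sample = (
    "[10 Inch LCD Writing Tablet, Electronic Memo]"
    "(https://www.temu.com/item-a.html?_x_ads_id=123&refer_page=home)\n"
    "[iPad Keyboard And Mouse Combo]"
    "(https://www.temu.com/item-b.html?_x_ads_id=456)\n"
)

# [^\]]+ captures the bracketed TITLE; [^)?]+ captures the URL up to the
# first question mark; \??[^)]* discards the tracking query string.
block = re.compile(r"\[([^\]]+)\]\(([^)?]+)\??[^)]*\)")

for title, url in block.findall(sample):
    print(title)
    print(url)
    print()
```

Each block comes back as a (title, URL) pair with the tracking parameters already stripped, which is exactly the TITLE/URL output the prompt asks ChatGPT to produce.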

Also: How to use ChatGPT

ChatGPT did the assignment correctly but stopped about two-thirds through when its buffer ran out. I told the bot to continue and got the rest of the data.

Doing this process by hand would have involved lots of annoying cutting and pasting. ChatGPT did the work in less than a minute.

For my project, Temu's titles are just too much. Instead of:

10 Inch LCD Writing Tablet, Electronis Memo With Leather Protective Case, Electronic Drawing Board For Digital Handwriting Pad Doodle Board, Gifts For

I wanted something more like:

LCD writing tablet with case

I gave this assignment to ChatGPT as well. I reminded the tool that it had previously parsed and identified the data. I find that reminding ChatGPT about a previous step helps it more reliably incorporate that step into subsequent steps. Then I told it to give me titles. Here's that prompt:

You just created a list with TITLE and URL. Do you remember? For the above items, please summarize the TITLE items in 4-6 words each. Only capitalize proper words and the first word. Give it back to me in a bullet list.

I got back a list like this, but for all 26 items:

My goal was to copy and paste this list of clickable links into Excel so I could use column math to play around with the items I planned to order, adding and removing items until I got to my $100 budget. I wanted the names clickable in the spreadsheet because it would be much easier to manage and jump back and forth between Temu and my project spreadsheet.

So, my final ChatGPT task was to turn the list above into a set of clickable links. Again, I started by reminding the tool of the work it had completed. Then I told it to create a list with links:

Do you see the bulleted list you just created? That is a list of summarized titles.

Okay, make the same list again, but turn each summarized title into a live web link with its corresponding URL.

And that was that. I got all the links I needed and ChatGPT did all the grunt work. I pasted the results into my spreadsheet, chose the products, and placed the order.
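If you prefer links that stay clickable when pasted into Excel without round-tripping through a chatbot, one option is to emit HYPERLINK formulas directly. A minimal sketch; the item names and URLs below are made up for illustration.

```python
# Hypothetical (summarized title, URL) pairs standing in for the list above;
# each becomes an Excel HYPERLINK formula that pastes as a clickable cell.
items = [
    ("LCD writing tablet with case", "https://www.temu.com/item-a.html"),
    ("iPad keyboard and mouse", "https://www.temu.com/item-b.html"),
]

def hyperlink_formula(title: str, url: str) -> str:
    # Excel escapes a literal double quote inside a formula by doubling it.
    safe_title = title.replace('"', '""')
    return f'=HYPERLINK("{url}", "{safe_title}")'

for title, url in items:
    print(hyperlink_formula(title, url))
```

Pasting one formula per cell gives the same clickable-name column described above, with the summarized title as the link text.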

Also: 6 ways ChatGPT can make your everyday life easier

This is the final spreadsheet. There were more products when I started the process, but I added and removed them from the REMAINING column until I got the budget I was aiming for:

This was a project I could have done myself. But it would have required a ton of cutting and pasting, and a reasonable amount of extra thought to summarize all the product titles. It would have taken me two or three hours of grunt work and probably added to my wrist pain.

But by thinking this work through as an assignment that could be delegated, the entire ChatGPT experience took me less than 10 minutes. It probably took me less time to use ChatGPT to do all that grunt work and write this article than it would have taken me to do all that cutting, pasting, and summarizing.

Also: Thanks to my 5 favorite AI tools, I'm working smarter now

This sort of project isn't fancy and it isn't sexy. But it saved me a few hours of work I would have found tedious and unpleasant. Next time you have a data-parsing project, consider using ChatGPT.

Oh, and stay tuned. As soon as Temu sends me their haul, I'll post the detailed article about how much tech gear you can get for under $100. It'll be fun. See you there.

You can follow my day-to-day project updates on social media. Be sure to subscribe to my weekly update newsletter, and follow me on Twitter/X at @DavidGewirtz, on Facebook at Facebook.com/DavidGewirtz, on Instagram at Instagram.com/DavidGewirtz, and on YouTube at YouTube.com/DavidGewirtzTV.

Excerpt from:

Saving hours of work with AI: How ChatGPT became my virtual assistant for a data project - ZDNet

ChatGPT linked to declining academic performance and memory loss in new study – PsyPost

Students tend to turn to ChatGPT, a generative artificial intelligence tool, when faced with increased academic workload and time constraints, according to new research published in the International Journal of Educational Technology in Higher Education. The study also reveals a concerning trend: reliance on ChatGPT is linked to procrastination, memory loss, and a decline in academic performance. These findings shed light on the role of generative AI in education, suggesting both its widespread use and potential drawbacks.

The motivation behind this research stems from the explosive growth of generative AI technologies in educational settings. Despite their potential to assist in learning and research, there's a growing concern among educators about their misuse, especially in relation to academic integrity. Previous studies have largely focused on theoretical discussions without much empirical data to support the claims.

"My interest in this topic stemmed from the growing prevalence of generative artificial intelligence in academia and its potential impact on students," explained study author Muhammad Abbas, an associate professor at the FAST School of Management at the National University of Computer and Emerging Sciences in Pakistan. "For the last year, I observed an increasing, uncritical, reliance on generative AI tools among my students for various assignments and projects I assigned. This prompted me to delve deeper into understanding the underlying causes and consequences of its usage among them."

To understand these dynamics, the study was conducted in two phases. Initially, the researchers developed and validated a scale to measure university students' use of ChatGPT for academic purposes. They began by generating an initial set of 12 items, which was refined to 10 after expert evaluations for content validity. Further refinement through an exploratory factor analysis and reliability testing led to the final selection of eight items that effectively measured the extent of ChatGPT's academic use.

The scale included items such as: "I use ChatGPT for my course assignments," "I am addicted to ChatGPT when it comes to studies," and "ChatGPT is part of my campus life."

In the second phase of the study, the researchers sought to validate the findings from the first phase while also testing specific hypotheses related to ChatGPT's impact. The sample consisted of 494 university students who were surveyed across three timepoints, each separated by a 1-2 week interval.

This time-lagged approach allowed the researchers to first gather data on predictor variables (academic workload, time pressure, sensitivity to rewards, and sensitivity to quality), followed by the measurement of ChatGPT usage, and finally, the assessment of outcomes (procrastination, memory loss, and academic performance).

Abbas and his colleagues found that high levels of academic workload and time pressure were significant predictors of increased ChatGPT usage, suggesting that students under significant academic stress are more likely to turn to generative AI tools for assistance.

Students who were more sensitive to rewards were less inclined to use ChatGPT, indicating a possible concern about the academic integrity and the potential negative consequences of relying on AI for academic tasks.

Moreover, the study uncovered significant adverse effects of ChatGPT usage on students' personal and academic outcomes. Increased reliance on ChatGPT was associated with higher levels of procrastination and memory loss, and a negative impact on academic performance, as reflected in students' grade point averages. These findings suggest that while ChatGPT can be a valuable resource under certain circumstances, its excessive use might lead to detrimental effects on learning behaviors and outcomes.

"One surprising finding was the role of sensitivity to rewards," Abbas told PsyPost. "Contrary to expectations, students who were more sensitive to rewards were less likely to use generative AI. Another surprising finding was the positive relationship of generative AI usage with procrastination and self-reported memory loss and negative relationship between generative AI usage and academic performance."

Interestingly, the study did not find a significant relationship between sensitivity to quality and ChatGPT usage, suggesting that concerns over the quality of academic work do not necessarily influence the decision to use AI tools.

The findings highlight the potential dual impact of ChatGPT in academia, serving both as a helpful tool under academic pressure and as a potential risk to academic integrity and student learning outcomes.

"The average person should recognize the dark side of excessive generative AI usage," Abbas said. "While these tools offer convenience, they can also lead to negative consequences such as procrastination, memory loss, and compromised academic performance. Also, factors like academic workload, sensitivity to rewards, and time pressure play significant roles in influencing students' decision to use generative AI."

The study provides important details about ChatGPT usage among university students. But the study, like all research, includes limitations. For example, the time-lagged design, while more robust than cross-sectional designs, does not entirely eliminate the possibility of reciprocal relationships.

The study suggests a one-way impact of ChatGPT usage on students' academic workload and personal outcomes, such as procrastination and memory loss. However, it's conceivable that these relationships could be bidirectional. For instance, students who are prone to procrastination might be more inclined to use ChatGPT, rather than ChatGPT usage leading to increased procrastination.

The research opens the door to investigating the broader effects of ChatGPT usage on students' learning outcomes and health. Future research could delve into how reliance on generative AI tools affects cognitive skills, mental health, and overall learning experiences.

"My long-term goals involve expanding this line of research to further explore, through other methods such as experiments, how excessive use of generative AI affects students' outcomes," Abbas said.

The study, "Is it harmful or helpful? Examining the causes and consequences of generative AI usage among university students," was authored by Muhammad Abbas, Farooq Ahmed Jam, and Tariq Iqbal Khan.

Read this article:

ChatGPT linked to declining academic performance and memory loss in new study - PsyPost

Nearly a third of employed Americans under 30 used ChatGPT for work: Poll – The Hill

More employed Americans have used the artificial intelligence (AI) tool ChatGPT for work since last year, with the biggest increase among the younger portion of the workforce, according to a Pew Research poll released Tuesday.  

The survey found that 31 percent of employed Americans between 18 and 29 surveyed in February said they have used ChatGPT for tasks at work, up from 12 percent who said the same last March.

The share of employed Americans who said they use ChatGPT for work declines with age. Twenty-one percent of employed adults aged 30 to 49 said they use it, up from 8 percent last year, and just 10 percent of those aged 50 and older said the same, up from only 4 percent last year.

Overall, the share of employed Americans who have used ChatGPT for work rose to double digits in the past year — reaching 20 percent based on the February survey, up from just 8 percent last March. But in general, most Americans still have not used ChatGPT, according to the survey.  

Twenty-three percent of Americans said they have used ChatGPT. That amount is on the rise from July, when 18 percent said the same.  

Use of ChatGPT has particularly spiked among younger adults. Forty-three percent of adults younger than 30 said they have used ChatGPT in the February survey, compared to 27 percent of adults 30 to 49, 17 percent of adults 50 to 64 and 6 percent of adults 65 and older.  

As the tool becomes more popular, OpenAI has also faced scrutiny over the risks it poses for the spread of misinformation. OpenAI CEO Sam Altman faced questions about those risks and how they could impact the upcoming election when he testified before the Senate last year.

Pew found that 38 percent of Americans said they do not trust the information from ChatGPT about the 2024 presidential election. Only 2 percent said they trust it a “great deal” or “quite a bit” and 10 percent said they have “some” trust in ChatGPT.  

The distrust of ChatGPT about information about the 2024 election was fairly evenly split between Republicans and Democrats.  

The survey also found that very few Americans, roughly 2 percent, said they have used the chatbot to find information about the presidential election.  

The survey is based on data from the American Trends Panel created by Pew Research Center and was conducted from Feb. 7-11. A total of 10,133 panelists responded out of 11,117 who were sampled. The margin of error for the full sample of 10,133 respondents is 1.5 percentage points.  

See original here:

Nearly a third of employed Americans under 30 used ChatGPT for work: Poll - The Hill

AI is More Than ChatGPT: It is a Ticking Time Bomb for Women – Torch – St. John’s University


In recent months, image-based sexual abuse has been on the rise due to artificial intelligence (AI), mainly targeting high-profile women. It also poses an increased risk to the LGBTQ+ community, sex workers and women everywhere. A 2023 UPenn article on the rise of deepfake porn says, "Broadly speaking, minoritized women and femmes are more likely to experience image-based sexual abuse, as are single people and adolescents. LGBTQ populations are also at increased risk of harassment."

There are currently four states that have created laws addressing image-based sexual abuse. But with the growth of the internet, what society needs now more than anything is protection for the most vulnerable. When 14-year-old Mia Janine takes her own life as a result of bullying and her face being placed onto the bodies of porn stars, it makes me fear what AI could do next.

What do we turn to when we see our own faces reflected back at us on the news and social media? When one girl dies or is faced with an inconceivable amount of tragedy, all girls watching stand as testaments to her pain.

We turn on the news and see our politicians arguing for more law enforcement and for locking people up in prisons overflowing with blue-collar criminals. But there is something about the politics of it all that makes my stomach turn and keeps me from making eye contact with the girl's face staring back at me, especially knowing that the politicians raising their voices do so only from a sense of inherent whiteness and a lack of acknowledgement of women of color.

In order to stop these things from happening, the culture around women's existence must shift. Image-based sexual abuse is an example of the continuing effects AI pornography can have on generations of people. If boys grow up believing that behavior like this is okay, what will stop them from using it to harm the women they know? The cycle continues.

This is not a call for more policing or for longer prison sentences when tragedy does strike; this is a call for accountability, for resources available to victims and perpetrators, and for laws created to catch crimes before they increase.

Resources can include community-led programs about sexual assault prevention and affordable therapy for people dealing with the effects of abuse and assault on their lives. More than anything, this is a call to see one less smiling girl's eyes staring into mine, knowing that she died and nothing can be done to save her. Knowing that I cannot reach into my screen and pull her out.

These girls are suspended in time for me at the same age they were when they died. When the boy's mugshot appears on the screen, I try to imagine what he was like as a child and what happened down the line for everything to go so wrong for him.

Social media, deepfake images and an entire world of systemic, personal and institutional oppression foster a world where the most heinous thoughts are validated. In order to be here for our women, we need to start with our boys.

Continue reading here:

AI is More Than ChatGPT: It is a Ticking Time Bomb for Women - Torch - St. John's University

There Might Be No ChatGPT-like Apple Chatbot in iOS 18 – The Mac Observer

The recent months in the tech scene have been all about artificial intelligence and its impact, but one company that has been late to the party is Apple. Apple first hinted at in-house AI development during a recent earnings call, which followed earlier reports of the company reaching out to major publishers to use their data to train its AI's dataset, and of it canceling the Apple Car project and shifting that team to AI. However, according to Bloomberg's Mark Gurman, Apple might not debut a ChatGPT-like chatbot at all. Instead, the company is exploring deals with established tech giants such as China's Baidu, OpenAI, and Google about potential partnerships.

That said, Apple might instead focus on licensing already-established chatbots like Google's Gemini (formerly Bard) or OpenAI's ChatGPT. It might delay all plans to release an Apple chatbot, internally dubbed Ajax GPT.

Nevertheless, Mark Gurman believes AI will remain in the spotlight at the upcoming Worldwide Developers Conference (WWDC), slated for June 10-14, 2024, where we expect to see iOS 18, iPadOS 18, watchOS 11, tvOS 18, macOS 15, and visionOS 2. Although he doesn't delve into details, he mentions the company's plans to unveil new AI features, which could serve as the backbone of iOS 18. This suggests that even if Apple doesn't intend to bring a native AI chatbot to its devices, we might see a popular chatbot pre-installed on the phones or supported natively by the device. For reference, London-based consumer tech firm Nothing recently partnered with the Perplexity AI search engine to power its latest release, Phone (2a), and Apple might have similar plans, but with generative AI giants.

CEO Tim Cook recently told investors that the company will disclose its AI plans to the public later this year. Despite Apple's overall reticence on the topic, Cook has been notably vocal about the potential of AI, particularly generative AI.

More importantly, according to previous reports, he has indicated that generative AI will improve Siri's ability to respond to more complex queries and enable the Messages app to complete sentences automatically. Furthermore, other Apple apps such as Apple Music, Shortcuts, Pages, Numbers, and Keynote are expected to integrate generative AI functionality.

There Might Be No ChatGPT-like Apple Chatbot in iOS 18 - The Mac Observer

Universities build their own ChatGPT-like AI tools – Inside Higher Ed

When ChatGPT debuted in November 2022, Ravi Pendse knew fast action was needed. While the University of Michigan formed an advisory group to explore ChatGPT's impact on teaching and learning, Pendse, U-M's chief information officer, took it further.

Months later, before the fall 2023 semester, the university launched U-M GPT, a home-built generative AI tool that now boasts between 14,000 and 16,000 daily users.

"A report is great, but if we could provide tools, that would be even better," Pendse said, noting that Michigan is very concerned about equity. "U-M GPT is all free; we wanted to even the playing field."


The University of Michigan is one of a small number of institutions that have created their own versions of ChatGPT for student and faculty use over the last year. Others include Harvard University, Washington University, the University of California, Irvine and UC San Diego. The effort goes beyond jumping on the artificial intelligence (AI) bandwagon; for the universities, it's a way to overcome concerns about equity, privacy and intellectual property rights.


Students can use OpenAI's ChatGPT and similar tools for everything from writing assistance to answering homework questions. The newest version of ChatGPT costs $20 per month, while older versions remain free. The newer models have more up-to-date information, which could give students who can afford it a leg up.

That fee, no matter how small, creates a gap that is unfair to students, said Tom Andriola, UC Irvine's chief digital officer.

"Do we think it's right, given who we are as an organization, for some students to pay $20 a month to get access to the best [AI] models while others have access to lesser capabilities?" Andriola said. "Principally, it pushes us on an equity scale where AI has to be for all. We need to talk about AI for good, of course, but let's talk about not creating the next version of the digital divide."

UC Irvine publicly announced its own AI chatbot, dubbed ZotGPT, on Monday. Deployed in various capacities since October 2023, it remains in testing and is available only to staff and faculty. The tool can help them with everything from creating class syllabi to writing code.

Offering their own version of ChatGPT allows faculty and staff to use the technology without the concerns that come with OpenAI's version, Andriola said.

"When we saw generative AI, we said, 'We need to get people learning this as fast as possible, with as many people playing with this as we could,'" he said. "[ZotGPT] lets people overcome privacy concerns, intellectual property concerns, and gives them an opportunity of, 'How can I use this to be a better version of myself tomorrow?'"

That issue of intellectual property has been a major concern and a driver behind universities creating their own AI tools. OpenAI has not been transparent about how it trains ChatGPT, leaving many worried about their research and potential privacy violations.

Albert Lai, deputy faculty lead for digital transformation at Washington University, spearheaded the launch of WashU GPT last year.

WashU, along with UC Irvine and the University of Michigan, built its tools on Microsoft's Azure platform, which allows users to integrate the work into their institutions' applications. The platform uses open-source software available for free; in contrast, proprietary platforms like OpenAI's ChatGPT charge an upfront fee.

A look at WashU GPT, Washington University's own generative AI platform, which promises more privacy and IP security than ChatGPT.

Provided/Washington University

There are some downsides when universities train their own models. Because a university's GPT is based on the research, tests and lectures put in by an institution, it may not be as up to date as the commercial ChatGPT.

"But that's a price we agreed to pay; we thought about privacy versus what we're willing to give up," Lai said. "And we felt the value in maintaining privacy was higher in our community."

To ensure privacy is kept within a university's GPT, Lai encouraged other institutions to make sure any Microsoft institutional agreements include data protection for IP. UC Irvine and the University of Michigan also have agreements with Microsoft that any information put into their GPT models will stay within the university and not be made publicly available.

"We've developed a platform on top of [Microsoft's] foundational models to provide faculty comfort that their IP is protected," Pendse said. "Any faculty member, including myself, would be very uncomfortable putting a lecture and exams in an OpenAI model (such as ChatGPT), because then it's out there for the world."


It remains to be seen whether more universities will build their own generative AI chatbots.

Consulting firm Ithaka S+R formed a 19-university task force in September dubbed Making AI Generative for Higher Education to further study the use and rise of generative AI. The task force members include Princeton University, Carnegie Mellon University and the University of Chicago.

Lai and others encourage university IT officials to continue experimenting with what is publicly available, which can eventually morph into their own versions of ChatGPT.

"I think more places do want to do it and most places haven't figured out how to do it yet," he said. "But frankly, in my opinion, once you figure out the magic sauce it's pretty straightforward."


ChatGPT use linked to sinking academic performance and memory loss – Yahoo News UK

ChatGPT use is linked to bad results and memory loss. (Getty Images)

Using AI software such as ChatGPT is linked to poorer academic performance, memory loss and increased procrastination, a study has shown.

The AI chatbot ChatGPT can generate convincing answers to simple text prompts, and is already used weekly by up to 32% of university students, according to research last year.

The new study found that university students who use ChatGPT to complete assignments find themselves in a vicious circle: they don't give themselves enough time to do their work, are forced to rely on ChatGPT, and, over time, their ability to remember facts diminishes.

The research was published in the International Journal of Educational Technology in Higher Education. Scientists conducted interviews with 494 students about their use of ChatGPT, with some admitting to being "addicted" to using the technology to complete assignments.

The researchers wrote: "Since ChatGPT can quickly respond to any questions asked by a user, students who excessively use ChatGPT may reduce their cognitive efforts to complete their academic tasks, resulting in poor memory. Over time, over-reliance on generative AI tools for academic tasks, instead of critical thinking and mental exertion, may damage memory retention, cognitive functioning, and critical thinking abilities."

In the interviews, the researchers were able to pinpoint problems experienced by students who habitually used ChatGPT to complete their assignments.

The researchers surveyed students three times to work out what sort of student is most likely to use ChatGPT, and what effects heavy users experienced.

The researchers then asked questions about the effects of using ChatGPT.

Study author Mohammed Abbas, from the National University of Computer and Emerging Sciences in Pakistan, told PsyPost: "My interest in this topic stemmed from the growing prevalence of generative artificial intelligence in academia and its potential impact on students.


"For the last year, I observed an increasing, uncritical, reliance on generative AI tools among my students for various assignments and projects I assigned. This prompted me to delve deeper into understanding the underlying causes and consequences of its usage among them."

The study found that students who were results-focused were less likely to rely on AI tools to do tasks for them.

The research also found that students who relied on ChatGPT were not getting the full benefit of their education - and actually lost the ability to remember facts.

"Our findings suggested that excessive use of ChatGPT can have harmful effects on students' personal and academic outcomes. Specifically, those students who frequently used ChatGPT were more likely to engage in procrastination than those who rarely used ChatGPT," Abbas said.

"Similarly, students who frequently used ChatGPT also reported memory loss. In the same vein, students who frequently used ChatGPT for their academic tasks had a poor grade average."

The researchers found that students who felt under pressure were more likely to turn to ChatGPT - but that this then led to worsening academic performance and further procrastination and memory loss.

The researchers suggest that academic institutions should be mindful that heavy workloads can drive students to use ChatGPT.

The researchers also said academics should warn students of the negative impact of using the software.

"Higher education institutions should emphasise the importance of efficient time management and workload distribution while assigning academic tasks and deadlines," they said.

"While ChatGPT may aid in managing heavy academic workloads under time constraints, students must be kept aware of the negative consequences of excessive ChatGPT usage."


ChatGPT: student chatbot use ‘increasing loneliness’ – Times Higher Education

Universities should exercise caution as they outsource more functions to artificial intelligence (AI), according to the authors of a study that links student usage of ChatGPT to loneliness and a reduced sense of belonging.

Australian researchers surveyed 387 university students in different parts of the globe to understand the less-examined side effects of the rapid uptake of the OpenAI tool since its launch in November 2022.

They found evidence that while AI chatbots designed for information provision may be positively associated with student performance, once social support, psychological well-being, loneliness and sense of belonging are taken into account, chatbot use has a net negative effect on achievement, according to the paper published in Studies in Higher Education.

Alongside ChatGPT, which is primarily used by students for help with academic tasks, universities have adopted a range of chatbots to help with other processes, including admissions and student support.

"It seems students may be seeking out AI help instead of librarians, student advisers and counsellors, and this means universities have no visibility from a whole-of-student continuity-of-care perspective," said Joseph Crawford, a senior lecturer in management at the University of Tasmania and one of the authors of the study.

"Universities could save money deploying these tools at the expense of students spending time building their social skills and social capital."

The study found that students who reported using ChatGPT more displayed some evidence of feeling socially supported by the AI, explained Dr Crawford, who worked on the paper with Kelly-Ann Allen and Bianca Pani, both of Monash University, and Michael Cowling, based at Central Queensland University.

But the paper also shows that increased chatbot usage led to human relationships weakening, possibly without users even realising it.

Those who got their support from friends and family reported reduced loneliness and higher grade performance, and were less willing to leave university than those who reported being socially supported by the AI.

Dr Crawford said it was still not completely clear whether AI use causes lower performance, or whether students experiencing lower performance turn more often to AI.

But he recommended that universities find ways to promote peer networks, social opportunities for students and other ways of building social connections, as a way of insulating them from some of the more negative effects of AI use.

tom.williams@timeshighereducation.com


What is the best generative AI chatbot? ChatGPT, Copilot, Gemini and Claude compared – ReadWrite

The generative AI chatbot market is growing rapidly, and while OpenAI's ChatGPT may remain the most mainstream option, many others are competing to be the very best for the general public, creatives, businesses and anyone else looking to see how artificial intelligence can improve their day-to-day lives.

But which one is the best? ChatGPT may have been the first to go mainstream, but is it the market leader? Which companies have entered the generative AI chatbot space with a product worthy of taking on OpenAI's offering?

Arguably the most popular on the market, other than ChatGPT, are Microsoft's CoPilot, Claude by Anthropic, and Gemini, which is owned by Google.

Here we look at all four of these popular generative AI chatbots and consider which one is the best for certain uses.

At this point, who hasn't heard of ChatGPT? It was the first AI to go completely mainstream and show the wider public just how powerful AI can be. It made such a splash that it reached one million active users within weeks of launching, and it now has over 180 million users worldwide and counting.

Its creator, OpenAI, has worked tirelessly to keep it at the forefront of the market by launching new and improved features, including a Pro version (GPT-4), web-browsing capabilities, and image generation powered by DALL-E. There's even the option to create your own custom GPT-powered bot on any subject you want.

The free version, GPT-3.5, is trained only on human-created data up to January 2022, so it's restrictive if you're looking to use it for more up-to-date purposes involving real-time information. However, the Pro version, GPT-4, is available for $20 a month and is trained on data up to April 2023. Although that's still relatively time-restricted, it also has access to the internet.

So, is it any good? Yes, at most tasks, although it has had its controversies due to inaccuracies and misinformation, such as lawyers using it for case research only to find the chatbot had fabricated historic cases. However, it remains a good first port of call for anyone just looking for an easy-to-use AI chatbot. It should be noted that GPT-4 is significantly more effective than GPT-3.5, but the former is only available to paying users.

CoPilot is Microsoft's own generative AI chatbot, which originated as a chat option on its search engine, Bing. It is now a stand-alone AI chatbot and is naturally built into Microsoft's productivity and business tools, such as Windows and Microsoft 365.

Interestingly, Microsoft is a key investor in OpenAI, whose ChatGPT was used to launch Bing Chat. GPT-4 continues to power CoPilot today and, like ChatGPT, it also uses DALL-E to generate images.

That might sound no different from ChatGPT, but Microsoft's key USP with CoPilot is that it is integrated into all of the Microsoft tools and products billions of people around the world use every single day.

It behaves as an assistant to those who rely on the likes of Microsoft Excel, Microsoft Word and other 365 platforms to perform day-to-day tasks.

The clue is in the name: CoPilot is good for people who need help using Microsoft's extensive suite of tools, products, and software. It essentially behaves as an assistant, or co-pilot, inside these products.

From spreadsheets and text documents to computer code, CoPilot can help create it all with natural-language prompts. Coders on the Microsoft-owned GitHub find it a very popular AI assistant to use.

Formerly called Bard, Gemini is Google's generative AI chatbot, and it is improving rapidly to rival GPT-4.

One major plus to Gemini is that it has no limit to the number of responses it can give you, unlike GPT-4 and CoPilot, which both have limits in this area.

That means you can essentially have long discussions with Google Gemini to find the information you require. On top of that, and rather unsurprisingly, Gemini bakes in a lot of the elements we're all used to from Google's search engine. For example, if you ask it to help you plan a trip to a specific country, it will likely provide you with a map of that destination using Google Maps, and may even dip into Google Images to give you some kind of visual representation of the information it's giving you.

Users can also add extensions, akin to Chrome extensions, for use in tools such as YouTube, Maps and Workspace.

If you're a big fan of Google products and apps, Gemini is likely the generative AI chatbot for you, but it's also well suited to anyone looking for speedy interactions and unlimited prompts.

That's because, while it isn't faster than GPT-4, it has generally been found to be faster than CoPilot and GPT-3.5. But it's not flawless, and it was recently caught up in controversy over the accuracy of its image generator amid claims it was "woke."

Claude's creator, Anthropic, is an AI company started by former OpenAI employees.

It's something of an all-rounder: a multi-modal chatbot with text, voice, and document capabilities.

But the main praise it has received since its launch in early 2023 has been for the fluency of the conversations it can hold, its ability to understand the nuances in the way humans communicate, and its refusal to generate harmful or unethical content, often suggesting alternative ways to accomplish what users are asking without breaking its own guidelines.

Anthropic recently launched Claude 3, a family of AI models (Opus, Sonnet, and Haiku) offering varying levels of sophistication depending on what users require. Anthropic claims the most powerful model in the family, Opus, scores almost 87% on benchmarks of undergraduate-level knowledge and about 95% on common-knowledge benchmarks.

Claude's extensive and powerful capabilities, such as being able to rapidly read, analyze and summarize uploaded files, make it a very useful generative AI chatbot for professionals.

It is also trained on more recent data, which undoubtedly speaks to Anthropic's impressive claims of accuracy and levels of knowledge.

On Claude's website, Anthropic describes it as "a next-generation AI assistant built for work and trained to be safe, accurate and secure."



‘Materially better’ GPT-5 could come to ChatGPT as early as this summer – ZDNet


OpenAI has released several iterations of the large language model (LLM) powering ChatGPT, including GPT-4 and GPT-4 Turbo. Still, sources say the highly anticipated GPT-5 could be released as early as mid-year.

According to reports from Business Insider, GPT-5 is expected to be a major leap from GPT-4 and was described as "materially better" by early testers. The new LLM will offer improvements that have reportedly impressed testers and enterprise customers, including CEOs who've been demoed GPT bots tailored to their companies and powered by GPT-5.


A customer who got a GPT-5 demo from OpenAI told BI that the company hinted at new, yet-to-be-released GPT-5 features, including its ability to interact with other AI programs that OpenAI is developing. These AI programs, called AI agents by OpenAI, could perform tasks autonomously.

This feature hints at an interconnected ecosystem of AI tools developed by OpenAI, which would allow its different AI systems to collaborate to complete complex tasks or provide more comprehensive services.

The specific launch date for GPT-5 has yet to be released. OpenAI is reportedly training the model and will conduct red-team testing to identify and correct potential issues before its public release.


It's unclear whether GPT-5 will be released exclusively to Plus subscribers, who pay a $20-a-month fee to access GPT-4. GPT-3.5 powers the free tier of ChatGPT, but anyone can access GPT-4 Turbo in Copilot for free by choosing the Creative or Precise conversation styles.

OpenAI has been the target of scrutiny and dissatisfaction from users amid reports of quality degradation with GPT-4, making this a good time to release a newer and smarter model.


GPT-5 might arrive this summer as a materially better update to ChatGPT – Ars Technica

When OpenAI launched its GPT-4 AI model a year ago, it created a wave of immense hype and existential panic from its ability to imitate human communication and composition. Since then, the biggest question in AI has remained the same: When is GPT-5 coming out? During interviews and media appearances around the world, OpenAI CEO Sam Altman frequently gets asked this question, and he usually gives a coy or evasive answer, sometimes coupled with promises of amazing things to come.

According to a new report from Business Insider, OpenAI is expected to release GPT-5, an improved version of the AI language model that powers ChatGPT, sometime in mid-2024, likely during the summer. Two anonymous sources familiar with the company say that some enterprise customers have recently received demos of GPT-5 and related enhancements to ChatGPT.

One CEO who recently saw a version of GPT-5 described it as "really good" and "materially better," with OpenAI demonstrating the new model using use cases and data unique to his company. The CEO also hinted at other unreleased capabilities of the model, such as the ability to launch AI agents being developed by OpenAI to perform tasks automatically.

We asked OpenAI representatives about GPT-5's release date and the Business Insider report. They responded that they had no particular comment, but they included a snippet of a transcript from Altman's recent appearance on the Lex Fridman podcast.

Lex Fridman (01:06:13): So when is GPT-5 coming out again?

Sam Altman (01:06:15): I don't know. That's the honest answer.

Lex Fridman (01:06:18): Oh, that's the honest answer. Blink twice if it's this year.

Sam Altman (01:06:30): We will release an amazing new model this year. I don't know what we'll call it.

Lex Fridman (01:06:36): So that goes to the question of, what's the way we release this thing?

Sam Altman (01:06:41): We'll release in the coming months many different things. I think that'd be very cool. I think before we talk about a GPT-5-like model called that, or not called that, or a little bit worse or a little bit better than what you'd expect from a GPT-5, I think we have a lot of other important things to release first.

In this conversation, Altman seems to imply that the company is prepared to launch a major AI model this year, but whether it will be called "GPT-5" or be considered a major upgrade to GPT-4 Turbo (or perhaps an incremental update like GPT-4.5) is up in the air.

Like its predecessor, GPT-5 (or whatever it will be called) is expected to be a multimodal large language model (LLM) that can accept text or encoded visual input (called a "prompt"). And like GPT-4, GPT-5 will be a next-token prediction model, which means that it will output its best estimate of the most likely next token (a fragment of a word) in a sequence, which allows for tasks such as completing a sentence or writing code. When configured in a specific way, GPT models can power conversational chatbot applications like ChatGPT.
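The next-token loop described above can be sketched in a few lines. This is a minimal illustration only: the tiny vocabulary and hand-picked logit values below are invented stand-ins, since in a real LLM the scores come from a neural network conditioned on the entire prompt.

```python
import math

# Toy vocabulary standing in for an LLM's (much larger) token set.
VOCAB = ["code", "cat", "sentence", "."]

def softmax(logits):
    """Convert raw scores into a probability distribution."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def next_token(logits):
    """Greedy decoding: pick the single most likely next token."""
    probs = softmax(logits)
    best = max(range(len(probs)), key=lambda i: probs[i])
    return VOCAB[best], probs[best]

# Pretend the model saw the prompt "Please complete this ..." and
# assigned each vocabulary entry a score.
logits = [1.0, 0.2, 3.1, 0.5]
token, prob = next_token(logits)
print(token)  # "sentence" -- the highest-scoring token
```

Real systems repeat this step in a loop, appending each chosen token to the prompt before scoring again, and often sample from the distribution rather than always taking the maximum.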

OpenAI launched GPT-4 in March 2023 as an upgrade to its major predecessor, GPT-3, which emerged in 2020 (with GPT-3.5 arriving in late 2022). Last November, OpenAI released GPT-4 Turbo, which dramatically lowered the inference (running) costs of OpenAI's best AI model but has been plagued with accusations of "laziness," with the model sometimes refusing to answer prompts or complete coding projects as requested. OpenAI has attempted to fix the laziness issue several times.

LLMs like those developed by OpenAI are trained on massive datasets scraped from the Internet and licensed from media companies, enabling them to respond to user prompts in a human-like manner. However, the quality of the information provided by the model can vary depending on the training data used, and also based on the model's tendency to confabulate information. If GPT-5 can improve generalization (its ability to perform novel tasks) while also reducing what are commonly called "hallucinations" in the industry, it will likely represent a notable advancement for the firm.

According to the report, OpenAI is still training GPT-5, and after that is complete, the model will undergo internal safety testing and further "red teaming" to identify and address any issues before its public release. The release date could be delayed depending on the duration of the safety testing process.

Of course, the sources in the report could be mistaken, and GPT-5 could launch later for reasons aside from testing. So, consider this a strong rumor, but this is the first time we've seen a potential release date for GPT-5 from a reputable source. Also, we now know that GPT-5 is reportedly complete enough to undergo testing, which means its major training run is likely complete. Further refinements will likely follow.


Meet Kimi AI, The Chinese ChatGPT – Dataconomy

Moonshot AI, a Beijing-based startup backed by Alibaba, has introduced a Chinese ChatGPT: Kimi AI, an advanced chatbot designed to revolutionize how we interact with technology, capable of handling inputs of up to 2 million Chinese characters, or tens of thousands to hundreds of thousands of words, in its chatbox. But what exactly is Kimi, and what sets it apart in the world of AI? Let's dive in and explore this innovative creation.

Kimi is a large language model (LLM) chatbot developed by Moonshot AI, a Beijing-based startup. Essentially, a large language model is an artificial intelligence (AI) system trained on vast amounts of text data to understand and generate human-like text responses. These models have become increasingly sophisticated, enabling them to process and generate natural language with remarkable accuracy. In Kimi AI's case, it's like a smart robot that can talk to people using written words on a computer or phone screen. The chatbot can also connect to the web, and you can upload PDFs to it.

What makes Kimi AI special is that it's really good at understanding what people say and responding in a way that makes sense. Recently, the team behind Kimi made it even smarter by teaching it to understand really long messages: up to 2 million Chinese characters in one go.

Kimi Smart Assistant is a versatile tool designed to cater to the diverse needs of various groups of people.

Kimi makes life easier for these users by helping them complete tasks quickly and get things done better, especially in Chinese. However, if you want to use Kimi AI, you will need a WeChat account.


Moonshot AI is a Beijing-based startup specializing in advanced artificial intelligence (AI) technology. Founded in April 2023, the company has quickly become a leader in AI innovation, particularly in the development of large language models (LLMs) and conversational AI solutions.

Their flagship product, the Kimi AI chatbot, has gained widespread acclaim for its ability to understand and generate human-like text responses. Moonshot AI's mission is to harness AI's power to solve real-world problems and enhance human-machine interactions.

Through strategic partners like Alibaba and products like Kimi, we will probably be hearing its name more often soon.



ChatGPT is here. There is no going back. – The Presbyterian Outlook

Working on a college campus, you must be careful about mentioning the use of AI or the purpose of such a tool. If you're not, you may catch a professor reciting a monologue outlining the evils of AI in the academic world. And while there is some validity to that reaction and to concerns about this emerging technological tool, I find it to be just that: a tool.

I think part of what makes AI a challenge for the academic world is that there are no true rules or guides to help navigate this new instrument. Students can use it, and do use it, in ways others might deem harmful to academic integrity. I understand that side. I get the hesitation. We received this tool before we could develop an ethics for its use.

But in my experience, it is never a good practice to shut something out or make it so restrictive that it causes pushback and challenge. I try to embrace this tool instead of running away from it or ignoring it.


I am currently reworking my future lesson plans with the help of AI and finding ways to integrate its use alongside traditional coursework. To me, this process is fascinating. There is still a lot to learn about AI and plenty of need for ethical reflection on its use. But this much is clear to me: it can be helpful.

Several months ago, my coworkers and I decided to try ChatGPT. We wanted to see what all the fuss from our faculty colleagues was about. We sat together and thought of questions related to our work. We created the parameters for our topics and entered them all into ChatGPT. What resulted was a wild experience: outlines for emails, basic lesson plans, liturgy for worship, prayers and letters to community partners. The list went on and on. And it was captivating to engage in the process.

The items ChatGPT produced were not perfect. There were grammatical errors. There were some oddly worded phrases. All these things indicated that the product was not something created by a human. And that absence is the key to AI ethics for me.

We are just starting to build an ethical framework for AI in the academic world, and I hope the church is also thinking about such a thing. But the key for me is the human element. When working with ChatGPT to craft prayers, it does a decent job. But if you compare an AI prayer to a Chaplain Maggie prayer, the thing missing would be the heart: the human element.

ChatGPT has been introduced into our lives. There is no going back. We should find ways to integrate it into our work rather than push back or turn away from it. It can offer words when you are having a brain freeze or are too tired to think. It can offer a frame for your writing. It isn't perfect, but it is a tool that we can and should learn how to use; just don't forget to add your human uniqueness as you go along.



4 Reasons to Start Using Claude 3 Instead of ChatGPT – MUO – MakeUseOf


In the AI chatbot space, ChatGPT has been the undisputed leader since its launch in November 2022. However, with the release of Claude 3, it is increasingly looking like ChatGPT might be losing that title. Here are four reasons you should consider switching from ChatGPT to Claude.

Besides occasional science homework, programming tasks, and fun games, one of the most popular uses of AI chatbots is creative writing: drafting an email, cover letter, resume, article, or song lyrics; basically, one creative write-up or another. While ChatGPT has clearly been the favored option, owing mostly to its brand name and publicity, Claude has consistently delivered top-notch results, even in its earlier iterations. And it's not just about top-notch results: Claude, especially backed by the latest Claude 3 model, outperforms ChatGPT in a wide range of creative writing tasks.

As someone who has consistently used both chatbots since their launch, I find that Claude, although not necessarily the better model overall, is significantly better at producing write-ups that mimic human "creativity and imperfections." When I put both chatbots to the test, ChatGPT's write-ups, although grammatically correct, were full of tell-tale signs of an AI-written piece. Claude's write-ups read more naturally and sound human. Although not perfect, they are likely to be more engaging and creative.

Too frequently, ChatGPT leans on clichés and predictable word choices. Ask it to write about a business topic, and there's a good chance the opening paragraphs will include phrases like "In today's business environment," "In recent history," and "In the fast-paced digital landscape."

Putting that theory to the test bore it out: ChatGPT (GPT-3.5 and GPT-4) used clichéd intros in five out of five trials. Here are the first three samples:

Claude, on the other hand, produced varied openings in four out of five trials, avoiding the cliché from the very first trial:

Besides clichés, ChatGPT is more prone than Claude to scatter connective phrases like "in conclusion" and "as a result," and to add unnecessary emphasis with words like "undisputed," "critical," "unquestionable," and "must."

But besides these flaws, how do write-ups from each chatbot sound from a holistic point of view?

To top off the comparison, I asked both chatbots to produce rhyming rap lyrics on the theme "coconut to wealth." Claude seems the better option, but I'll let you be the judge.

Here's ChatGPT's take:

And here's Claude's take:

Early adopters of ChatGPT probably have a deep-rooted preference for the AI chatbot, but when it comes to creative writing, ChatGPT has some serious catching up to do in many areas.

Besides Google's Gemini AI chatbot, there are hardly any major AI chatbots in the market that offer Claude's multimodal features for free. With the free version of ChatGPT, all you get is text generation abilities, and that's it. No file uploads for analysis, no image processing, nothing else! On the other hand, Claude offers these premium features on its free tier. So, you can use image prompting or upload files for analysis on the chatbot for free if you use the free beta version of the bot.

A context window is the limit on how much text an AI chatbot can process in one go. Think of it as how many things you can keep in your memory (and be able to recall) at a time.

Depending on the version of ChatGPT you use, you get a context window of 4k, 8k, 16k, 32k, or 128k tokens. For clarity, a 4k context window can accommodate around 3,000 words, while a 32k window can accommodate around 24,000 words. On the ChatGPT free tier, you get the smallest windows (4k or 8k), meaning a few pages of text. The 16k and possibly 32k options are available on ChatGPT Plus or Team plans, while the 128k context window seems to be an exclusive reserve of the ChatGPT Enterprise plans.

Claude, by contrast, offers a 200k context window on both its free and premium plans, a significant improvement over ChatGPT's 4k or 8k free-tier window.

Why does this even matter? Well, the larger the context window, the more text you can process at a time without the AI chatbot making things up. Claude's 200k context window is equivalent to around 150,000 words. That means you could theoretically process 150,000 words at once with Claude, while ChatGPT could cap you at 24,000 words even on a premium tier. The difference is like night and day, at least in theory.
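
The token-to-word figures quoted above follow from a common rule of thumb of roughly 0.75 English words per token; here is a quick sketch, with the caveat that the ratio is an approximation rather than a tokenizer guarantee:

```python
# Rough word capacity of each context window, using the common
# heuristic that 1 token is about 0.75 English words. This is only
# an approximation; real tokenizers vary with the text.
WORDS_PER_TOKEN = 0.75

def approx_words(context_tokens: int) -> int:
    """Approximate how many words fit in a context window."""
    return int(context_tokens * WORDS_PER_TOKEN)

for tokens in (4_000, 32_000, 200_000):
    print(f"{tokens:>7,} tokens ~ {approx_words(tokens):>7,} words")
```

At that ratio, 4k tokens works out to about 3,000 words, 32k to about 24,000, and 200k to about 150,000, matching the figures above.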

Rate limits can be a pain. You're in the middle of an interesting prompting session when you get an alert that you've reached your limit and have to wait (sometimes hours!) for a reset. It's a huge joy killer and can set your work back hours. This happens on both ChatGPT and Claude, though, so they're on even ground there.

ChatGPT offers 40 messages every three hours on the Plus plan, while Claude offers 100 messages per eight hours. If you're not lost in the optics and do the math, ChatGPT's message limits are slightly better than Claude's. But there's more to it.

OpenAI dynamically throttles your usage limits. This means the limit you see isn't what you'll always get. It depends on the demand, as per OpenAI. On the other hand, despite having slightly lower usage limits, Claude can actually be more liberal with the limits depending on how much text you use per message.

So if, for instance, you send around 2,000 words per message (English sentences of roughly 15 to 25 words each), you should still get "at least" the 100 messages per eight-hour window. Two thousand words per prompt is generous; few people get that wordy in basic prompting. If you use fewer words per prompt, you should theoretically get even more messages per window.
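
The arithmetic behind the comparison is easy to check directly; a small sketch using the limits quoted above (which both vendors adjust over time):

```python
# Published message limits as quoted above; both vendors change these.
chatgpt_rate = 40 / 3    # ChatGPT Plus: 40 messages per 3 hours
claude_rate = 100 / 8    # Claude: 100 messages per 8 hours

print(f"ChatGPT Plus: {chatgpt_rate:.2f} messages/hour")
print(f"Claude:       {claude_rate:.2f} messages/hour")

# Claude's word budget per window at 2,000 words per message:
words_per_window = 100 * 2_000
print(f"Claude, 8-hour window: {words_per_window:,} words")
```

On paper ChatGPT's hourly rate (about 13.3) edges out Claude's (12.5), which is the "optics" point made above; the throttling and per-message word budget are what tilt daily use back toward Claude.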

So, while ChatGPT might seem more generous on paper, if you use both chatbots daily, Claude turns out to be the more generous option, although not necessarily at all times.

While early adopters may have a sentimental attachment to ChatGPT, it's becoming increasingly clear that Claude is a force to be reckoned with. As the AI landscape continues to evolve, it will be fascinating to see how these titans of conversational AI push each other to new heights, ultimately benefiting users with ever-improving and more capable chatbots. The future of AI-powered interactions has never been more exciting.

ChatGPT Predicts Ethereum Price Post-ETH ETF Approval – Watcher Guru

With the Bitcoin ETF witnessing a huge surge in inflows, driving the asset's price to a new all-time high, the anticipation surrounding an Ethereum ETF has reached fever pitch.

The price of Ethereum (ETH) has significantly increased due to the growing buzz around a potential Ethereum ETF, increasing institutional investments, and community support. Bulls have managed to push the ETH price towards the $4,000 mark, setting a new 52-week high and signaling a strong bullish sentiment in the market.

We shared the weekly ETH price chart with ChatGPT to gain further insight into the potential impact of an Ethereum ETF on the asset's price. Based on its analysis of the recent price action and utilizing Fibonacci retracement levels, ChatGPT predicts that Ethereum could surge to new heights, reaching a target of $6,835 amidst the growing enthusiasm surrounding the ETF launch.

The AI model further suggests that beyond this initial target, Ethereum may face psychological barriers at the $7,000 and $8,000 levels. However, with the increasing institutional interest and the potential influx of capital from the ETF, Ethereum could be well-positioned to overcome these obstacles and establish new all-time highs.
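
Fibonacci retracement and extension levels of the kind the analysis relies on are mechanical to compute from a swing low and swing high. A minimal sketch with hypothetical prices (not the chart data the article fed ChatGPT):

```python
# Fibonacci levels from a swing low and swing high. The prices below
# are hypothetical placeholders, not the article's chart data.
FIB_RATIOS = (0.236, 0.382, 0.5, 0.618, 0.786)

def retracements(low: float, high: float) -> dict:
    """Pullback levels measured down from the swing high."""
    span = high - low
    return {r: round(high - span * r, 2) for r in FIB_RATIOS}

def extensions(low: float, high: float) -> dict:
    """Upside targets projected above the swing high."""
    span = high - low
    return {r: round(high + span * r, 2) for r in FIB_RATIOS}

print(retracements(2_000.0, 4_000.0))  # e.g. the 0.5 level sits at 3000.0
print(extensions(2_000.0, 4_000.0))    # e.g. the 0.618 level sits at 5236.0
```

The levels are purely geometric; any predictive weight they carry comes from traders watching the same ratios, not from the math itself.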

As the cryptocurrency market eagerly awaits the launch of an Ethereum ETF, the road ahead for the second-largest cryptocurrency appears promising. The growing institutional adoption, combined with the asset's strong fundamentals and the robustness of the Ethereum ecosystem, has created fertile ground for significant price appreciation.

As Ethereum stands on the cusp of a potential ETF launch, the cryptocurrency market is abuzz with excitement and anticipation. With bulls driving the ETH price towards $4,000 and ChatGPT predicting even higher targets, the future looks bright for Ethereum and its investors.

9 mind-blowing things you can do with ChatGPT-4 Vision – Android Authority

Not long ago, OpenAI unveiled a new iteration of ChatGPT, known as ChatGPT-4V or 4 Vision. This version allows users to upload images, photos, text, or mathematical problems, and it can analyze these and respond to questions based on the uploaded image. This remarkably powerful feature is currently only available to ChatGPT Plus account holders. If you happen to be a subscriber, here are nine things you can do with ChatGPT-4 Vision.

For a full demonstration of how to use ChatGPT-4 Vision to accomplish these tasks, be sure to watch the video embedded above.

Andy Walker / Android Authority

I started with something simple: a picture of a house plant that looked like a cabbage growing in a pot. I asked ChatGPT with Vision to identify it. It turns out it's an ornamental kale or cabbage, known for its vibrant and colorful leaves and often used for decorative purposes.

One of the impressive features of ChatGPT-4 Vision is its ability to read handwritten notes and diagrams. I tested it with a flow chart that describes a simple loop. Despite the poor handwriting and drawing, ChatGPT managed to interpret it accurately and even converted it into Python code.
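
The article doesn't reproduce the flow chart or the code ChatGPT generated, but a hand-drawn "simple loop" diagram of the kind described might translate to something like this hypothetical reconstruction:

```python
# Hypothetical Python translation of a simple-loop flow chart:
# start -> set counter -> check condition -> do work -> increment -> repeat.
def count_to(limit: int) -> list:
    collected = []
    counter = 1
    while counter <= limit:        # decision diamond: keep looping?
        collected.append(counter)  # process box: do the work
        counter += 1               # increment, then loop back
    return collected

print(count_to(5))  # [1, 2, 3, 4, 5]
```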

I also used a chart from a recent video about the Tensor G3 chipset, which shows Geekbench 6 multi-core scores. I asked ChatGPT to convert this into a table, and it did so easily. This feature can be handy for converting graphical data into a more manageable format.

Next, I presented it with a visual math puzzle involving fireworks and stars. Despite some color confusion, ChatGPT correctly identified the fireworks that hadn't been launched.

I then uploaded a US dollar to Euro currency conversion chart covering a period of one year. ChatGPT accurately described the chart and even provided some analysis of the value of the US dollar compared to the Euro. However, it's important to note that ChatGPT should not be used for financial or medical advice.

For those interested in family history research, ChatGPT can be a useful tool. I uploaded an image of a UK census document from 1851 and asked ChatGPT to transcribe it. Despite a minor error in transcribing a surname, it did a commendable job.

I also tested it with an AI-generated image of a seascape with two moons. ChatGPT provided a detailed image description, including the smallest elements, demonstrating its ability to interpret and describe complex visuals.

Finally, I gave it an image of an unbalanced binary tree and an AVL tree and asked it to create a lesson plan for a high school computer science class based on the image. It developed a comprehensive lesson plan, demonstrating its potential as an educational tool.

In a fun final test, I uploaded a seemingly blank yellow image with a hidden message. ChatGPT successfully read the hidden message, written in a color that's barely noticeable to the naked eye. This demonstrates ChatGPT-4 Vision's ability to detect subtle color differences.
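
The same trick can be pulled off programmatically: stretching an image's tiny color differences to full contrast makes near-invisible text jump out. A stdlib-only sketch over a grid of grayscale values (illustrative only, not how ChatGPT detects it):

```python
# Contrast-stretch a grayscale "image" (rows of 0-255 values) so that
# barely different pixels are pushed to pure black and white.
def stretch_contrast(pixels):
    flat = [p for row in pixels for p in row]
    lo, hi = min(flat), max(flat)
    if hi == lo:                     # perfectly uniform: nothing hidden
        return [[0] * len(row) for row in pixels]
    scale = 255 / (hi - lo)
    return [[round((p - lo) * scale) for p in row] for row in pixels]

# A "blank" patch: background value 250, hidden strokes at 252.
hidden = [[250, 252, 250],
          [252, 252, 252],
          [250, 252, 250]]
print(stretch_contrast(hidden))  # 250 -> 0, 252 -> 255
```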

In conclusion, ChatGPT-4 Vision is a powerful tool with many applications, from image analysis to educational planning. It's an exciting development in the field of AI, and I look forward to seeing how it advances.

6 Unexpected Uses For ChatGPT You’ll Want To Try For Yourself – SlashGear

Many folks have recently been obsessed with knowing their personal colors, and for good reason. Your personal color tells you exactly what shades of clothing, makeup, and even accessories go well with your skin tone, hair, and eye color. When you wear the right shades, you look less dull and more youthful. However, getting a professional color analysis done can cost a pretty penny, sometimes even going over $500. If you're not too keen on shelling out that much just to know your color palette, you can just use ChatGPT.

Right in the GPT Store on your ChatGPT Plus account, you'll find the Personal Color Analysis GPT, and it does exactly that: determine what your personal colors may be. Here's how to use it:

It will then provide you with an analysis based on your photo. You can send additional prompts, like "Give me visual examples" or "I like wearing a cottagecore style, can you suggest specific clothes to buy?" should you need more information.

VTouch’s WIZPR RING Redefines Wearable Tech with ChatGPT AI Voice Command – stupidDOPE.com

In the realm of wearable technology, VTouch emerges as a pioneer with its latest innovation: the WIZPR RING. Unveiled at CES 2024, this fashion-forward accessory is not just about style; it is also a gateway to seamless interactions with artificial intelligence, featuring none other than ChatGPT.

Imagine whispering commands to your ring and AI responding with precision and speed. That's the promise of the WIZPR RING. Equipped with cutting-edge technology, it filters out background noise, responding only to the whisper of its wearer. But its capabilities extend beyond mere commands; it fosters ASMR-style conversations, making interactions with AI an intimate experience.

Privacy concerns? VTouch has you covered. The WIZPR RING's design incorporates a proximity sensor and microphone, ensuring that conversations remain confidential. With no wake words required, users can effortlessly activate the device by bringing it close to their lips; it deactivates automatically when withdrawn.

Functionality meets elegance in this wearable marvel. Users can seamlessly switch between AI tools like ChatGPT, Siri, Alexa, and more, all with a simple whisper. Contextual conversations? Just mutter "What's up?" or press a button, and the ring delves into your smartphone's calendar, messages, and even the weather.

But the WIZPR RING isn't just about convenience; it's a lifeline in emergencies. Press the button five times and it activates an SOS mode, alerting pre-set contacts and providing location data for swift assistance.

And let's talk design. Available in eight sizes and crafted from titanium and epoxy resin, this accessory blends seamlessly into your style. Plus, with up to 66 hours of connectivity on a single charge, it's ready for the long haul.

Excited to get your hands on one? VTouch has already kicked off an online campaign, with release and shipping slated for July 2024. Don't miss out on the future of wearable tech: whisper your commands to the WIZPR RING and step into a new era of AI integration.
