Americans’ use of ChatGPT is ticking up, but few trust its election information – Pew Research Center

It's been more than a year since ChatGPT's public debut set the tech world abuzz. And Americans' use of the chatbot is ticking up: 23% of U.S. adults say they have ever used it, according to a Pew Research Center survey conducted in February, up from 18% in July 2023.

The February survey also asked Americans about several ways they might use ChatGPT, including for workplace tasks, for learning and for fun. While growing shares of Americans are using the chatbot for these purposes, the public is more wary than not of what the chatbot might tell them about the 2024 U.S. presidential election. About four-in-ten adults have not too much or no trust in the election information that comes from ChatGPT. By comparison, just 2% have a great deal or quite a bit of trust.

Pew Research Center conducted this study to understand Americans' use of ChatGPT and their attitudes about the chatbot. For this analysis, we surveyed 10,133 U.S. adults from Feb. 7 to Feb. 11, 2024.

Everyone who took part in the survey is a member of the Center's American Trends Panel (ATP), an online survey panel that is recruited through national, random sampling of residential addresses. This way, nearly all U.S. adults have a chance of selection. The survey is weighted to be representative of the U.S. adult population by gender, race, ethnicity, partisan affiliation, education and other categories. Read more about the ATP's methodology.

Here are the questions used for this analysis, along with responses, and the survey methodology.

Below, we'll look more closely at who has used ChatGPT, what they use it for, and how much they trust it.

Most Americans still haven't used the chatbot, despite the uptick since our July 2023 survey on this topic. But some groups remain far more likely to have used it than others.

Differences by age

Adults under 30 stand out: 43% of these young adults have used ChatGPT, up 10 percentage points since last summer. Use of the chatbot is also up slightly among those ages 30 to 49 and 50 to 64. Still, these groups remain less likely than their younger peers to have used the technology. Just 6% of Americans 65 and up have used ChatGPT.

Differences by education

Highly educated adults are most likely to have used ChatGPT: 37% of those with a postgraduate or other advanced degree have done so, up 8 points since July 2023. This group is more likely to have used ChatGPT than those with a bachelor's degree only (29%), some college experience (23%) or a high school diploma or less (12%).

Since March 2023, we've also tracked three potential reasons Americans might use ChatGPT: for work, to learn something new or for entertainment.

The share of employed Americans who have used ChatGPT on the job increased from 8% in March 2023 to 20% in February 2024, including an 8-point increase since July.

Turning to U.S. adults overall, about one-in-five have used ChatGPT to learn something new (17%) or for entertainment (17%). These shares have increased from about one-in-ten in March 2023.

Differences by age

Use of ChatGPT for work, learning or entertainment has largely risen across age groups over the past year. Still, there are striking differences between these groups (those 18 to 29, 30 to 49, and 50 and older).

For example, about three-in-ten employed adults under 30 (31%) say they have used it for tasks at work, up 19 points from a year ago, with much of that increase happening since July. These younger workers are more likely than their older peers to have used ChatGPT in this way.

Adults under 30 also stand out in using the chatbot for learning. And when it comes to entertainment, those under 50 are more likely than older adults to use ChatGPT for this purpose.

Differences by education

A third of employed Americans with a postgraduate degree have used ChatGPT for work, compared with smaller shares of workers who have a bachelor's degree only (25%), some college (19%) or a high school diploma or less (8%).

Those shares have each roughly tripled since March 2023 for workers with a postgraduate degree, bachelor's degree or some college. Among workers with a high school diploma or less, use is statistically unchanged from a year ago.

Using ChatGPT for other purposes also varies by education level, though the patterns are slightly different. For example, a quarter each of postgraduate and bachelor's degree holders have used ChatGPT for learning, compared with 16% of those with some college experience and 11% of those with a high school diploma or less education. Each of these shares is up from a year ago.

With more people using ChatGPT, we also wanted to understand whether Americans trust the information they get from it, particularly in the context of U.S. politics.

About four-in-ten Americans (38%) don't trust the information that comes from ChatGPT about the 2024 U.S. presidential election. That is, they say they have not too much trust (18%) or no trust at all (20%).

A mere 2% have a great deal or quite a bit of trust, while 10% have some trust.

Another 15% aren't sure, while 34% have not heard of ChatGPT.

Distrust far outweighs trust regardless of political party. About four-in-ten Republicans and Democrats alike (including those who lean toward each party) have not too much or no trust at all in ChatGPT's election information.

Notably, however, very few Americans have actually used the chatbot to find information about the presidential election: Just 2% of adults say they have done so, including 2% of Democrats and Democratic-leaning independents and 1% of Republicans and GOP leaners.

These survey findings come amid growing national attention on chatbots and misinformation. Several tech companies have recently pledged to prevent the misuse of artificial intelligence, including chatbots, in this year's election. But recent reports suggest chatbots themselves may provide misleading answers to election-related questions.


Continue reading here:

Americans' use of ChatGPT is ticking up, but few trust its election information - Pew Research Center

Nearly a third of employed Americans under 30 used ChatGPT for work: Poll – The Hill

More employed Americans have used the artificial intelligence (AI) tool ChatGPT for work since last year, with the biggest increase among the younger portion of the workforce, according to a Pew Research poll released Tuesday.  

The survey found that 31 percent of employed Americans between 18 and 29 surveyed in February said they have used ChatGPT for tasks at work, up from 12 percent who said the same last March.

The share of employed Americans who said they use ChatGPT for work decreased with age. Twenty-one percent of employed adults aged 30 to 49 said they use it, up from 8 percent last year, and just 10 percent of those aged 50 and older said the same, up from only 4 percent last year.

Overall, the share of employed Americans who have used ChatGPT for work rose to double digits in the past year — reaching 20 percent based on the February survey, up from just 8 percent last March. But in general, most Americans still have not used ChatGPT, according to the survey.  

Twenty-three percent of Americans said they have used ChatGPT. That amount is on the rise from July, when 18 percent said the same.  

Use of ChatGPT has particularly spiked among younger adults. Forty-three percent of adults younger than 30 said they have used ChatGPT in the February survey, compared to 27 percent of adults 30 to 49, 17 percent of adults 50 to 64 and 6 percent of adults 65 and older.  

As the tool becomes more popular, OpenAI has also faced scrutiny over the risks it presents for the spread of misinformation. OpenAI CEO Sam Altman faced questions about those risks and how they could impact the upcoming election when he testified before the Senate last year.

Pew found that 38 percent of Americans said they do not trust the information from ChatGPT about the 2024 presidential election. Only 2 percent said they trust it a “great deal” or “quite a bit” and 10 percent said they have “some” trust in ChatGPT.  

The distrust of ChatGPT about information about the 2024 election was fairly evenly split between Republicans and Democrats.  

The survey also found that very few Americans, roughly 2 percent, said they have used the chatbot to find information about the presidential election.  

The survey is based on data from the American Trends Panel created by Pew Research Center and was conducted from Feb. 7-11. A total of 10,133 panelists responded out of 11,117 who were sampled. The margin of error for the full sample of 10,133 respondents is 1.5 percentage points.  
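For context, that figure can be sanity-checked with the standard margin-of-error formula for a proportion. This is our back-of-the-envelope illustration, not Pew's published calculation: for a simple random sample of n = 10,133, the maximum 95 percent margin of error would be

\[
\mathrm{MOE} = z\sqrt{\frac{p(1-p)}{n}} = 1.96\sqrt{\frac{0.5 \times 0.5}{10{,}133}} \approx 0.0097,
\]

or about 1 percentage point. The published 1.5 points is larger because weighting a panel to match the population inflates sampling variance by a design effect, here implicitly around (1.5/0.97)^2 ≈ 2.4.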

See original here:

Nearly a third of employed Americans under 30 used ChatGPT for work: Poll - The Hill

ChatGPT Use Linked to Memory Loss, Procrastination in Students – Futurism

You won't always have an AI chatbot in your pocket... right?

Brain Drain

New research has found a worrying link to memory loss and tanking grades in students who relied on ChatGPT, in an early but fascinating exploration of the swift impact that large language models have had in education.

As detailed in a new study published in the International Journal of Educational Technology in Higher Education, the researchers surveyed hundreds of university students, ranging from undergrads to doctoral candidates, over two phases, using self-reported evaluations. They were spurred on by witnessing more and more of their own students turn to ChatGPT.

"My interest in this topic stemmed from the growing prevalence of generative artificial intelligence in academia and its potential impact on students," study co-author Muhammad Abhas at the National University of Computer and Emerging Sciences in Pakistan told PsyPost. "For the last year, I observed an increasing, uncritical, reliance on generative AI tools among my students for various assignments and projects I assigned."

In the first phase, the researchers collected responses from 165 students who used an eight-item scale to report their degree of ChatGPT reliance. The items ranged from "I use ChatGPT for my course assignments" to "ChatGPT is part of my campus life."

To validate those results, they also conducted a more rigorous "time-lagged" second phase, in which they expanded their scope to nearly 500 students, who were surveyed three times at one- to two-week intervals.

Perhaps unsurprisingly, the researchers found that students under a heavy academic workload and "time pressure" were much more likely to use ChatGPT. They observed that those who relied on ChatGPT reported more procrastination, more memory loss, and a drop in GPA. And the reason why is quite simple: the chatbot, however good or bad its responses are, is making schoolwork too easy.

"Since ChatGPT can quickly respond to any questions asked by a user," the researchers wrote in the study, "students who excessively use ChatGPT may reduce their cognitive efforts to complete their academic tasks, resulting in poor memory."

There were a few curveballs, however.

"Contrary to expectations, students who were more sensitive to rewards were less likely to use generative AI," Abbas told PsyPost, suggesting that those seeking good grades avoided using the chatbot out of fear of getting caught.

It's possible that the relationship between ChatGPT usage and its negative effects is bidirectional, notes PsyPost. A student may turn to the chatbot because they already have bad grades, and not the other way around. It's also worth considering that the data was self-reported, which comes with its own biases.

That's not to exonerate AI, though. Based on these findings, we should be wary about ChatGPT's role in education.

"The average person should recognize the dark side of excessive generative AI usage," Abbas told Psypost. "While these tools offer convenience, they can also lead to negative consequences such as procrastination, memory loss, and compromised academic performance."

More on AI: Google's AI Search Caught Pushing Users to Download Malware

Read the original:

ChatGPT Use Linked to Memory Loss, Procrastination in Students - Futurism

Saving hours of work with AI: How ChatGPT became my virtual assistant for a data project – ZDNet

David Gewirtz/ZDNET

There's certainly been a lot of golly-wow, gee-whiz press about generative artificial intelligence (AI) over the past year or so. I'm certainly guilty of producing some of it myself. But tools like ChatGPT are also just that: tools. They can be used to help out with projects just like other productivity software.

Today, I'll walk you through a quick project where ChatGPT saved me a few hours of grunt work. While you're unlikely to need to do the same project, I'll share my thinking for the prompts, which may inspire you to use ChatGPT as a workhorse tool for some of your projects.

Also: 4 generative AI tools your enterprise can leverage to boost productivity

This is just the sort of project I would have assigned to a human assistant, back when I had human assistants. I'm telling you this fact because I structured the assignments for ChatGPT similarly to how I would have for someone working for me, back when I was sitting in a cubicle as a managerial cog of a giant corporation.

In a month or so, I'll post what I like to call a "stunt article." Stunt articles are projects I come up with that are fun and that I know readers will be interested in. The article I'm working on is a rundown of how much computer gear I can buy from Temu for under $100 total. I came in at $99.77.

Putting this article together involved looking on the Temu site for items to spotlight. For example, I found an iPad keyboard and mouse that cost about $6.

Also: Is Temu legit? What to know before you place an order

To stay under my $100 budget, I wanted to add all the Temu links to a spreadsheet, find each price, and then move things around until I got the exact total budget I wanted to spend.

The challenge was converting the Temu links into something useful. That's where ChatGPT came in.

The first thing I did was gather all my links. For each product, I copied the link from Temu and pasted it into a Notion page. When pasting a URL, Notion gives you the option to create bookmark blocks that not only contain links but also contain, crucially, product names. Here's a snapshot of that page:

As you can see, I've started selecting the blocks. Once you select all the blocks, you can copy them. I just pasted the entire set into a text editor, which looked like this:

The page looks ugly, but the result is useful.

Let's take a look at one of the data blocks. I switched my editor out of dark mode so it's easier for you to see the data elements in the block:

There are three key elements. The gold text shows the name of the product, surrounded by brackets. The green text is the base URL of the product, surrounded by parentheses. There's a question mark that separates the main page URL from all the random tracking data passed to the Temu page. I just wanted the main URL. The purple sections highlight the delimiters -- this is the data we're going to feed into ChatGPT.

I first fed ChatGPT this prompt:

Accept the following data and await further instructions.

Then I copied all the information from the text editor and pasted it into ChatGPT. At this point, ChatGPT knew to wait for more details.

The next step is where the meat of the project took place. I wanted ChatGPT to pull out the titles and the links, and leave the rest behind. Here's that prompt:

The data above consists of a series of blocks of data. At the beginning of each block is a section within [] brackets. For each block, designate this as TITLE.

Following the [] brackets is an open paren (followed by a web URL). For each block, extract that URL, but dispose of everything following the question mark, and also dispose of the question mark. Most URLs will then end in .html. We will designate this as URL.

For each block, display the TITLE followed by a carriage return, followed by the URL, followed by two newlines.

This process accomplished two things. It allowed me to name the data, so I could refer to it later. The process also allowed me to test whether ChatGPT understood the assignment.
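For readers who'd rather do this extraction step locally, here's a minimal Python sketch of the same logic, assuming the pasted blocks follow the [Title](URL?tracking) pattern described above; the sample product line is made up:

```python
import re

# Matches blocks like: [Product title](https://www.temu.com/item.html?tracking=...)
# Group 1 captures the TITLE between brackets; group 2 captures the URL up to,
# but not including, the first question mark.
PATTERN = re.compile(r"\[([^\]]+)\]\((https?://[^?\s)]+)")

def extract_items(text: str) -> list[tuple[str, str]]:
    """Return (TITLE, URL) pairs with the tracking junk after '?' discarded."""
    return PATTERN.findall(text)

# Hypothetical sample block, shaped like the Notion export described above.
raw = "[10 Inch LCD Writing Tablet](https://www.temu.com/tablet.html?_x_ads_channel=abc)"
for title, url in extract_items(raw):
    print(f"{title}\n{url}\n")
```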

Also: How to use ChatGPT

ChatGPT did the assignment correctly but stopped about two-thirds through when its buffer ran out. I told the bot to continue and got the rest of the data.

Doing this process by hand would have involved lots of annoying cutting and pasting. ChatGPT did the work in less than a minute.

For my project, Temu's titles are just too much. Instead of:

10 Inch LCD Writing Tablet, Electronis Memo With Leather Protective Case, Electronic Drawing Board For Digital Handwriting Pad Doodle Board, Gifts For

I wanted something more like:

LCD writing tablet with case

I gave this assignment to ChatGPT as well. I reminded the tool that it had previously parsed and identified the data. I find that reminding ChatGPT about a previous step helps it more reliably incorporate that step into subsequent steps. Then I told it to give me titles. Here's that prompt:

You just created a list with TITLE and URL. Do you remember? For the above items, please summarize the TITLE items in 4-6 words each. Only capitalize proper words and the first word. Give it back to me in a bullet list.

I got back a list like this, but for all 26 items:

My goal was to copy and paste this list of clickable links into Excel so I could use column math to play around with the items I planned to order, adding and removing items until I got to my $100 budget. I wanted the names clickable in the spreadsheet because it would be much easier to manage and jump back and forth between Temu and my project spreadsheet.

So, my final ChatGPT task was to turn the list above into a set of clickable links. Again, I started by reminding the tool of the work it had completed. Then I told it to create a list with links:

Do you see the bulleted list you just created? That is a list of summarized titles.

Okay, make the same list again, but turn each summarized title into a live web link with its corresponding URL.

And that was that. I got all the links I needed and ChatGPT did all the grunt work. I pasted the results into my spreadsheet, chose the products, and placed the order.
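If the chatbot's links ever don't paste cleanly into a spreadsheet, one fallback (our sketch, not the article's method) is to emit Excel's built-in HYPERLINK formula for each pair and paste the resulting column in:

```python
# Hypothetical (summary title, cleaned URL) pairs from the earlier steps.
items = [
    ("LCD writing tablet with case", "https://www.temu.com/tablet.html"),
    ("iPad keyboard and mouse", "https://www.temu.com/keyboard.html"),
]

# Excel renders =HYPERLINK(url, label) as a clickable cell.
for title, url in items:
    print(f'=HYPERLINK("{url}", "{title}")')
```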

Also: 6 ways ChatGPT can make your everyday life easier

This is the final spreadsheet. There were more products when I started the process, but I added and removed them from the REMAINING column until I got the budget I was aiming for:

This was a project I could have done myself. But it would have required a ton of cutting and pasting, and a reasonable amount of extra thought to summarize all the product titles. It would have taken me two or three hours of grunt work and probably added to my wrist pain.

But by thinking this work through as an assignment that could be delegated, the entire ChatGPT experience took me less than 10 minutes. It probably took me less time to use ChatGPT to do all that grunt work and write this article than it would have taken me to do all that cutting, pasting, and summarizing.

Also: Thanks to my 5 favorite AI tools, I'm working smarter now

This sort of project isn't fancy and it isn't sexy. But it saved me a few hours of work I would have found tedious and unpleasant. Next time you have a data-parsing project, consider using ChatGPT.

Oh, and stay tuned. As soon as Temu sends me their haul, I'll post the detailed article about how much tech gear you can get for under $100. It'll be fun. See you there.

You can follow my day-to-day project updates on social media. Be sure to subscribe to my weekly update newsletter, and follow me on Twitter/X at @DavidGewirtz, on Facebook at Facebook.com/DavidGewirtz, on Instagram at Instagram.com/DavidGewirtz, and on YouTube at YouTube.com/DavidGewirtzTV.

Excerpt from:

Saving hours of work with AI: How ChatGPT became my virtual assistant for a data project - ZDNet

ChatGPT linked to declining academic performance and memory loss in new study – PsyPost

Students tend to turn to ChatGPT, a generative artificial intelligence tool, when faced with increased academic workload and time constraints, according to new research published in the International Journal of Educational Technology in Higher Education. The study also reveals a concerning trend: reliance on ChatGPT is linked to procrastination, memory loss, and a decline in academic performance. These findings shed light on the role of generative AI in education, suggesting both its widespread use and potential drawbacks.

The motivation behind this research stems from the explosive growth of generative AI technologies in educational settings. Despite their potential to assist in learning and research, there's a growing concern among educators about their misuse, especially in relation to academic integrity. Previous studies have largely focused on theoretical discussions without much empirical data to support the claims.

"My interest in this topic stemmed from the growing prevalence of generative artificial intelligence in academia and its potential impact on students," explained study author Muhammad Abbas, an associate professor at the FAST School of Management at the National University of Computer and Emerging Sciences in Pakistan. "For the last year, I observed an increasing, uncritical, reliance on generative AI tools among my students for various assignments and projects I assigned. This prompted me to delve deeper into understanding the underlying causes and consequences of its usage among them."

To understand these dynamics, the study was conducted in two phases. Initially, the researchers developed and validated a scale to measure university students' use of ChatGPT for academic purposes. They began by generating an initial set of 12 items, which was refined to 10 after expert evaluations for content validity. Further refinement through an exploratory factor analysis and reliability testing led to the final selection of eight items that effectively measured the extent of ChatGPT's academic use.

The scale included items such as: "I use ChatGPT for my course assignments," "I am addicted to ChatGPT when it comes to studies," and "ChatGPT is part of my campus life."

In the second phase of the study, the researchers sought to validate the findings from the first phase while also testing specific hypotheses related to ChatGPT's impact. The sample consisted of 494 university students who were surveyed across three timepoints, each separated by a one- to two-week interval.

This time-lagged approach allowed the researchers to first gather data on predictor variables (academic workload, time pressure, sensitivity to rewards, and sensitivity to quality), followed by the measurement of ChatGPT usage, and finally, the assessment of outcomes (procrastination, memory loss, and academic performance).

Abbas and his colleagues found that high levels of academic workload and time pressure were significant predictors of increased ChatGPT usage, suggesting that students under significant academic stress are more likely to turn to generative AI tools for assistance.

Students who were more sensitive to rewards were less inclined to use ChatGPT, indicating a possible concern about the academic integrity and the potential negative consequences of relying on AI for academic tasks.

Moreover, the study uncovered significant adverse effects of ChatGPT usage on students' personal and academic outcomes. Increased reliance on ChatGPT was associated with higher levels of procrastination and memory loss, and a negative impact on academic performance, as reflected in students' grade point averages. These findings suggest that while ChatGPT can be a valuable resource under certain circumstances, its excessive use might lead to detrimental effects on learning behaviors and outcomes.

"One surprising finding was the role of sensitivity to rewards," Abbas told PsyPost. "Contrary to expectations, students who were more sensitive to rewards were less likely to use generative AI. Another surprising finding was the positive relationship of generative AI usage with procrastination and self-reported memory loss and negative relationship between generative AI usage and academic performance."

Interestingly, the study did not find a significant relationship between sensitivity to quality and ChatGPT usage, suggesting that concerns over the quality of academic work do not necessarily influence the decision to use AI tools.

The findings highlight the potential dual impact of ChatGPT in academia, serving both as a helpful tool under academic pressure and as a potential risk to academic integrity and student learning outcomes.

"The average person should recognize the dark side of excessive generative AI usage," Abbas said. "While these tools offer convenience, they can also lead to negative consequences such as procrastination, memory loss, and compromised academic performance. Also, factors like academic workload, sensitivity to rewards, and time pressure play significant roles in influencing students' decision to use generative AI."

The study provides important details about ChatGPT usage among university students. But the study, like all research, includes limitations. For example, the time-lagged design, while more robust than cross-sectional designs, does not entirely eliminate the possibility of reciprocal relationships.

The study suggests a one-way impact of ChatGPT usage on students' academic workload and personal outcomes, such as procrastination and memory loss. However, it's conceivable that these relationships could be bidirectional. For instance, students who are prone to procrastination might be more inclined to use ChatGPT, rather than ChatGPT usage leading to increased procrastination.

The research opens the door to investigating the broader effects of ChatGPT usage on students' learning outcomes and health. Future research could delve into how reliance on generative AI tools affects cognitive skills, mental health, and overall learning experiences.

"My long-term goals involve expanding this line of research to further explore, through other methods such as experiments, how excessive use of generative AI affects students' outcomes," Abbas said.

The study, "Is it harmful or helpful? Examining the causes and consequences of generative AI usage among university students," was authored by Muhammad Abbas, Farooq Ahmed Jam, and Tariq Iqbal Khan.

Read this article:

ChatGPT linked to declining academic performance and memory loss in new study - PsyPost

Visualizing the statistical connections behind ChatGPT – FlowingData

To gain a better understanding of how ChatGPT works under the hood, Santiago Ortiz repeatedly passed the prompt "Intelligence is" to the chatbot. Then he visualized the statistical paths to get to a response using a 3-D network. If you squint, the network kind of looks like a computer's brain.
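Ortiz's own pipeline isn't published here, but the general technique is easy to sketch: sample the same prompt many times and count how often each continuation follows each prefix. A minimal Python version, assuming the OpenAI Python SDK (v1+) with an API key in the environment; the model name and sample count are illustrative, not Ortiz's actual setup:

```python
from collections import defaultdict
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
PROMPT = "Intelligence is"
edges = defaultdict(int)  # (prefix, next_word) -> observed count

for _ in range(50):  # repeated sampling exposes the branching paths
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": PROMPT}],
        temperature=1.0,
        max_tokens=12,
    )
    prefix = PROMPT
    for word in resp.choices[0].message.content.split():
        edges[(prefix, word)] += 1
        prefix += " " + word

# The weighted edges form a tree of continuations; hand them to a graph
# library (networkx, d3, etc.) to draw a network like the one described above.
for (prefix, word), n in sorted(edges.items(), key=lambda kv: -kv[1])[:10]:
    print(n, repr(prefix), "->", word)
```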

Read more:

Visualizing the statistical connections behind ChatGPT - FlowingData

OpenAI outage reported as ChatGPT users face ‘internal server error’ message – The US Sun

Reports begin to drop

After a major spike in outages just after 9 pm EST, reports of issues began to drop.

Reports fell to under 500 before 10 pm EST, per Downdetector.

There are no official incidents reported on OpenAI's official status page.

Major cities including New York City, Los Angeles, and Phoenix are reporting the most outages with OpenAI.

Approximately 89% of users experiencing problems with OpenAI are reporting issues with the software's chatbot, ChatGPT.

ChatGPT stands for Chat Generative Pre-trained Transformer and is intended to generate responses based on various prompts.

Just 9% of users are reporting problems with OpenAI's website, while 3% are struggling to log in, according to Downdetector.

Visit link:

OpenAI outage reported as ChatGPT users face 'internal server error' message - The US Sun

I tried using ChatGPT to help me move across the country – Mashable

When you hit your 20-somethings, nobody tells you how to adult. So now, four months away from the end of my lease, I need to figure out how to move across the country for the first time by myself. I could ask my parents, but where's the fun in that? I am a big boy, after all. This means, obviously, as a reporter for an esteemed tech outlet, that the solution is artificial intelligence.

I mean why not, right?

Big Tech has spent billions of dollars trying to find meaningful ways for us to incorporate generative AI into our lives. So, why not use generative AI as my personal assistant and financial planner during my cross-country move from Austin to Chicago?

In theory, moving to a new city is an ideal test of the tasks OpenAI claims ChatGPT is good at, especially now that it can access the internet, lets users upload attachments and photos, and can be custom-built for specific needs. If ChatGPT can't ease some of my burdens when it comes to budgeting, searching for, financing, and driving to a new apartment that's more than 1,100 miles away, then perhaps it's not worth the GPUs it's built with.

Even before we look at apartments, I need ChatGPT to help me save money. On top of paying rent and utilities between now and June 1, I also started paying back my student loans in January, which runs me a cool $200 a month until the 2040s.

My goal is to paint a broad picture of what I need to do financially to have the money to make my move as stress-free as possible. ChatGPT, and thus this experiment, is inherently limited because the AI can't do all the financing for me. As much as I would love, in theory, for this AI to take care of my budgeting fully, it can't; nor do I feel comfortable allowing OpenAI to have access to my sensitive financial data.

Truly, this might be way above ChatGPT's pay grade considering it's a conversational AI with a hallucination problem (and not an arithmetic machine), but I gave both AIs the prompt specifying what I'm trying to achieve and asked it to calculate "how much I need to save from each paycheck to reasonably move to Chicago."

The AI was game to help.

You gotta be as specific as possible. The AIs don't appreciate having to do financial guesswork. Credit: OpenAI via screenshot

The big challenge was getting numbers that were realistic to my current situation especially when it came to dates. Between the time this was written and when my July 1 move-in date arrives, I'll receive 7 paychecks. During testing, if my prompt didn't specify July 1, 2024, ChatGPT would assume I meant July 1, 2025, and calculate for a year's worth of money. And even when I am specific about the dates, the AIs may still just hallucinate random numbers, like when ChatGPT randomly calculated for 10 pay periods instead of the 7 I'd specified.

The math was a little spotty, but with some tweaking to the prompts, ChatGPT gave me the ideal number that I should save based on my income and recurring payments. This matched up with the math I did by hand, which admittedly doesn't mean much that's why my degree is in journalism and not STEM.
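For reference, the underlying arithmetic is simple enough to check by hand or in a few lines of Python; the dollar figures below are made up, not Mashable's:

```python
# Spread the remaining moving cost across the paychecks left before move-in.
moving_cost = 2500.00    # hypothetical total: deposit, truck, gas, overlap rent
already_saved = 400.00   # hypothetical savings so far
paychecks_left = 7       # pay periods before the July 1, 2024 move-in

per_paycheck = (moving_cost - already_saved) / paychecks_left
print(f"Save ${per_paycheck:.2f} from each of {paychecks_left} paychecks")
# -> Save $300.00 from each of 7 paychecks
```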

Now that I know how much I need to save, I need to get a shortlist of places that fit within my budget. My range for rent for my next apartment is $1,000-$1,500. I'm not looking for anything fancy in the Windy City, but a studio/1 bed with an in-unit washer/dryer would be perfect.

Unfortunately, OpenAI has slowly wound down plugins, so we're going to have to rely on CustomGPTs, an evolved version of plugins that allows users to create their own chatbots, to specify our real estate needs for this experiment. I used three different CustomGPTs: Apartment/House Rental Finder, Apartment Scout, and Apartment Finder.

So far, the running trend with these GPTs, and this experiment in general, is that I need to be extremely specific, which ruins the fun for me, because I'm trying to offload all my thinking to the robot. It's clearly a worse experience than just Googling. For instance, Apartment Scout gives a bulleted list of neighborhoods, and when I tell it which one I like, it goes:

It looks like I'm encountering some technical difficulties accessing the specific rental listings directly from the sources I usually rely on. However, you can find suitable studio or 1-bedroom apartments within your price range and criteria by checking out popular rental websites such as Zillow, Apartments.com, and Realtor.com.

What do you mean go somewhere else? Credit: OpenAI via screenshot

On another attempt, it provides a few links to actual apartment listings.

Now it wants to act right. Credit: OpenAI via screenshot

At least this GPT provides a link for me, unlike Apartment/House Rental Finder, which uses Craigslist to find apartments that fit my needs and then tells me to go look there myself if I ask for links to the apartments it listed.

Very helpful. Credit: OpenAI via screenshot

Ultimately, Apartment Finder was also not of much help and also told me to go somewhere else to find what I needed.

Credit: OpenAI via screenshot

The results (or lack thereof) are not surprising. ChatGPT was designed to tell the user whatever will make the user feel happy or smart, rather than be any sort of leader or authority, so if you are unsure about the broad strokes of where you want to live, then you're not gonna have much luck.

I guess I'll stick to late-night scrolling on Zillow.

The final step of this move is the packing and planning of the drive. One of the perks of ChatGPT is that you can upload pictures now, so I decided to see if showing ChatGPT my room would help it give me advice on how to pack up my belongings for the move.

Short answer: It didn't. Long answer: It didn't, and instead gave me generic advice on how to rip apart my room:

First, declutter the room, separating items to keep, donate, or discard. Gather moving supplies: boxes, bubble wrap, packing paper, tape, and markers. Pack items from least used to most used, labeling boxes with contents and destination room. Dismantle furniture, wrapping pieces in moving blankets or bubble wrap. Protect the electronics with bubble wrap and place them in boxes. Roll up the rug and secure with tape. Pack the artwork with bubble wrap and place in picture boxes. Use wardrobe boxes for hanging clothes and suitcases for others. Pack a box of essentials separately. Clean the room once emptied.

This advice, especially that neat tidbit about rolling up the rug in my room, wasn't all that useful (hint: my floor is carpeted).

Don't mind my room, it's a little dirty Credit: OpenAI via screenshot

Okay, well, that was a bust, so what about the drive? Unsurprisingly, this is where ChatGPT was the most helpful. I gave it two scenarios: one where I use a rented U-Haul to drive to Chicago, and another where I keep my car. According to the U-Haul website and the math done by ChatGPT, renting and driving a U-Haul costs around $700+.

The mover's dilemma: Sell all my stuff or spend $700 on a U-Haul? Credit: OpenAI via screenshot

If I drive my car, a 2012 Dodge Durango, the cost of just the gas is only around $240.

After calculating costs, ChatGPT gave me some routes to take. Of course, that information is superfluous since I can use Google Maps anyway, but it's nice to know now where my little adventure will take me. According to my friend from Illinois, the drive from Austin to Chicago is not great, so I'm glad ChatGPT told me to have some podcasts on deck to keep me entertained.

Here's the TL;DR: Don't use ChatGPT to plan a move across the country.

It's not that ChatGPT can't be helpful, 'cause it can. The chatbot helped me visualize a broad overview of my finances and gave me some useful tips and tricks for packing and route-planning. However, you need to be so hyper-specific with the prompts that all that time tinkering could be spent, you know, planning your move yourself.

Wanna use the CustomGPTs to help find apartments? Sorry, they'll just tell you to use Zillow. Wanna use ChatGPT to pin down how much to save out of your paychecks? Unless you're willing to get into the weeds about your financial security, good luck getting it to not just make shit up, and even then it still might. Of course, these chatbots aren't designed to do life for you, but this exercise was somehow more frustrating than I thought it would be.

I guess I'll call my parents for help after all.

See original here:

I tried using ChatGPT to help me move across the country - Mashable

AI is More Than ChatGPT: It is a Ticking Time Bomb for Women – Torch – St. John’s University

Photo Courtesy / Unsplash Melanie Wasser

In recent months, image-based sexual abuse has been on the rise due to artificial intelligence (AI), mainly targeting high-profile women. It also poses an increased risk to the LGBTQ+ community, sex workers and women everywhere. A 2023 UPenn article on the rise of deepfake porn says, "Broadly speaking, minoritized women and femmes are more likely to experience image-based sexual abuse, as are single people and adolescents. LGBTQ populations are also at increased risk of harassment."

There are currently four states that have created laws targeting image-based sexual abuse. But with the growth of the internet, what society needs now more than anything is protection for the most vulnerable. When 14-year-old Mia Janine takes her own life as a result of bullying and of her face being placed onto the bodies of porn stars, it makes me fear what AI could do next.

What do we turn to when we see our own faces reflected back at us on the news and social media? When one girl dies or is faced with an inconceivable amount of tragedy, all girls watching stand as testaments to her pain.

We turn on the news and see our politicians arguing for more law enforcement and for locking people up in prisons overflowing with blue-collar criminals. But there is something about the politics of it all that makes my stomach turn and keeps me from making eye contact with the girl's face staring back at me, especially knowing that the politicians raising their voices do so only from a sense of inherent whiteness and a lack of acknowledgement of women of color.

In order to stop these things from happening, the culture around women's existences must shift. Image-based sexual abuse is an example of the continual effects AI pornography can have on generations of people. If boys grow up believing that behavior like this is okay, what will stop them from using it to harm the women that they know? The cycle continues.

This is not a call for more policing or for longer prison sentences when tragedy does strike; this is a call for accountability: for resources to be made available to victims and perpetrators, and for laws to be created to catch crimes before they increase.

Resources can include community-led programs about sexual assault prevention and affordable therapy for people dealing with the effects of abuse and assault on their lives. More than anything, this is a call to see one less smiling girl's eyes staring into mine, knowing that she died and nothing can be done to save her. Knowing that I cannot reach into my screen and pull her out.

These girls are suspended in time for me at the same age they were when they died. When the boy's mugshot appears on the screen, I try to imagine what he was like as a child and what happened down the line for everything to go so wrong for him.

Social media, deepfake images and an entire world of systemic, personal and institutional oppression foster a world where the most heinous thoughts are validated. In order to be here for our women, we need to start with our boys.

Continue reading here:

AI is More Than ChatGPT: It is a Ticking Time Bomb for Women - Torch - St. John's University

Universities build their own ChatGPT-like AI tools – Inside Higher Ed

When ChatGPT debuted in November 2022, Ravi Pendse knew fast action was needed. While the University of Michigan formed an advisory group to explore ChatGPT's impact on teaching and learning, Pendse, UMich's chief information officer, took it further.

Months later, before the fall 2023 semester, the university launched U-M GPT, a homebuilt generative AI tool that now boasts between 14,000 and 16,000 daily users.

"A report is great, but if we could provide tools, that would be even better," Pendse said, noting that Michigan is "very concerned about equity. U-M GPT is all free; we wanted to even the playing field."


The University of Michigan is one of a small number of institutions that have created their own versions of ChatGPT for student and faculty use over the last year. Those include Harvard University, Washington University, the University of California, Irvine and UC San Diego. The effort goes beyond jumping on the artificial intelligence (AI) bandwagon: for the universities, it's a way to overcome concerns about equity, privacy and intellectual property rights.

"We need to talk about AI for good of course, but let's talk about not creating the next version of the digital divide."

Students can use OpenAI's ChatGPT and similar tools for everything from writing assistance to answering homework questions. The newest version of ChatGPT costs $20 per month, while older versions remain free. The newer models have more up-to-date information, which could give students who can afford it a leg up.

That fee, no matter how small, creates a gap unfair to students, said Tom Andriola, UC Irvine's chief digital officer.

"Do we think it's right, in who we are as an organization, for some students to pay $20 a month to get access to the best [AI] models while others have access to lesser capabilities?" Andriola said. "Principally, it pushes us on an equity scale where AI has to be for all. We need to talk about AI for good of course, but let's talk about not creating the next version of the digital divide."

UC Irvine publicly announced their own AI chatbot, dubbed ZotGPT, on Monday. Deployed in various capacities since October 2023, it remains in testing and is only available to staff and faculty. The tool can help them with everything from creating class syllabi to writing code.

Offering their own version of ChatGPT allows faculty and staff to use the technology without the concerns that come with OpenAI's version, Andriola said.

"When we saw generative AI, we said, 'We need to get people learning this as fast as possible, with as many people playing with this that we could,'" he said. "[ZotGPT] lets people overcome privacy concerns, intellectual property concerns, and gives them an opportunity of, 'How can I use this to be a better version of myself tomorrow?'"

That issue of intellectual property has been a major concern and a driver behind universities creating their own AI tools. OpenAI has not been transparent in how it trains ChatGPT, leaving many worried about research and potential privacy violations.

Albert Lai, deputy faculty lead for digital transformation at Washington University, spearheaded the launch of WashU GPT last year.

WashU, along with UC Irvine and the University of Michigan, built their tools using Microsoft's Azure platform, which allows users to integrate the work into their institutions' applications. The platform uses open source software available for free. In contrast, proprietary platforms like OpenAI's ChatGPT have an upfront fee.

A look at WashU GPT, a version of Washington University's own generative AI platform that promises more privacy and IP security than ChatGPT.

Provided/Washington University
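None of the universities has published its implementation here, but a minimal sketch of what building on Azure-hosted OpenAI models can look like, assuming the OpenAI Python SDK (v1+); the endpoint, key and deployment name below are placeholders, not any university's real configuration:

```python
from openai import AzureOpenAI

# All three values are placeholders for an institution's own Azure resources;
# under an institutional agreement, prompts stay inside the university tenant.
client = AzureOpenAI(
    azure_endpoint="https://YOUR-RESOURCE.openai.azure.com",
    api_key="YOUR-INSTITUTION-KEY",
    api_version="2024-02-01",
)

resp = client.chat.completions.create(
    model="campus-gpt",  # hypothetical deployment name, not a real product
    messages=[{"role": "user", "content": "Outline a syllabus for intro statistics."}],
)
print(resp.choices[0].message.content)
```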

There are some downsides when universities train their own models. Because a university's GPT is based on the research, tests and lectures put in by an institution, it may not be as up-to-date as the commercial ChatGPT.

"But that's a price we agreed to pay; we thought about privacy, versus what we're willing to give up," Lai said. "And we felt the value in maintaining privacy was higher in our community."

To ensure privacy is kept within a university's GPT, Lai encouraged other institutions to ensure any Microsoft institutional agreements include data protection for IP. UC Irvine and UMichigan also have agreements with Microsoft that any information put into their GPT models will stay within the university and not be publicly available.

"We've developed a platform on top of [Microsoft's] foundational models to provide faculty comfort that their IP is protected," Pendse said. "Any faculty member, including myself, would be very uncomfortable putting a lecture and exams in an OpenAI model (such as ChatGPT), because then it's out there for the world."

"Once you figure out the secret sauce, it's pretty straightforward."

It remains to be seen whether more universities will build their own generative AI chatbots.

Consulting firm Ithaka S+R formed a 19-university task force in September, dubbed "Making AI Generative for Higher Education," to further study the use and rise of generative AI. The task force members include Princeton University, Carnegie Mellon University and the University of Chicago.

Lai and others encourage university IT officials to continue experimenting with what is publicly available, which can eventually morph into their own versions of ChatGPT.

"I think more places do want to do it and most places haven't figured out how to do it yet," he said. "But frankly, in my opinion, once you figure out the magic sauce it's pretty straightforward."

Visit link:

Universities build their own ChatGPT-like AI tools - Inside Higher Ed

There Might Be No ChatGPT-like Apple Chatbot in iOS 18 – The Mac Observer

The recent months in the tech scene have been all about artificial intelligence and its impact, but one company that has been late to the party is Apple. Apple first hinted at in-house AI development during a recent earnings call, which followed earlier reports of the company reaching out to major publishers to use their data to train its AI dataset, canceling the Apple Car project and shifting the team to AI. However, according to Bloomberg's Mark Gurman, Apple might not debut a ChatGPT-like chatbot at all. Instead, the company is exploring deals with established tech giants such as China's Baidu, OpenAI, and Google about potential partnerships.

That said, Apple might instead focus on licensing already-established chatbots like Google's Gemini (fka Bard) or OpenAI's ChatGPT. They might delay all plans to release an Apple chatbot, internally dubbed Ajax GPT.

Nevertheless, Mark Gurman believes AI will remain in the show's spotlight at the upcoming Worldwide Developers Conference (WWDC), slated for June 10-14, 2024, where we expect to see iOS 18, iPadOS 18, watchOS 11, tvOS 18, macOS 15, and visionOS 2. Although he doesn't delve into details of the upcoming AI features, he mentions the company's plans to unveil new AI capabilities, which could serve as the backbone of iOS 18. This suggests that even if Apple doesn't intend to bring a native AI chatbot to its devices, we might see a popular chatbot pre-installed on the phones or supported natively by the device. For reference, London-based consumer tech firm Nothing recently partnered with the Perplexity AI search engine to power up its latest release, Phone (2a), and Apple might have similar plans, but with generative AI giants.

CEO Tim Cook recently told investors that the company will disclose its AI plans to the public later this year. Despite Apple's overall reticence on the topic, Cook has been notably vocal about the potential of AI, particularly generative AI.

More importantly, according to previous reports, he has indicated that generative AI will improve Siri's ability to respond to more complex queries and enable the Messages app to complete sentences automatically. Furthermore, other Apple apps such as Apple Music, Shortcuts, Pages, Numbers, and Keynote are expected to integrate generative AI functionality.


Read the rest here:

There Might Be No ChatGPT-like Apple Chatbot in iOS 18 - The Mac Observer

VIDEO: Using ChatGPT and generative AI tools in journalism – Online Journalism Blog

A few months ago I delivered a webinar for the European Data Journalism Network and DataNinja about the range of ways that journalists can use ChatGPT and other generative AI tools (from idea generation and mapping systems to help with spelling and coding) and what issues they need to be aware of.

The video is now available online and you can watch it below.

More here:

VIDEO: Using ChatGPT and generative AI tools in journalism - Online Journalism Blog

ChatGPT: student chatbot use ‘increasing loneliness’ – Times Higher Education

Universities should exercise caution as they outsource more functions to artificial intelligence (AI), according to the authors of a study that links student usage of ChatGPT to loneliness and a reduced sense of belonging.

Australian researchers surveyed 387 university students in different parts of the globe to seek to understand the less understood side effects of the rapid uptake of the OpenAI tool since its launch in November 2022.

They found evidence that while AI chatbots designed for information provision may be associated with student performance, when social support, psychological well-being, loneliness and sense of belonging are considered it has a net negative effect on achievement, according to the paper published in Studies in Higher Education.

Alongside ChatGPT, which is primarily used by students for help with academic tasks, universities have adopted a range of chatbots to help with other processes, including in admissions and student support.

"It seems students may be seeking out AI help instead of librarians, student advisers and counsellors, and this means universities have no visibility from a whole-of-student continuity of care perspective," said Joseph Crawford, a senior lecturer in management at the University of Tasmania and one of the authors of the study.

Universities could save money deploying these tools at the expense of students spending time building their social skills and social capital.

The study found that students who reported using ChatGPT more displayed some evidence of feeling socially supported by the AI, explained Dr Crawford, who worked on the paper with Kelly-Ann Allen and Bianca Pani, both of Monash University, and Michael Cowling, based at Central Queensland University.

But the paper also shows that increased chatbot usage led to human relationships weakening, possibly without users even realising.

Those who got their support from friends and family reported reduced loneliness, higher grade performance and were less willing to leave university than those who reported being socially supported by the AI.

Dr Crawford said it was still not completely clear whether AI use causes lower performance, or whether students experiencing lower performance turn more often to AI.

But he recommended that universities should find ways to promote peer networks, social opportunities for students and other ways of building social connections as a way of insulating them from some of the more negative effects of AI use.

tom.williams@timeshighereducation.com

Originally posted here:

ChatGPT: student chatbot use 'increasing loneliness' - Times Higher Education

ChatGPT use linked to sinking academic performance and memory loss – Yahoo News UK

ChatGPT use is linked to bad results and memory loss. (Getty Images)

Using AI software such as ChatGPT is linked to poorer academic performance, memory loss and increased procrastination, a study has shown.

The AI chatbot ChatGPT can generate convincing answers to simple text prompts, and is already used weekly by up to 32% of university students, according to research last year.

The new study found that university students who use ChatGPT to complete assignments find themselves in a vicious circle where they don't give themselves enough time to do their work and are forced to rely on ChatGPT, and over time, their ability to remember facts diminishes.

The research was published in the International Journal of Educational Technology in Higher Education. Scientists conducted interviews with 494 students about their use of ChatGPT, with some admitting to being "addicted" to using the technology to complete assignments.

The researchers wrote: "Since ChatGPT can quickly respond to any questions asked by a user, students who excessively use ChatGPT may reduce their cognitive efforts to complete their academic tasks, resulting in poor memory. Over time, over-reliance on generative AI tools for academic tasks, instead of critical thinking and mental exertion, may damage memory retention, cognitive functioning, and critical thinking abilities."

In the interviews, the researchers were able to pinpoint problems experienced by students who habitually used ChatGPT to complete their assignments.

The researchers surveyed students three times to work out what sort of student is most likely to use ChatGPT, and what effects heavy users experienced.

The researchers then asked questions about the effects of using ChatGPT.

Study author Muhammad Abbas, from the National University of Computer and Emerging Sciences in Pakistan, told PsyPost: "My interest in this topic stemmed from the growing prevalence of generative artificial intelligence in academia and its potential impact on students.


"For the last year, I observed an increasing, uncritical, reliance on generative AI tools among my students for various assignments and projects I assigned. This prompted me to delve deeper into understanding the underlying causes and consequences of its usage among them."

The study found that students who were results-focused were less likely to rely on AI tools to do tasks for them.

The research also found that students who relied on ChatGPT were not getting the full benefit of their education - and actually lost the ability to remember facts.

"Our findings suggested that excessive use of ChatGPT can have harmful effects on students personal and academic outcomes. Specifically, those students who frequently used ChatGPT were more likely to engage in procrastination than those who rarely used ChatGPT," Abbas said.

"Similarly, students who frequently used ChatGPT also reported memory loss. In the same vein, students who frequently used ChatGPT for their academic tasks had a poor grade average."

The researchers found that students who felt under pressure were more likely to turn to ChatGPT - but that this then led to worsening academic performance and further procrastination and memory loss.

The researchers suggest that academic institutions should be mindful that heavy workloads can drive students to use ChatGPT.

The researchers also said academics should warn students of the negative impact of using the software.

"Higher education institutions should emphasise the importance of efficient time management and workload distribution while assigning academic tasks and deadlines," they said.

"While ChatGPT may aid in managing heavy academic workloads under time constraints, students must be kept aware of the negative consequences of excessive ChatGPT usage."

Read more from the original source:

ChatGPT use linked to sinking academic performance and memory loss - Yahoo News UK

What is the best generative AI chatbot? ChatGPT, Copilot, Gemini and Claude compared – ReadWrite

The generative AI chatbot market is rapidly growing, and while OpenAI's ChatGPT might remain the most mainstream, there are many others on the market competing to be the very best for the general public, creatives, businesses and anyone else looking to see how artificial intelligence can improve their day-to-day lives.

But which one is the best? ChatGPT may have been the first to go mainstream, but is it the market leader? Which companies have entered the generative AI chatbot space with a product worthy of taking on OpenAIs offering?

Arguably the most popular on the market, other than ChatGPT, are Microsoft's CoPilot, Claude by Anthropic and Gemini, which is owned by Google.

Here we look at all four of these popular generative AI chatbots and consider which one is the best for certain uses.

At this point, who hasn't heard of ChatGPT? It was the first AI to go completely mainstream and show just how powerful AI can be to the wider public. It made such a splash, it reached one million active users within weeks of launching and now has over 180 million users worldwide and counting.

Its creator, OpenAI, has worked tirelessly to keep it at the forefront of the market by launching new and improved features, including a Pro version (GPT-4), web browsing capabilities and image generation, powered by Dall-E. There's even the option to create your own custom-made GPT-powered bot on any subject you want.

The free version, GPT-3.5, is only trained on human-created data up to January 2022, so it's restrictive if you're looking to use it for more up-to-date purposes involving real-time information. However, the Pro version, GPT-4, is available for $20 a month and is trained with data up to April 2023. Although that's still relatively time-restrictive, it does also have access to the internet.

ChatGPT performs well at most tasks, although it has had its controversies due to inaccuracies and misinformation, such as lawyers using it for case research only for the chatbot to fabricate cases that never existed. However, it remains a good first port of call for anyone just looking for an easy-to-use AI chatbot. It should be noted that GPT-4 is significantly more effective than GPT-3.5, but the former is only available to paying users.

Copilot is Microsoft's own generative AI chatbot. It originated as a chat option on the company's search engine, Bing, but is now a stand-alone AI chatbot and is naturally built into Microsoft's productivity and business tools, such as Windows and Microsoft 365.

Interestingly, Microsoft is a key investor in OpenAI, whose technology was used to launch Bing Chat. GPT-4 continues to power Copilot today and, like ChatGPT, it also uses DALL-E to generate images.

That might sound like it's no different from ChatGPT, but Microsoft's key USP with Copilot is that it is integrated into the Microsoft tools and products billions of people around the world use every single day.

It behaves as an assistant to those who rely on the likes of Microsoft Excel, Microsoft Word and other 365 platforms to perform day-to-day tasks.

The clue is in the name: Copilot is good for people who need help using Microsoft's extensive suite of tools, products and software. It essentially behaves as an assistant, or co-pilot, inside these products.

From spreadsheets and text documents to computer code, Copilot can help create it all with natural-language prompts. Coders on the Microsoft-owned GitHub have also found it a very popular AI assistant.

Formerly called Bard, Google's Gemini is another generative AI chatbot, and one that is improving rapidly to rival GPT-4.

One major plus for Gemini is that it has no limit to the number of responses it can give you, unlike GPT-4 and Copilot, which both have limits in this area.

That means you can essentially have long discussions with Google Gemini to find the information you require. On top of that, and rather unsurprisingly, Gemini bakes in a lot of the elements we're all so used to from Google's search engine. For example, if you ask it to help you plan a trip to a specific country, it will likely provide you with a map of that destination using Google Maps, and it may even dip into Google Images to give you some kind of visual representation of the information it's giving you.

Users can also add extensions, akin to Chrome extensions, for use in tools such as YouTube, Maps and Workspace.

If you're a big fan of Google products and apps, Gemini is likely the generative AI chatbot for you, but it's also a strong choice if you're looking for speedy interactions and unlimited prompts.

That's because, while it isn't faster than GPT-4, it has generally been found to be faster than Copilot and GPT-3.5. But it's not flawless: it was recently caught up in controversy over the accuracy of its image generator, amid claims the tool was "woke."

Claude's creator, Anthropic, is an AI company started by former OpenAI employees.

It's something of an all-rounder: a multimodal chatbot with text, voice and document capabilities.

But the main praise it has received since its launch in early 2023 has been for the fluency of the conversations it can hold, its ability to understand the nuances of human communication, and its willingness to refuse to generate harmful or unethical content, often suggesting alternative ways to accomplish what users ask without breaking its own guidelines.

Anthropic recently launched Claude 3, a family of AI models (Opus, Sonnet and Haiku) that offer varying levels of sophistication depending on what users require. Anthropic claims the most powerful model in the family, Opus, scores almost 87% on benchmarks of undergraduate-level knowledge and around 95% on benchmarks of common knowledge.

Claude's extensive and powerful capabilities, such as being able to rapidly read, analyze and summarize uploaded files, make it a very useful generative AI chatbot for professionals.

It is also said to be trained on relatively recent data, which speaks to Anthropic's claims about its accuracy and levels of knowledge.

On Claude's website, Anthropic claims it is a next-generation AI assistant built for work and trained to be safe, accurate and secure.

Featured Image: Ideogram

Read the original here:

What is the best generative AI chatbot? ChatGPT, Copilot, Gemini and Claude compared - ReadWrite

‘Materially better’ GPT-5 could come to ChatGPT as early as this summer – ZDNet

Leon Neal/Getty Images

OpenAI has released several iterations of the large language model (LLM) powering ChatGPT, including GPT-4 and GPT-4 Turbo. Still, sources say the highly anticipated GPT-5 could be released as early as mid-year.

According to reports from Business Insider, GPT-5 is expected to be a major leap from GPT-4 and was described as "materially better" by early testers. The new LLM will offer improvements that have reportedly impressed testers and enterprise customers, including CEOs who have seen demos of GPT bots tailored to their companies and powered by GPT-5.


A customer who got a GPT-5 demo from OpenAI told Business Insider that the company hinted at new, yet-to-be-released GPT-5 features, including the ability to interact with other AI programs that OpenAI is developing. These AI programs, which OpenAI calls AI agents, could perform tasks autonomously.

This feature hints at an interconnected ecosystem of AI tools developed by OpenAI, which would allow its different AI systems to collaborate to complete complex tasks or provide more comprehensive services.

The specific launch date for GPT-5 has yet to be announced. OpenAI is reportedly still training the model and will conduct red-team testing to identify and correct potential issues before its public release.


It's unclear whether GPT-5 will be released exclusively to Plus subscribers, who pay a $20-a-month fee to access GPT-4. GPT-3.5 powers the free tier of ChatGPT, but anyone can access GPT-4 Turbo in Copilot for free by choosing the Creative or Precise conversation styles.

OpenAI has been the target of scrutiny and dissatisfaction from users amid reports of quality degradation with GPT-4, making this a good time to release a newer and smarter model.

Excerpt from:

'Materially better' GPT-5 could come to ChatGPT as early as this summer - ZDNet

Claude 3 vs ChatGPT: Here is How to Find the Best in Data Science – DataDrivenInvestor


The question of whether a computer can think is no more interesting than the question of whether a submarine can swim.

Edsger W. Dijkstra

Echoing Dijkstra's insight, we dig into the capabilities of two LLMs, contrasting their prowess in the arena of data science.

Here are the prompts we'll use for the comparison:

Claude 3, developed by Anthropic, a company founded by ex-OpenAI employees and supported by a $2 billion investment from Google in October, has quickly gained fame for its exceptional reasoning abilities.

See the original post here:

Claude 3 vs ChatGPT: Here is How to Find the Best in Data Science - DataDrivenInvestor

GPT-5 might arrive this summer as a materially better update to ChatGPT – Ars Technica

When OpenAI launched its GPT-4 AI model a year ago, it created a wave of immense hype and existential panic from its ability to imitate human communication and composition. Since then, the biggest question in AI has remained the same: When is GPT-5 coming out? During interviews and media appearances around the world, OpenAI CEO Sam Altman frequently gets asked this question, and he usually gives a coy or evasive answer, sometimes coupled with promises of amazing things to come.

According to a new report from Business Insider, OpenAI is expected to release GPT-5, an improved version of the AI language model that powers ChatGPT, sometime in mid-2024, likely during the summer. Two anonymous sources familiar with the company have revealed that some enterprise customers have recently received demos of GPT-5 and related enhancements to ChatGPT.

One CEO who recently saw a version of GPT-5 described it as "really good" and "materially better," with OpenAI demonstrating the new model using use cases and data unique to his company. The CEO also hinted at other unreleased capabilities of the model, such as the ability to launch AI agents being developed by OpenAI to perform tasks automatically.

We asked OpenAI representatives about GPT-5's release date and the Business Insider report. They responded that they had no particular comment, but they included a snippet of a transcript from Altman's recent appearance on the Lex Fridman podcast.

Lex Fridman (01:06:13): So when is GPT-5 coming out again?

Sam Altman (01:06:15): I don't know. That's the honest answer.

Lex Fridman (01:06:18): Oh, that's the honest answer. Blink twice if it's this year.

Sam Altman (01:06:30): We will release an amazing new model this year. I don't know what we'll call it.

Lex Fridman (01:06:36): So that goes to the question of, what's the way we release this thing?

Sam Altman (01:06:41): We'll release in the coming months many different things. I think that'd be very cool. I think before we talk about a GPT-5-like model called that, or not called that, or a little bit worse or a little bit better than what you'd expect from a GPT-5, I think we have a lot of other important things to release first.

In this conversation, Altman seems to imply that the company is prepared to launch a major AI model this year, but whether it will be called "GPT-5" or be considered a major upgrade to GPT-4 Turbo (or perhaps an incremental update like GPT-4.5) is up in the air.

Like its predecessor, GPT-5 (or whatever it will be called) is expected to be a multimodal large language model (LLM) that can accept text or encoded visual input (called a "prompt"). And like GPT-4, GPT-5 will be a next-token prediction model, which means that it will output its best estimate of the most likely next token (a fragment of a word) in a sequence, which allows for tasks such as completing a sentence or writing code. When configured in a specific way, GPT models can power conversational chatbot applications like ChatGPT.
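To make that next-token prediction loop concrete, here is a minimal, hypothetical Python sketch. It uses a word-level bigram count table built from a toy corpus rather than a transformer network over subword tokens, so it illustrates only the shape of the prediction loop, not how OpenAI's models actually work; the corpus and helper names are invented for the example.

```python
# Toy illustration of next-token prediction (assumption: real GPT models
# use transformer networks over subword tokens, not bigram counts).
from collections import Counter, defaultdict

corpus = ("the model predicts the next token and "
          "the next token completes the sentence").split()

# Count how often each token follows each other token.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(token: str) -> str:
    """Return the single most likely next token observed after `token`."""
    candidates = following.get(token)
    return candidates.most_common(1)[0][0] if candidates else "<end>"

# Generate greedily, one token at a time: the same loop structure,
# at vastly larger scale, underlies chatbots like ChatGPT.
token = "the"
output = [token]
for _ in range(6):
    token = predict_next(token)
    if token == "<end>":
        break
    output.append(token)
print(" ".join(output))  # e.g. "the next token and the next token"
```

A production system would replace the greedy most_common(1) choice with a sampling strategy such as temperature or nucleus sampling, and the bigram table with a neural network that scores every token in a large vocabulary.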

OpenAI launched GPT-4 in March 2023 as an upgrade to its most recent major predecessor, GPT-3, which emerged in 2020 (with GPT-3.5 arriving in late 2022). Last November, OpenAI released GPT-4 Turbo, which dramatically lowered the inference (running) costs of OpenAI's best AI model but has been plagued by accusations of "laziness," with the model sometimes refusing to answer prompts or complete coding projects as requested. OpenAI has attempted to fix the laziness issue several times.

LLMs like those developed by OpenAI are trained on massive datasets scraped from the Internet and licensed from media companies, enabling them to respond to user prompts in a human-like manner. However, the quality of the information provided by the model can vary depending on the training data used, and also based on the model's tendency to confabulate information. If GPT-5 can improve generalization (its ability to perform novel tasks) while also reducing what are commonly called "hallucinations" in the industry, it will likely represent a notable advancement for the firm.

According to the report, OpenAI is still training GPT-5, and after that is complete, the model will undergo internal safety testing and further "red teaming" to identify and address any issues before its public release. The release date could be delayed depending on the duration of the safety testing process.

Of course, the sources in the report could be mistaken, and GPT-5 could launch later for reasons aside from testing. So, consider this a strong rumor, but this is the first time we've seen a potential release date for GPT-5 from a reputable source. Also, we now know that GPT-5 is reportedly complete enough to undergo testing, which means its major training run is likely complete. Further refinements will likely follow.

Visit link:

GPT-5 might arrive this summer as a materially better update to ChatGPT - Ars Technica

Meet Kimi AI, The Chinese ChatGPT – Dataconomy

Moonshot AI, a startup backed by Alibaba, has introduced a Chinese answer to ChatGPT: Kimi AI, an advanced chatbot designed to change how we interact with technology, capable of handling inputs of up to 2 million characters, or tens to hundreds of thousands of words, in a single chat. But what exactly is Kimi, and what sets it apart in the world of AI? Let's dive in and explore this innovative creation.

Kimi is a large language model (LLM) chatbot developed by Moonshot AI, a Beijing-based startup. Essentially, a large language model is an artificial intelligence (AI) system trained on vast amounts of text data to understand and generate human-like text responses. These models have become increasingly sophisticated, enabling them to process and generate natural language with remarkable accuracy. In Kimi AI's case, it's like a smart robot that can talk to people using written words on a computer or phone screen. The chatbot can also connect to the web and accept PDF uploads.

What makes Kimi AI special is that it's really good at understanding what people say and responding in a way that makes sense. Recently, the team behind Kimi made it even smarter by teaching it to understand really long messages, up to 2 million Chinese characters in one go.

Kimi Smart Assistant is a versatile tool designed to cater to the diverse needs of various groups of people. Here's how Kimi can lend a helping hand:

Kimi makes life easier for all these people by helping them complete tasks quickly and get things done better, especially in Chinese. However, if you want to use Kimi AI, you will need a WeChat account.


Moonshot AI is a Beijing-based startup specializing in advanced artificial intelligence (AI) technology. Founded in April 2023, the company has quickly become a leader in AI innovation, particularly in the development of large language models (LLMs) and conversational AI solutions.

Their flagship product, the Kimi AI chatbot, has gained widespread acclaim for its ability to understand and generate human-like text responses. Moonshot AI's mission is to harness AI's power to solve real-world problems and enhance human-machine interactions.

Through strategic partnerships like the one with Alibaba and products like Kimi, we will probably be hearing the company's name more often soon.

Featured image credit: Eray Eliak/Bing

Go here to see the original:

Meet Kimi AI, The Chinese ChatGPT - Dataconomy

ChatGPT is here. There is no going back. – The Presbyterian Outlook

Working on a college campus, you must be careful about mentioning the use of AI or the purpose of such a tool. If you're not, you may catch a professor reciting their monologue outlining the evils of AI in the academic world. And while there is some validity to their reaction and concerns about this emerging technology, I find it to be just that: a tool.

I think part of what makes AI a challenge for the academic world is that there are no true rules or guides to help navigate this new instrument. Students can use it, and do use it, in ways others might deem harmful to academic integrity. I understand that side. I get the hesitation. We received this tool before we could develop the ethics of its use.

But in my experience, it is never a good practice to shut something out or make it restrictive in a way that will cause pushback and challenge. I try to embrace this tool instead of running away or ignoring it.


I am currently reworking my future lesson plans with the help of AI and finding ways to integrate its use alongside traditional coursework. To me, this process is fascinating. There is still a lot to learn about AI and plenty of need for ethical reflection on its use. But this much is clear to me: it can be helpful.

Several months ago, my coworkers and I decided to try ChatGPT. We wanted to see what all the fuss from our faculty colleagues was about. We sat together and thought of questions related to our work. We created the parameters for our topics and entered them all into ChatGPT. What resulted was a wild experience: outlines for emails, basic lesson plans, liturgy for worship, prayers and letters to community partners. The list went on and on. And it was captivating to engage in the process.

The items ChatGPT produced were not perfect. There were grammatical errors. There were some oddly worded phrases. All these things indicated that the product was not something created by a human. And that absence is the key to AI ethics for me.

We are just starting to build an ethical framework for AI in the academic world, and I hope the church is also thinking about such a thing. But the key for me is the human element. When working with ChatGPT to craft prayers, it does a decent job. But if you compare an AI prayer to a Chaplain Maggie prayer, the thing missing would be the heart, the human element.

ChatGPT has been introduced to our lives. There is no going back. We should find ways to integrate it into our work rather than push back or turn away from it. It can offer words when you are having a brain freeze or are too tired to think. It can offer a frame for your writing. It isn't perfect, but it is a tool that we can and should learn how to use. Just don't forget to add your human uniqueness as you go along.


Read more from the original source:

ChatGPT is here. There is no going back. - The Presbyterian Outlook