Is artificial intelligence combat ready? – Washington Technology

Human soldiers will increasingly share the battlespace with a range of robotic, autonomous, and artificial intelligence-enabled agents. Machine intelligence has the potential to be a decisive factor in future conflicts that the U.S. may face.

The pace of change will be faster than anything seen in many decades, driven by advances in commercial AI technology and the pressure of near-peer adversaries with formidable technological capabilities.

But are AI and machine learning combat-ready? Or, more precisely, is our military prepared to incorporate machine intelligence into combat effectively?

Creating an AI-Ready Force

The stakes of effective collaboration between AI and combatants are profound.

Human-machine teaming has the potential to reduce casualties dramatically by substituting robots and autonomous drones for human beings in the highest-risk front-line deployments.

It can dramatically enhance situational awareness by rapidly synthesizing data streams across multiple domains to generate a unified view of the battlespace. And it can overwhelm enemy defenses with the swarming of autonomous drones.

In our work with several of the Defense Department research labs working at the cutting edge of incorporating AI and machine learning into combat environments, we have seen that this technology has the potential to be a force multiplier on par with air power.

However, several technological and institutional obstacles must be overcome before AI agents can be widely deployed into combat environments.

Safety and Reliability

The most frequent concern about AI agents and uncrewed systems is whether they can be trusted to take actions with potentially lethal consequences. AI agents have an undeniable speed advantage in processing massive amounts of data to recognize targets of interest. However, there is an inherent tension between conducting war at machine speed and retaining accountability for the use of lethal force.

It only takes one incident of AI weapons systems subjecting their human counterparts to friendly fire to undermine the confidence of warfighters in this technology. Effective human-machine teaming is only possible when machines have earned the trust of their human allies.

Adapting Military Doctrine to AI Combatants

Uncrewed systems are being rapidly developed that will augment existing forces across multiple domains. Many of these systems incorporate AI at the edge to control navigation, surveillance, targeting, and weapons systems.

However, existing military doctrine and tactics have been optimized for a primarily human force. There is a temptation to view AI-enabled weapons as a new tool to be incorporated into existing combat approaches. But doctrine will be transformed by innovations such as the swarming of hundreds or thousands of disposable, intelligent drones capable of overwhelming strategic platforms.

Force structures may need to be reconfigured on the fly to deliver drones where there is the greatest potential impact. Human-centric command and control concepts will need to be modified to accommodate machines and build warfighter trust.

As autonomous agents proliferate and become more powerful, the battlespace will become more expansive and more transparent, and will move exponentially faster. The decision on how, and whether, to incorporate AI into the operational kill chain has profound ethical consequences.

An even more significant challenge will be balancing the pace of action on the AI-enabled battlefield with the limits of human cognition. What are the tradeoffs between ceding a first-strike advantage measured in milliseconds and losing human oversight? The outcome of future conflicts may hinge on such questions.

Insatiable Hunger for Data

AI systems are notoriously data-hungry. There is not, and fortunately never will be, enough operational data from live military conflicts to adequately train AI models to the point where they could be deployed on the battlefield. For this reason, simulations are essential to develop and test AI agents, and modern machine learning techniques require thousands or even millions of training iterations.

The DoD has existing high-fidelity simulations, such as Joint Semi-Automated Forces (JSAF), but they run essentially in real time. Unlocking the full potential of AI-enabled warfare requires simulations with sufficient fidelity to model potential outcomes accurately, yet fast enough to meet the speed requirements of digital agents.

Integration and Training

AI-enabled mission planning has the potential to vastly expand the situational awareness of combatants and generate novel multi-domain operation alternatives to overwhelm the enemy. Just as importantly, AI can anticipate and evaluate thousands of courses of action that the enemy might employ and suggest countermeasures in real time.

One reason America's military is so effective is a relentless focus on training. But warfighters are unlikely to embrace tactical directives emanating from an unfamiliar black box when their lives hang in the balance.

As autonomous platforms move from research labs to the field, intensive warfighter training will be essential to create a cohesive, unified human-machine team. To be effective, AI course-of-action agents must be designed to align with existing mission planning practices.

By integrating such AI agents with the training for mission planning, we can build confidence among users while refining the algorithms using the principles of warfighter-centric design.

Making Human-Machine Teaming a Reality

While underlying AI technology has grown exponentially more powerful in the past few years, addressing the challenges posed by human-machine teaming will determine how rapidly these technologies can translate into practical military advantage.

From the level of the squad all the way to the joint command, it is essential that we test the limits of this technology and establish the confidence of decision-makers in its capabilities.

There are several vital initiatives the DoD should consider to accelerate this process.

Embrace the Chaos of War

Building trust in AI agents is the most essential step toward effective human-machine teaming. Warfighters will rightly have a low level of confidence in systems that have only been tested under controlled laboratory conditions. The best experiments and training exercises replicate the chaos of war, including unpredictable events, jamming of communications and positioning systems, and mid-mission changes to the course of action.

Human warfighters should be encouraged to push autonomous systems and AI agents to the breaking point to see how they perform under adverse conditions. This will result in iterative design improvements and build the confidence that these agents can contribute to mission success.

A tremendous strength of the U.S. military is the flexible command structure that empowers warfighters down to the squad level to rapidly adapt to changing conditions on the ground. AI systems have the potential to provide these units with a far more comprehensive view of the battlespace and generate tactical alternatives. But to be effective in wartime conditions, AI agents must be resilient enough to function under conditions of degraded communications and understand the overall intent of the mission.

Apply AI to Defense Acquisition Process

The rapid evolution of underlying AI and autonomous technologies means that traditional procurement processes developed for large Cold War platforms are doomed to fail. As an example, swarming tactics are only effective when using hundreds or thousands of individual systems capable of intelligent, coordinated action in a dynamic battlespace.

Acquiring such devices at scale will require leveraging a broad supplier base, moving rapidly down the cost curve, and enabling frequent updates to open standards. Too often, we have seen weapons vendors using incompatible, proprietary communications standards that render systems unable to share data, much less engage in coordinated, intelligent maneuvers. One solution is to apply AI to revolutionize the acquisition process itself.

By creating a virtual environment to test system designs, DoD customers can verify operational concepts and interoperability before a single device is acquired. This will help to reduce waste, promote shared knowledge across the services, and create a more level playing field for the supplier base.

Build Bridges from Labs to Deployment

While a tremendous amount of important work has been done by organizations such as the Navy Research Lab, the Army Research Lab, the Air Force Research Lab, and DARPA, the success of AI-enabled warfare will ultimately be determined by moving this technology from the laboratories and out into the commands. Human-machine teaming will be critical to the success of these efforts.

Just as important, the teaching of military doctrine at the service academies needs to be continuously updated as the technology frontier advances. Incorporating intelligent agents into practical military missions requires both profound changes in doctrine and reallocation of resources.

Military commanders are unlikely to be dazzled by bright and shiny objects unless they see tangible benefits to deploying them. By starting with some easy wins, such as the enhancement of ISR capabilities and automation of logistics and maintenance, we can build early bridges that will instill confidence in the value of AI agents and autonomous systems.

Educating commands about the potential of human-machine teaming to enhance mission performance and then developing roadmaps to the highest potential applications will be essential. Commanders need to be comfortable with the parameters of human-in-the-loop and human-on-the-loop systems as they navigate how much autonomy to grant to AI-at-the-edge weapons systems. Retaining auditability as decision cycles accelerate will be critical to ensuring effective oversight of system development and evolving doctrine.

Summary

Rapid developments in AI and autonomous weapons systems have simultaneously accelerated and destabilized the ongoing quest for military superiority and effective deterrence. The United States has responded to this threat with a range of policies restricting the transfer of underlying technologies. However, the outcome of this competition will depend on the ability to convincingly transfer AI-enabled warfare from research labs to potential theaters of conflict.

Effective human-machine teaming will be critical to make the transition to a joint force that leverages the best capabilities of human warfighters and AI to ensure domination of the battlespace and deter adventurism by foreign actors.

Mike Colony leads Serco's Machine Learning Group, which has supported several Department of Defense clients in the area of AI and machine learning, including the Office of Naval Research, the Air Force Research Laboratory, the U.S. Marine Corps, the Electronic Warfare and Countermeasures Office, and others.


Gaza war: artificial intelligence is changing the speed of targeting and scale of civilian harm in unprecedented ways – The Conversation

As Israel's air campaign in Gaza enters its sixth month after Hamas's terrorist attacks on October 7, it has been described by experts as one of the most relentless and deadliest campaigns in recent history. It is also one of the first to be coordinated, in part, by algorithms.

Artificial intelligence (AI) is being used to assist with everything from identifying and prioritising targets to assigning the weapons to be used against those targets.

Academic commentators have long focused on the potential of algorithms in war to highlight how they will increase the speed and scale of fighting. But as recent revelations show, algorithms are now being employed at a large scale and in densely populated urban contexts.

This includes the conflicts in Gaza and Ukraine, but also in Yemen, Iraq and Syria, where the US is experimenting with algorithms to target potential terrorists through Project Maven.

Amid this acceleration, it is crucial to take a careful look at what the use of AI in warfare actually means. It is important to do so not from the perspective of those in power, but from that of the officers executing it and of the civilians undergoing its violent effects in Gaza.

This focus highlights the limits of keeping a human in the loop as a failsafe and central response to the use of AI in war. As AI-enabled targeting becomes increasingly computerised, the speed of targeting accelerates, human oversight diminishes and the scale of civilian harm increases.

Reports by the Israeli publications +972 Magazine and Local Call give us a glimpse into the experience of 13 Israeli officials working with three AI-enabled decision-making systems in Gaza called Gospel, Lavender and Where's Daddy?.

These systems are reportedly trained to recognise features that are believed to characterise people associated with the military arm of Hamas. These features include membership of the same WhatsApp group as a known militant, changing cell phones every few months, or changing addresses frequently.

The systems are then supposedly tasked with analysing data collected on Gaza's 2.3 million residents through mass surveillance. Based on the predetermined features, the systems predict the likelihood that a person is a member of Hamas (Lavender), that a building houses such a person (Gospel), or that such a person has entered their home (Where's Daddy?).

In the investigative reports named above, intelligence officers explained how Gospel helped them go from 50 targets per year to 100 targets in one day, and how, at its peak, Lavender managed to generate 37,000 people as potential human targets. They also reflected on how using AI cuts down deliberation time: "I would invest 20 seconds for each target at this stage ... I had zero added value as a human ... it saved a lot of time."

They justified this lack of human oversight by pointing to a manual check the Israel Defense Forces (IDF) ran on a sample of several hundred targets generated by Lavender in the first weeks of the Gaza conflict, through which a 90% accuracy rate was reportedly established. While details of this manual check are likely to remain classified, a 10% inaccuracy rate in a system used to make 37,000 life-and-death decisions implies thousands of misidentifications, and will inherently result in devastatingly destructive realities.

But importantly, any accuracy rate that sounds reasonably high makes it more likely that algorithmic targeting will be relied on, as it allows trust to be delegated to the AI system. As one IDF officer told +972 Magazine: "Because of the scope and magnitude, the protocol was that even if you don't know for sure that the machine is right, you know that statistically it's fine. So you go for it."

The IDF denied these revelations in an official statement to The Guardian. A spokesperson said that while the IDF does use "information management tools [...] in order to help intelligence analysts to gather and optimally analyse the intelligence, obtained from a variety of sources, it does not use an AI system that identifies terrorist operatives".

The Guardian has since, however, published a video of a senior official of the elite Israeli intelligence Unit 8200 talking last year about the use of machine learning "magic powder" to help identify Hamas targets in Gaza. The newspaper has also confirmed that the commander of the same unit wrote in 2021, under a pseudonym, that such AI technologies would resolve the "human bottleneck for both locating the new targets and decision-making to approve the targets".

AI accelerates the speed of warfare in terms of the number of targets produced and the time to decide on them. While these systems inherently decrease the ability of humans to control the validity of computer-generated targets, they simultaneously make these decisions appear more objective and statistically correct due to the value that we generally ascribe to computer-based systems and their outcome.

This allows for the further normalisation of machine-directed killing, amounting to more violence, not less.

While media reports often focus on the number of casualties, body counts, like computer-generated targets, have a tendency to present victims as objects that can be counted. This reinforces a very sterile image of war. It glosses over the reality of more than 34,000 people dead and 766,000 injured, the destruction of or damage to 60% of Gaza's buildings, the displaced persons, and the lack of access to electricity, food, water and medicine.

It fails to convey the horrific stories of how these things tend to compound each other. For example, one civilian, Shorouk al-Rantisi, was reportedly found under the rubble after an airstrike on Jabalia refugee camp, waited 12 days to be operated on without painkillers, and now resides in another refugee camp with no running water to tend to her wounds.

Aside from increasing the speed of targeting and therefore exacerbating the predictable patterns of civilian harm in urban warfare, algorithmic warfare is likely to compound harm in new and under-researched ways. First, as civilians flee their destroyed homes, they frequently change addresses or give their phones to loved ones.

Such survival behaviour corresponds to what the reports on Lavender say the AI system has been programmed to identify as likely association with Hamas. These civilians thereby unknowingly make themselves suspects for lethal targeting.

Beyond targeting, these AI-enabled systems also inform additional forms of violence. An illustrative story is that of the fleeing poet Mosab Abu Toha, who was allegedly arrested and tortured at a military checkpoint. The New York Times ultimately reported that he, along with hundreds of other Palestinians, was wrongfully identified as Hamas by the IDF's use of AI facial recognition and Google Photos.

Over and beyond the deaths, injuries and destruction, these are the compounding effects of algorithmic warfare. It becomes a psychic imprisonment where people know they are under constant surveillance, yet do not know which behavioural or physical features will be acted on by the machine.

From our work as analysts of the use of AI in warfare, it is apparent that our focus should not solely be on the technical prowess of AI systems or the figure of the human-in-the-loop as a failsafe. We must also consider these systems' ability to alter human-machine-human interactions, where those executing algorithmic violence are merely rubber-stamping the output generated by the AI system, and those undergoing the violence are dehumanised in unprecedented ways.


UNM research analyzes government intervention and COVID-19 pandemic – UNM Newsroom

While there's a strong determination worldwide to return to a new normal in a post-COVID world, the pandemic is nearly impossible to forget. A large amount of data also provides insight we may not want to move past just yet: how we handled it.

UNM Political Science Associate Professor and Chair Jami Nelson-Nuñez is evaluating COVID policies in a research paper titled "Non-pharmaceutical interventions to combat COVID-19 in the Americas described through daily sub-national data," published recently in the journal Scientific Data, a Nature publication. Throughout the COVID-19 pandemic, she and an international team of researchers investigated the impact of Latin American governments' actions and plans.

As in the U.S., politics at the national, state and local levels influenced decisions on COVID-19 mitigation around the world, impacting cases, deaths, hospitalizations, jobs and education. Nelson-Nuñez specifically looked at the cause and effect of policies and politics in Bolivia through months of data.

"Bolivia is a really interesting case because they have a system that's mainly decentralized, creating a lot of variation across different regions. Our main focus was looking at the effects of policies and how they evolved," Nelson-Nuñez said. "We were extremely productive on the project, and our work shed light on the political intersection of the pandemic in Bolivia and how that spoke to what was happening in several other countries."

"This tied into what was going on in the United States. It was a deeply political time in our country. These political patterns were reflected across several countries, making for an interesting comparative study." – UNM Political Science Associate Professor Jami Nelson-Nuñez

The pandemic occurred at a time of upheaval in national politics in Bolivia. After an already tumultuous fall 2019 election, Bolivia's transitional government lacked legitimacy and the country was deeply divided.

"There was a conflict that had been bubbling for a while. The country was in a moment of crisis of contested leadership. It was interesting to follow how the dynamics of the pandemic in Bolivia were shaped by these political complexities throughout the country," Nelson-Nuñez said.

When the elections were postponed a second time, tens of thousands of Bolivians across the country rushed to the streets and protested for months. Part of those protests in some places included creating blockades so that medical equipment couldn't get through.

"Bolivia has always been an outlier when it comes to protest. Its people have always been highly mobilized and able to overcome those collective action problems to mount significant protests," Nelson-Nuñez said. "There's a history in Bolivia of people marching for days to the central government and then occupying the streets in the capital."

With that unrest, non-pharmaceutical interventions (NPIs) were few and far between. Already weak state capacity in health services made it difficult for the government to mount a large response and to make and enforce coherent policies.

"The central government was trying to figure out how to steer the ship for the whole country, while regional and local actors were contesting these decisions," Nelson-Nuñez said. "The politics and the dynamics of multi-level governance can render local communities very vulnerable to these types of events."

Even with the NPIs and guidance issued over time, it was too little, too late. The hospital system fully collapsed in the summer of 2020.

Additionally, given the state of leadership, the NPIs that were issued lacked the public trust needed for people to abide by them and for governments to enforce them. Her colleagues found similar results in other Latin American countries.

"Underlying disparities in health are political. Underneath health realities are important political factors, and if we ignore those, we really don't understand how and why pandemics are occurring or why health disparities are emerging in the way that they are," Nelson-Nuñez said.

Nelson-Nuñez says this research is one of many ways public health and politics are intertwined.

"Sometimes people don't understand political science. As a discipline, we study power, which means we study resources and the distribution of those resources," Nelson-Nuñez said. "Health is a resource, and access to services is a resource. We can see political factors across the world that shape the ability of diseases to spread and how deadly global health emergencies can be."


The Longest Case of COVID-19 Lasted 613 Days – Healthnews.com

For most, COVID-19 symptoms last for a few weeks before passing. New research from the Netherlands finds a patient suffered from the respiratory virus for nearly two years before his death.

A Dutch man with a poor immune system lived with a highly mutated novel variant of COVID-19 for 613 days, according to the University of Amsterdam's Centre for Experimental and Molecular Medicine (CEMM). The case is the longest known bout of COVID-19.

Healthy patients diagnosed with COVID-19 typically recover from mild cases of the virus within a few weeks. However, immunocompromised individuals may develop a persistent infection in which the virus can evolve, as happened with the Omicron variant, which originated in a patient with a weakened immune system.

A European Society of Clinical Microbiology and Infectious Diseases release says the study led by Magda Vergouwe of the CEMM describes a male patient who was admitted to the Amsterdam University Medical Center in February 2022 due to COVID-19. He was infected with the Omicron SARS-CoV-2 variant BA.1.17.

The patient suffered from myelodysplastic and myeloproliferative overlap syndrome and had received a stem cell transplant. In myelodysplastic diseases, immature blood cells in bone marrow do not mature into healthy blood cells. Meanwhile, myeloproliferative diseases cause the total number of blood cells to increase slowly.

"This case underscores the risk of persistent SARS-CoV-2 infections in immunocompromised individuals, as unique SARS-CoV-2 viral variants may emerge due to extensive intra-host evolution," the study authors said. "We emphasise the importance of continuing genomic surveillance of SARS-CoV-2 evolution in immunocompromised individuals with persistent infections, given the potential public health threat of possibly introducing viral escape variants into the community."

The 72-year-old patient had previously received multiple COVID-19 vaccinations. He was treated with multiple antibody medications without any response, and within 21 days the virus developed a mutation that resisted sotrovimab, one of those antibody medications. Full genome sequencing of the virus that persisted for 613 days revealed it had undergone 50 genetic code mutations.

The ESCMID Global release says the study authors note there must be a balance between protecting the masses from new variants and providing care for these terminally ill patients. The scientists also emphasize that while there is an increased chance of novel variants emerging in those with weakened immune systems, this is not the case for every patient.

"The duration of SARS-CoV-2 infection in this described case is extreme, but prolonged infections in immunocompromised patients are much more common compared to the general community," the authors said. "Further work by our team includes describing a cohort of prolonged infections in immunocompromised patients from our hospital, with infection durations varying between 1 month and 2 years."

The complete research on this unique COVID-19 case will be presented at the ESCMID Global Congress in Barcelona, which runs from April 27-30.

The U.S. Centers for Disease Control and Prevention (CDC) updated its COVID-19 guidelines in March, no longer recommending isolation following a positive test. Those who are infected should wear a high-quality mask or respirator when around others, monitor symptoms, and contact a healthcare provider for possible treatments. The CDC reported 6,406 COVID-19 hospitalizations last week, a 13.8% drop.

However, COVID-19 can still be a threat to those with weak immune systems like the 72-year-old Dutch man. The CDC highlights those who are immunocompromised have lesser defenses against infections. Those six months and older who are moderately to severely immunocompromised are recommended to receive at least one dose of the updated 2023-24 COVID-19 vaccine.

The CDC says people with weakened immune systems may reach out to their healthcare provider for possible antiviral medications. Recovery from COVID-19 for immunocompromised patients may take longer than the typical few weeks.


Spotify CEO Daniel Ek surprised at negative impact of laying off 1500 Spotify employees – Fortune

When Spotify announced its largest-ever round of layoffs in December, CEO Daniel Ek hailed a new age of efficiency at the streaming giant. But four months on, it seems he and his executives weren't prepared for how tough filling in for 1,500 axed workers would be.

The music streamer posted record quarterly profits of €168 million ($179 million) in the first three months of 2024, enjoying double-digit revenue growth to €3.6 billion ($3.8 billion) in the process.

However, the company failed to hit its guidance on profitability and monthly active user growth.

It didn't seem to put off investors, who sent shares in the group soaring more than 8% in New York after markets opened Tuesday morning.

Still, as he addressed those investors following the latest earnings release, Ek didn't shy away from the obstacles that stopped the streamer from hitting some of its targets this year.

In addition to a surprisingly successful 2023 to compare against and the impact of falling marketing spend, Ek blamed operational difficulties linked to staffing for the group missing its earnings target to start the year.

In December, Spotify culled 1,500 jobs, equivalent to 17% of employees, as part of an aggressive efficiency drive as the group strived for profitability.

Staff costs for those employees carried a long tail, as most workers received five-month severance packages when they were let go in December.

At the same time, the footprint left behind by those employees was bigger than Ek and his executives anticipated.

"Another significant challenge was the impact of December workforce reduction," Ek said on an investor call following Spotify's Q1 earnings release.

"Although there's no question that it was the right strategic decision, it did disrupt our day-to-day operations more than we anticipated."

"It took us some time to find our footing, but more than four months into this transition, I think we're back on track and I expect to continue improving on our execution throughout the year, getting us to an even better place than we've ever been."

Ek didn't elaborate on which aspects of operations were most affected by the layoffs.

Back in December, as the platform he founded faced persistent losses and a falling share price, Ek turned to a path well trodden by tech giants to steer the ship around: mass layoffs.

"We still have too many people dedicated to supporting work and even doing work around the work, rather than contributing to opportunities with real impact," Ek said in a memo as he announced he would be cutting his workforce by 17%.

Investors initially reacted well to the news, though skeptical voices asked whether the move merely put a sticking plaster over harder-to-solve issues at the group, particularly its low margins thanks to the costs of bumper record deals.

However, it appears to have worked so far. In the four months since the layoff announcements, shares in the group have jumped more than 60%.

Spotify has also recently proved it is able to raise prices in some of its key markets without seeing a flight of listeners to rival services like Apple Music.

Ek and Spotify also remain convinced the tough round of layoffs has set the company up for long-term profitability.

The apparent collective surprise at how deeply those layoffs could affect operations in the short run, though, marks a dash of hubris for the newly bullish streaming group.


Letters to the editor: Ignore the populist bait of an uneconomical manufacturing dream – The Australian Financial Review

We need better performance from our politicians, and that requires us to ignore the populist bait on offer. And to reward the politicians who act against their own interests to benefit the country.

Graeme Bennett, Artarmon, NSW

Ed Shann would have us continue business as usual and go with our strengths ("Forget Made in Australia, do what we already do well"). But what of energy security, and the need to decarbonise?

We have recently witnessed Australia's dependence on imports, whether in the 90 per cent of our oil supplies that are imported and at risk due to conflicts elsewhere, or in a pandemic that isolated the very industries Mr Shann believes are our strengths: education and tourism.

As for subsidies, the Australia Institute reported that in 2022-23 Australian governments provided fossil fuel industries with $11.1 billion in spending and tax breaks: "This year's figure represents a 5 per cent decline on last year's, but subsidies in the forward estimates have increased from $55.3 billion to a record $57.1 billion."

The focus on returns to fossil fuel shareholders needs to shift to a focus on making renewables work for the environment and for an equitable transition to cheaper energy sources. This will require investment.

On a micro level, millions of Australians have worked this out already. New home battery installations rose 21 per cent last year. According to the Climate Council, rooftop solar is now providing 11.2 per cent of our nation's total power supply after 314,507 households installed solar panels last year, bringing online 2.9GW of new generation.

Fiona Colin, Malvern East, Vic

It was pleasing to read rational commentary detailing the criteria necessary to boost investment ("Four ways we can lift investment in local manufacturing").

It was all about fundamentals: approvals, build times, competitive input costs and a skilled workforce. The only mention of tax was the sensible suggestion of faster depreciation for manufacturing investments.

Improved depreciation rates could apply across all industries. Developing and purchasing computer software is part of everyday business and necessary to drive productivity and revenue. It is like employing people to achieve outcomes. All software expenditure should be able to be written off as incurred rather than depreciated.

Graeme Troy, Wagstaffe, NSW

I offer a counter to the self-serving views of the Minerals Council of Australia as presented by chief executive Tania Constable ("Don't make stuff Australia has no edge in, says MCA").

Ms Constable seems to have no idea there is much more to a manufacturing sector than raw inputs and outputs. For a start, there is the whole "can do" mindset that a thriving manufacturing sector can engender. Ms Constable probably can't conceive that there might be thousands of young Australians who would much rather spend their work time actually making things and gaining practical skills than answering emails and moving numbers around on a screen all day.

Can I suggest she read Identity: The Demand for Dignity and the Politics of Resentment by Francis Fukuyama, and pause for a moment to consider the lasting damage to society that can occur when considerations of individual dignity and respect are simply cast aside for the narrow economic benefit of the nation?

Fraser Faithfull, Caulfield South, Vic

The AFR View is right to say the tragedy at Bondi Junction at the weekend would have been 10 times worse if the perpetrator had had a gun ("Bondi Junction tragedy brings out the best"). The reason Australians rush to help is that we don't think a gun is involved; the first thing Americans do is run away for fear of being shot.

While it is comforting to see all politicians and media condemn this monstrous act and understand the public's need for full and continuous disclosure, this period will soon end. We will then want to see real change, and real action.

This is where our political leaders could easily let us down. Labor is often criticised for being soft on crime, and time will tell if NSW Premier Chris Minns and Prime Minister Anthony Albanese fall into this category.

As The AFR View rightly points out, drugs are pernicious and a scourge; we need tougher action on drug dealers (especially of ice, which seems to be in every suburb and destroys too many lives), a greater focus on reducing male violence towards women and domestic violence, and fast action on mental health. We must support the police use of firearms in these situations, look at technology (artificial intelligence and CCTV), make the carrying of mace by women legal, and give security guards stab vests and tasers (with appropriate training and checks).

It is time for our political leaders to realise that laying flowers and sharing updates is important to help unite people and to grieve, but it's 1 per cent of the job; taking effective action is the other 99 per cent.

Glen Frost, Darlington, NSW

The news that Kia is marketing a super-sized diesel vehicle specifically for the Australian market says a lot about our mindset. The world is in a climate emergency, yet we are partying like there is no tomorrow.

When I drive to the local shopping centre, my little Corolla is dwarfed by a sea of Rams, Rangers and Range Rovers. They are excessive for doing the weekly shopping and school run.

What does it take to shake Australians from their complacency? We are all in this together, folks.

Barry Lizmore, Ocean Grove, Vic

I had no idea CPI was running at 20 per cent until I bought a copy of AFR Weekend in Adelaide on Saturday with a new cover price of $6, up from $5. No doubt Rear Window will shortly do a forensic analysis. After all, the Fin's preoccupation with transparency is legendary.

John Bridgland, Adelaide, SA



Is There A Poker Face Season 2 Release Date? – The Mary Sue

I'm not lying when I say that Rian Johnson's mystery series Poker Face was one of the best new shows of 2023. The series stars Natasha Lyonne (Orange Is the New Black, Russian Doll) as Charlie Cale, a woman with an innate ability to detect lies.

This skill comes in handy as she crisscrosses the country, stumbling into various whodunnits along the way. Although in the case of Poker Face, it's more of a whydunnit. Each episode opens with a murder, then introduces Charlie, who susses out the crime based on who is lying.

In season one, Charlie encountered shady race car drivers, hippy seniors, and a one-hit-wonder metal band while running from casino boss Sterling Frost Sr. (Ron Perlman) and his head of security, Cliff (Benjamin Bratt). Along the way, she stumbled into various cases of the week, exploring subcultures and criminals in her travels.

Poker Face debuted to critical acclaim, and it was no surprise when the series was quickly renewed for a second season. It's been over a year since the series was renewed, but in that time the WGA and SAG-AFTRA strikes brought production to a standstill.

Unfortunately for us, we may be waiting a while for the further adventures of Charlie Cale. In an interview with Deadline in June, Johnson said of the release date, "That's something that's really up in the air. I mean, a lot of it has to do with what happens in terms of the writers strike, and there's so much that's unknown at the moment. Also, right now my priority is getting the next Benoit Blanc movie going."

Lyonne spoke with Variety at the Critics Choice Awards, where she joked that she was recruiting future guest stars: "It's all cooking and it's fucking hot. It's gonna be a hot season. There's some big ideas and I've been going around all of this award circuit with contracts and so I've been getting people to sign up for episodes. It's been really helpful ... Billie Eilish. I'll try to get her to sign one later tonight. You know, I hear Jodie Foster is gonna be here tonight. So I'll try to get a signature, some sort of a blood oath or, you know, spitting in a palm and a handshake still holds in this town."

In the meantime, you can catch Lyonne voicing Nurse Tup in Amazon's The Second Best Hospital in the Galaxy, which she also produced. Lyonne also directed the Netflix special Jacqueline Novak: Get On Your Knees.

(featured image: Evans Vestal Ward/Peacock)



Radiance of the Seas Completes Drydock in the Bahamas – Cruise Industry News

Royal Caribbean International's Radiance of the Seas recently resumed service following a routine drydock in the Bahamas.

Now cruising in the Caribbean, the 2001-built cruise ship spent over two weeks at the Grand Bahama Shipyard.

During the period, the Radiance underwent general maintenance, in addition to class work and a technical overhaul.

The 2,000-guest ship also saw updates to its hotel side, including the replacement of upholstery and carpets in public areas.

Before resuming service on March 28, the Radiance of the Seas met with the Grandeur of the Seas at the shipyard.

Also operated by Royal Caribbean International, the 1996-built ship is currently undergoing a similar project in drydock.

For its first departure following the work, the Radiance of the Seas is offering a seven-night cruise to the Western Caribbean.

Cruising roundtrip out of Tampa, the itinerary features visits to destinations in Mexico, Honduras and Belize, including Cozumel, Costa Maya, Roatán and Belize City.

The 90,000-ton ship is then set to offer a short cruise to Mexico from its Florida homeport before starting a repositioning voyage to the West Coast.

Starting in late April, the Radiance of the Seas offers a series of one-way cruises that sail between Alaska and Canada.

Cruising between Seward and Vancouver, the itineraries feature visits to Skagway, Sitka, Icy Strait Point, Juneau, Ketchikan and Haines. Every sailing also features scenic cruising at the Hubbard Glacier.


Notus robotics team is headed to 2024 FIRST Championship – KTVB.com

The Notus Jr/Sr High School robotics team of five students is headed to the 2024 FIRST Championship in Houston, Texas.

BOISE, Idaho – A small robotics team from Notus Jr/Sr High School is living the classic underdog story after it qualified to compete at a world championship.

The team of five students will be heading to Houston, Texas, to participate in the 2024 FIRST Championship. On Friday, KTVB spoke to team advisor Nick Forbes, who said this is the first year the program was introduced at Notus. But that hasn't stopped them.

In March of 2024, team 9726 received the Rookie of the Year All-Star Award after competing in Boise. A few days later, they were invited to compete on the world stage.

According to the FIRST website, the game changes with every new season, and students need to build a robot to achieve its goal. This year's game is called 'CRESCENDO.'

While FIRST's rules recommend a team consist of 10 students, team 9726 won with half that. But, a student told KTVB, it hasn't been without some challenges.

"It was entirely made from duct tape, zip ties, and just things that we had to find around," Ezekiel said. "There were sometimes things that we had to improvise through 3-D printings and other things. We're very proud of the work we've done."

He said their robot mainly plays defense, utilizing a wall, which helped them secure a spot at worlds.

The world championship in Houston kicks off on April 16.


Saving hours of work with AI: How ChatGPT became my virtual assistant for a data project – ZDNet


There's certainly been a lot of golly-wow, gee-whiz press about generative artificial intelligence (AI) over the past year or so. I'm certainly guilty of producing some of it myself. But tools like ChatGPT are also just that: tools. They can be used to help out with projects just like other productivity software.

Today, I'll walk you through a quick project where ChatGPT saved me a few hours of grunt work. While you're unlikely to need to do the same project, I'll share my thinking for the prompts, which may inspire you to use ChatGPT as a workhorse tool for some of your projects.

Also: 4 generative AI tools your enterprise can leverage to boost productivity

This is just the sort of project I would have assigned to a human assistant, back when I had human assistants. I'm telling you this fact because I structured the assignments for ChatGPT similarly to how I would have for someone working for me, back when I was sitting in a cubicle as a managerial cog of a giant corporation.

In a month or so, I'll post what I like to call a "stunt article." Stunt articles are projects I come up with that are fun and that I know readers will be interested in. The article I'm working on is a rundown of how much computer gear I can buy from Temu for under $100 total. I came in at $99.77.

Putting this article together involved looking on the Temu site for items to spotlight. For example, I found an iPad keyboard and mouse that cost about $6.

Also: Is Temu legit? What to know before you place an order

To stay under my $100 budget, I wanted to add all the Temu links to a spreadsheet, find each price, and then move things around until I got the exact total budget I wanted to spend.

The challenge was converting the Temu links into something useful. That's where ChatGPT came in.

The first thing I did was gather all my links. For each product, I copied the link from Temu and pasted it into a Notion page. When pasting a URL, Notion gives you the option to create bookmark blocks that not only contain links but also contain, crucially, product names. Here's a snapshot of that page:

As you can see, I've started selecting the blocks. Once you select all the blocks, you can copy them. I just pasted the entire set into a text editor, which looked like this:

The page looks ugly, but the result is useful.

Let's take a look at one of the data blocks. I switched my editor out of dark mode so it's easier for you to see the data elements in the block:

There are three key elements. The gold text shows the name of the product, surrounded by square brackets. The green text is the base URL of the product, surrounded by parentheses. A question mark separates the main page URL from all the random tracking data passed to the Temu page. I just wanted the main URL. The purple sections highlight the delimiters -- this is the data we're going to feed into ChatGPT.

I first fed ChatGPT this prompt:

Accept the following data and await further instructions.

Then I copied all the information from the text editor and pasted it into ChatGPT. At this point, ChatGPT knew to wait for more details.

The next step is where the meat of the project took place. I wanted ChatGPT to pull out the titles and the links, and leave the rest behind. Here's that prompt:

The data above consists of a series of blocks of data. At the beginning of each block is a section within [] brackets. For each block, designate this as TITLE.

Following the [] brackets is an open paren (followed by a web URL). For each block, extract that URL, but dispose of everything following the question mark, and also dispose of the question mark. Most URLs will then end in .html. We will designate this as URL.

For each block, display the TITLE followed by a carriage return, followed by the URL, followed by two newlines.

This process accomplished two things. It allowed me to name the data, so I could refer to it later. The process also allowed me to test whether ChatGPT understood the assignment.

Also: How to use ChatGPT

ChatGPT did the assignment correctly but stopped about two-thirds through when its buffer ran out. I told the bot to continue and got the rest of the data.

Doing this process by hand would have involved lots of annoying cutting and pasting. ChatGPT did the work in less than a minute.
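If you'd rather not use a chatbot for this step, the same extraction is only a few lines of Python. Here's a minimal sketch, assuming the pasted blocks follow the markdown-style [title](url?tracking) pattern shown above; the sample product link is illustrative, not a real Temu URL.

```python
import re

# Matches markdown-style "[title](url)" pairs as pasted from Notion.
# The URL capture stops at the first "?", so tracking parameters are dropped.
LINK_RE = re.compile(r"\[([^\]]+)\]\(([^)?\s]+)")

def extract_links(text: str) -> list[tuple[str, str]]:
    """Return (title, url) pairs from pasted bookmark blocks."""
    return [(title.strip(), url.strip()) for title, url in LINK_RE.findall(text)]

if __name__ == "__main__":
    sample = ("[10 Inch LCD Writing Tablet, Electronic Memo With Leather Case]"
              "(https://www.temu.com/10-inch-lcd-writing-tablet.html?_x_ads=123)")
    for title, url in extract_links(sample):
        print(title)  # -> 10 Inch LCD Writing Tablet, Electronic Memo With Leather Case
        print(url)    # -> https://www.temu.com/10-inch-lcd-writing-tablet.html
```

That said, the chatbot approach handles messy edge cases (stray brackets, truncated lines) without you having to tweak a regex.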

For my project, Temu's titles are just too much. Instead of:

10 Inch LCD Writing Tablet, Electronis Memo With Leather Protective Case, Electronic Drawing Board For Digital Handwriting Pad Doodle Board, Gifts For

I wanted something more like:

LCD writing tablet with case

I gave this assignment to ChatGPT as well. I reminded the tool that it had previously parsed and identified the data. I find that reminding ChatGPT about a previous step helps it more reliably incorporate that step into subsequent steps. Then I told it to give me titles. Here's that prompt:

You just created a list with TITLE and URL. Do you remember? For the above items, please summarize the TITLE items in 4-6 words each. Only capitalize proper words and the first word. Give it back to me in a bullet list.

I got back a list like this, but for all 26 items:

My goal was to copy and paste this list of clickable links into Excel so I could use column math to play around with the items I planned to order, adding and removing items until I got to my $100 budget. I wanted the names clickable in the spreadsheet because it would be much easier to manage and jump back and forth between Temu and my project spreadsheet.

So, my final ChatGPT task was to turn the list above into a set of clickable links. Again, I started by reminding the tool of the work it had completed. Then I told it to create a list with links:

Do you see the bulleted list you just created? That is a list of summarized titles.

Okay, make the same list again, but turn each summarized title into a live web link with its corresponding URL.

And that was that. I got all the links I needed and ChatGPT did all the grunt work. I pasted the results into my spreadsheet, chose the products, and placed the order.
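If you prefer to build the clickable cells yourself, Excel's HYPERLINK(link_location, friendly_name) function produces the same result as pasting live links. Here's a small sketch continuing the Python example above; generating formulas this way is my own variation, not part of the article's ChatGPT workflow.

```python
def hyperlink_formula(title: str, url: str) -> str:
    """Build an Excel HYPERLINK formula that renders as a clickable title."""
    escaped = title.replace('"', '""')  # Excel escapes embedded quotes by doubling them
    return f'=HYPERLINK("{url}", "{escaped}")'

# Paste the printed line into a spreadsheet cell:
print(hyperlink_formula("LCD writing tablet with case",
                        "https://www.temu.com/10-inch-lcd-writing-tablet.html"))
```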

Also: 6 ways ChatGPT can make your everyday life easier

This is the final spreadsheet. There were more products when I started the process, but I added and removed them from the REMAINING column until I got the budget I was aiming for:

This was a project I could have done myself. But it would have required a ton of cutting and pasting, and a reasonable amount of extra thought to summarize all the product titles. It would have taken me two or three hours of grunt work and probably added to my wrist pain.

But by thinking this work through as an assignment that could be delegated, the entire ChatGPT experience took me less than 10 minutes. It probably took me less time to use ChatGPT to do all that grunt work and write this article than it would have taken me to do all that cutting, pasting, and summarizing.

Also:Thanks to my 5 favorite AI tools, I'm working smarter now

This sort of project isn't fancy and it isn't sexy. But it saved me a few hours of work I would have found tedious and unpleasant. Next time you have a data-parsing project, consider using ChatGPT.

Oh, and stay tuned. As soon as Temu sends me their haul, I'll post the detailed article about how much tech gear you can get for under $100. It'll be fun. See you there.



ChatGPT use linked to sinking academic performance and memory loss – Yahoo News UK

ChatGPT use is linked to bad results and memory loss. (Getty Images)

Using AI software such as ChatGPT is linked to poorer academic performance, memory loss and increased procrastination, a study has shown.

The AI chatbot ChatGPT can generate convincing answers to simple text prompts, and is already used weekly by up to 32% of university students, according to research last year.

The new study found that university students who use ChatGPT to complete assignments find themselves in a vicious circle: they don't give themselves enough time to do their work, are forced to rely on ChatGPT, and, over time, their ability to remember facts diminishes.

The research was published in the International Journal of Educational Technology in Higher Education. Scientists conducted interviews with 494 students about their use of ChatGPT, with some admitting to being "addicted" to using the technology to complete assignments.

The researchers wrote: "Since ChatGPT can quickly respond to any questions asked by a user, students who excessively use ChatGPT may reduce their cognitive efforts to complete their academic tasks, resulting in poor memory. Over time, over-reliance on generative AI tools for academic tasks, instead of critical thinking and mental exertion, may damage memory retention, cognitive functioning, and critical thinking abilities."

In the interviews, the researchers were able to pinpoint problems experienced by students who habitually used ChatGPT to complete their assignments.

The researchers surveyed students three times to work out what sort of student is most likely to use ChatGPT, and what effects heavy users experienced.

The researchers then asked questions about the effects of using ChatGPT.

Study author Mohammed Abbas, from the National University of Computer and Emerging Sciences in Pakistan, told PsyPost: "My interest in this topic stemmed from the growing prevalence of generative artificial intelligence in academia and its potential impact on students.


"For the last year, I observed an increasing, uncritical, reliance on generative AI tools among my students for various assignments and projects I assigned. This prompted me to delve deeper into understanding the underlying causes and consequences of its usage among them."

The study found that students who were results-focused were less likely to rely on AI tools to do tasks for them.

The research also found that students who relied on ChatGPT were not getting the full benefit of their education - and actually lost the ability to remember facts.

"Our findings suggested that excessive use of ChatGPT can have harmful effects on students personal and academic outcomes. Specifically, those students who frequently used ChatGPT were more likely to engage in procrastination than those who rarely used ChatGPT," Abbas said.

"Similarly, students who frequently used ChatGPT also reported memory loss. In the same vein, students who frequently used ChatGPT for their academic tasks had a poor grade average."

The researchers found that students who felt under pressure were more likely to turn to ChatGPT - but that this then led to worsening academic performance and further procrastination and memory loss.

The researchers suggest that academic institutions should be mindful that heavy workloads can drive students to use ChatGPT.

The researchers also said academics should warn students of the negative impact of using the software.

"Higher education institutions should emphasise the importance of efficient time management and workload distribution while assigning academic tasks and deadlines," they said.

"While ChatGPT may aid in managing heavy academic workloads under time constraints, students must be kept aware of the negative consequences of excessive ChatGPT usage."


ChatGPT is here. There is no going back. – The Presbyterian Outlook

Working on a college campus, you must be careful about mentioning the use of AI or the purpose of such a tool. If you're not, you may catch a professor reciting their monologue outlining the evils of AI in the academic world. And while there is some validity to their reaction and concerns about this emerging technological tool, I find it to be just that: a tool.

I think part of what makes AI a challenge for the academic world is that there are no true rules or guides to help navigate this new instrument. Students can use it, and do use it, in ways others might deem harmful to academic integrity. I understand that side. I get the hesitation. We received this tool before we could develop the ethics about its use.

But in my experience, it is never a good practice to shut something out or make it restrictive in a way that will cause pushback and challenge. I try to embrace this tool instead of running away or ignoring it.

I am currently reworking my future lesson plans with the help of AI and finding ways to integrate its use alongside traditional coursework. To me, this process is fascinating. There is still a lot to learn about AI and plenty of need for ethical reflection on its use. But this much is clear to me: it can be helpful.

Several months ago, my coworkers and I decided to try ChatGPT. We wanted to see what all the fuss from our faculty colleagues was about. We sat together and thought of questions related to our work. We created the parameters for our topics and entered them all into ChatGPT. What resulted was a wild experience: outlines for emails, basic lesson plans, liturgy for worship, prayers and letters to community partners. The list went on and on. And it was captivating to engage in the process.

The items ChatGPT produced were not perfect. There were grammatical errors. There were some oddly worded phrases. All these things indicated that the product was not something created by a human. And that absence is the key to AI ethics for me.

We are just starting to build an ethical framework for AI in the academic world, and I hope the church is also thinking about such a thing. But the key to me is the human element. When working with ChatGPT to craft prayers, it does a decent job. But if you compare an AI prayer to a Chaplain Maggie prayer, the thing missing would be the heart, the human element.

ChatGPT has been introduced to our lives. There is no going back. We should find ways to integrate it into our work rather than push back or turn away from it. It can offer words when you are having a brain freeze or are too tired to think. It can offer a frame for your writing. It isn't perfect, but it is a tool that we can and should learn how to use. Just don't forget to add your human uniqueness as you go along.

Read more from the original source:

ChatGPT is here. There is no going back. - The Presbyterian Outlook

4 Reasons to Start Using Claude 3 Instead of ChatGPT – MUO – MakeUseOf

In the AI chatbot space, ChatGPT has been the undisputed leader since its launch in November 2022. However, with the release of Claude 3, it is increasingly looking like ChatGPT might be losing that title. Here are four reasons you should consider switching from ChatGPT to Claude.

Besides occasional science homework, programming tasks, and fun games, one of the most popular use cases for AI chatbots is creative writing. Most users lean on AI chatbots to help draft an email, cover letter, resume, article, or song lyrics: basically one creative write-up or another. While ChatGPT has clearly been the favored option, owing mostly to its brand name and publicity, Claude has consistently delivered top-notch results, even in its earlier iterations. And it's not just about providing top-notch results: Claude, especially backed by the latest Claude 3 model, outperforms ChatGPT in a wide range of creative writing tasks.

Having used both chatbots consistently since their launch, I find that Claude, although not necessarily the better model overall, is significantly better at creating write-ups that mimic human "creativity and imperfections." Putting both chatbots to the test, ChatGPT's write-ups, although grammatically correct, were full of tell-tale signs of an AI-written piece. Claude's write-ups read more naturally and sound human. Although not perfect, they are likely to be more engaging and creative.

Too frequently, ChatGPT falls back on clichés and predictable word choices. Ask ChatGPT to write about a business topic, and there's a good chance you will see phrases like "In today's business environment," "In recent history," and "In the fast-paced digital landscape" in the opening paragraphs.

Putting our theory to the test confirmed the prediction: ChatGPT (GPT-3.5 and GPT-4) used clichéd intros in five out of five trials (the sample outputs appeared as screenshots in the original article).

Claude, on the other hand, produced varied results in four out of five trials, avoiding the cliché from the very first trial.

Besides clichés, ChatGPT, more than Claude, tends toward sporadic use of connectives like "in conclusion" and "as a result," and toward unnecessary emphasis, reaching for emphatic words like "undisputed," "critical," "unquestionable," and "must."

But besides these flaws, how do write-ups from each chatbot sound from a holistic point of view?

To top off the comparison, I asked both chatbots to produce rhyming rap lyrics on the theme "coconut to wealth." Claude seems the better option, but I'll let you be the judge.

Here's ChatGPT's take, followed by Claude's (both sets of lyrics appeared as screenshots in the original article).

Early adopters of ChatGPT probably have a deep-rooted preference for the AI chatbot, but when it comes to creative writing, ChatGPT has some serious catching up to do in many areas.

Besides Google's Gemini AI chatbot, there are hardly any major AI chatbots on the market that offer Claude's multimodal features for free. With the free version of ChatGPT, all you get is text generation abilities, and that's it. No file uploads for analysis, no image processing, nothing else! Claude, on the other hand, offers these premium features on its free tier: you can use image prompts or upload files for analysis at no cost if you use the free beta version of the bot.

The context window is the amount of text data an AI chatbot can process in one go. Think of it as how many things you can hold in memory (and recall) at a time.

Depending on the version of ChatGPT you use, you get a context window of 4k, 8k, 16k, 32k, or 128k tokens. For clarity, a 4k context window can accommodate around 3,000 words, while a 32k window can accommodate around 24,000 words. With the ChatGPT free tier, you get the lowest of these options (4k or 8k), meaning a few pages of text. You can access the 16k and possibly 32k options on the ChatGPT Plus or Team plans, while the 128k context window appears to be reserved for ChatGPT Enterprise plans.

Claude, by contrast, offers a 200k context window on both its free and premium plans, a dramatic improvement over ChatGPT's 4k or 8k free-tier window.

Why does this matter? The larger the context window, the more text the AI chatbot can process at a time without making things up. Claude's 200k context window is equivalent to around 150,000 words. That means you can theoretically feed Claude 150,000 words at once, while ChatGPT caps out at around 24,000 words even on its premium tiers. The difference is night and day, at least in theory.
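
If you want to sanity-check these token-to-word conversions yourself, the arithmetic is simple. Here is a minimal sketch, assuming the roughly 0.75-words-per-token ratio the figures above imply (real tokenizers vary by language and writing style):

# Rough conversion from context-window size (in tokens) to word capacity.
# The 0.75 words-per-token ratio is an assumption inferred from the
# article's own figures (4k tokens ~ 3,000 words); actual tokenizers vary.
WORDS_PER_TOKEN = 0.75

def approx_words(context_tokens: int) -> int:
    return int(context_tokens * WORDS_PER_TOKEN)

for tokens in (4_000, 8_000, 16_000, 32_000, 128_000, 200_000):
    print(f"{tokens:>7,} tokens ~ {approx_words(tokens):>7,} words")
# 4,000 tokens ~ 3,000 words ... 200,000 tokens ~ 150,000 words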

Rate limits can be a pain. You're in the middle of an interesting prompting session when you get an alert that you've reached your limit and have to wait (sometimes hours!) for a reset. It's a huge joy killer and can set your work back hours. However, this happens on both ChatGPT and Claude, so it's even ground on that point.

ChatGPT offers 40 messages every three hours on the Plus plan, while Claude offers 100 messages per eight hours. If you're not lost in the optics and do the math, ChatGPT's message limits are slightly better than Claude's. But there's more to it.

OpenAI dynamically throttles your usage limits. This means the limit you see isn't what you'll always get. It depends on the demand, as per OpenAI. On the other hand, despite having slightly lower usage limits, Claude can actually be more liberal with the limits depending on how much text you use per message.

So if, for instance, you send around 2,000 words per message (around 200 English sentences of 15-25 words each), you should still get "at least" the 100 messages per eight hours. Two thousand words per prompt is generous; few people get that wordy in basic prompting. If you use fewer words per prompt, you should theoretically get even more messages per session.
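
The per-hour arithmetic behind that comparison is worth making explicit. A quick sketch of the advertised limits only (as noted above, OpenAI throttles these dynamically, so real-world numbers will differ):

# Advertised message limits, normalized to messages per hour.
chatgpt_per_hour = 40 / 3    # ChatGPT Plus: 40 messages every 3 hours
claude_per_hour = 100 / 8    # Claude: 100 messages every 8 hours
print(f"ChatGPT Plus: {chatgpt_per_hour:.1f} messages/hour")  # ~13.3
print(f"Claude:       {claude_per_hour:.1f} messages/hour")   # ~12.5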

So while ChatGPT might seem more generous on paper, if you use both chatbots daily, Claude tends to be the more generous option in practice, although not at all times.

While early adopters may have a sentimental attachment to ChatGPT, it's becoming increasingly clear that Claude is a force to be reckoned with. As the AI landscape continues to evolve, it will be fascinating to see how these titans of conversational AI push each other to new heights, ultimately benefiting users with ever-improving and more capable chatbots. The future of AI-powered interactions has never been more exciting.

Original post:

4 Reasons to Start Using Claude 3 Instead of ChatGPT - MUO - MakeUseOf

Le Monde and Open AI sign partnership agreement on artificial intelligence – Le Monde

As part of its discussions with major players in the field of artificial intelligence, Le Monde has just signed a multi-year agreement with OpenAI, the company known for its ChatGPT tool. This agreement is historic as it is the first signed between a French media organization and a major player in this nascent industry. It covers both the training of artificial intelligence models developed by the American company and answer engine services such as ChatGPT. It will benefit users of this tool by improving its relevance thanks to recent, authoritative content on a wide range of current topics, while explicitly highlighting our news organization's contribution to OpenAI's services.

This is a long-term agreement, designed as a true partnership. Under the terms of the agreement, our teams will be able to draw on OpenAI technologies to develop projects and functionalities using AI. Within the framework of this partnership, and for the duration of the agreement, the two parties will collaborate on a privileged and recurring basis. A dialogue between the teams of both parties will ensure the monitoring of products and technologies developed by OpenAI.

For the general public, the effects of this agreement will be visible on ChatGPT, which can be described, in simple terms, as an answer engine using established facts or comments expressed by a limited number of references. The engine generates the most plausible and predictive synthetic answer to a given question.

The agreement between Le Monde and OpenAI allows the latter to use Le Monde's corpus, for the duration of the agreement, as one of the major references to establish its answers and make them reliable. It provides for references to Le Monde articles to be highlighted and systematically accompanied by a logo, a hyperlink, and the titles of the articles used as references. Content supplied to us by news agencies and photographs published by Le Monde are expressly excluded.

For Le Monde, this agreement is further recognition of the reliability of the work of our editorial teams, often considered a reference. It is also a first step toward protecting our work and our rights, at a time when we are still at the very beginning of the AI revolution, a wave predicted by many observers to be even more imposing than the digital one. We were among the very first signatories in France of the "neighboring rights" agreements, with Facebook and then Google. Here too, we had to ensure that the rights of press publishers applied to the use of Le Monde content referenced in answers generated by the services developed by OpenAI.

This point is crucial to us. We hope this agreement will set a precedent for our industry. With this first signature, it will be more difficult for other AI platforms to evade or refuse to negotiate. From this point of view, we are convinced that the agreement is beneficial for the entire profession.

Lastly, this partnership enables the Société Éditrice du Monde, Le Monde's holding company, to work with OpenAI to explore advances in this technology, anticipating as far as possible any consequences, negative or favorable. It also has the advantage of consolidating our business model by providing a significant source of additional, multi-year revenue, including a share of neighboring rights. An "appropriate and equitable" portion of these rights, as defined by law, will be paid back to the newsroom.

These discussions with AI players, punctuated by this first signature, are born of our belief that, faced with the scale of the transformations that lie ahead, we need, more than ever, to remain mobile in order to avoid the perils that are taking shape and seize the opportunities for development. The dangers have already been widely identified: the plundering or counterfeiting of our content, the industrial and immediate fabrication of false information that flouts all journalistic rules, the re-routing of our audiences towards platforms likely to provide undocumented answers to every question. Simply put, the end of our uniqueness and the disappearance of an economic model based on revenues from paid distribution.

These risks, which are probably fatal for our industry, do not prevent the existence of historic opportunities: putting the computing power of artificial intelligence at the service of journalism, making it easier to work with data in a shorter timeframe as part of large-scale investigations, translating our written content into foreign languages or producing audio versions to expand our readership and disseminate our information and editorial formats to new audiences.

To take the measure of these challenges, we decided to act in steps. The first was devoted to protecting our content and strengthening our procedures. Last year, we first activated an opt-out clause on our sites, following the example of several other media organizations, prohibiting AI platforms from accessing our data to train their generative intelligence models without our agreement. We also collectively discussed and drew up an appendix to our ethics and deontology charter, devoted specifically to the use of AI within our group. In particular, this text states that generative artificial intelligence cannot be used in our publications to produce editorial content ex-nihilo. Nor can it replace the editorial teams that form the core of our business and our value. Our charter does, however, authorize the use of generative AI as a tool to assist editorial production, under strictly defined conditions.

With this in mind, another phase was opened, dedicated to experimenting with artificial intelligence tools in very specific sectors of our business. Using DeepL, we were able to launch our Le Monde in English website and app, whose articles are initially translated by this AI tool, before being re-read by professional translators and then edited and published by a team of English-speaking journalists. At the same time, we signed an agreement with Microsoft to test the audio version of our articles. This feature, now available on almost all our French-language articles published in our app, opens us up to new audiences, often younger, as well as to new uses, particularly for people on the move. The third step is the one that led us to sign the agreement with OpenAI, which we hope will create a dynamic favorable to independent journalism in the new technological landscape that is taking shape.

At each of these stages, Le Monde has remained true to the spirit that has driven it since the advent of the Internet, and during the major changes in our industry: We have sought to reconcile the desire to discover new territories, while taking care to protect our editorial identity and the high standards of our content. In recent years, this approach has paid off. As the first French media organization to rely on digital subscriptions without ever having recourse to online kiosks, we have for several years been able to claim a significant lead in the hierarchy of national general-interest dailies, thanks to an unprecedented number of over 600,000 subscribers. In the same way, our determination to be a pioneer on numerous social media platforms has given us a highly visible place on all of them, helping to rejuvenate our audience.

The agreement with OpenAI is a continuation of this strategy of reasoned innovation. And we continue to guarantee the total independence of our newsroom: It goes without saying that this new agreement, like the previous ones we have signed, will in no way hinder our journalists' freedom to investigate the artificial intelligence sector in general, and OpenAI in particular. In fact, over the coming months, we will be stepping up our reporting and investigative capabilities in this key area of technological innovation.

This is the very first condition of our editorial independence, and therefore of your trust. As we move forward into the new world of artificial intelligence, we have close to our hearts an ambition that goes back to the very first day of our history, whose 80th anniversary we are celebrating this year: deserving your loyalty.

Le Monde

Louis Dreyfus (Chief Executive Officer of Le Monde) and Jérôme Fenoglio (Director of Le Monde)

Translation of an original article published in French on lemonde.fr; the publisher may only be liable for the French version.

Read the original here:

Le Monde and Open AI sign partnership agreement on artificial intelligence - Le Monde

Jen Uchida (AeroEngr BS’05, MS’05) | Ann and H.J. Smead Aerospace Engineering Sciences – University of Colorado Boulder

Jen Uchida graduated from the University of Colorado Boulder in 2005 with bachelor's and master's degrees in aerospace engineering sciences. Following graduation, she accepted a position with the Naval Air Systems Command (NAVAIR) as a civilian flight test engineer for the Marine Corps in Patuxent River, MD. There she supported the experimental flight testing of the V-22 Osprey, deploying several new and lifesaving capabilities to the fleet and logging over 100 flight hours of crew time.

Uchida is a graduate of the US Naval Test Pilot School, Class 140. In 2012, she applied to NASA's astronaut program and was one of the top 50 candidates for selection in 2013.

Following her time with NAVAIR, she spent a winter in Big Sky, MT as a ski instructor before heading to Gulfstream Aerospace to help lead the experimental test efforts for FAA type certification on the new G500 and G600 programs. Uchida moved to Seattle in 2020 to become the Manager of Test and Evaluation at AeroTEC, where she directed the work of flight test engineers, test pilots, flight test instrumentation engineers, and software engineers on various customer projects.

Uchida is now the Senior Test Program Manager for Product Development at Boeing Test and Evaluation. She is responsible for leading the planning and execution for all BCA Product Development test work statements for Lab and Flight Test.

Uchida is the founding president of the Coastal Empire chapter of the Society of Flight Test Engineers. She has also served as Vice President on its international board and in 2022 she was elected President. In addition to the volunteer work she does for SFTE, Uchida is also an executive mentor for the Brooke Owens Fellowship and serves on the External Advisory Board for CU Boulder's Ann and H.J. Smead Department of Aerospace Engineering Sciences.

Read the original here:

Jen Uchida (AeroEngr BS'05, MS'05) | Ann and H.J. Smead Aerospace Engineering Sciences - University of Colorado Boulder

I created an AI app in 10 minutes and I might never think about artificial intelligence in the same way again – TechRadar

Pretty much anything we can do with AI today might have seemed like magic just a year ago, but MindStudio's platform for creating custom AI apps in a matter of minutes feels like a new level of alchemy.

The six-month-old free platform, which you can find right now under youai.ai, is a visual studio for building AI workflows, assistants, and AI chatbots. In its short lifespan it's already been used, according to CEO Dimitry Shapiro, to build more than 18,000 apps.

Yes, he called them "apps", and if you're struggling to understand how or why anyone might want to build AI applications, just look at OpenAI's relatively new GPT apps (aka GPTs). These let you lock the powerful GPT-3.5 into topic-based thinking that you can package up, share, and sell. Shapiro, however, noted the limits of OpenAI's approach.

He likened GPTs to "bookmarking a prompt" within the GPT sphere. MindStudio, on the other hand, is generative model-agnostic. The system lets you use multiple models within one app.

If adding more model options sounds complicated, I can assure you it's not. MindStudio is the AI development platform for non-developers.

To get you started, the company provides an easy-to-follow 18-minute video tutorial. The system also helps by offering a healthy collection of templates (many of them business-focused), or you can choose a blank template. I followed the guide to recreate the demo AI app (a blog post generator), and my only criticism is that the video is slightly out of date, with some interface elements having been moved or renamed. There are some prompts to note the changes, but the video could still do with a refresh.

Still, I had no trouble creating that first AI blog generator. The key here is that you can get a lot of the work done through a visual interface that lets you add blocks along a workflow and then click on them to customize them, add details, and choose which AI model to use (the list includes GPT-3.5 Turbo, PaLM 2, Llama 2, and Gemini Pro). You don't have to use the same model for every task in your app; it might be, for example, that GPT-3.5 suits fast chatbots while PaLM is better for math. However, MindStudio cannot, at least yet, recommend which model to use and when.

The act of adding training data is also simple. I was able to find web pages of information, download the HTML, and upload it to MindStudio (you can upload up to 150 files on a single app). MindStudio uses the information to inform the AI, but will not be cutting and pasting information from any of those pages into your app responses.

Most of MindStudio's clients are businesses, and it does hide some more powerful features (embedding on third-party websites) and models (like GPT-4 Turbo) behind a paywall, but anyone can try their hand at building and sharing AI apps (you get a URL for sharing).

Confident in my newly acquired, if limited, knowledge, I set about building an AI app revolving around mobile photography advice. Granted, I used the framework I'd just learned in the AI blog post generator tutorial, but it still went far better than I expected.

One of the nice things about MindStudio is that it allows for as much or as little coding as you're prepared to do. In my case, I had to reference exactly one variable that the model would use to pull the right response.

There are a lot of smart, dead-simple controls that can even teach you something about how models work. MindStudio lets you set, for instance, the 'temperature' of your model to control the randomness of its responses: the higher the temperature, the more unique and creative each response. If you like your model verbose, you can drag another slider to set a response size of up to 3,000 characters.
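
MindStudio's internals aren't public, but those sliders map onto standard text-generation parameters. Here's a minimal sketch of the same two knobs on a raw model API, using OpenAI's Python client purely as an illustration; the model name, prompt, and values are placeholders, not MindStudio's actual configuration:

# Temperature and response-length limits as they appear on a typical
# model API. This is an illustrative sketch, not MindStudio's pipeline.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Give me one golden-hour phone photography tip."}],
    temperature=1.2,  # higher temperature -> more random, "creative" output
    max_tokens=750,   # caps the reply length (~3,000 characters of English)
)
print(response.choices[0].message.content)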

The free service includes unlimited consumer usage and messages, some basic metrics, and the ability to share your AI via a link (as I've done here). Pro users pay $23 a month for more powerful models like GPT-4, reduced MindStudio branding, and, among other things, site embedding. The $99-a-month tier includes everything in Pro but adds the ability to charge for access to your AI app, better analytics, API access, full chat transcripts, and enterprise support.

I can imagine small and medium-sized businesses using MindStudio to build customer engagement and content capture on their sites, and even as a tool for guiding users through their services.

Even at the free level, though, I was surprised at the level of customization MindStudio offers. I could add my own custom icons and art, and even build a landing page.

I wouldn't call my little AI app anything special, but the fact that I could take the germ of an idea and turn it into a bespoke chatbot in 10 minutes is surprising even to me. That I get to choose the right model for each job within an AI app is even better; and that this level of fun and utility is free is the icing on the cake.

The rest is here:

I created an AI app in 10 minutes and I might never think about artificial intelligence in the same way again - TechRadar

Why This Brain-Hacking Technology Will Turn Us All Into Cyborgs – The Daily Beast

It felt like magic: As I moved my head and eyes across the computer screen, the cursor moved with me. My goal was to click on pictures of targets on the display. Once the cursor reached a target, I would blink, causing it to click on the target, as if it were reading my mind.

Of course, that's essentially what was happening. The headband I was wearing picked up on my brain, eye, and facial signals. This data was fed through AI software that translated it into commands for the cursor. This allowed me to control what was on the screen, even though I didn't have a mouse or a trackpad. I didn't need them. My mind was doing all of the work.
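
The article describes that pipeline only at a high level: biosignals in, interpretation in the middle, device commands out. As a purely illustrative toy (not AAVAA's actual method, and with an assumed amplitude threshold), the blink-to-click step could look something like this:

# Toy blink-to-click detector: a blink shows up as a large transient
# spike on eye-adjacent channels. Threshold and units are assumptions.
import numpy as np

BLINK_THRESHOLD_UV = 120.0  # assumed spike amplitude for a blink, in microvolts

def detect_blink(signal_window: np.ndarray) -> bool:
    return float(np.max(np.abs(signal_window))) > BLINK_THRESHOLD_UV

def on_new_window(signal_window: np.ndarray) -> None:
    if detect_blink(signal_window):
        print("click")  # a real system would dispatch an OS-level click event

on_new_window(np.random.normal(0.0, 10.0, 250))          # quiet window: no click
on_new_window(np.concatenate([np.zeros(200), [300.0]]))  # simulated blink: click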

"The brain, eye, and face are great generators of electricity," Naeem Kemeilipoor, the founder of brain-computer interface (BCI) startup AAVAA, told The Daily Beast at the 2024 Consumer Electronics Show. "Our sensors pick up the signals, and using AI we can interpret them."

The headband is just one of AAVAA's products that promise to bring non-invasive BCIs to the consumer market. Its other devices include AR glasses, headphones, and earbuds that all accomplish essentially the same function: reading your brain and facial signals to allow you to control your devices.

While BCI technology has largely remained in the research labs of universities and medical institutions, startups like AAVAA are looking for ways to put it in the hands, or rather on the heads, of everyday people. These products go beyond what we typically expect of our smart devices, seamlessly integrating our brains with the technology around us. They also offer a lot of hope and promise for people with disabilities or limited mobility, allowing them to interact with and control their computers, smartphones, and even wheelchairs.

However, BCIs also blur the lines between the tech around us and our very minds. Though they can be helpful for people with disabilities, their widespread adoption raises questions and concerns about privacy, security, and even a user's very personhood. Allowing a device to read our brain signals throws open the doors to these ethical considerations, so as BCIs steadily become more popular, they could become more dangerous as well.

AAVAA's BCI devices on a table at CES 2024.

BCIs loomed large throughout CES 2024, and for good reason. Beyond letting you control your devices, wearables that can read brain signals also promise greater insights into users' health, wellness, and productivity habits.

There were also a number of devices targeted at improving sleep quality, such as the Frenz Brainband. The headband measures users' brainwaves, heart rate, and breathing (among other metrics) to provide AI-curated sounds and music to help them fall asleep.

"Every day is different, and so every day your brain will be different," a Frenz spokesperson told The Daily Beast. "Today, your brain might feel like white noise or nature sounds. Tomorrow, you might want binaural beats. Based on your brain's reactions to your audio content, we know what's best for you."

To produce the sounds, the headband uses bone conduction, which converts audio data into vibrations on the skull that travel to the inner ear, producing sound. Though it was difficult to hear clearly on the crowded show floor of CES, the headband managed to produce soothing beats as I wore it in a demo.

"When you fall asleep, the audio automatically fades out," the spokesperson said. "The headband keeps tracking all night, and if you wake up, you can press a button on the side to start the sounds to put you back to sleep."

However, not all BCIs are quite as helpful as they might appear. Take the MW75 Neuro, a pair of headphones from Master & Dynamic that purports to read your brain's electroencephalogram (EEG) signals to provide insights on your level of focus. If you become distracted or your focus wanes for whatever reason, it alerts you so you can maintain productivity.

Sure, this might seem helpful if you're a student looking to squeeze in more quality study time or a writer trying to hit a deadline, but it's also a stark and grim example of late-stage capitalism and a culture obsessed with work and productivity. While this technology is relatively new, it's not difficult to imagine a future where these headphones are commonplace and, potentially, required by workplaces.

When most people think about BCIs, they typically think of brain-chip startups like Synchron and Neuralink. However, these technologies require users to undergo invasive surgeries in order to implant the technology. Non-invasive BCIs from the likes of AAVAA, on the other hand, require just a headband or headphones.

That's what makes them so promising, Kemeilipoor explained. No longer is the technology limited to the users who need it most, such as people with disabilities. Any user can pop on the headband and start scrolling on their computer or turning their lamps and appliances on and off.

The Daily Beast's intrepid reporter Tony Ho Tran wears AAVAA's headband, which promises to bring non-invasive BCIs to the consumer market.

"It's out of the box," he explained. "We've done the training [for the BCI] and now it works. That's the beauty of what we do. It works right out of the box, and it works for everyone."

However, the fact that it can work for everyone is a top concern for ethical experts. Technology like this creates a minefield of potential privacy issues. After all, these companies may potentially have completely unfettered access to data from our literal brains. This is information that can be bought, sold, and used against consumers in an unprecedented way.

One comprehensive review, published in 2017 in the journal BMC Medical Ethics, pointed out that privacy is a major concern for potential users for exactly this reason. "BCI devices could reveal a variety of information, ranging from truthfulness, to psychological traits and mental states, to attitudes toward other people," the authors wrote, "creating potential issues such as workplace discrimination based on neural signals."

To his credit, Kemeilipoor was adamant that AAVAA does not and will not have access to individual users' brain signal data. But the concerns remain, especially since there are notable examples of tech companies misusing user data. Facebook, for example, has been sued multiple times, for millions of dollars, for storing users' biometric data without their knowledge or consent. (It is certainly not the only company to have done so.)

These issues aren't going away, and they'll only be exacerbated by the fusion of technology and the human brain. This phenomenon also raises concerns about personhood. At what point, exactly, does the human end and the computer begin, once you can control devices as an extension of yourself, like your arms or legs?

The question, "Is it a tool or is it myself?", takes on an ethical valence when researchers ask whether BCI users will become cyborgs, the authors wrote. They later added that some ethical experts worry that becoming more robotic makes one less human.

Yet the benefits are undeniable, especially for those to whom BCIs could give more autonomy and mobility. You're no longer limited by what you can do with your hands. You can control the things around you simply by looking in a certain direction or moving your face in a specific way. It doesn't matter if you're in a wheelchair or completely paralyzed. Your mind is the limit.

"This type of technology is like the internet of humans," Kemeilipoor said. "This is the FitBit of the future. Not only are you able to monitor all your biometrics, it also allows you to control your devices. And it's coming to market very soon."

It's promising. It's scary. And it's also inevitable. The biggest challenge we all must face is making sure that, as these devices become more popular and we gradually give over our minds and bodies to technology, we don't lose what makes us human in the first place.

Read more:

Why This Brain-Hacking Technology Will Turn Us All Into Cyborgs - The Daily Beast

Why Argentinians are gambling everything on ‘anarcho-capitalist’ Javier Milei podcast – The Guardian

He's known as "the madman," his hairdresser likens him to Wolverine, and the man himself prefers the term "anarcho-capitalist." But this week Javier Milei has a new title: president of Argentina.

By now the world should not be surprised by a far-right TV personality with attention-grabbing hair winning at the polls, but Milei's meteoric rise up the ranks of Argentinian politics still shocked observers. On the election trail, he promised to close the central bank and dollarise the economy, and he insulted Argentina's biggest trading partners, China and Brazil. But what will he do now that he has power?

The Guardian's Latin America correspondent, Tom Phillips, has been in Buenos Aires for Milei's inauguration. He tells Nosheen Iqbal how he has spoken to everyone from former ministers to astrologers to try to understand Milei's appeal, and speculates on how Argentina will fare under the former Rolling Stones tribute band member. He explains the toll sky-high inflation is taking on the people of Argentina, and why voters would rather risk everything on Milei than prop up the status quo.

More here:

Why Argentinians are gambling everything on 'anarcho-capitalist' Javier Milei podcast - The Guardian

OpenAI Cofounder Who Pushed Out Sam Altman Is In a Confusing Limbo

Do The Limbo

After moving to oust Sam Altman, OpenAI cofounder and chief scientist Ilya Sutskever is in a sort of limbo, and nobody seems to know what will happen next.

As Business Insider reports, based on interviews with people in the know who spoke on condition of anonymity, it remains unclear what role Sutskever will play at the AI firm going forward after turning on Altman just before OpenAI's Thanksgiving-week massacre.

"Ilya is always going to have had an important role," one of those insiders said. "But, you know, there are a lot of other people who are picking up and taking that responsibility that historically Ilya had."

Ouch. Before the incredible failed coup at the company, Sutskever was far from a household name, and fewer still knew who he was before ChatGPT burst onto the scene a year ago.

Known primarily for his outlandish statements about algorithmic sentience, the Russian-born researcher is considered something of an "AI god" by his acolytes — and now is thought of as a traitor to others who think he won't be able to come back from voting alongside two fellow (and now former) OpenAI board members to fire Altman as CEO over vague accusations of dishonesty.

What's Going On

According to two insiders who spoke to BI, Sutskever hasn't been seen in the firm's San Francisco offices all week, and his position within the company is "to be determined," one of those sources said.

This isn't exactly surprising given that Altman hinted pretty explicitly in his note following his re-hiring as CEO that although he has "zero ill will" towards his fellow cofounder, the company is nevertheless "discussing how he can continue his work at OpenAI." In an interview with The Verge, however, the CEO did admit that he was "hurt and angry" that Sutskever had essentially shanked him Brutus-style.

Sutskever, for his part, has also been making some vague statements online suggesting continued tumult at OpenAI.

In one since-deleted tweet, he posted a reference to the memetic phrase "the beatings will continue until morale improves," which he said "applies more often than it has any right to." In another post made on his art Instagram, this one still up, he posted a stern-looking cloud head — though that one, at least, looks more like the artist himself than any of his coworkers.

As BI's sources described, the working relationship between Altman, Sutskever, and Greg Brockman — the other cofounder who resigned in solidarity with the CEO after his ouster, and who was brought back upon his return — has soured tremendously.

"Once trust is broken," one former staffer explained, "it cannot be regained."

More on OpenAI: Sam Altman's Right-Hand Man Says AI Is Overhyped

See original here:
OpenAI Cofounder Who Pushed Out Sam Altman Is In a Confusing Limbo

Don’t knock the economic value of majoring in the liberal arts | Brookings – Brookings Institution

For years, economists and more than a few worried parents have argued over whether a liberal arts degree is worth the price. To many commentators, the debate now seems to be over, and the answer is "no."

Can we please lighten up on knocking the value of a liberal arts education? With a recent spate of bad press for liberal arts departments on university campuses, many commentators conclude that the writing is on the wall. When it comes to economics, I argue the liberal arts still belong on college campuses: The liberal arts pay.

There are many reasons to be legitimately concerned about the direction the humanities and other liberal arts have taken in recent decades. Course enrollments and declared majors have plummeted across many disciplines since the pandemic, from history to foreign languages. This is the continuation of a decades-old pattern: According to the American Academy of Arts and Sciences' Humanities Project, the share of humanities degrees among all bachelor's degrees peaked in 1967 at 17.2% and by 2018 had fallen to 4.4%.

Research universities also continue to turn out humanities doctorates for whom job prospects are bleak. Liberal arts colleges have been at risk for decades.

Despite arguments that a liberal arts education may be exactly the right preparation for a world in which routine tasks are taken over by AI, students are apparently not yet persuaded. Thus, humanities departments in colleges face very real budget pressures, including sometimes the risk of being eliminated. Indeed, West Virginia University is eliminating all foreign language degrees, and the University of Nebraska at Kearney has also proposed cutting its theater and philosophy programs.

I suspect that part of the political push to eliminate the humanities, especially from off-campus sources, is connected to the myth that the price of college has skyrocketed. In fact, the real price of college attendance has been falling modestly in recent years. Consequently, the share of undergraduates taking out student loans and the loan values are also down slightly.

Since I'm an economist, in what follows I'm going to stick to earnings numbers. But I also recognize there is more to a career than earnings. The American Academy of Arts and Sciences reports that responses to the statement "I am deeply interested in the work I do" are about the same for majors in the arts, humanities, engineering, and social sciences, although responses were a little higher in education and the natural sciences. And for a good reminder that careers are not all there is to life, see this article by a former poet laureate of Mississippi, who writes, "Students who master written and spoken communication can change the world."

Angst notwithstanding, here are two facts that are both true: On average, liberal arts graduates earn less than those with bachelor's degrees in other fields, and they earn considerably more than those with only a high school diploma.

Here's a picture that illustrates why both are true.

Using data from the American Community Survey (ACS) collected between 2017 and 2021, I've looked at graduates falling into one of four categories: education ended with a high school diploma, education ended with an associate degree, education ended with a bachelor's degree in a liberal arts field, and education ended with a bachelor's degree in a field other than the liberal arts. Using the categories provided in the ACS, I've defined liberal arts majors as "Area, Ethnic, and Civilization Studies"; "Linguistics and Foreign Languages"; "English Language, Literature, and Composition"; "Liberal Arts and Humanities"; "Fine Arts"; and "History." Everything else I'm categorizing as not liberal arts. The figure above gives average annual wage and salary income for each kind of degree. (The latest data is for 2021, so all the figures are in 2021 dollars. The sample is for ages 23 through 65. For a similar analysis with slightly older data but a broader listing of majors, see The College Payoff.)
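
As a sketch of that grouping step, here is roughly what it looks like in code. This is a minimal illustration, not the author's actual pipeline; the file name and column names ('degree_field', 'incwage', 'education') are assumptions about an ACS extract:

# Group ACS respondents into the four credential categories above and
# compare average and median wage income. Column names are hypothetical.
import pandas as pd

LIBERAL_ARTS = {
    "Area, Ethnic, and Civilization Studies",
    "Linguistics and Foreign Languages",
    "English Language, Literature, and Composition",
    "Liberal Arts and Humanities",
    "Fine Arts",
    "History",
}

def classify(row: pd.Series) -> str:
    # Respondents without a bachelor's degree keep their education label.
    if pd.isna(row["degree_field"]):
        return row["education"]  # e.g. "high school" or "associate"
    return "liberal arts BA" if row["degree_field"] in LIBERAL_ARTS else "other BA"

df = pd.read_csv("acs_2017_2021.csv")  # hypothetical extract, ages 23-65
df["group"] = df.apply(classify, axis=1)
print(df.groupby("group")["incwage"].agg(["mean", "median"]))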

For fact number one, compare the dark blue liberal arts bachelor's bar to the orange bar for other majors. The latter is considerably higher. On average, people with a liberal arts degree earned $50,000 a year, while those with other degrees earned $65,000. That's a big difference. (Median earnings are lower than average earnings, of course, but the gap isn't much different: $37,000 versus $50,000.)

For fact number two, compare the dark blue liberal arts bar to the light blue bar for those with only a high school diploma. The liberal arts bar is much higher: getting a liberal arts degree is a good investment compared to not going to college at all. On average, the liberal arts degree led to a $50,000 annual income, compared to $28,000 for those stopping at the end of high school. (Median earnings are $37,000 versus $21,000 for high school only.) Even a $12,000 annual difference in earnings, well below the average gap, will, over a lifetime, more than pay for a college education. Suppose one worked for 35 years after graduation: the lifetime difference would be $420,000 (ignoring inflation). That more than makes up for the cost of tuition plus the foregone earnings from not working while in college. Unsurprisingly, pay associated with an associate degree falls in between what liberal arts bachelor's degree holders earn and what one gets with a high school diploma. It's worth noting that employment rates in the data follow a similar pattern: strongest for non-liberal-arts bachelor's holders (81.9%), followed by liberal arts bachelor's holders (78.5%), then associate degree holders (77%), then high school graduates only (64.4%).
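
The lifetime arithmetic is easy to check. A quick sketch using the figures above (the 35-year horizon is the text's own assumption; inflation and discounting are ignored):

# Lifetime earnings gap for several annual differences drawn from the text.
YEARS_WORKED = 35

scenarios = {
    "conservative gap used in the text": 12_000,  # dollars per year
    "median gap ($37k vs $21k)": 16_000,
    "average gap ($50k vs $28k)": 22_000,
}

for label, annual_gap in scenarios.items():
    print(f"{label}: ${annual_gap * YEARS_WORKED:,} over {YEARS_WORKED} years")
# 12,000/year -> $420,000, the article's figure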

An important part of the story is that right out of college, liberal arts majors do not earn much more than high school graduates, though this understates their earnings potential over the long term. Earnings for all college graduates rise rapidly after graduation and continue to rise for decades. In contrast, the age-earnings profile of high school graduates is relatively flat. One hidden advantage of majoring in non-STEM fields is that students learn general skills that last a lifetime, whereas the specific skills of more technical subjects often have a shorter shelf life, so differences between majors narrow later in the career path.

The picture above shows average earnings for holders of each credential across survey respondents' ages; this provides a plausible pathway for earnings over the course of one's career (though it's possible nobody's career path looks exactly like this). At age 22, the liberal arts line is not much higher than either the high school or associate degree lines. But the liberal arts bachelor's line rises very rapidly, much more so than for either high school graduates or those who've earned an associate degree. You can also see that graduates with bachelor's degrees outside the liberal arts do begin their careers earning noticeably more than either liberal arts majors or high school graduates, and the gap grows over time. For example, at age 50, average earnings with a liberal arts degree are $67,000 a year. That's not as good as a non-liberal-arts degree at $81,000, but it's quite a bit better than an associate degree at $49,000 or a high school diploma at $33,000.

One hopes that students go to college for more than just the financial value of the degree, not only for their own sake but also because society needs a citizenry equipped to think broadly. But that hope aside, liberal arts degrees do pay. They don't pay as well as other college degrees, but they do pay, and policymakers need to be clear-eyed about that before running roughshod over humanities departments. The humanities are indeed in trouble, but it's silly to say that a liberal arts degree is not worth the price.

Read this article:

Don't knock the economic value of majoring in the liberal arts | Brookings - Brookings Institution