As Massachusetts leans in on artificial intelligence, AG waves a yellow flag – Rhode Island Current

BOSTON – While the executive branch of state government touts the competitive advantage of investing energy and money into artificial intelligence across Massachusetts' tech, government, health, and educational sectors, the state's top prosecutor is sounding warnings about its risks.

Attorney General Andrea Campbell issued an advisory to AI developers, suppliers, and users on Tuesday, reminding them of their obligations under the state's consumer protection laws.

"AI has tremendous potential benefits to society," Campbell's advisory said. "It presents exciting opportunities to boost efficiencies and cost-savings in the marketplace, foster innovation and imagination, and spur economic growth."

However, she cautioned, "AI systems have already been shown to pose serious risks to consumers, including bias, lack of transparency or explainability, implications for data privacy, and more." Despite these risks, businesses and consumers are rapidly adopting and using AI systems, which now impact virtually all aspects of life.

Developers promise that their complex and opaque systems are accurate, fair, effective, and appropriate for certain uses, but Campbell notes that the systems are being deployed in ways that can deceive consumers and the public, citing chatbots used to perpetrate scams or false computer-generated images and videos, called deepfakes, that mislead consumers and viewers about a participant's identity. Misleading and potentially discriminatory results from these systems can run afoul of consumer protection laws, according to the advisory.

The advisory echoes a dynamic in the state's enthusiastic embrace of gambling at the executive level, with Campbell cautioning against potentially harmful impacts while stopping short of a full-throated objection to expansions like an online Lottery.

Gov. Maura Healey has touted applied artificial intelligence as a potential boon for the state, creating an artificial intelligence strategic task force through executive order in February. Healey is also seeking $100 million in her economic development bond bill, the Mass Leads Act, to create an Applied AI Hub in Massachusetts.

"Massachusetts has the opportunity to be a global leader in Applied AI, but it's going to take us bringing together the brightest minds in tech, business, education, health care, and government. That's exactly what this task force will do," Healey said in a statement accompanying the task force announcement. "Members of the task force will collaborate on strategies that keep us ahead of the curve by leveraging AI and GenAI technology, which will bring significant benefit to our economy and communities across the state."

The executive order itself makes only glancing references to risks associated with AI, focusing mostly on the task force's role in identifying strategies for collaboration around AI and adoption across life sciences, finance, and higher education. The task force members will recommend strategies to facilitate public investment in AI and promote AI-related job creation across the state, as well as recommend structures to promote responsible AI development and use for the state.

In conversation with Healey last month, tech journalist Kara Swisher offered a sharp critique of the enthusiastic embrace of AI hype, describing it as "just marketing right now" and comparing it to the crypto bubble; signs of a similar AI bubble are troubling other tech reporters. "Tech companies are seeing the value in pushing whatever we're pushing at the moment, and it's exhausting, actually," Swisher said, adding that certain types of tasked algorithms like search tools are already commonplace, but "the trend now is slapping an AI onto it and saying it's AI. It's not."

Eventually, Swisher acknowledged, tech becomes cheaper and more capable at certain types of labor than people, as in the case of mechanized farming, and it's up to officials like Healey to figure out how to balance new technology while protecting the people it impacts.

Mohamad Ali, chief operating officer of IBM Consulting, opined in CommonWealth Beacon that there need to be significant investments in an AI-capable workforce that prioritizes trust and transparency.

Artificial intelligence policy in Massachusetts, as in many states, is a hodgepodge crossing all branches of government. The executive branch is betting big that the technology can boost the states innovation economy, while the Legislature is weighing the risks of deepfakes in nonconsensual pornography and election communications.

Reliance on large language model styles of artificial intelligence, which meld the feel of a search algorithm with the promise of a competent researcher and writer, has caused headaches for courts. Because several widely used AI tools rely on predictive text algorithms trained on existing work but not always limited to it, large language model AI can hallucinate, fabricating facts and citations that don't exist.

In a February order in the troubling wrongful death and sexual abuse case filed against the Stoughton Police Department, Associate Justice Brian Davis sanctioned attorneys for their reliance on AI systems to prepare legal research and blindly file inaccurate information generated by the systems with the court. "The AI hallucinations and the unchecked use of AI in legal filings are disturbing developments that are adversely affecting the practice of law in the Commonwealth and beyond," Davis wrote.

This article first appeared on CommonWealth Beacon and is republished here under a Creative Commons license.

Americans’ use of ChatGPT is ticking up, but few trust its election information – Pew Research Center

It's been more than a year since ChatGPT's public debut set the tech world abuzz. And Americans' use of the chatbot is ticking up: 23% of U.S. adults say they have ever used it, according to a Pew Research Center survey conducted in February, up from 18% in July 2023.

The February survey also asked Americans about several ways they might use ChatGPT, including for workplace tasks, for learning and for fun. While growing shares of Americans are using the chatbot for these purposes, the public is more wary than not of what the chatbot might tell them about the 2024 U.S. presidential election. About four-in-ten adults have not too much or no trust in the election information that comes from ChatGPT. By comparison, just 2% have a great deal or quite a bit of trust.

Pew Research Center conducted this study to understand Americans' use of ChatGPT and their attitudes about the chatbot. For this analysis, we surveyed 10,133 U.S. adults from Feb. 7 to Feb. 11, 2024.

Everyone who took part in the survey is a member of the Center's American Trends Panel (ATP), an online survey panel that is recruited through national, random sampling of residential addresses. This way, nearly all U.S. adults have a chance of selection. The survey is weighted to be representative of the U.S. adult population by gender, race, ethnicity, partisan affiliation, education and other categories. Read more about the ATP's methodology.

Here are the questions used for this analysis, along with responses, and the survey methodology.

Below we'll look more closely at:

Most Americans still haven't used the chatbot, despite the uptick since our July 2023 survey on this topic. But some groups remain far more likely to have used it than others.

Differences by age

Adults under 30 stand out: 43% of these young adults have used ChatGPT, up 10 percentage points since last summer. Use of the chatbot is also up slightly among those ages 30 to 49 and 50 to 64. Still, these groups remain less likely than their younger peers to have used the technology. Just 6% of Americans 65 and up have used ChatGPT.

Differences by education

Highly educated adults are most likely to have used ChatGPT: 37% of those with a postgraduate or other advanced degree have done so, up 8 points since July 2023. This group is more likely to have used ChatGPT than those with a bachelor's degree only (29%), some college experience (23%) or a high school diploma or less (12%).

Since March 2023, we've also tracked three potential reasons Americans might use ChatGPT: for work, to learn something new or for entertainment.

The share of employed Americans who have used ChatGPT on the job increased from 8% in March 2023 to 20% in February 2024, including an 8-point increase since July.

Turning to U.S. adults overall, about one-in-five have used ChatGPT to learn something new (17%) or for entertainment (17%). These shares have increased from about one-in-ten in March 2023.

Differences by age

Use of ChatGPT for work, learning or entertainment has largely risen across age groups over the past year. Still, there are striking differences between these groups (those 18 to 29, 30 to 49, and 50 and older).

For example, about three-in-ten employed adults under 30 (31%) say they have used it for tasks at work, up 19 points from a year ago, with much of that increase happening since July. These younger workers are more likely than their older peers to have used ChatGPT in this way.

Adults under 30 also stand out in using the chatbot for learning. And when it comes to entertainment, those under 50 are more likely than older adults to use ChatGPT for this purpose.

Differences by education

A third of employed Americans with a postgraduate degree have used ChatGPT for work, compared with smaller shares of workers who have a bachelor's degree only (25%), some college (19%) or a high school diploma or less (8%).

Those shares have each roughly tripled since March 2023 for workers with a postgraduate degree, bachelors degree or some college. Among workers with a high school diploma or less, use is statistically unchanged from a year ago.

Using ChatGPT for other purposes also varies by education level, though the patterns are slightly different. For example, a quarter each of postgraduate and bachelor's degree holders have used ChatGPT for learning, compared with 16% of those with some college experience and 11% of those with a high school diploma or less education. Each of these shares is up from a year ago.

With more people using ChatGPT, we also wanted to understand whether Americans trust the information they get from it, particularly in the context of U.S. politics.

About four-in-ten Americans (38%) don't trust the information that comes from ChatGPT about the 2024 U.S. presidential election – that is, they say they have not too much trust (18%) or no trust at all (20%).

A mere 2% have a great deal or quite a bit of trust, while 10% have some trust.

Another 15% aren't sure, while 34% have not heard of ChatGPT.

Distrust far outweighs trust regardless of political party. About four-in-ten Republicans and Democrats alike (including those who lean toward each party) have not too much or no trust at all in ChatGPTs election information.

Notably, however, very few Americans have actually used the chatbot to find information about the presidential election: Just 2% of adults say they have done so, including 2% of Democrats and Democratic-leaning independents and 1% of Republicans and GOP leaners.

These survey findings come amid growing national attention on chatbots and misinformation. Several tech companies have recently pledged to prevent the misuse of artificial intelligence, including chatbots, in this year's election. But recent reports suggest chatbots themselves may provide misleading answers to election-related questions.

Note: Here are the questions used for this analysis, along with responses, and the survey methodology.

There Might Be No ChatGPT-like Apple Chatbot in iOS 18 – The Mac Observer

The recent months in the tech scene have been all about artificial intelligence and its impact, but one company that has been late to the party is Apple. Apple first hinted at in-house AI development during a recent earnings call, which followed earlier reports of the company reaching out to major publishers to use their data to train its AI dataset, and of it canceling the Apple Car project and shifting the team to AI. However, according to Bloomberg's Mark Gurman, Apple might not debut a ChatGPT-like chatbot at all. Instead, the company is exploring potential partnerships with established tech giants such as China's Baidu, OpenAI, and Google.

That said, Apple might instead focus on licensing already-established chatbots like Google's Gemini (formerly Bard) or OpenAI's ChatGPT. It might delay all plans to release its own chatbot, internally dubbed Ajax GPT.

Nevertheless, Mark Gurman believes AI will remain in the show's spotlight at the upcoming Worldwide Developers Conference (WWDC), slated for June 10-14, 2024, where we expect to see iOS 18, iPadOS 18, watchOS 11, tvOS 18, macOS 15, and visionOS 2. Although he doesn't delve into details of the upcoming AI features, he mentions the company's plans to unveil new AI features, which could serve as the backbone of iOS 18. This suggests that even if Apple doesn't intend to bring a native AI chatbot to its devices, we might see a popular chatbot pre-installed on the phones or supported natively by the device. For reference, a London-based consumer tech firm, Nothing, recently partnered with the Perplexity AI search engine to power up its latest release, Phone 2(a), and Apple might have similar plans, but with generative AI giants.

CEO Tim Cook recently told investors that the company will disclose its AI plans to the public later this year. Despite Apple's overall reticence on the topic, Cook has been notably vocal about the potential of AI, particularly generative AI.

More importantly, according to previous reports, he has indicated that generative AI will improve Siri's ability to respond to more complex queries and enable the Messages app to complete sentences automatically. Furthermore, other Apple apps such as Apple Music, Shortcuts, Pages, Numbers, and Keynote are expected to integrate generative AI functionality.

Generative AI, Free Speech, & Public Discourse: Why the Academy Must Step Forward | TechPolicy.Press – Tech Policy Press

On Tuesday, Columbia Engineering and the Knight First Amendment Institute at Columbia University co-hosted a well-attended symposium, "Generative AI, Free Speech, & Public Discourse." The event combined presentations about technical research relevant to the subject with addresses and panels discussing the implications of AI for democracy and civil society.

While a range of topics was covered across three keynotes, a series of seed funding presentations, and two panels – one on empirical and technological questions and a second on legal and philosophical questions – a number of notable recurring themes emerged, some by design and others more organically.

This event was part of one partnership amongst others in an effort that Columbia University president Minouche Shafik and engineering school dean Shih-Fu Chang referred to as "AI+x," where the school is seeking to engage with various other parts of the university outside of computer engineering to better explore the potential impacts of current developments in artificial intelligence. (This event was also a part of Columbia's Dialogue Across Difference initiative, which was established as part of a response to campus conflict around the Israel-Gaza war.) From its founding, the Knight Institute has focused on how new technologies affect democracy, requiring collaboration with experts in those technologies.

Speakers on the first panel highlighted sectors where they have already seen potential for positive societal impact of AI, outside of the speech issues that the symposium was focused on. These included climate science, drug discovery, social work, and creative writing. Columbia engineering professor Carl Vondrick suggested that current large language models are optimized for social media and search, a legacy of their creation by corporations that focus on these domains, and the panelists noted that only by working directly with diverse groups can their needs for more customized models be understood. Princeton researcher Arvind Narayanan proposed that domain experts play a role in evaluating models as, in his opinion, the current approach of benchmarking using standardized tests is seriously flawed.

During the conversation between Jameel Jaffer, Director of the Knight Institute, and Harvard Kennedy School security technologist Bruce Schneier, general principles for successful interdisciplinary work were discussed, like humility, curiosity and listening to each other; gathering early in the process; making sure everyone is taken seriously; and developing a shared vocabulary to communicate across technical, legal, and other domains. Jaffer recalled that some proposals have a lot more credibility in the eyes of policymakers when they are interdisciplinary. Cornell Tech law professor James Grimmelmann, who specializes in helping lawyers and technologists understand each other, remarked that these two groups are particularly well-equipped to work together, once they can figure out what the other needs to know.

President Shafik declared that if a responsible approach to AI's impact on society requires a "+x," Columbia (surely along with other large research universities) has lots of x's. This positions universities as ideal voices for the public good, to balance out the influence of the tech industry that is developing and controlling the new generation of large language models.

Stanford's Tatsunori Hashimoto, who presented his work on watermarking generative AI text outputs, emphasized that the vendors of these models are secretive, and so the only way to develop a public technical understanding of them is to build them within the academy, and take on the same tasks as the commercial engineers, like working on alignment fine-tuning and performing independent evaluations. One relevant and striking finding by his group was that the reinforcement learning from human feedback (RLHF) process tends to push models towards the more liberal opinions common amongst highly-educated Americans.

The engineering panel developed a wishlist of infrastructure resources that universities (and others outside of the tech industry) need to be able to study how AI can be used to benefit and not harm society, such as compute resources, common datasets, separate syntax models so that vetted content datasets can be added for specific purposes, and student access to models. In the second panel, Camille François, a lecturer at the Columbia School of International and Public Affairs and presently a senior director of trust & safety at Niantic Labs, highlighted the importance of having spaces, presumably including university events such as the one at Columbia, to discuss how AI developments are impacting civil discourse. On a critical note, Knight Institute executive director Katy Glenn Bass also pointed out that universities often do not value cross-disciplinary work to the same degree as typical research, and this is an obstacle to progress in this area, given how essential collaboration across disciplines is.

Proposals for regulation were made throughout the symposium, a number of which are listed below, but the keynote by Bruce Schneier was itself an argument for government intervention. Schneier's thesis was, in brief, that corporation-controlled development of generative AI has the potential to undermine the trust that society needs to thrive, as chatbot assistants and other AI systems may present as interpersonally trustworthy, but in reality are essentially designed to drive profits for corporations. To restore trust, it is incumbent on governments to impose safety regulations, much as they do for airlines. He proposed a regulatory agency for the AI and robotics industry, and the development of public AI models, created under political accountability and available for academic and new for-profit uses, enabling a freer market for AI innovation.

Specific regulatory suggestions included:

A couple of cautions were also voiced: Narayanan warned that the "Liar's Dividend" could be weaponized by authoritarian governments to crack down on free expression, and François noted the focus on watermarking and deepfakes at the expense of unintended harms, such as chatbots giving citizens incorrect voting information.

There was surprisingly little discussion during the symposium of how generative AI specifically influences public discourse, which Jaffer defined in his introductory statement as "acts of speaking and listening that are part of the process of democracy and self-governance." Rather, much of the conversation was about online speech generally, and how it can be influenced by this technology. As such, an earlier focus of online speech debates, social media, came up a number of times, with clear parallels in terms of concern over corporate control and a need for transparency.

Hashimoto referenced the notion that social media causes feedback loops that greatly amplify certain opinions. LLMs can develop data feedback loops, which may cause a similar phenomenon that is very difficult to identify and unpick without substantial research. As chatbots become more personalized, suggested Vondrick, they may also create feedback on an individual user level, directing users to more and more of the type of content that they have already expressed an affinity for, akin to the social media "filter bubble" hypothesis.

Another link to social media was drawn in the last panel, during which both Grimmelmann and François drew on their expertise in content moderation. They agreed that the most present danger to discourse from generative AI is inauthentic content and behavior overwhelming the platforms that we rely on, and worried that we may not yet have the tools and infrastructure to counter it. (François described a key tension between the "Musk effect" pushing disinvestment in content moderation and the "Brussels effect" encouraging a ramping up in on-platform enforcement via the DSA.) At the same time, trust and safety approaches like red-teaming and content policy development are proving key to developing LLMs responsibly. The correct lesson to draw from the failures to regulate social media, proposed Grimmelmann, was the danger of giving up on antitrust enforcement, which could be of great value when current AI foundation models are developed and controlled by a few (and in several cases the same) corporations.

One final theme was a framing of the current moment as one of transition. Even though we are grappling with how to adapt to realistic, readily available synthetic content at scale, there will be a point in the future, perhaps even for today's young children, when this will be intuitively understood and accounted for, or at least when media literacy education or tools (like watermarking) will have caught up.

Several speakers referenced prior media revolutions. Narayanan was one of several who discussed the printing press, pointing out that even this was seen as a crisis of authority: no longer could the written word be assumed to be trusted. Wikipedia was cited by Columbia Engineering professor Kathy McKeown as an example of media that was initially seen as untrustworthy, but whose benefits, shortcomings, and suitable usage are now commonly understood. François noted that use of generative AI is far from binary and that we have not yet developed good frameworks to evaluate the range of applications. Grimmelmann mentioned both Wikipedia and the printing press as examples of technologies where no one could have accurately predicted how things would shake out in the end.

As the Knight Institute's Glenn Bass stated explicitly, we should not assume that generative AI is harder to work through than previous media crises, or that we are worse equipped to deal with it. However, two speakers flagged that the tech industry should not be given free rein: USC Annenberg's Mike Ananny warned that those with vested interests may attempt to prematurely push for stabilization and closure, and we should treat this with suspicion; and Princeton's Narayanan noted that this technology is producing a temporary societal upheaval and that its costs should be distributed fairly. Returning to perhaps the dominant takeaways from the event, these comments again implied a role for the academy and for the government in guiding the development of, adoption of, and adaptation to the emerging generation of generative AI.

Analysis: How Nvidia Surpassed Intel In Annual Revenue And Won The AI Crown – CRN

A deep-dive analysis into the market dynamics that allowed Nvidia to take the AI crown and surpass Intel in annual revenue. CRN also looks at what the x86 processor giant could do to fight back in a deeply competitive environment.

Several months after Pat Gelsinger became Intel's CEO in 2021, he told me that his biggest concern in the data center wasn't Arm, the British chip designer that is enabling a new wave of competition against the semiconductor giant's Xeon server CPUs.

Instead, the Intel veteran saw a bigger threat in Nvidia and its uncontested hold over the AI computing space and said his company would give its all to challenge the GPU designer.

"Well, they're going to get contested going forward, because we're bringing leadership products into that segment," Gelsinger told me for a CRN magazine cover story.

More than three years later, Nvidias latest earnings demonstrated just how right it was for Gelsinger to feel concerned about the AI chip giants dominance and how much work it will take for Intel to challenge a company that has been at the center of the generative AI hype machine.

When Nvidia's fourth-quarter earnings arrived last week, they showed that the company surpassed Intel in total annual revenue for its recently completed fiscal year, mainly thanks to high demand for its data center GPUs driven by generative AI.

The GPU designer finished its 2024 fiscal year with $60.9 billion in revenue, up 126 percent, or more than double, from the previous year, the company revealed in its fourth-quarter earnings report on Wednesday. This fiscal year ran from Jan. 30, 2023, to Jan. 28, 2024.

Meanwhile, Intel finished its 2023 fiscal year with $54.2 billion in sales, down 14 percent from the previous year. This fiscal year ran concurrent to the calendar year, from January to December.

While Nvidia's fiscal year finished roughly one month after Intel's, this is the closest we'll get to understanding how the two industry titans compared in a year when demand for AI solutions propped up the data center and cloud markets in a shaky economy.

Nvidia pulled off this feat because the company had spent years building a comprehensive and integrated stack of chips, systems, software and services for accelerated computing – with a major emphasis on data centers, cloud computing and edge computing – then found itself last year at the center of a massive demand cycle due to hype around generative AI.

This demand cycle was mainly kicked off by the late 2022 arrival of OpenAI's ChatGPT, a chatbot powered by a large language model that can understand complex prompts and respond with an array of detailed answers, all offered with the caveat that it could potentially impart inaccurate, biased or made-up answers.

Despite any shortcomings, the tech industry found more promise than concern in the capabilities of ChatGPT and other generative AI applications that had emerged in 2022, like the DALL-E 2 and Stable Diffusion text-to-image models. Many of these models and applications had been trained and developed using Nvidia GPUs because the chips can compute such large amounts of data far faster than CPUs ever could.

The enormous potential of these generative AI applications kicked off a massive wave of new investments in AI capabilities by companies of all sizes, from venture-backed startups to cloud service providers and consumer tech companies, like Amazon Web Services and Meta.

By that point, Nvidia had started shipping the H100, a powerful data center GPU that came with a new feature called the Transformer Engine. This was designed to speed up the training of so-called transformer models by as much as six times compared to the previous-generation A100, which itself had been a game-changer in 2020 for accelerating AI training and inference.

Among the transformer models that benefitted from the H100's Transformer Engine was GPT-3.5, short for Generative Pre-trained Transformer 3.5. This is OpenAI's large language model that exclusively powered ChatGPT before the introduction of the more capable GPT-4.

But this was only one piece of the puzzle that allowed Nvidia to flourish in the past year. While the company worked on introducing increasingly powerful GPUs, it was also developing internal capabilities and making acquisitions to provide a full stack of hardware and software for accelerated computing workloads such as AI and high-performance computing.

At the heart of Nvidia's advantage is the CUDA parallel computing platform and programming model. Introduced in 2007, CUDA enabled the company's GPUs, which had traditionally been designed for computer games and 3-D applications, to run HPC workloads faster than CPUs by breaking them down into smaller tasks and processing those tasks simultaneously. Since then, CUDA has dominated the landscape of software that benefits accelerated computing.
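The article itself contains no code, but the parallel model described above can be made concrete with a minimal, generic CUDA sketch. This is an illustration under our own assumptions, not taken from Nvidia's documentation or from any product discussed here: each GPU thread computes one element of a vector sum, which is the "break the workload into smaller tasks and process them simultaneously" idea in practice.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Each thread computes one element of c = a + b, so the overall workload
// is split into many small, independent tasks that run simultaneously.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        c[i] = a[i] + b[i];
    }
}

int main() {
    const int n = 1 << 20;              // one million elements
    const size_t bytes = n * sizeof(float);

    // Unified (managed) memory keeps the sketch short: the same pointers
    // are usable on both the host (CPU) and the device (GPU).
    float *a, *b, *c;
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);

    for (int i = 0; i < n; i++) {
        a[i] = 1.0f;
        b[i] = 2.0f;
    }

    // Launch enough 256-thread blocks to cover all n elements.
    const int threadsPerBlock = 256;
    const int blocks = (n + threadsPerBlock - 1) / threadsPerBlock;
    vecAdd<<<blocks, threadsPerBlock>>>(a, b, c, n);
    cudaDeviceSynchronize();            // wait for the GPU to finish

    printf("c[0] = %.1f\n", c[0]);      // expect 3.0

    cudaFree(a);
    cudaFree(b);
    cudaFree(c);
    return 0;
}
```

In this sketch, roughly 4,000 blocks of 256 threads each cover the million elements largely in parallel, whereas a conventional CPU loop would walk through them one at a time, which is the contrast the paragraph draws between CPUs and accelerated computing.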

Over the last several years, Nvidia's stack has grown to include CPUs, SmartNICs and data processing units, high-speed networking components, pre-integrated servers and server clusters as well as a variety of software and services, which includes everything from software development kits and open-source libraries to orchestration platforms and pretrained models.

While Nvidia had spent years cultivating relationships with server vendors and cloud service providers, this activity reached new heights last year, resulting in expanded partnerships with the likes of AWS, Microsoft Azure, Google Cloud, Dell Technologies, Hewlett Packard Enterprise and Lenovo. The company also started cutting more deals in the enterprise software space with major players like VMware and ServiceNow.

All this work allowed Nvidia to grow its data center business by 217 percent to $47.5 billion in its 2024 fiscal year, which represented 78 percent of total revenue.

This was mainly supported by a 244 percent increase in data center compute sales, with high GPU demand driven mainly by the development of generative AI and large language models. Data center networking, on the other hand, grew 133 percent for the year.

Cloud service providers and consumer internet companies contributed a substantial portion of Nvidia's data center revenue, with cloud service providers representing roughly half of it in the third quarter and more than half in the fourth. Nvidia also cited strong demand from businesses outside those two groups, though not as consistently.

In its earnings call last week, Nvidia CEO Jensen Huang said this represents the industry's continuing transition from general-purpose computing, where CPUs were the primary engines, to accelerated computing, where GPUs and other kinds of powerful chips are needed to provide the right combination of performance and efficiency for demanding applications.

"There's just no reason to update with more CPUs when you can't fundamentally and dramatically enhance its throughput like you used to. And so you have to accelerate everything. This is what Nvidia has been pioneering for some time," he said.

Intel, by contrast, generated $15.5 billion in data center revenue for its 2023 fiscal year, which was a 20 percent decline from the previous year and made up only 28.5 percent of total sales.

This was not only roughly a third of what Nvidia earned in total data center revenue in the 12-month period ending in late January, it was also smaller than what the semiconductor giant's AI chip rival made in the fourth quarter alone: $18.4 billion.

The issue for Intel is that while the company has launched data center GPUs and AI processors over the last couple of years, it's far behind when it comes to the level of adoption by developers, OEMs, cloud service providers, partners and customers that has allowed Nvidia to flourish.

As a result, the semiconductor giant has had to rely on its traditional data center products, mainly Xeon server CPUs, to generate a majority of revenue for this business unit.

This created multiple problems for the company.

While AI servers, including ones made by Nvidia and its OEM partners, rely on CPUs for the host processors, the average selling prices for such components are far lower than those of Nvidia's most powerful GPUs. And these kinds of servers often contain four or eight GPUs but only two CPUs, another way GPUs enable far greater revenue growth than CPUs.

In Intel's latest earnings call, Vivek Arya, a senior analyst at Bank of America, noted how these issues were digging into the company's data center CPU revenue, saying that its GPU competitors seem to be capturing nearly all of the incremental [capital expenditures] and, in some cases, even more for cloud service providers.

One dynamic at play was that some cloud service providers used their budgets last year to replace expensive Nvidia GPUs in existing systems rather than buying entirely new systems, which dragged down Intel CPU sales, Patrick Moorhead, president and principal analyst at Moor Insights & Strategy, recently told CRN.

Then there was the issue of long lead times for Nvidia's GPUs, which were caused by demand far exceeding supply. Because this prevented OEMs from shipping more GPU-accelerated servers, Intel sold fewer CPUs as a result, according to Moorhead.

Intel's CPU business also took a hit due to competition from AMD, which grew x86 server CPU share by 5.4 points against the company in the fourth quarter of 2023 compared to the same period a year ago, according to Mercury Research.

The semiconductor giant has also had to contend with competition from companies developing Arm-based CPUs, such as Ampere Computing and Amazon Web Services.

All of these issues, along with a lull in the broader market, dragged down revenue and earnings potential for Intel's data center business.

Describing the market dynamics in 2023, Intel said in its annual 10-K filing with the U.S. Securities and Exchange Commission that server volume decreased 37 percent from the previous year due to lower demand in a softening CPU data center market.

The company said average selling prices did increase by 20 percent, mainly due to a lower mix of revenue from hyperscale customers and a higher mix of high-core-count processors, but that wasn't enough to offset the plummet in sales volume.

While Intel and other rivals started down the path of building products to compete against Nvidia's years ago, the AI chip giant's success last year showed them how lucrative it can be to build a business with super powerful and expensive processors at the center.

Intel hopes to make a substantial business out of accelerator chips between the Gaudi deep learning processors, which came from its 2019 acquisition of Habana Labs, and the data center GPUs it has developed internally. (After the release of Gaudi 3 later this year, Intel plans to converge its Max GPU and Gaudi road maps, starting with Falcon Shores in 2025.)

But the semiconductor giant has only reported a sales pipeline that grew in the double digits to more than $2 billion in last year's fourth quarter. This pipeline includes Gaudi 2 and Gaudi 3 chips as well as Intel's Max and Flex data center GPUs, but it doesn't amount to a forecast for how much money the company expects to make this year, an Intel spokesperson told CRN.

Even if Intel made $2 billion or even $4 billion from accelerator chips in 2024, it would amount to a small fraction of what Nvidia made last year and perhaps an even smaller one if the AI chip rival manages to grow again in the new fiscal year. Nvidia has forecasted that revenue in the first quarter could grow roughly 8.6 percent sequentially to $24 billion, and Huang said the conditions are excellent for continued growth for the rest of this year and beyond.

Then there's the fact that AMD recently launched its most capable data center GPU yet, the Instinct MI300X. The company said in its most recent earnings call that strong customer pull and expanded engagements prompted it to upgrade its forecast for data center GPU revenue this year to more than $3.5 billion.

There are other companies developing AI chips too, including AWS, Microsoft Azure and Google Cloud as well as several startups, such as Cerebras Systems, Tenstorrent, Groq and D-Matrix. Even OpenAI is reportedly considering designing its own AI chips.

Intel will also have to contend with Nvidia's decision last year to move to a one-year release cadence for new data center GPUs. This started with the successor to the H100 announced last fall – the H200 – and will continue with the B100 this year.

Nvidia is making its own data center CPUs, too, as part of the company's expanding full-stack computing strategy, which is creating another challenge for Intel's CPU business when it comes to AI and HPC workloads. This started last year with the standalone Grace Superchip and a hybrid CPU-GPU package called the Grace Hopper Superchip.

For Intel's part, the semiconductor giant expects meaningful revenue acceleration for its nascent AI chip business this year. What could help the company are the growing number of price-performance advantages found by third parties like AWS and Databricks as well as its vow to offer an open alternative to the proprietary nature of Nvidia's platform.

The chipmaker also expects its upcoming Gaudi 3 chip to deliver performance leadership with four times the processing power and double the networking bandwidth over its predecessor.

But the company is taking a broader view of the AI computing market and hopes to come out on top with its "AI everywhere" strategy. This includes a push to grow data center CPU revenue by convincing developers and businesses to take advantage of the latest features in its Xeon server CPUs to run AI inference workloads, which the company believes is more economical and pragmatic for a broader constituency of organizations.

Intel is making a big bet on the emerging category of AI PCs, too, with its recently launched Core Ultra processors, which, for the first time in an Intel processor, come with a neural processing unit (NPU) in addition to a CPU and GPU to power a broad array of AI workloads. But the company faces tough competition in this arena, whether it's AMD and Qualcomm in the Windows PC segment or Apple for Mac computers and its in-house chip designs.

Even Nvidia is reportedly thinking about developing CPUs for PCs. But Intel does have one trump card that could allow it to generate significant amounts of revenue alongside its traditional chip design business by seizing on the collective growth of its industry.

Hours before Nvidia's earnings last Wednesday, Intel launched its revitalized contract chip manufacturing business with the goal of drumming up enough business from chip designers, including its own product groups, to become the world's second-largest foundry by 2030.

Called Intel Foundry, the business's lofty 2030 goal means it hopes to generate more revenue than South Korea's Samsung in only six years. This would put it behind only the world's largest foundry, Taiwan's TSMC, which generated just shy of $70 billion last year, thanks in large part to manufacturing orders from the likes of Nvidia and Apple.

All of this relies on Intel to execute at high levels across its chip design and manufacturing businesses over the next several years. But if it succeeds, these efforts could one day make the semiconductor giant an AI superpower like Nvidia is today.

At Intel Foundry's launch last week, Gelsinger made that clear.

"We're engaging in 100 percent of the AI [total addressable market], clearly through our products on the edge, in the PC and clients and then the data centers. But through our foundry, I want to manufacture every AI chip in the industry," he said.

AI productivity tools can help at work, but some make your job harder – The Washington Post

In a matter of seconds, artificial intelligence tools can now generate images, write your emails, create a presentation, analyze data and even offer meeting recaps.

For about $20 to $30 a month, you can have AI capabilities in many of Microsoft's and Google's work tools now. But are AI tools such as Microsoft Copilot and Gemini for Google Workspace easy to use?

The tech companies contend they help workers with their biggest pain points. Microsoft and Google claim their latest AI tools can automate the mundane, help people who struggle to get started on writing, and even aid with organization, proofreading, preparation and creating.

Of all working U.S. adults, 34 percent think that AI will equally help and hurt them over the next 20 years, according to a survey released by Pew Research Center last year. But a close 31 percent aren't sure what to think, the survey shows.

So the Help Desk put these new AI tools to the test with common work tasks. Here's how it went.

Ideally, AI should speed up catching up on email, right? Not always.

It may help you skim faster, start an email or elaborate on quick points you want to hit. But it also might make assumptions, get things wrong or require several attempts before offering the desired result.

Microsoft's Copilot allows users to choose from several tones and lengths before they start drafting. Users create a prompt for what they want their email to say and then have the AI adjust based on changes they want to see.

While the AI often included desired elements in the response, it also often added statements we didn't ask for in the prompt when we selected the short and casual options. For example, when we asked it to disclose that the email was written by Copilot, it sometimes added marketing comments like calling the tech "cool" or assuming the email was "interesting" or "fascinating."

When we asked it to make the email less positive, instead of dialing down the enthusiasm, it made the email negative. And if we made too many changes, it lost sight of the original request.

"They hallucinate," said Ethan Mollick, associate professor at the Wharton School of the University of Pennsylvania, who studies the effects of AI on work. "That's what AI does: make up details."

When we used a direct tone and short length, the AI produced fewer false assumptions and more desired results. But a few times, it returned an error message suggesting that the prompt had content Copilot couldn't work with.

Using Copilot for email isn't perfect. Some prompts were returned with an error message. (Video: The Washington Post)

If we entirely depended on the AI, versus making major manual edits to the suggestions, getting a fitting response often took several tries. Even then, one colleague responded to an AI-generated email with a simple reply to the awkwardness: "LOL."

"We called it Copilot for a reason," said Colette Stallbaumer, general manager of Microsoft 365 and future of work marketing. "It's not autopilot."

Google's Gemini has fewer options for drafting emails, allowing users to elaborate, formalize or shorten. However, it made fewer assumptions and often stuck solely to what was in the prompt. That said, it still sometimes sounded robotic.

Copilot can also summarize emails, which can quickly help you catch up on a long email thread or cut through your wordy co-worker's mini-novel, and it offers clickable citations. But it sometimes highlighted less relevant points, like reminding me of my own title listed in my signature.

The AI seemed to do better when it was fed documents or data. But it still sometimes made things up, returned error messages or didn't understand context.

We asked Copilot to use a document full of reporter notes, which are admittedly filled with shorthand, fragments and run-on sentences, and asked it to write a report. At first glance, the result seemed convincing, as if the AI had made sense of the messy notes. But on closer inspection, it was unclear if anything actually came from the document, as the conclusions were broad, overreaching and not cited.

"If you give it a document to work off, it can use that as a basis," Mollick said. "It may hallucinate less but in more subtle ways that are harder to identify."

When we asked it to continue a story we had started writing, providing it a document filled with notes, it summarized what we had already written and produced some additional paragraphs. But it became clear much of it was not from the provided document.

"Fundamentally, they are speculative algorithms," said Hatim Rahman, an assistant professor at Northwestern University's Kellogg School of Management, who studies AI's impact on work. "They don't understand like humans do. They provide the statistically likely answer."

Summarizations were less problematic, and the clickable citations made it easy to confirm each point. Copilot was also helpful in editing documents, often catching acronyms that should be spelled out and flagging punctuation or conciseness issues, much like a beefed-up spell check.

With spreadsheets, the AI can be a little tricky, and you need to convert data to a table format first. Copilot more accurately produced responses to questions about tables with simple formats. But for larger spreadsheets that had categories and subcategories or other complex breakdowns, we couldn't get it to find relevant information or accurately identify the trends or takeaways.

Microsoft says one of users' top places to use Copilot is in Teams, the collaboration app that offers tools including chat and video meetings. Our test showed the tool can be helpful for quick meeting notes, questions about specific details, and even a few tips on making your meetings better. But typical of other meeting AI tools, the transcript isn't perfect.

First, users should know that their administrator has to enable transcriptions so Copilot can interact with the transcript during and after the meeting – something we initially missed. Then, in the meeting or afterward, users can use Copilot to ask questions about the meeting. We asked for unanswered questions, action items, a meeting recap, specific details and how we could've made the meeting more efficient. It can also pull up video clips that correspond to specific answers if you record the meeting.

The AI was able to recall several details, accurately list action items and unanswered questions, and give a recap with citations to the transcript. Some of its answers were a little muddled, like when it confused the name of a place with the location and ended up with something that looked a little like word salad. It was able to identify the tone of the meeting (friendly and casual with jokes and banter) and censored curse words with asterisks. And it provided advice for more efficient meetings: For us that meant creating a meeting agenda and reducing the small talk and jokes that took the conversation off topic.

Copilot can be used during a Teams meeting and produce transcriptions, action items, and meeting recaps. (Video: The Washington Post)

Copilot can also help users make a PowerPoint presentation, complete with title pages and corresponding images, based off a document in a matter of seconds. But that doesn't mean you should use the presentation as is.

A document's organization and format seem to play a role in the result. In one instance, Copilot created an agenda with random words and dates from the document. Other times, it made a slide with just a person's name and responsibility. But it did better with documents that had clear formats (think an intro and subsections).

Google's Gemini can generate images like this robot. (Video: The Washington Post)

While Copilot's image generation for slides was usually related, sometimes its interpretation was too literal. Google's Gemini also can help create slides and generate images, though more often than not when trying to create images, we received a message that said, "for now we're showing limited results for people. Try something else."

AI can aid with idea generation, drafting from a blank page or quickly finding a specific item. It also may be helpful for catching up on emails and meetings and summarizing long conversations or documents. Another nifty tip? Copilot can gather the latest chats, emails and documents you've worked on with your boss before your next meeting together.

But all results and content need careful inspection for accuracy, some tweaking or deep edits, and both tech companies advise users to verify everything generated by the AI. "I don't want people to abdicate responsibility," said Kristina Behr, vice president of product management for collaboration apps at Google Workspace. "This helps you do your job. It doesn't do your job."

And as is the case with AI, the more details and direction in the prompt, the better the output. So as you do each task, you may want to consider whether AI will save you time or actually create more work.

"The work it takes to generate outcomes like text and videos has decreased," Rahman said. "But the work to verify has significantly increased."

care.ai, Virtua Health partner to expand the hybrid care providers’ virtual care offerings – Mobihealth News

AI-powered care facility automation platform care.ai announced an enterprise-wide partnership with New Jersey-based not-for-profit hybrid care provider Virtua Health, where Virtua will leverage care.ai's virtual care offerings, including its Smart Care Facility Platform and Always-Aware ambient sensors.

care.ai's Smart Care Facility Platform includes a network of sensors spread through a care facility that monitors patients using AI, allowing the facility to collect real-time behavior data for clinical and operational insights.

The Florida-based company's AI-powered offerings will initially be utilized at Virtua Our Lady of Lourdes Hospital in Camden, New Jersey, then eventually implemented in all of Virtua Health's acute care settings.

The announcement comes approximately two months after the partners launched a pilot Virtual Nurse program in a medical-surgical unit that allows remote and bedside nurses to work in tandem.

Patients could also communicate with a nurse via a two-way optical camera, and their family members could participate in the calls remotely.

"Our focus is not just on integrating cutting-edge technologies but on enhancing the human aspects of healthcare. By swiftly adopting optical cameras and ambient sensors, we're poised to markedly enhance the patient and care team experience, ensuring a safer, more efficient, and empathically connected healthcare experience," Michael Capriotti, senior vice president of integration and strategic operations at Virtua Health, said in a statement.

THE LARGER TREND

In 2022, care.ai scored $27 million in funding led by multi-asset investment firm Crescent Cove Advisors.

Last year, the company announced it was partnering with Colorado-based remote patient monitoring company BioIntelliSense to integrate BioIntelliSense's BioButton wearable – a product used for continuous vital-sign monitoring for 60 days that captures temperature, respiratory rate and heart rate at rest – into its Smart Care Facility Platform.

care.ai also announced a partnership with the Texas Hospital Association to create statewide adoption of AI-powered patient monitoring and a partnership with patient engagement platform Get Well, which allows patients to connect with care teams via the interactive TV platform already present in patient rooms.

In June, care.ai announced it was partnering with multinational electronics company Samsung to integrate its Smart Care Facility Platform into the tech giant's displays for use by health systems, allowing for AI-powered patient monitoring.

Clinical care teams could also attend virtual visits over care.ai devices paired with Samsung's displays.

Jacksonville Beach admits city tech services have been hacked – Florida Politics

Jacksonville Beach joins 2 other Florida cities that have had their tech services hacked within the past 5 years.

It appears Jacksonville Beach is the latest Florida municipality to suffer a cyberattack that hobbled city services.

The coastal community in Duval County shut down many of its city services and closed City Hall after information technology systems for the city of about 25,000 people mysteriously went down.

"Effective immediately, the City of Jacksonville Beach will shut down due to Information Systems issues," a statement said on the city's website.

Now, city officials have confirmed there was a breach of security for the Northeast Florida city's tech services.

"We recently confirmed the issues are the result of a cybersecurity event. We are working to restore our systems and services as quickly as possible. As our investigation into this matter is ongoing, we are unable to provide further details at this time," said a statement on the Jacksonville Beach website just after 4 p.m. Tuesday.

This isn't the first time a Florida city has had its municipal services interrupted by aggressive hackers. Two cities sustained cyberattacks within one month in 2019.

Lake City and Riviera Beach both had their services corrupted after aggressive hackers targeted their technological infrastructure five years ago. Both paid ransoms of more than six figures to hackers to get their cyber data returned to them.

Jacksonville Beach officials acknowledged they have contacted law enforcement officials and are conducting an investigation.

The development brought most city services in Jacksonville Beach to a halt. City Hall, all recreation and parks services and other associated services have been put on hold. Emergency services, waste collection and first responder services remain operational, along with Beaches Energy, the electrical service.

Jacksonville Beach officials didn't estimate when full city services will return.

Beyond Cloud Nine: 3 Cutting-Edge Tech Stocks Shaping the Future of Computing – InvestorPlace

Cloud computing has helped millions of companies save time and money. Businesses don't have to worry about hardware costs and can access data quickly. Also, cloud computing companies offer cybersecurity resources to keep data safe from hackers.

Many stocks in the sector have outperformed the market over several years and can generate more gains in the years ahead. Therefore, these cutting-edge tech stocks look poised to expand and shape the future of cloud computing.

ServiceNow (NYSE:NOW) boasts a high retention rate for its software and continues to attract customers with deep pockets. The company has over 7,700 customers, and almost 2,000 of them have annual contract values that exceed $1 million.

Further, NOW's remaining performance obligations are more than triple the company's Q3 revenue. The platform allows businesses to run more efficient help desks and streamline repetitive tasks with built-in chatbots. Also, ServiceNow offers high-level security to protect sensitive data.

Additionally, the company has been a reliable pick for investors who want to outperform the market. Shares are up by 74% over the past year and have gained 284% over the past five years. The stock trades at a forward P/E ratio of 58. The company's net income growth can lead to a better valuation in the future. And ServiceNow more than tripled its profits year over year (YOY) in the third quarter. Revenue grew at a nice 25% clip YOY.


Alphabet (NASDAQ:GOOG, NASDAQ:GOOGL) makes most of its revenue from advertising and cloud computing. Google Cloud has become a popular resource for business owners, boasting over 500,000 customers. Also, Alphabet stands at the forefront of AI, enhancing the tech giant's future product offerings.

Notably, the company's cloud segment remains a leading growth driver. Revenue for Google Cloud increased by 22.5% YOY in the third quarter, and Alphabet's entire business achieved 11% YOY revenue growth, an acceleration from the previous period.

Also, Google Cloud reported a profitable quarter, swinging from a $440 million net loss in Q3 2022 to $266 million in net income in Q3 2023. Alphabet investors' positive response to the news helped the stock rally 57% over the past year; it has gained 163% over the past five years.

Alphabet currently trades at a forward P/E ratio of 22 and has a $1.8 trillion market cap. Finally, the company's vast advertising network gives it plenty of capital to reinvest in Google Cloud and its smaller business segments.


Datadog (NASDAQ:DDOG) helps companies improve their cybersecurity across multiple cloud computing solutions. Cloud spending is still in its early innings and is expected to reach $1 trillion in annual spending in 2026. The company is projected to have a $62 billion total addressable market (TAM) in that year.

Specifically, Datadog removes the silos and friction associated with keeping cloud applications safe from hackers. Over 26,000 customers use Datadog's software, including approximately 3,130 customers with annual contract values exceeding $100,000. The company's revenue growth over the trailing twelve months is currently 31%, and operating margins have improved significantly, helping the company secure a net profit in the third quarter.

In fact, DDOG has a good relationship with many cloud computing giants, including Alphabet. The two corporations expanded their partnership to close out 2023.

Investors have been rushing to accumulate Datadog stock in recent years. Shares have gained 68% over the past year and are up by 240% over the past five years. DDOG still trades more than 35% below its all-time high, but continued revenue growth and profit margin expansion can help the stock reclaim that level.

On the date of publication, Marc Guberti held a long position in NOW. The opinions expressed in this article are those of the writer, subject to the InvestorPlace.com Publishing Guidelines.

Marc Guberti is a freelance finance writer at InvestorPlace.com who hosts the Breakthrough Success Podcast. He has contributed to several publications, including U.S. News & World Report, Benzinga, and Joy Wallet.

Read the original post:

Beyond Cloud Nine: 3 Cutting-Edge Tech Stocks Shaping the Future of Computing - InvestorPlace

Reddit CEO Steve Huffman Takes on Big Tech for AI and Ad $$ – Variety

While the big question everyone wants answered about Reddit these days is whether there's an initial public offering in the works, there's a lot more the industry is wondering about this unique hub for digital conversation.

One of its co-founders, Steve Huffman, returned to run Reddit eight years ago, and in that time has presided over a period of dramatic, if somewhat turbulent, evolution for the platform. He sat down with Variety Intelligence Platform president and chief media analyst Andrew Wallenstein on Jan. 10 at the Variety Entertainment Summit at CES in Las Vegas to discuss how Reddit holds its own for ad dollars against the tech juggernauts that also want to mine the company's intellectual property for AI training purposes.

Andrew Wallenstein: May I be so bold as to ask if we'll be seeing an IPO anytime soon? Steve Huffman: I can't talk about that topic. I have a PR-proofed sentence: We are working toward building a sustainable business.

Wallenstein: Alright, well, let's talk about that sustainable business, starting with advertising. Look at this chart. It's saturated with the biggest digital players worldwide. How are you able to differentiate what you've got to compete with the Metas and Alphabets of the world? Huffman: First, I think there's a bug on your slide: Reddit is misspelled as "Other."

Wallenstein: You're all that gray?! [Joking.] Huffman: Our business is growing nicely. We're outgrowing the market right now, which we'd expect to do. Reddit is unique in a number of ways. I think it's important to understand that Reddit is not social media. It is communities. Brands can connect with the communities of people who love those brands on Reddit in a different way, and so there's also a fair amount of what we would call unduplicated reach: people who are on Reddit who aren't on other platforms.

Wallenstein: You guys were out with some research this week talking about the power of recommendations. Huffman: The nature of Reddit is it's a place where people go for recommendations or advice. Sometimes it's life advice, but many times it's actually products. In fact, a lot of Reddit is people talking about stuff they're going to buy. Every second, two people ask for a product recommendation on Reddit, and they get, on average, 19 responses. I just went through this: I bought an E Ink tablet, so I was deciding which one to buy for notetaking. And Reddit has tons of communities for that stuff. That sort of advice, just from other consumers, is really special and valuable. I ended up with the Supernote, for what it's worth.

Wallenstein: This recommendation-centric strategy ... how does that play in this world we're in now, in the end-of-the-cookie era? Huffman: On Reddit, we target with first-party data. We see your behavior, and we use that so we don't have to cookie you all over the internet and watch what you're browsing and reading and searching for and all those things. It's just your explicitly expressed interests on Reddit. And so I think the cookie transition the industry is going to go through presents some challenges, but the platforms that will do best will be the ones that rely on first-party data, and we're one of those.

Wallenstein: The data that is in these Reddit communities is a goldmine, which is great because the tech giants want in on that. But it's also something of a control issue with these Redditors, so how do you navigate the balance between what you can license to tech giants and still placate the Redditors? Huffman: Yeah, there's a balance there. We're learning how to walk that line and where the line is. Reddit is a valuable source of data for training, potentially, and we're open to licensing it to people for that purpose. For non-commercial use, it's very straightforward: you can apply to Reddit and just get access to that sort of thing.

For commercial use, we'd like to have some sort of arrangement or deal so we're not just subsidizing some of the largest companies on Earth. But from the user's point of view, I think that openness and that commitment to privacy, and making sure users are in control of their own identity, is kind of the bedrock of that. So no matter, you know, whether your data is on Reddit or, for example, on another platform like a search engine, it's all kind of transparent where it's going and what it's being used for.

Wallenstein: I would imagine, then, that you must be watching the New York Times-versus-OpenAI case with some interest. Is it relevant to the situation at Reddit? Huffman: We are watching that case, of course. Reddit is one of the largest corpuses of authentic human conversation. And it's not available for free, you know, to train these models. And so we'll work through that with all of these companies, right? Whether they want to use Reddit data or not.

But I think many IP holders share our view there, which is you have this IP, whether you're us or The New York Times or another big IP holder, and the intention is never to just give that information away wholesale for free so somebody else can use it for their gain.

I do think the industry will find a balance here over time. I think some people in the space are being more cooperative than others. But we're right in the thick of it. I think we all are, and we're all taking different approaches.

Originally posted here:

Reddit CEO Steve Huffman Takes on Big Tech for AI and Ad $$ - Variety

Google has laid off hundreds of staff. What now for the tech market? – Euronews

Tech giants have been increasingly laying off employees, with the cuts reaching a peak in January of last year. With Google now announcing hundreds of job cuts, what is the tech market outlook for 2024?

Google has laid off hundreds of employees in hardware, voice assistance, and engineering as it continues to cut costs.

"Throughout second-half of 2023, a number of our teams made changes to become more efficient and work better, and to align their resources to their biggest product priorities,"a spokesperson for Google told Reuters in a statement.

"Some teams are continuing to make these kinds of organisational changes, which include some role eliminations globally," the spokesperson said without specifying the number of affected roles.

Google last year announced plans to make its virtual assistant smarter by adding generative artificial intelligence (AI) that would be able to assist with tasks such as planning a trip or catching up on emails.

Concerns about the implications of AI use for job cuts are not new. A survey of 750 business leaders using AI, conducted by ResumeBuilder, revealed that 37% of respondents said the technology had replaced workers in 2023, while 44% anticipated layoffs in 2024 due to AI efficiency.

Meanwhile, several other tech giants have recently announced significant job cuts.

Amazon.com Inc. is laying off hundreds of employees in content creation divisions, including Prime Video and the live-streaming site, Twitch.

Unity Software Inc., the company behind the technology used in popular mobile games such as Pokemon Go, has also announced a 25% workforce reduction, about 1,800 job cuts.

Layoffs.fyi, a platform monitoring job reductions across the industry, reports the number of tech employees laid off reached its highest point in the first quarter of 2023 and has been consistently decreasing since then.

More than 262,600 employees were laid off last year by 1,186 tech companies, including Spotify and Salesforce, with the peak occurring in January 2023.

However, despite initial concerns, the same data indicates that the job market is now stabilising.

See the rest here:

Google has laid off hundreds of staff. What now for the tech market? - Euronews

All the big tech layoffs of 2023 and 2024 – Engadget

The tech industry has been reeling from the combination of a rough economy, the COVID-19 pandemic and some obvious business missteps. And while that led to job cuts in 2022, the headcount reductions unfortunately ramped up in 2023 and, so far, seem to be accelerating in 2024. It can be tough to keep track of these moves, so we've compiled all the major layoffs in one place and will continue to update this story as the situation evolves.

Duolingo cut 10 percent of its contractors, and said that it is instead able to use generative AI to accomplish some of the tasks that its human workers used to perform.

Unity laid off 1,800 people, or a quarter of its workforce. This is in addition to more than 1,110 other layoffs at the company over the past two years.

Humane cut 4 percent of its workforce even before its flagship product, the Ai Pin, hit the market.

Amazon-owned Twitch is laying off a sobering 35 percent of its workforce, just over 500 people. In a note to staff, CEO Dan Clancy said "our organization is still meaningfully larger than it needs to be given the size of our business."

On the same day that Amazon-owned Twitch confirmed it would be laying off 500 workers, Variety reported that Amazon itself would lay off "several hundred" people at Prime Video and MGM Studios.

Meta's layoffs are continuing into 2024. The company has reportedly let go 60 technical program managers at Instagram.

In another round of belt tightening, Google has reportedly laid off hundreds of workers in its Assistant and hardware divisions, among other departments. Alongside the cuts, Google is said to have reorganized its Pixel, Nest and Fitbit divisions, which led to Fitbit's co-founders departing the company.

Discord has reportedly laid off 170 workers, or 17 percent of its workforce. In a memo first reported by The Verge, CEO Jason Citron said the company had hired too many people back in 2020.

Spotify layoffs

Spotify is laying off 17 percent of its workforce, CEO Daniel Ek announced in a pre-holiday press release.

New World Interactive

The developer behind the Insurgency series and Day of Infamy laid off an undisclosed number of employees in December.

Tinybuild

Indie game developer Tinybuild also laid off an undisclosed number of employees, citing cost restructuring.

Codemasters

The EA-owned studio cut some jobs in December. Here, too, it is unclear how many employees lost their jobs.

Tidal

The music streamer announced in December that it is laying off 10 percent of its workforce. This follows an announcement in November from parent company Block Inc. that it would cap its workforce at 12,000 employees.

Etsy

Etsy is laying off 11 percent of its staff, or around 225 employees. The company is also reshuffling its c-suite, with two executives departing in early 2024.

Ubisoft Montreal layoffs

In early November, Ubisoft laid off 98 people from its Montreal office, considered the home of the company's biggest in-house development team. The majority of those who lost their jobs were in business administration and IT. Overall, the company said in its latest quarterly earnings report that it had cut about 1,000 jobs over the last 12 months, including layoffs and not replacing employees who left voluntarily.

Cruise layoffs

Cruise, General Motors' driverless car subsidiary, reportedly told employees in November that it plans to lay off some employees. The news came the same week that GM recalled Cruise's entire fleet of 950 robotaxis following a pedestrian collision. Cruise confirmed in December that the layoffs would include about 900 employees, or 24 percent of its workforce.

Snap layoffs

Snap laid off 20 product managers in a move it claims will enable faster decision making.

Amazon layoffs

Amazon cut 180 jobs from its gaming division, according to several news outlets, including Reuters and Bloomberg. The cuts included the entire staff working on Crown, an Amazon-backed Twitch channel. Separately, later in November, Amazon laid off several hundred employees working on Alexa. On AI, the company is widely perceived to have fallen behind competitors such as OpenAI, the maker of ChatGPT.

ByteDance layoffs

ByteDance, TikTok's parent company, has reportedly eliminated hundreds of roles across its gaming division. Nuverse, the publisher it acquired back in 2017, was said to be gutted in the process.

Unity layoffs

Unity Software cut 265 jobs, or 3.8 percent of its workforce, as part of a company "reset."

LinkedIn layoffs

In its second round of layoffs this year, LinkedIn said it is letting go around 668 workers from across its engineering, product, talent and finance teams. In May, LinkedIn said it would lay off 716 people and close its job search app in China. Between the two rounds of layoffs, LinkedIn will have cut nearly 1,400 jobs in 2023.

Epic Games laid off 16 percent of its employees, or about 830 employees. In an open letter to employees, CEO Tim Sweeney said the company was spending "way more money" than it earns, and that "we concluded that layoffs are the only way." Previously, the company had attempted to reduce costs by freezing hiring and cutting its marketing spending.

Roku's second round of 2023 layoffs is seeing another 300 people leaving the company, on top of 200 it let go in March and another 200 folks it dismissed in late 2022. Roku is once again looking to reduce costs and, along with lowering its headcount, it's trying to do that by axing shows and movies from its platform, consolidating office space and spending less on outside services.

Google drew attention in July when its contracting partner Accenture laid off 80 Help subcontractors who had voted to form the Alphabet Workers Union-CWA the month before. Accenture attributed the move to cost-cutting. While the company said it respected the subcontractors' right to join a union, the former team members accused Google of retaliating against labor organizers.

The creator of Cyberpunk 2077 isn't immune to business challenges. CD Projekt Red warned in July that it would lay off about 100 people over the next several months, or about nine percent of its workforce. Employees will be let go as late as the first quarter of 2024. CEO Adam Kiciński was frank about the reasoning: CDPR was "overstaffed" for a reorganization meant to better handle the game developer's widening product roadmap, which includes new Cyberpunk and Witcher titles.

Spotify followed up its January layoff plans with word in June that it would cut 200 jobs in its podcast unit. The move is part of a more targeted approach to fostering podcasts with optimized resources for creators and shows. The company is also combining its Gimlet and Parcast production teams into a renewed Spotify Studios division.

GrubHub has faced intense pressure from both the economy and competitors like Uber, and that led it to lay off 15 percent of its workforce in June, or roughly 400 staff. This came just weeks after outgoing CEO Adam DeWitt officially left the food delivery service. New chief executive Howard Migdal claims the job cuts will help the company remain "competitive."

Game publishing giant Embracer Group announced plans for layoffs in June as part of a major restructuring effort meant to cut costs. The company didn't say how many of its 17,000 employees would be affected, but expected the overhaul to continue through March. The news came soon after Embracer revealed that it lost a $2 billion deal with an unnamed partner despite a verbal agreement.

Sonos has struggled to turn a profit as of late, and it's cutting costs to get back on track. The company said in June that it would lay off 7 percent of staff, or roughly 130 jobs. It also planned to offload real estate and rethink program spending. CEO Patrick Spence said there were "continued headwinds" that included shrinking sales.

Plex may be many users' go-to app for streaming both local and online media, but that hasn't helped its fortunes. The company laid off roughly 20 percent of employees in June, or 37 people. The cuts affect all areas. Plex is reportedly feeling the blow from an ad market slowdown, and is eager to cut costs and turn a profit.

Shopify's e-commerce platform played an important role at the height of the pandemic, but the Canadian company is scaling back now that the rush is over. In May, the company laid off 20 percent of its workforce and sold its logistics business to Flexport. Founder Tobi Lütke characterized the job cuts as necessary to "pay unshared attention" to Shopify's core mission, and an acknowledgment that the firm needed to be more efficient now that the "stable economic boom times" were over.

Polestar delayed production of its first electric SUV (the Polestar 3) in May, and that had repercussions for its workforce. The Volvo spinoff brand said in May that it would cut 10 percent of its workforce to lower costs as it faced reduced manufacturing expectations and a rough economy. Volvo needed more time for software development and testing that also pushed back the EX90, Polestar said.

SoundCloud followed up last year's extensive layoffs with more this May. The streaming audio service said it would shed 8 percent of its staff in a bid to become profitable in 2023. Billboard sources claim the company hopes to be profitable by the fourth quarter of the year.

Lyft laid off 13 percent of staff in November 2022, but took further steps in April. The ridesharing company said it was laying off 1,072 workers, or about 26 percent of its headcount. It comes just weeks after an executive shuffle that replaced CEO Logan Green with former Amazon exec David Risher, who said the company needed to streamline its business and refocus on drivers and passengers. Green previously said Lyft needed to boost its spending to compete with Uber.

Cloud storage companies aren't immune to the current financial climate. In April, Dropbox said it would lay off 500 employees, or roughly 16 percent of its team. Co-founder Drew Houston pinned the cuts on the combination of a rough economy, a maturing business and the "urgency" to hop on the growing interest in AI. While the company is profitable, its growth is slowing and some investments are "no longer sustainable," Houston said.

Roku shed 200 jobs at the end of 2022, but it wasn't done. The streaming platform creator laid off another 200 employees in March 2023. As before, the company argued that it needed to curb growing expenses and concentrate on those projects that would have the most impact. Roku has been struggling with the one-two combination of a rough economy and the end of a pandemic-fueled boom in streaming video.

If you thought luxury EV makers would be particularly susceptible to economic turmoil, you guessed correctly. Lucid Motors said in March that it would lay off 18 percent of its workforce, or about 1,300 people. The marque is still falling short of production targets, and these cuts reportedly help deal with "evolving business needs and productivity improvements." The cuts are across the board, too, and include both executives as well as contractors.

Meta slashed 11,000 jobs in fall 2022, but it wasn't finished. In March 2023, the company unveiled plans to lay off another 10,000 workers in a further bid to cut costs. The first layoffs affected its recruiting team, but it shrank its technology teams in late April and its business groups in late May. The Facebook owner is hoping to streamline its operations by reducing management layers and asking some leaders to take on work previously reserved for the rank and file. It may take a while before Meta's staff count grows again, as it doesn't expect to lift a hiring freeze until sometime after it completes its restructuring effort in late 2023.

Rivian conducted layoffs in 2022, but that wasn't enough to help the fledgling EV brand's bottom line. The company laid off another six percent of its employees in February, or about 840 workers. It's still fighting to achieve profitability, and the production shortfall from supply chain issues hasn't helped matters. CEO RJ Scaringe says the job cuts will help Rivian focus on the "highest impact" aspects of its business.

Zoom was a staple of remote work culture at the pandemic's peak, so it's no surprise that the company is cutting back now that people are returning to offices. The video calling firm said in February it was laying off roughly 1,300 employees, or 15 percent of its personnel. As CEO Eric Yuan put it, the company didn't hire "sustainably" as it dealt with its sudden success. The layoffs are reportedly necessary to help survive a difficult economy. The management team is offering more than just apologies, too. Yuan is cutting his salary by 98 percent for the next fiscal year, while all other executives are losing 20 percent of their base salaries as well as their fiscal 2023 bonuses.

Engadget's parent company Yahoo isn't immune to layoffs. The internet brand said in February that it would lay off over 20 percent of its workforce throughout 2023, or more than 1,600 people. Most of those cuts, or about 1,000 positions, took place immediately. CEO Jim Lanzone didn't blame the layoffs on economic conditions, however. He instead pitched it as a restructuring of the advertising technology unit as it shed an unprofitable business in favor of a successful one. Effectively, Yahoo is bowing out of direct competition with Google and Meta in the ad market.

The pandemic recovery and a grim economy have hit PC makers particularly hard, and Dell is feeling the pain more than most. It laid off five percent of its workforce in early February, or about 6,650 employees, after a brutal fourth quarter in which computer shipments plunged an estimated 37 percent. Past cost-cutting efforts weren't enough, Dell said; the layoffs and a streamlined organization were reportedly needed to get back on track.

Food delivery services flourished while COVID-19 kept people away from restaurants, and at least some are feeling the sting now that people are willing to dine out again. Deliveroo is laying off about 350 workers, or nine percent of its workforce. "Redeployments" will bring this closer to 300, according to founder Will Shu. The justification is familiar: Deliveroo hired rapidly to handle "unprecedented" pandemic-related growth, according to Shu, but reportedly has to cut costs as it deals with a troublesome economy.

DocuSign may be familiar to many people who've signed documents online, but that hasn't spared it from the impact of a harsh economic climate. The company said in mid-February that it was laying off 10 percent of its workforce. While it didn't disclose how many people that represented, the company had 7,461 employees at the start of 2022. Most of those losing their jobs work in DocuSign's worldwide field organization.

You may not know GitLab, but its DevOps (development and operations) platform underpins work at tech brands like NVIDIA and T-Mobile, and shrinking business at its clients is affecting its bottom line. GitLab is laying off seven percent of employees, or roughly 114 people. Company chief Sid Sijbrandij said the problematic economy meant customers were taking a "more conservative approach" to software investment, and that his company's previous attempts to refocus spending weren't enough to counter these challenges.

GoDaddy conducted layoffs early in the pandemic, when it cut over 800 workers for its retail-oriented Social platform. In February this year, however, it took broader action. The web service provider laid off eight percent of its workforce, or more than 500 people, across all divisions. Chief Aman Bhutani claimed other forms of cost-cutting hadn't been enough to help the company navigate an "uncertain" economy, and that this reflected efforts to further integrate acquisitions like Main Street Hub.

Twilio eliminated over 800 jobs in September 2022, but it made deeper cuts as 2023 got started. The cloud communications brand laid off 17 percent of staff, or roughly 1,500 people, in mid-February. Like so many other tech firms, Twilio said that past cost reduction efforts weren't enough to endure an unforgiving environment. It also rationalized the layoffs as necessary for a streamlined organization.

Google's parent company Alphabet has been cutting costs for a while, including shutting down Stadia, but it took those efforts one step further in late January when it said it would lay off 12,000 employees. CEO Sundar Pichai wasn't shy about the reasoning: Alphabet had been hiring for a "different economic reality," and was restructuring to focus on the internet giant's most important businesses. The decision hit the company's Area 120 incubator particularly hard, with the majority of the unit's workers losing their jobs. Sub-brands like Intrinsic (robotics) and Verily (health) also shed significant portions of their workforce in the days before the mass layoffs. Waymo has conducted two rounds of layoffs that shed 209 people, or eight percent of its force.

Amazon had already outlined layoff plans last fall, but expanded those cuts in early January when it said it would eliminate 18,000 jobs, most of them coming from retail and recruiting teams. It added another 9,000 people to the layoffs in March, and in April said over 100 gaming employees were leaving. To no one's surprise, CEO Andy Jassy blamed both an "uncertain economy" and rapid hiring in recent years. Amazon benefited tremendously from the pandemic as people shifted to online shopping, but its growth is slowing as people return to in-person stores.

Coinbase was one of the larger companies impacted by the crypto market's 2022 downturn, and that carried over into the new year. The cryptocurrency exchange laid off 950 people in mid-January, just months after it slashed 1,100 roles. This is one of the steepest proportionate cuts among larger tech brands Coinbase offloaded about a fifth of its staff. Chief Brian Armstrong said his outfit needed the layoffs to shrink operating expenses and survive what he previously described as a "crypto winter," but that also meant canceling some projects that were less likely to succeed.

Layoffs sometimes stem more from corporate strategy shifts than financial hardship, and IBM provided a classic example of this in 2023. The computing pioneer axed 3,900 jobs in late January after offloading both its AI-driven Watson Health business and its infrastructure management division (now Kyndryl) in the fall. Simply put, those employees had nothing to work on as IBM pivoted toward cloud computing.

Microsoft started its second-largest wave of layoffs in company history when it signaled it would cut 10,000 jobs between mid-January and the end of March. Like many other tech heavyweights, it was trimming costs as customers scaled back their spending (particularly on Windows and devices) during the pandemic recovery. The reductions were especially painful for some divisions: they reportedly gutted the HoloLens and mixed reality teams, while 343 Industries is believed to be rebooting Halo development after losing dozens of workers. GitHub is cutting 10 percent of its team, or roughly 300 people.

PayPal has been one of the healthier large tech companies, having beaten expectations in its third quarter last year. Still, it hasn't been immune to a tough economy. The online payment firm unveiled plans at the end of January to lay off 2,000 employees, or seven percent of its total worker base. CEO Dan Schulman claimed the downsizing would keep costs in check and help PayPal focus on "core strategic priorities."

Salesforce set the tone for 2023 when it warned it would lay off 8,000 employees, or about 10 percent of its workforce, just four days into the new year. While the cloud software brand thrived during the pandemic with rapidly growing revenue, it admitted that it hired too aggressively during the boom and couldn't maintain that staffing level while the economy was in decline.

Business software powerhouse SAP saw a steep 68 percent drop in profit at the end of 2022, and it started 2023 by laying off 2,800 staff to keep its business healthy. Unlike some big names in tech, though, SAP didn't blame excessive pandemic-era hiring for the cutback. Instead, it characterized the initiative as a "targeted restructuring" for a company that still expected accelerating growth in 2023.

Spotify spent aggressively in recent years as it expanded its podcast empire, but it quickly put a stop to that practice as 2023 began. The streaming music service said in late January that it would lay off 6 percent of its workforce (9,800 people worked at Spotify as of the third quarter) alongside a restructuring effort that included the departure of content chief Dawn Ostroff. While there were more Premium subscribers than ever in 2022, the company also suffered steep losses; CEO Daniel Ek said he was "too ambitious" investing before the revenue existed to support it.

Amazon isn't the only major online retailer scaling back in 2023. Wayfair said in late January that it would lay off 1,750 team members, or 10 percent of its global headcount. About 1,200 of those were corporate staff cut in a bid to "eliminate management layers" and otherwise help the company become leaner and nimbler. Wayfair had been cutting costs since August 2022 (including 870 positions), but saw the layoffs as helping it reach break-even earnings sooner than expected.

Follow this link:

All the big tech layoffs of 2023 and 2024 - Engadget

Amazon Discounts Apple AirTags; UK PM Impersonated on Social Media; Tech Giants Make Waves at CES 2024 – BNN Breaking


In a notable move, Amazon is currently offering a substantial deal on a four-pack of Apple AirTags, marking a 10 percent discount on the original price of $99. An additional $10 coupon brings the price down to roughly $79. These AirTags, slightly larger than a quarter, are designed to help Apple device owners keep track of their possessions effortlessly.
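As a quick check of the deal's arithmetic, the short sketch below simply restates the listing's numbers; the only assumption is that the quoted "$79" is a rounded figure.

```python
# Restating the listing's arithmetic: 10 percent off a $99 list price, then a $10 coupon.
list_price = 99.00
after_discount = list_price * (1 - 0.10)  # $89.10
after_coupon = after_discount - 10.00     # $79.10, which the listing rounds to about $79
print(f"${after_coupon:.2f}")
```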

These Bluetooth trackers, a product of Apple's innovation, operate in coordination with Apple's Find My network, providing location information rapidly and efficiently. They do not require charging, boasting a battery life of about a year before the battery needs replacing. Capable of tracking up to 32 items, AirTags carry an IP67 rating, ensuring robust resistance to water and dust.

In a startling revelation, a communications firm recently uncovered 143 different ads impersonating the UK Prime Minister on social media in the previous month. This raises serious questions about the security measures in place on these platforms.

The tech landscape continues to evolve, with the new Vision Pro headset requiring a Face ID scan to ensure a precise band fit. Pre-orders for this tech marvel commence on January 19. Meanwhile, the focus at CES 2024 saw giants like Nvidia, LG, Sony, and Samsung making significant announcements, reshaping the technological future.

Adding to the tech narrative, Microsoft momentarily overtook Apple as the most valuable company, sparking a wave of discussions about their investments and advancements in AI. This event also shed light on the implications of the declining iPhone demand in China.

On the international front, a historic decision unfolded in Victoria as Robert Farquharson, convicted of murdering his three young sons in 2005, was stripped of the right to his children's gravesite. Concurrently, Ukrainian Air Force spokesperson Yuriy Ihnat made a statement on national television regarding President Volodymyr Zelenskyy's claim about the destruction of 26 Russian helicopters and 12 planes.

See more here:

Amazon Discounts Apple AirTags; UK PM Impersonated on Social Media; Tech Giants Make Waves at CES 2024 - BNN Breaking

Why This Brain-Hacking Technology Will Turn Us All Into Cyborgs – The Daily Beast

It felt like magic: As I moved my head and eyes across the computer screen, the cursor moved with me. My goal was to click on pictures of targets on the display. Once the cursor reached a target, I would blink, causing it to click on the target, as if it were reading my mind.

Of course, that's essentially what was happening. The headband I was wearing picked up on my brain, eye, and facial signals. This data was fed through AI software that translated it into commands for the cursor. This allowed me to control what was on the screen, even though I didn't have a mouse or a trackpad. I didn't need them. My mind was doing all of the work.
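To make that pipeline a little more concrete, here is a minimal, purely illustrative sketch of the kind of signal-to-cursor loop the article describes. None of the names, thresholds, or data shapes come from AAVAA; they are hypothetical stand-ins for whatever the company's proprietary software actually does.

```python
# Hypothetical sketch of a brain/eye/face-signal-to-cursor loop. The class and
# function names are made up for illustration and do not describe AAVAA's software.
from dataclasses import dataclass

@dataclass
class SignalFrame:
    gaze_dx: float   # inferred horizontal head/eye movement
    gaze_dy: float   # inferred vertical head/eye movement
    blink: bool      # whether a blink was detected in this frame

def classify(raw_samples: list[float]) -> SignalFrame:
    """Stand-in for the AI model that interprets raw sensor signals.
    A real system would run a trained classifier here; this just unpacks numbers."""
    dx, dy, blink_score = raw_samples
    return SignalFrame(gaze_dx=dx, gaze_dy=dy, blink=blink_score > 0.8)

def update_cursor(frame: SignalFrame, cursor: list[float]) -> str | None:
    """Translate an interpreted frame into a cursor move, plus a click on blink."""
    cursor[0] += frame.gaze_dx
    cursor[1] += frame.gaze_dy
    return "click" if frame.blink else None

cursor = [0.0, 0.0]
for samples in ([2.0, -1.0, 0.1], [1.5, 0.5, 0.95]):  # fake sensor readings
    action = update_cursor(classify(samples), cursor)
    print(cursor, action)  # the second frame registers a "click", as a blink would
```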

"The brain, eye, and face are great generators of electricity," Naeem Kemeilipoor, the founder of brain-computer interface (BCI) startup AAVAA, told The Daily Beast at the 2024 Consumer Electronics Show. "Our sensors pick up the signals, and using AI we can interpret them."

The headband is just one of AAVAA's products that promise to bring non-invasive BCIs to the consumer market. Its other devices include AR glasses, headphones, and earbuds that all essentially accomplish the same function: reading your brain and facial signals to allow you to control your devices.

While BCI technology has largely remained in the research labs of universities and medical institutions, startups like AAVAA are looking for ways to put them in the hands (or, rather, on the heads) of everyday people. These products go beyond what we typically expect of our smart devices, seamlessly integrating our brains with the technology around us. They also offer a lot of hope and promise for people with disabilities or limited mobility, allowing them to interact with and control their computers, smartphones, and even wheelchairs.

However, BCIs also blur the line between the tech around us and our very minds. Though they can be helpful for people with disabilities, their widespread use and adoption raise questions and concerns about privacy, security, and even a user's very personhood. Allowing a device to read our brain signals throws open the door to these ethical considerations, so as they steadily become more popular, they could become more dangerous as well.

AAVAA's BCI devices on a table at CES 2024. AAVAA is looking for ways to put them in the hands (or, rather, on the heads) of everyday people.

BCIs loomed large throughout CES 2024, and for good reason. Beyond being able to control your devices, wearables that can read brain signals also promise to provide greater insights into users' health, wellness, and productivity habits.

There were also a number of devices targeted at improving sleep quality, such as the Frenz Brainband. The headband measures users' brainwaves, heart rate, and breathing (among other metrics) to provide AI-curated sounds and music to help them fall asleep.

"Every day is different, and so every day your brain will be different," a Frenz spokesperson told The Daily Beast. "Today, your brain might feel like white noise or nature sounds. Tomorrow, you might want binaural beats. Based on your brain's reactions to your audio content, we know what's best for you."

To produce the noises, the headband uses bone conduction, which converts audio data into vibrations on the skull that travel to the inner ear, producing sound. Though it was difficult to hear clearly on the crowded show floor of CES, the headband managed to produce soothing beats as I wore it in a demo.

"When you fall asleep, the audio automatically fades out," the spokesperson said. "The headband keeps tracking all night, and if you wake up, you can press a button on the side to start the sounds to put you back to sleep."

However, not all BCIs are quite as helpful as they might appear. For example, there was the MW75 Neuro, a pair of headphones from Master & Dynamic that purports to read your brain's electroencephalogram (EEG) signals to provide insights on your level of focus. If you become distracted or your focus wanes for whatever reason, it alerts you so you can maintain productivity.

Sure, this might seem helpful if you're a student looking to squeeze in some more quality study time or a writer trying to hit a deadline on a story, but it's also a stark and grim example of late-stage capitalism and a culture obsessed with work and productivity. While this technology is relatively new, it's not difficult to imagine a future where these headphones are more commonplace and, potentially, required by workplaces.

When most people think about BCIs, they typically think of brain-chip startups like Synchron and Neuralink. However, those technologies require users to undergo invasive surgery to implant the hardware. Non-invasive BCIs from the likes of AAVAA, on the other hand, require just a headband or headphones.

That's what makes them so promising, Kemeilipoor explained. No longer is the technology limited to those users who really need it, such as people with disabilities. Any user can pop on the headband and start scrolling on their computer or turning their lamps and appliances on and off.

The Daily Beast's intrepid reporter Tony Ho Tran wears AAVAA's headband, which promises to bring non-invasive BCIs to the consumer market.

"It's out of the box," he explained. "We've done the training [for the BCI] and now it works. That's the beauty of what we do. It works right out of the box, and it works for everyone."

However, the fact that it can work for everyone is a top concern for ethics experts. Technology like this creates a minefield of potential privacy issues. After all, these companies could have completely unfettered access to data from our literal brains. This is information that can be bought, sold, and used against consumers in an unprecedented way.

One comprehensive review published in 2017 in the journal BMC Medical Ethics pointed out that privacy is a major concern for potential users for this reason. BCI devices "could reveal a variety of information, ranging from truthfulness, to psychological traits and mental states, to attitudes toward other people, creating potential issues such as workplace discrimination based on neural signals," the authors wrote.

To their credit, Kemeilipoor was adamant that AAVAA does not and will not have access to individual brain-signal data. But the concerns are still there, especially since there are notable examples of tech companies misusing user data. For example, Facebook has been sued multiple times, for millions of dollars, for storing users' biometric data without their knowledge or consent. (They're certainly not the only company doing this, either.)

These issues aren't going to go away, and they'll only be exacerbated by the merging of technology and the human brain. This is a phenomenon that also raises concerns about personhood. At what point, exactly, does the human end and the computer begin once you can essentially control devices as an extension of yourself, like your arms or legs?

"The question, 'is it a tool or is it myself?', takes on an ethical valence when researchers ask whether BCI users will become cyborgs," the authors wrote. They later added that some ethics experts worry that being more robotic makes one less human.

Yet the benefits are undeniable, especially for those to whom BCIs could give more autonomy and mobility. You're no longer limited by what you can do with your hands; you can control the things around you simply by looking in a certain direction or moving your face in a specific way. It doesn't matter if you're in a wheelchair or completely paralyzed. Your mind is the limit.

"This type of technology is like the internet of humans," Kemeilipoor said. "This is the Fitbit of the future. Not only are you able to monitor all your biometrics, it also allows you to control your devices, and it's coming to market very soon."

It's promising. It's scary. And it's also inevitable. The biggest challenge we all must face is ensuring that, as these devices become more popular and we gradually give over our minds and bodies to technology, we don't lose what makes us human in the first place.

Read more:

Why This Brain-Hacking Technology Will Turn Us All Into Cyborgs - The Daily Beast

The Twitter CEO ousted by Elon Musk has resurfaced with an AI startup – Quartz

Parag Agrawal spent 11 months at the helm of Twitter, now known as X. Photo: Brendan McDermid (Reuters)

Parag Agrawal, who was briefly CEO of Twitter before Elon Musk took over the social media platform, has reportedly raised about $30 million in funding for an AI startup.


His company is building software for developers of large language models (LLMs), according to The Information, which cites unnamed sources. LLMs power generative AI tools like ChatGPT.

Agrawal's AI venture marks the start of a new journey for him. After joining Twitter in 2011, he served as a software engineer before being promoted to chief technology officer and then replacing Jack Dorsey as CEO. He spent 11 tumultuous months at the helm before being ousted when Elon Musk closed his $44 billion acquisition of Twitter, now known as X, in October 2022.

Agrawal is yet another tech executive to jump on the bandwagon of pivoting to AI as venture capital keeps flowing into the space. For example, former Twitter board chair Bret Taylor was named chairman of the new OpenAI board late last year. Even X boss Musk has launched his own AI startup, xAI.

This rush by tech execs comes as global funding for AI startups hit nearly $50 billion in 2023, up 9% from the previous year, according to market research firm Crunchbase. Leading players OpenAI, Anthropic, and Inflection collectively raised $18 billion last year. AI is still a bright spot for launching a new company, even as overall startup funding remains lackluster.

More:

The Twitter CEO ousted by Elon Musk has resurfaced with an AI startup - Quartz

Google Admits Gemini AI Demo Was at Least Partially Faked

Google misrepresented the way its Gemini Pro can recognize a series of images and admitted to speeding up the footage.

Google has a lot to prove with its AI efforts — but it can't seem to stop tripping over its own feet.

Earlier this week, the tech giant announced Gemini, its most capable AI model to date, to much fanfare. In one of a series of videos, Google showed off the mid-range version of the model, dubbed Gemini Pro, by demonstrating how it could recognize a series of illustrations of a duck, describing the changes a drawing went through at a conversational pace.

But there's one big problem, as Bloomberg columnist Parmy Olson points out: Google appears to have faked the whole thing.

In its own description of the video, Google admitted that "for the purposes of this demo, latency has been reduced, and Gemini outputs have been shortened for brevity." The video footage itself is also appended with the phrase "sequences shortened throughout."

In other words, Google misrepresented the speed at which Gemini Pro can recognize a series of images, indicating that we still don't know what the model is actually capable of.

In the video, Gemini wowed observers by using its multimodal thinking chops to recognize illustrations at what appears to be the drop of a hat. The video, as Olson suggests, also offered us "glimmers of the reasoning abilities that Google's DeepMind AI lab have cultivated over the years."

That's indeed impressive, considering any form of reasoning has quickly become the next holy grail in the AI industry, causing intense interest in models like OpenAI's rumored Q*.

In reality, not only was the demo significantly sped up to make it seem more impressive, but Gemini Pro is likely still stuck with the same old capabilities that we've already seen many times before.

"I think these capabilities are not as novel as people think," Wharton professor Ethan Mollick tweeted, showing how ChatGPT was effortlessly able to identify the simple drawings of a duck in a series of screenshots.

Did Google actively try to deceive the public by speeding up the footage? In a statement to Bloomberg Opinion, a Google spokesperson said it was made by "using still image frames from the footage, and prompting via text."

In other words, Gemini was likely given plenty of time to analyze the images. And its output may have then been overlaid over video footage, giving the impression that it was much more capable than it really was.
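Going by Google's own description of the process (still image frames plus text prompts), here is a hedged sketch of what such an offline workflow could look like. The MultimodalClient class and the "frames" directory are invented placeholders for illustration, not the actual Gemini API or Google's assets.

```python
# Hypothetical reconstruction of the workflow Google described: extract still
# frames, send each frame with a text prompt to a multimodal model, and collect
# the answers offline. MultimodalClient is a made-up stand-in, not the Gemini API.
from pathlib import Path

class MultimodalClient:
    def generate(self, image_bytes: bytes, prompt: str) -> str:
        # Placeholder: a real client would call a hosted multimodal model here.
        return f"(model description of a {len(image_bytes)}-byte frame for prompt {prompt!r})"

def describe_frames(frame_dir: str, prompt: str) -> list[str]:
    """Prompt the model once per still frame and gather the responses in order."""
    client = MultimodalClient()
    return [client.generate(frame.read_bytes(), prompt)
            for frame in sorted(Path(frame_dir).glob("*.png"))]

# Hypothetical usage; the responses could later be overlaid on the original footage,
# which is why a stitched-together video can look far snappier than the model itself.
print(describe_frames("frames", "What is the duck drawing doing now?"))
```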

"The video illustrates what the multimode user experiences built with Gemini could look like," Oriol Vinyals, vice president of research and deep learning lead at Google’s DeepMind, wrote in a post on X.

Emphasis on "could." Perhaps Google should've opted to show the actual capabilities of its Gemini AI instead.

It's not even the first time Google has royally screwed up the launch of an AI model. Earlier this year, when the company announced its ChatGPT competitor, a demo infamously showed Bard making a blatantly false statement, claiming that NASA's James Webb Space Telescope took the first image of an exoplanet.

As such, Google's latest gaffe certainly doesn't bode well. The company came out swinging this week, claiming that an even more capable version of its latest model called Gemini Ultra was able to outsmart OpenAI's GPT-4 in a test of intelligence.

But from what we've seen so far, we're definitely going to wait and test it out for ourselves before we take the company's word.

More on Gemini: Google Shows Off "Gemini" AI, Says It Beats GPT-4


Continue reading here:
Google Admits Gemini AI Demo Was at Least Partially Faked