Daily Archives: April 27, 2023

The AI Arms Race: Investing in the Future of Artificial Intelligence … – The Motley Fool

Posted: April 27, 2023 at 2:53 pm

The release of ChatGPT, a generative chatbot developed by the company OpenAI, caused quite a stir. It moved the artificial intelligence (AI) conversation from the tech world to the mainstream seemingly overnight. AI is making headlines, and investors wonder which companies have the upper hand in the arms race.

ChatGPT is innovative because it communicates using natural language processing and because it is generative, capable of producing many types of content. You've probably experienced basic customer service bots that give canned responses to a limited set of queries, but generative chatbots can compose original responses. ChatGPT's capabilities include answering questions, assisting with composition, summarizing content, and more. This is why Microsoft (MSFT 2.80%) has made a multiyear, multibillion-dollar investment in OpenAI.

The reason is simple. Microsoft is eyeing the vast search advertising market currently dominated by Alphabet's (GOOG 4.29%) (GOOGL 4.33%) Google Search, and it is using ChatGPT's technology to get there.

The chasm between Google Search and Microsoft Bing is vast, so Microsoft has everything to gain. After all, Google Search brought in $160 billion in revenue for Alphabet in 2022, 80% of Microsoft's total fiscal 2022 sales.

Bing isn't Microsoft's only AI initiative. The company's comprehensive cybersecurity offerings leverage AI to fight bad actors, and Microsoft Copilot embeds into Microsoft Office apps to generate presentations, draft emails, and summarize text. CEO Satya Nadella appears to be all-in on AI.

Microsoft's results for the fiscal third quarter of 2023 are simply outstanding: $52.9 billion in sales on 7% growth. Operating income for the quarter was $22.4 billion (up 10%), with a fantastic 42% margin.

The stock does not come cheap, as you'd probably expect. It trades near its 52-week high, and its price-to-earnings (P/E) ratio of more than 32 is above its one-year and three-year averages. Because of this, it might behoove new Microsoft investors to watch for a pullback in the stock price.

Microsoft is making dynamic moves in AI, but don't write off Alphabet just yet.

Some were quick to declare Microsoft the AI leader on the strength of its ChatGPT investment, but this is like declaring a winner after the first inning of a baseball game. Alphabet has developed its own AI tools for years, including its answer to ChatGPT, named Bard. I tested Bard by asking it about Alphabet's other AI initiatives, such as improved translation services, search by photo, and speech recognition.

Google Lens is an excellent example of a practical application of AI. This allows the user to search from a cellphone camera. For example, users can translate a menu written in another language just by pointing their camera at it. Other applications include copying text or identifying unknown objects.

Alphabet just announced it is combining its Google Brain and DeepMind research programs into one entity called Google DeepMind. Both have been studying AI for years with some of the most brilliant minds in the business. The push from Microsoft might create urgency for Alphabet to kick these initiatives into high gear.

The slowing economy has investors concerned that Alphabet's advertising revenue will suffer, but first-quarter earnings announced on April 25 had many breathing a sigh of relief. Revenue rose to $69.8 billion on 3% growth (6% in constant currency). Operating income fell from $20.1 billion to $17.4 billion; however, $2.6 billion of the dip is due to one-time charges related to layoffs and office space reductions. On the earnings conference call, CEO Sundar Pichai expressed a commitment to reining in costs moving forward.

Alphabet's stock is more than 10% off its 52-week high and more than 25% below where it stood at the beginning of 2022.

GOOG data by YCharts

The company is using the depressed share price to benefit stockholders by aggressively repurchasing shares. A total of $73.8 billion of stock (5.5% of the current market cap) was retired in 2022 and Q1 2023, and another $70 billion in buybacks was authorized with this earnings release.

The encouraging results do not mean the company is out of the woods. The economy is an ongoing headwind, YouTube sales were down year over year in Q1, and Microsoft's search competition will be a test. But investors don't beat the market by buying only when everything is rosy; they need to look beyond current challenges to identify long-term potential. That potential is why Alphabet's beaten-down stock could deliver higher long-term returns for investors.

Suzanne Frey, an executive at Alphabet, is a member of The Motley Fool's board of directors. Bradley Guichard has positions in Alphabet and Microsoft. The Motley Fool has positions in and recommends Alphabet and Microsoft. The Motley Fool has a disclosure policy.


Can Compute-In-Memory Bring New Benefits To Artificial … – SemiEngineering

Posted: at 2:53 pm

While CIM can speed up multiplication operations, it comes with added risk and complexity.

Compute-in-memory (CIM) is not necessarily an Artificial Intelligence (AI) solution; rather, it is a memory management solution. CIM could bring advantages to AI processing by speeding up the multiplication operation at the heart of AI model execution. However, for that to be successful, an AI processing system would need to be explicitly architected to use CIM. The change would entail a shift from all-digital design workflows into a mixed-signal approach which would require deep design expertise and specialized semiconductor fabrication processes.

Compute-in-memory eliminates weight coefficient buffers and streamlines the primitive multiply operations, striving for increased AI inference throughput. However, it does not perform neural network processing by itself. Other functions, like input data streaming, sequencing, accumulation buffering, activation buffering, and layer organization, may become more important factors in overall performance as model hardware mapping unfolds and complexity increases; more robust NPUs (neural processing units) incorporate all of those functions.

Fundamentally, compute-in-memory embeds a multiplier unit in a memory unit. A conventional digital multiplier takes two operands as digital words and produces a digital result, handling signing and scaling. Compute-in-memory uses a different approach, storing a weight coefficient as analog values in a specially designed transistor cell sub-array with rows and columns. The incoming digital data words enter the rows of the array, triggering analog voltage multiplies, then analog current summations occur along columns. An analog-to-digital converter creates the final digital word outputs from the summed analog values.
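As a rough illustration, the row/column dataflow described above can be mimicked in a few lines of Python. This is a behavioral sketch under assumed numbers (a tiny 4x3 array, an arbitrary storage-noise level, and a simple per-column ADC model), not a circuit simulation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 4x3 crossbar: weights live in the array as analog values,
# so programming them adds a small storage error.
weights = rng.uniform(-1, 1, size=(4, 3))
stored = weights + rng.normal(0, 0.01, size=weights.shape)

def cim_matvec(x, g, adc_bits=8):
    """Digital words drive the rows, currents sum down each column,
    and an ADC quantizes each column's analog sum back to digital."""
    analog_sums = x @ g                        # per-row multiply, per-column sum
    full_scale = np.abs(g).sum(axis=0) * np.abs(x).max() + 1e-12
    step = 2 * full_scale / 2 ** adc_bits      # ADC resolution per column
    return np.round(analog_sums / step) * step

x = np.array([0.5, -1.0, 0.25, 1.0])
ideal = x @ weights
approx = cim_matvec(x, stored)
print(np.abs(ideal - approx))  # residual from storage noise plus ADC quantization
```

Shrinking `adc_bits` or raising the storage noise makes the residual grow, which is the error-versus-cost tradeoff at the heart of the approach.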

An individual memory cell can be straightforward in theory. Still, operating these cells presents mixed-signal challenges and a technology gap that is not closing anytime soon. So why the intense interest in compute-in-memory for AI inference chips?

First, it can be fast: analog multiplication happens as part of the memory read cycle, transparent to the surrounding digital logic. It can also be lower power, since fewer transistors switch at high frequencies. But there are limitations from a system viewpoint. The additional steps needed to program the analog values into the memory cells are a concern, and inaccuracy in the analog voltages, which may drift over time, can inject bit errors into results, showing up as detection errors or false alarms.

Aside from its analog nature, the biggest concern for compute-in-memory may be bit precision and AI training requirements. Researchers seem confident in 4-bit implementations; however, more training cycles must be run for reliable inference at low precision. Raising the precision to 8-bit lowers training demands. It also increases the complexity of the arrays and the analog-to-digital converter for each array, offsetting area and power savings and worsening the chance for bit errors in the presence of system noise.
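The precision tradeoff can be made concrete with a toy numerical sketch (illustrative sizes and a plain uniform quantizer, not any particular CIM cell design): 4-bit weight storage produces a visibly larger error than 8-bit.

```python
import numpy as np

rng = np.random.default_rng(1)

def quantize(w, bits):
    """Uniform symmetric quantization to the given bit width."""
    levels = 2 ** (bits - 1) - 1          # e.g. 7 levels per sign at 4-bit
    scale = np.abs(w).max() / levels
    return np.round(w / scale) * scale

w = rng.normal(0, 0.5, size=256)          # toy weight vector
x = rng.normal(0, 1.0, size=256)          # toy input activations
ideal = x @ w

for bits in (4, 8):
    wq = quantize(w, bits)
    rms = np.sqrt(np.mean((wq - w) ** 2)) # per-weight storage error
    print(f"{bits}-bit: weight RMS error {rms:.5f}, "
          f"output error {abs(x @ wq - ideal):.4f}")
```

The roughly 16-fold finer step size at 8-bit is exactly what drives up the complexity of the arrays and ADCs described above.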

So is compute-in-memory worthy of consideration? There likely are niche applications where it could speed up AI inference. A more critical question: is the added risk and complexity of compute-in-memory worth the effort? A well-conceived NPU strategy and implementation may nullify any advantage of moving to compute-in-memory. We can contrast the tradeoffs for AI inference in four areas: power/performance/area (PPA), flexibility, quantization, and memory technology.


The answer to the original question might be that designers should consider CIM only if more established AI inference platforms (NPUs) cannot meet their requirements. Since CIM is riskier, costlier, and harder to implement, many will consider it only as a last resort.

Expedera explores this topic in much more depth in a recent white paper, which can be found at: https://www.expedera.com/architectural-considerations-for-compute-in-memory-in-ai-inference/

Read this article:

Can Compute-In-Memory Bring New Benefits To Artificial ... - SemiEngineering

Posted in Artificial Intelligence | Comments Off on Can Compute-In-Memory Bring New Benefits To Artificial … – SemiEngineering

What Can ChatGPT Tell Us About the Evolution of Artificial … – Unite.AI

Posted: at 2:53 pm

In the last decade, artificial intelligence (AI) has elicited both dreams of a massive transformation in the tech industry and deep anxiety about its potential ramifications. Elon Musk, a leading voice in the tech industry, has demonstrated this duality: he simultaneously promises a world of autonomous AI-powered cars while warning us of the risks of AI, even calling for a pause in its development. This is especially ironic considering Musk was an early investor in OpenAI, which was founded in 2015.

One of the most exciting and concerning developments riding the current wave of AI research is autonomous AI. Autonomous AI systems can perform tasks, make decisions, and adapt to new situations on their own, without continual human oversight or task-by-task programming. One of the best-known examples at the moment is ChatGPT, a major milestone in the evolution of artificial intelligence. Let's look at how ChatGPT came about, where it's headed, and what the technology can tell us about the future of AI.

The tale of artificial intelligence is a captivating one of progress and collaboration across disciplines. It began in the early 20th century with the pioneering efforts of Santiago Ramón y Cajal, a neuroscientist whose understanding of the human brain gave rise to the concept of neural networks, a cornerstone of modern AI. Neural networks are computer systems that emulate the structure of the human brain and nervous system to produce machine-based intelligence. Decades later, Alan Turing was busy developing the foundations of the modern computer and proposing the Turing test, a means of evaluating whether a machine can display human-like intelligent behavior. These developments spurred a wave of interest in AI.

As a result, the 1950s saw John McCarthy, who coined the term artificial intelligence, explore the prospects of AI alongside Marvin Minsky and Claude Shannon, while Frank Rosenblatt developed the perceptron, an early neural network. The following decades saw two major breakthroughs. The first was expert systems: AI systems individually designed to perform niche, industry-specific tasks. The second was natural language processing applications, like early chatbots. With the arrival of large datasets and ever-improving computing power in the 2000s and 2010s, machine learning techniques flourished, leading us to autonomous AI.

This significant step enables AI systems to perform complex tasks without case-by-case programming, opening them to a wide range of uses. One such autonomous system, ChatGPT from OpenAI, has of course recently become widely known for its remarkable ability to learn from vast amounts of data and generate coherent, human-like responses.

So what is the basis of ChatGPT? We humans have two basic capabilities that enable us to think: we possess knowledge, whether it's about physical objects or concepts, and we possess an understanding of those things in relation to complex structures like language and logic. Transferring that knowledge and understanding to machines is one of the toughest challenges in AI.

With knowledge alone, OpenAI's GPT-4 model couldn't handle more than a single piece of information. With context alone, the technology couldn't understand anything about the objects or concepts it was contextualizing. But combine both, and something remarkable happens: the model can become autonomous. It can understand and learn. Apply that to text, and you have ChatGPT. Apply it to cars, and you have autonomous driving, and so on.

OpenAI isn't alone in its field; many companies have spent decades developing machine learning algorithms and neural networks that can handle both knowledge and context. So what changed when ChatGPT came to market? Some people have pointed to the staggering amount of data provided by the internet as the big change that fueled ChatGPT. However, if that were all that was needed, Google would likely have beaten OpenAI, given its dominance over all of that data. So how did OpenAI do it?

One of OpenAI's secret weapons is reinforcement learning from human feedback (RLHF). OpenAI used RLHF to train its algorithm to understand both knowledge and context. OpenAI didn't invent RLHF, but the company was among the first to rely on it so heavily in developing a large language model (LLM) like ChatGPT.

RLHF allows the algorithm to self-correct based on feedback. So while ChatGPT is autonomous in how it produces an initial response to a prompt, it has a feedback system that lets it know whether its response was accurate or in some way problematic. That means it can keep getting better without significant programming changes. This model resulted in a fast-learning chat system that quickly took the world by storm.
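As a loose illustration of that feedback loop, and emphatically not OpenAI's actual pipeline (real RLHF trains a reward model on human preference data and then fine-tunes the LLM with reinforcement learning), a toy learner can steer toward whatever a rater approves of:

```python
import random

random.seed(0)

# Toy sketch: a "model" chooses among canned responses, and simulated
# human feedback gradually shifts its preferences. All names and the
# rater function are invented for illustration.
responses = ["helpful answer", "vague answer", "wrong answer"]
scores = {r: 0.0 for r in responses}

def human_feedback(response):
    # Hypothetical stand-in for a human rater: +1 approve, -1 reject.
    return 1.0 if response == "helpful answer" else -1.0

for _ in range(100):
    # Explore randomly, then nudge each score toward its received feedback.
    choice = random.choice(responses)
    scores[choice] += 0.1 * (human_feedback(choice) - scores[choice])

best = max(scores, key=scores.get)
print(best)  # the feedback loop steers selection toward the approved response
```

The point of the sketch is only that behavior improves from graded feedback rather than from reprogramming, which is the property the article is describing.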

The new age of autonomous AI has begun. In the past, we had machines that could understand various concepts to a degree, but only in highly specific domains and industries. For example, industry-specific AI software has been used in medicine for some time. But the search for autonomous or general AI (meaning AI that could function on its own to perform a wide variety of tasks across fields with a degree of human-like intelligence) finally produced globally noteworthy results in 2022, when ChatGPT handily and decisively passed the Turing test.

Understandably, some people are starting to fear that their expertise, jobs, and even uniquely human qualities may be replaced by intelligent AI systems like ChatGPT. On the other hand, passing the Turing test isn't an ideal indicator of how human-like a particular AI system may be.

For example, Roger Penrose, who won the Nobel Prize in Physics in 2020, argues that passing the Turing test does not necessarily indicate true intelligence or consciousness. He argues that there is a fundamental difference between the way that computers and humans process information and that machines will never be able to replicate the type of human thought processes that give rise to consciousness.

On this view, passing the Turing test is not a true measure of intelligence, because it merely tests a machine's ability to imitate human behavior rather than its ability to truly understand and reason about the world. True intelligence requires consciousness and the ability to understand the nature of reality, which a machine cannot replicate. That means that, far from replacing us, ChatGPT and similar software will simply provide tools to help us improve and increase efficiency in a variety of fields.

So machines will be able to complete many tasks autonomously, in ways we never thought possible: from understanding and writing content to securing vast amounts of information, performing delicate surgeries, and driving our cars. But for now, at least in this current age of technology, capable workers needn't fear for their jobs. Even autonomous AI systems don't have human intelligence; they simply understand and perform better than humans at certain tasks. They aren't more intelligent than us overall, and they don't pose a significant threat to our way of life, at least not in this wave of AI development.


New artificial intelligence feature comes to Snapchat – Inklings News

Posted: at 2:53 pm

"I'm here to chat with you and keep you company! Is there anything you'd like to talk about?" my artificial intelligence asked me. From the moment it arrived, I didn't enjoy the idea of this robot keeping me company. The first thing I did was try to get rid of this pest. That's when I realized I couldn't. The bot was stuck at the top of my feed, its colorful, alien-like Bitmoji always glaring back at me.

Artificial intelligence, or AI, used to be a feature only for Snapchat+ users, who pay either $3.99 per month or $29.99 per year for access to extra features. However, beginning on April 19, all Snapchat users were greeted by a new user at the top of the screen when they opened the app. Users never got the option to add the AI back, yet there it was, automatically pinned to the top of the screen.

You can communicate with your AI the same way you do with your friends, including by sending pictures that appear to delete immediately. Once you send an image, the AI replies with a chat message attempting to guess what your image shows. Sending a picture of a car window, for example, results in the message, "Looks like you're on the move! Hope you're having a safe and fun journey."


Staples students have mixed feelings regarding the new Snapchat feature.

"It's very futuristic," Avery Johnson '25 said. "I feel like I'm being stalked or hacked by Snapchat."

Though many would like to eliminate it, some feel it is not worth the hassle.

"I don't think I would consider paying to remove this feature," Noah Wolff '25 said. "It doesn't bother me enough."

Elijah Debrito '25 originally enjoyed the new feature, but after a couple of days, it got old. "I don't like it anymore," Debrito said. "It's creepy. It says it can't see your photos, but then you send it a snap and it can tell what you're doing."

To improve the AI's functionality for everyone, students have recommendations for the new technology.

"I would recommend it develop more diverse responses," Johnson said. "Sometimes I just want to strangle the AI from the screen when it doesn't understand what I'm saying."

According to Snapchat's support page, "Just like real friends, the more you interact with My AI the better it gets to know you, and the more relevant the responses will be." The same page also states, "You should also avoid sharing confidential or sensitive information with My AI."

Overall, many would prefer the option to decline this new friend.

"I think that they should make it so you can get rid of it," Julia Coda '25 said.


Artificial intelligence poised to hinder, not help, access to justice – Reuters

Posted: at 2:53 pm

April 25 (Reuters) - The advent of ChatGPT, the fastest-growing consumer application in history, has sparked enthusiasm and concern about the potential for artificial intelligence to transform the legal system.

From chatbots that conduct client intake, to tools that assist with legal research, document management, even writing legal briefs, AI has been touted for its potential to increase efficiency in the legal industry. It's also been recognized for its ability to help close the access-to-justice gap by making legal help and services more broadly accessible to marginalized groups.

Most low-income U.S. households deal with at least one civil legal problem a year, concerning matters like housing, healthcare, child custody, and protection from abuse, according to the Legal Services Corp. They don't receive legal help for 92% of those problems.

Moreover, our poorly funded public defense system for criminal matters has been broken for decades.

AI and similar technologies show promise in their ability to democratize legal services, including applications such as online dispute resolution and automated document preparation.

For example, A2J Author uses decision trees, a simple kind of AI, to build document preparation tools for complex filings in housing law, public benefits law, and more. The nonprofit JustFix provides online tools that help with a variety of landlord-tenant issues. And apps have been developed to help people with criminal expungement, to prepare for unemployment hearings, and even to get divorced.
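A decision-tree intake tool of this kind is simple to picture in code; the questions and document names below are invented for illustration, not taken from A2J Author:

```python
# Minimal sketch of a decision-tree document-preparation flow.
# The tree maps yes/no answers to a suggested filing (all hypothetical).
TREE = {
    "question": "Is your problem about housing?",
    "yes": {
        "question": "Have you received an eviction notice?",
        "yes": {"document": "Answer to eviction complaint"},
        "no": {"document": "Repair-request letter to landlord"},
    },
    "no": {"document": "General legal-aid referral form"},
}

def walk(tree, answers):
    """Follow a sequence of yes/no answers down to a suggested document."""
    node = tree
    for ans in answers:
        node = node[ans]
        if "document" in node:
            return node["document"]
    return None

print(walk(TREE, ["yes", "yes"]))  # -> "Answer to eviction complaint"
```

Each answer narrows the path until a leaf names a suggested document, which is part of what makes these tools cheap to build and easy to audit.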

Still, there's more reason to be wary than optimistic about AI's potential effects on access to justice.

Much of the existing technology and breakneck momentum in the industry is simply not geared toward the interests of underserved populations, according to several legal industry analysts and experts on the intersection of law and technology. Despite the technology's potential, some warned that the current trajectory actually runs the risk of exacerbating existing disparities.

Rashida Richardson, an assistant professor at Northeastern University School of Law who has served as a technology adviser to the White House and the Federal Trade Commission, told me that AI has lots of potential, while stressing that there hasn't been "enough public discussion of the many limitations of AI and of data itself."

"Fundamentally, problems of access to justice are about deeper structural inequities, not access to technology," Richardson said.

It's critical to recognize that the development of AI technology is overwhelmingly unregulated and is driven by market forces, which categorically favor powerful, wealthy actors. After all, tech companies are not developing AI for free, and their interest is in creating a product attractive to those who can pay for it.

"Your ability to enjoy the benefits of any new technology corresponds directly to your ability to access that technology," said Jordan Furlong, a legal industry analyst and consultant, noting that ChatGPT Plus costs $20 a month, for example.

Generative AI has fueled a new tech gold rush in "big law" and other industries, and those projects can sometimes cost millions, Reuters reported on April 4.

Big law firms and legal service providers are integrating AI search tools into their workflows and some have partnered with tech companies to develop applications in-house.

Global law firm Allen & Overy announced in February that its lawyers are now using chatbot-based AI technology from a startup called Harvey to automate some legal document drafting and research, for example. Harvey received a $5 million investment last year in a funding round, Reuters reported in February. Last month, PricewaterhouseCoopers said 4,000 of its legal professionals will also begin using the generative AI tool.

Representatives of PricewaterhouseCoopers and Allen & Overy did not respond to requests for comment.

But legal aid organizations, public defenders, and civil rights lawyers who serve minority and low-income groups simply don't have the funds to develop or co-develop AI technology, nor to contract for AI applications at scale.

The resources problem is reflected in the contours of the legal market itself, which is essentially two distinct sectors: one that represents wealthy organizational clients, and another that works for consumers and individuals, said William Henderson, a professor at the Indiana University Maurer School of Law.

Americans spent about $84 billion on legal services in 2021, according to Henderson's research and U.S. Census Bureau data. By contrast, businesses spent $221 billion, generating nearly 70% of legal services industry revenue.

Those disparities seem to be reflected in the development of legal AI thus far.

A 2019 study of digital legal technologies in the U.S. by Rebecca Sandefur, a sociologist at Arizona State University, identified more than 320 digital technologies that assist non-lawyers with justice problems. But Sandefur's research also determined that the applications don't make a significant difference in terms of improving access to legal help for low-income and minority communities. Those groups were less likely to be able to use the tools due to fees charged, limited internet access, language or literacy barriers, and poor technology design.

Sandefur's report identified other hurdles to innovation, including the challenges of coordination among innumerable county, state, and federal court systems, and "the legal profession's robust monopoly on the provision of legal advice," referring to laws and rules restricting non-lawyer ownership of businesses that engage in the practice of law.

Drew Simshaw, a Gonzaga University School of Law professor, told me that many non-lawyers are "highly-motivated" to develop in this area but are concerned about crossing the line into unauthorized practice of law. And there isn't a uniform definition of what constitutes unauthorized practice across jurisdictions, Simshaw said.

On balance, it's clear that AI has great potential to disrupt and improve access to justice. But it's much less clear that we have the infrastructure or political will to make that happen.

Our Standards: The Thomson Reuters Trust Principles.

Opinions expressed are those of the author. They do not reflect the views of Reuters News, which, under the Trust Principles, is committed to integrity, independence, and freedom from bias.

Thomson Reuters

Hassan Kanu writes about access to justice, race, and equality under law. Kanu, who was born in Sierra Leone and grew up in Silver Spring, Maryland, worked in public interest law after graduating from Duke University School of Law. After that, he spent five years reporting on mostly employment law. He lives in Washington, D.C. Reach Kanu at hassan.kanu@thomsonreuters.com


How to play the artificial intelligence boom – Investors Chronicle

Posted: at 2:53 pm

A general-purpose technology is one that impacts the whole economy. The invention of the steam engine, for example, changed what people consumed, how they travelled and where they lived. Centuries later, after a variety of modern technologies have had significant impacts of their own, today it is artificial intelligence (AI) for which the biggest promises are being made.

Strange as it may sound, there are parallels, in terms of both efficiency gains and investment manias, between steam power and AI. The original steam pump was patented by Thomas Savery in 1698, but it wasn't until James Watt patented the improved Boulton-Watt engine in 1769 that steam started to transform the economy.

Factories had previously been powered by horses, wind or water, which imposed physical restrictions on where they could be located. Steam power lifted those restrictions, and the clustering it enabled, combined with the invention of the train, allowed the efficient transfer of materials, goods and ideas between hubs. A flywheel effect was set in motion: between 1800 and 1900, UK gross domestic product (GDP) per capita rose 109 per cent, from £2,331 to £4,930 (figures adjusted to 2013 prices).


Artificial Intelligence: Use and misuse – WSPA 7News

Posted: at 2:53 pm

(WSPA) Artificial Intelligence is now at your fingertips like never before.

From ChatGPT to Microsoft's Bing, to Midjourney and Snapchat's new My AI, the number of interactive AI programs is growing.

With that, the average user is starting to understand the endless possibilities of what AI can do.

Still, along with that come some serious words of warning.

7NEWS looked into how the technology is already being misused and how it could affect everything from safety online to the job market.

Darren Hick, a professor at Furman University, has seen firsthand how the technology comes with cautionary tales.

"We always thought it was just over the horizon, and always just over the horizon, and last year it arrived, and we weren't ready for it."

An assistant professor of philosophy, Hick was one of the first to catch a new type of AI plagiarism when, two weeks after ChatGPT launched to the public, one of his students turned in a final paper.

"Normally when a student plagiarizes a paper, this is a last-minute panic, and so it sort of screams that it has been thrown together at the last second," according to Hick. "This wasn't that. This was clean, this was really nicely put together, but it had all these other factors. And I had heard about ChatGPT, and it finally dawned on me that maybe this was that."

That's when Hick started testing out ChatGPT.

The freely accessible program interacts a lot like a confident, well-spoken human.

You can ask it any question, like "Describe AI so a 5-year-old can understand," and it spits out appropriate answers such as, "It's like having a smart robot that can learn and think like a human."

No matter how many times you ask the question, the answer is always slightly different, again, just like a human.

You can even ask it to write in different styles from Shakespeare to poetry.

However, with it still in its infancy, ChatGPT is full of inaccuracies.

When we asked the AI to tell us about Diane Lee from WSPA, it said she is a former journalist who may have left the station.

Kylan Cleveland, with the IT firm Cyber Solutions in Anderson, is quick to point out that right now programs like ChatGPT don't scour the web for information, which is why they are not up to date.

Programs like ChatGPT work only with what is fed into them and have limited knowledge of events past 2021, as is stated on ChatGPT's home page.

Cleveland embraces the technology but is also leery of the day these programs gain access to up-to-date information.

"When we can get to the point where AI has current data, I think that's when we should really take a step back and see what type of security measures we can put in place to prevent it from being almost predictive," he said.

AI could also cause a major shakeup in the job market, with some experts who study the technology, like Thomas Fellows, predicting that many white-collar jobs, from accounting to marketing, will be displaced.

"If you don't have a job that has true human judgment and value, it could be taken away, plain and simple," Fellows said.

Fellows has worked in software jobs where the main goal was to automate tasks. He believes AI will be akin to what machines have been to some factory jobs.

The jobs he said are most at risk are:

Still, no matter the warnings, educators and businesses alike said not embracing the many benefits of the technology would be like rejecting the internet in the 90s.

AI is a huge time saver, making tasks that used to take hours take only minutes or even seconds.

"If you're scared of something, then you're likely more dangerous with it than someone who is educated on it," according to Cleveland.

Fellows added that those who don't embrace the technology, from educators to companies, will lose out on a tool that is changing virtually every industry.

Fortunately, with the development of AI come AI detectors, which are one way Hick was able to identify the plagiarism.
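Commercial detectors don't disclose their methods, but one heuristic they are widely reported to weigh is "burstiness": human writing tends to vary sentence length more than model output does. A minimal toy sketch of that single signal (an illustration only, not any real detector's algorithm):

```python
import re
import statistics

def burstiness(text: str) -> float:
    """Standard deviation of sentence lengths, in words.

    A low score means very uniform sentences, which is one weak
    signal (among many) that a detector might weigh; it is not
    proof of machine authorship on its own.
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths)

uniform = "The cat sat down. The dog ran off. The bird flew away."
varied = "Stop. The weather that afternoon turned unexpectedly violent and cold. Why?"
print(burstiness(uniform) < burstiness(varied))  # True
```

Real detectors combine many such statistics (perplexity under a language model being the best known), which is also why they produce false positives and should never be a grade's sole evidence.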

Despite a warning in his syllabus this Spring semester, the first case wasnt the last.

"I went through exactly the same process. My first thought was not 'oh, well this is ChatGPT'; my first thought was 'well, that's a weird way to put this,' and eventually it clicked: 'Oh, it's AI again.'"

Two students, two semesters, two Fs for the final grade.

Hick has a warning for all educators, no matter the school, no matter the grade level.

"If we don't get used to the way plagiarism looks now, then more and more of it is going to sneak by."

Follow this link:

Artificial Intelligence: Use and misuse - WSPA 7News


US states most and least likely to use Artificial Intelligence revealed – Digital Journal

Posted: at 2:53 pm

Artificial intelligence, or AI, has been increasingly present in everyday life for decades, but the launch of the conversational robot ChatGPT marked a turning point in its perception. (AFP/File, Camille LAFFONT)

An assessment of public attitudes to artificial intelligence (AI) has revealed Utah to be the U.S. state most likely to use AI, with Oregon and Washington the second and third states most interested in using it.

At the other end of the scale, Mississippi is the state least likely to use AI (followed by Louisiana and Alabama).

The research was carried out by AI-driven website builder YACSS, which examined Google Keywords data for search terms frequently used by people interested in AI over the past 12 months. These terms were combined to find each state's average monthly search volume for AI-related terms per 100,000 people, along with each state's most common uses.

Utah's top placement was based on 202.9 monthly searches per 100,000 people for AI and AI-related tools.
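The per-capita metric YACSS describes is straightforward to reproduce: raw monthly search volume divided by population, scaled to 100,000 people. The statewide total below is a hypothetical figure back-derived from the reported rate and an approximate Utah population, not data from the study:

```python
def searches_per_100k(monthly_searches: int, population: int) -> float:
    """Average monthly searches per 100,000 residents."""
    return monthly_searches / population * 100_000

# Utah's reported rate was 202.9 per 100k; with a population of
# roughly 3.38 million, that implies about 6,858 searches per month
# statewide (illustrative back-calculation only).
rate = searches_per_100k(6_858, 3_380_000)
print(round(rate, 1))  # 202.9
```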

The full list is:

Although it is in its infancy, artificial intelligence is becoming more common in computer systems. In terms of the main applications, AI is being used most commonly for art across all fifty states, with voice generation its second most popular use.

With the above data, the fourth most AI-interested state is Vermont. Vermont uses AI the most for art, voice generator, music, text-to-video and animation. Searches for AI and AI-related terms average 173.08 per 100,000 people per month. Colorado is the fifth state most interested in AI, with an average of 170.03 searches for AI and AI-powered tools per month per 100,000 people. The state uses AI the most for art, followed by voice generator, music, animation and resume writing.

In a message sent to Digital Journal, YACSS commented: "The use of artificial intelligence in the U.S. is on the rise, and it's clear to see why. It is frequently used to reduce time spent on tedious tasks as well as provide users with endless creative possibilities, and this is all available at the touch of a button."

The rest is here:

US states most and least likely to use Artificial Intelligence revealed - Digital Journal


U.S. Patent filed by BYND Cannasoft to Expand its Artificial Intelligence and Machine Learning Algorithm to a Male Treatment Device – Yahoo Finance

Posted: at 2:53 pm

BYND Cannasoft Enterprises, Inc.

BYND Cannasoft Subsidiary Zigi Carmel Initiatives & Investments LTD. filed U.S. Provisional Patent Application 63461609 on April 25, 2023 covering the mechanical structure, operation, and controlling aspects of a treatment device monitored by sensors and capable of stimulating the male sexual organs based on user preferences

ASHKELON, Israel and VANCOUVER, British Columbia, April 27, 2023 (GLOBE NEWSWIRE) -- BYND Cannasoft Enterprises Inc. (Nasdaq: BCAN) (CSE: BYND) ("BYND Cannasoft" or the "Company") announced today that its Zigi Carmel Initiatives & Investments LTD. subsidiary filed U.S. Provisional Patent Application 63461609 on April 25, 2023, covering the mechanical structure, operation, and controlling aspects of a male treatment device for external use capable of gathering information and creating custom programs according to the collected data from the sensors and uploading the data to the cloud. This U.S. Provisional Patent Application marks BYND Cannasoft's third potential candidate that could introduce new advanced haptic experiences to the fast-growing sexual wellness and sextech market.

The male treatment device utilizes artificial intelligence and machine learning algorithms to control its operational parameters based on the user's physiological parameters. The user, or a partner, can control the device with a smartphone app. Data collected by the device's sensors can be uploaded to the cloud where it will be stored to remember user preferences to create a custom experience for the user.

BYND Cannasoft announced on March 8, 2023 that its Zigi Carmel Initiatives & Investments LTD. subsidiary filed U.S. Provisional Patent Application number 63450503 covering the mechanical structure, operation, and controlling aspects of its smart female treatment device. On April 25, 2023, the company announced it received a positive opinion from the Patent Cooperation Treaty (PCT) for its A.I.-based Female Treatment Device. The Patent Cooperation Treaty (PCT) assists applicants in seeking patent protection internationally for their inventions and currently has 157 contracting states. BYND Cannasoft intends to file a similar application with the PCT for its male treatment device.


An April 2023 industry report by Market Research Future projects the Sexual Wellness Market size could grow to $115.92 billion by 2030 from $84.89 billion in 2022. The report cites the growing prevalence of Sexually Transmitted Diseases (STDs), HIV infection, increasing government initiatives, and NGOs promoting contraceptives as the key market drivers dominating the market growth. According to Forbes, the Sextech Market is expected to grow to $52.7 billion by 2026 from its current $30 billion as online sales continue to grow. BYND Cannasoft plans to develop this A.I.-based smart treatment device for men, its A.I.-based smart treatment device for women, and its EZ-G device.

Yftah Ben Yaackov, CEO and Director of BYND Cannasoft, said, "As the multi-billion-dollar sexual wellness and sextech market continues to grow, the industry is undergoing tremendous changes in consumer preferences as devices are increasingly connected online and enabled with interactive content. In this market, A.I., machine learning, and haptic technology have the potential to personalize the operational parameters of sexual wellness devices based on the physiological parameters of the user." Mr. Ben Yaackov continued, "As a corporate lawyer, I recognize the value of licensing our potential A.I. and machine learning patent portfolio to customers in the sexual wellness market and producing innovative new products. The Board of BYND Cannasoft is committed to protecting the company's I.P. covering this potentially lucrative market and bringing this innovative technology to market."

About BYND Cannasoft Enterprises Inc.

BYND Cannasoft Enterprises is an Israeli-based integrated software and cannabis company. BYND Cannasoft owns and markets "Benefit CRM," a proprietary customer relationship management (CRM) software product enabling small and medium-sized businesses to optimize their day-to-day business activities such as sales management, personnel management, marketing, call center activities, and asset management. Building on our 20 years of experience in CRM software, BYND Cannasoft is developing an innovative new CRM platform to serve the needs of the medical cannabis industry by making it a more organized, accessible, and price-transparent market. The Cannabis CRM System will include a Job Management (BENEFIT) and a module system (CANNASOFT) for managing farms and greenhouses with varied crops. BYND Cannasoft owns the patent-pending intellectual property for the EZ-G device. This therapeutic device uses proprietary software to regulate the flow of low concentrations of CBD oil, hemp seed oil, and other natural oils into the soft tissues of the female reproductive system to potentially treat a wide variety of women's health issues. The EZ-G device includes technological advancements as a sex toy with a more realistic experience, and the prototype utilizes sensors to determine what enhances the users' pleasure. The user can control the device through a Bluetooth app installed on a smartphone or other portable device. The data will be transmitted and received from the device to and from the secure cloud using artificial intelligence (AI). The data is combined with other anonymized user preferences to improve its operation by increasing sexual satisfaction.

For further information, please refer to information available on the Company's website: http://www.cannasoft-crm.com, the CSE's website: http://www.thecse.com/en/listings/life-sciences/bynd-cannasoft-enterprises-inc and on SEDAR: http://www.sedar.com.

Gabi Kabazo
Chief Financial Officer
Tel: (604) 833-6820
Email: ir@cannasoft-crm.com

For Media and Investor Relations, please contact:

David L. Kugelman
(866) 692-6847 Toll Free - U.S. & Canada
(404) 281-8556 Mobile and WhatsApp
dk@atlcp.com
Skype: kugsusa

Cautionary Note Regarding Forward-Looking Statements

This press release contains forward-looking statements within the meaning of the Private Securities Litigation Reform Act of 1995 involving risks and uncertainties, which may cause results to differ materially from the statements made. We intend such forward-looking statements to be covered by the safe harbor provisions for forward-looking statements contained in Section 27A of the Securities Act of 1933, as amended, and Section 21E of the Securities Exchange Act of 1934, as amended. When used in this document, the words "may," "would," "could," "will," "intend," "plan," "anticipate," "believe," "estimate," "expect," "potential," "continue," "strategy," "future," "project," "target," and similar expressions are intended to identify forward-looking statements, though not all forward looking statements use these words or expressions. All statements contained in this press release other than statements of historical fact, including, without limitation, statements regarding our male treatment device, our Cannabis CRM platform, our expanded EZ-G patent application, our market growth, and our objectives for future operations, are forward looking statements. Additional regulatory standards may be required, including FDA approval or any other approval for the purpose of manufacturing, marketing, and selling the devices under therapeutic indications. There is no certainty that the aforementioned approvals will be received, and all the information in this release is forward-looking. Such statements reflect the company's current views with respect to future events and are subject to such risks and uncertainties. 
Many factors could cause actual results to differ materially from the statements made, including unanticipated regulatory requests and delays, final patent approvals, and those factors discussed in filings made by the company with the Canadian securities regulatory authorities, including (without limitation) in the company's management's discussion and analysis for the year ended December 31, 2022 and annual information form dated March 31, 2023, which are available under the company's profile at http://www.sedar.com, and in filings made with the U.S. Securities and Exchange Commission. Should one or more of these factors occur, or should assumptions underlying the forward-looking statements prove incorrect, actual results may vary materially from those described herein as intended, planned, anticipated, or expected. We do not intend and do not assume any obligation to update these forward-looking statements, except as required by law. Any such forward-looking statements represent management's estimates as of the date of this press release. While we may elect to update such forward-looking statements at some point in the future, we disclaim any obligation to do so, even if subsequent events cause our views to change. Shareholders are cautioned not to put undue reliance on such forward-looking statements.

Link:

U.S. Patent filed by BYND Cannasoft to Expand its Artificial Intelligence and Machine Learning Algorithm to a Male Treatment Device - Yahoo Finance


Artificial intelligence is infiltrating health care. We shouldnt let it make all the decisions. – MIT Technology Review

Posted: at 2:53 pm

This article is from The Checkup, MIT Technology Review's weekly biotech newsletter. To receive it in your inbox every Thursday, sign up here.

Would you trust medical advice generated by artificial intelligence? It's a question I've been thinking over this week, in view of yet more headlines proclaiming that AI technologies can diagnose a range of diseases. The implication is often that they're better, faster, and cheaper than medically trained professionals.

Many of these technologies have well-known problems. They're trained on limited or biased data, and they often don't work as well for women and people of color as they do for white men. Not only that, but some of the data these systems are trained on are downright wrong.


There's another problem. As these technologies begin to infiltrate health-care settings, researchers say we're seeing a rise in what's known as AI paternalism. Paternalism in medicine has been problematic since the dawn of the profession. But now, doctors may be inclined to trust AI at the expense of a patient's own lived experiences, as well as their own clinical judgment.

AI is already being used in health care. Some hospitals use the technology to help triage patients. Some use it to aid diagnosis, or to develop treatment plans. But the true extent of AI adoption is unclear, says Sandra Wachter, a professor of technology and regulation at the University of Oxford in the UK.

"Sometimes we don't actually know what kinds of systems are being used," says Wachter. But we do know that their adoption is likely to increase as the technology improves and as health-care systems look for ways to reduce costs, she says.

Research suggests that doctors may already be putting a lot of faith in these technologies. In a study published a few years ago, oncologists were asked to compare their diagnoses of skin cancer with the conclusions of an AI system. Many of them accepted the AIs results, even when those results contradicted their own clinical opinion.

There's a very real risk that we'll come to rely on these technologies to a greater extent than we should. And here's where paternalism could come in.

Paternalism is captured by the idiom "the doctor knows best," write Melissa McCradden and Roxanne Kirsch of the Hospital for Sick Children in Ontario, Canada, in a recent scientific journal paper. The idea is that medical training makes a doctor the best person to make a decision for the person being treated, regardless of that person's feelings, beliefs, culture, and anything else that might influence the choices any of us make.

"Paternalism can be recapitulated when AI is positioned as the highest form of evidence, replacing the all-knowing doctor with the all-knowing AI," McCradden and Kirsch continue. They say there is a rising trend toward "algorithmic paternalism." This would be problematic for a whole host of reasons.

For a start, as mentioned above, AI isn't infallible. These technologies are trained on historical data sets that come with their own flaws. "You're not sending an algorithm to med school and teaching it how to learn about the human body and illnesses," says Wachter.

As a result, AI "cannot understand, only predict," write McCradden and Kirsch. An AI could be trained to learn which patterns in skin cell biopsies have been associated with a cancer diagnosis in the past, for example. But the doctors who made those past diagnoses and collected that data might have been more likely to miss cases in people of color.

And identifying past trends won't necessarily tell doctors everything they need to know about how a patient's treatment should continue. Today, doctors and patients should collaborate in treatment decisions. Advances in AI use shouldn't diminish patient autonomy.

So how can we prevent that from happening? One potential solution involves designing new technologies that are trained on better data. An algorithm could be trained on information about the beliefs and wishes of various communities, as well as diverse biological data, for instance. Before we can do that, we need to actually go out and collect that data, an expensive endeavor that probably won't appeal to those who are looking to use AI to cut costs, says Wachter.

Designers of these AI systems should carefully consider the needs of the people who will be assessed by them. And they need to bear in mind that technologies that work for some groups won't necessarily work for others, whether that's because of their biology or their beliefs. "Humans are not the same everywhere," says Wachter.

The best course of action might be to use these new technologies in the same way we use well-established ones. X-rays and MRIs are used to help inform a diagnosis, alongside other health information. People should be able to choose whether they want a scan, and what they would like to do with their results. We can make use of AI without ceding our autonomy to it.

Philip Nitschke, otherwise known as Dr. Death, is developing an AI that can help people end their own lives. My colleague Will Douglas Heaven explored the messy morality of letting AI make life-and-death decisions in this feature from the mortality issue of our magazine.

In 2020, hundreds of AI tools were developed to aid the diagnosis of covid-19 or predict how severe specific cases would be. None of them worked, as Will reported a couple of years ago.

Will has also covered how AI that works really well in a lab setting can fail in the real world.

My colleague Melissa Heikkilä has explored whether AI systems need to come with cigarette-pack-style health warnings in a recent edition of her newsletter, The Algorithm.

Tech companies are keen to describe their AI tools as ethical. Karen Hao put together a list of the top 50 or so words companies can use to show they care without incriminating themselves.

Scientists have used an imaging technique to reveal the long-hidden contents of six sealed ancient Egyptian animal coffins. They found broken bones, a lizard skull, and bits of fabric. (Scientific Reports)

Genetic analyses can suggest targeted treatments for people with colorectal cancer, but people with African ancestry have mutations that are less likely to benefit from these treatments than those with European ancestry. The finding highlights how important it is for researchers to use data from diverse populations. (American Association for Cancer Research)

Sri Lanka is considering exporting 100,000 endemic monkeys to a private company in China. A cabinet spokesperson has said the monkeys are destined for Chinese zoos, but conservationists are worried that the animals will end up in research labs. (Reuters)

Would you want to have electrodes inserted into your brain if they could help treat dementia? Most people who have a known risk of developing the disease seem to be open to the possibility, according to a small study. (Brain Stimulation)

A gene therapy for a devastating disease that affects the muscles of some young boys could be approved following a decision due in the coming weeks, despite not having completed clinical testing. (STAT)

See original here:

Artificial intelligence is infiltrating health care. We shouldnt let it make all the decisions. - MIT Technology Review
