
Category Archives: Artificial Intelligence

What Can ChatGPT Tell Us About the Evolution of Artificial … – Unite.AI

Posted: April 27, 2023 at 2:53 pm

In the last decade, artificial intelligence (AI) has elicited both dreams of a massive transformation in the tech industry and a deep anxiety surrounding its potential ramifications. Elon Musk, a leading voice in the tech industry, has demonstrated this duality: he promises a world of autonomous AI-powered cars while warning us of the risks associated with AI, even calling for a pause in AI development. This is especially ironic considering Musk was an early investor in OpenAI, founded in 2015.

One of the most exciting and concerning developments riding the current wave of AI research is autonomous AI. Autonomous AI systems can perform tasks, make decisions, and adapt to new situations on their own, without continual human oversight or task-by-task programming. One of the best-known examples at the moment is ChatGPT, a major milestone in the evolution of artificial intelligence. Let's look at how ChatGPT came about, where it's headed, and what the technology can tell us about the future of AI.

The tale of artificial intelligence is a captivating one of progress and collaboration across disciplines. It began in the early 20th century with the pioneering efforts of Santiago Ramón y Cajal, a neuroscientist whose studies of the brain's neurons laid the groundwork for the concept of neural networks, a cornerstone of modern AI. Neural networks are computer systems that emulate the structure of the human brain and nervous system to produce machine-based intelligence. Some time later, Alan Turing was busy developing the modern computer and proposing the Turing Test, a means of evaluating whether a machine could display human-like intelligent behavior. These developments spurred a wave of interest in AI.

As a result, the 1950s saw John McCarthy, who coined the term artificial intelligence, explore the prospects of AI alongside Marvin Minsky and Claude Shannon, while Frank Rosenblatt developed the perceptron, an early neural network. The following decades saw two major breakthroughs. The first was expert systems, which are AI systems individually designed to perform niche, industry-specific tasks. The second was natural language processing applications, like early chatbots. With the arrival of large datasets and ever-improving computing power in the 2000s and 2010s, machine learning techniques flourished, leading us to autonomous AI.

This significant step enables AI systems to perform complex tasks without the need for case-by-case programming, opening them to a wide range of uses. One such autonomous system, ChatGPT from OpenAI, has of course recently become widely known for its amazing ability to learn from vast amounts of data and generate coherent, human-like responses.

So what is the basis of ChatGPT? We humans have two basic capabilities that enable us to think. We possess knowledge, whether it's about physical objects or concepts, and we possess an understanding of those things in relation to complex structures like language, logic, etc. Being able to transfer that knowledge and understanding to machines is one of the toughest challenges in AI.

With knowledge alone, OpenAI's GPT-4 model couldn't handle more than a single piece of information. With context alone, the technology couldn't understand anything about the objects or concepts it was contextualizing. But combine both, and something remarkable happens. The model can become autonomous. It can understand and learn. Apply that to text, and you have ChatGPT. Apply it to cars, and you have autonomous driving, and so on.

OpenAI isn't alone in its field, and many companies have been developing machine learning algorithms and utilizing neural networks to produce algorithms that can handle both knowledge and context for decades. So what changed when ChatGPT came to the market? Some people have pointed to the staggering amount of data provided by the internet as the big change that fueled ChatGPT. However, if that were all that was needed, it's likely that Google would have beaten OpenAI because of Google's dominance over all of that data. So how did OpenAI do it?

One of OpenAI's secret weapons is a relatively new technique called reinforcement learning from human feedback (RLHF). OpenAI used RLHF to train its algorithm to understand both knowledge and context. OpenAI didn't create the idea of RLHF, but the company was among the first to rely on it so wholly for the development of a large language model (LLM) like ChatGPT.

RLHF simply allowed the algorithm to self-correct based on feedback. So while ChatGPT is autonomous in how it produces an initial response to a prompt, it has a feedback system that lets it know whether its response was accurate or in some way problematic. That means it can constantly get better and better without significant programming changes. This model resulted in a fast-learning chat system that quickly took the world by storm.
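To make the feedback loop concrete, here is a deliberately tiny sketch of the idea in Python. Everything in it is a stand-in: a production RLHF system trains a reward model on human preference rankings and updates the language model's weights with a reinforcement-learning algorithm such as PPO, not the toy heuristic and print statement below.

```python
import random

class ToyPolicy:
    """Stand-in for a language model; returns canned responses."""
    def generate(self, prompt):
        return random.choice([
            f"short answer to {prompt!r}",
            f"a much longer, more detailed answer to {prompt!r}",
        ])

    def reinforce(self, prompt, response):
        # A real system would update the model's weights here,
        # typically with PPO plus a KL penalty against the base model.
        print(f"reinforcing: {response}")

class ToyRewardModel:
    """Stand-in for a reward model fit to human preference rankings."""
    def score(self, prompt, response):
        return len(response)  # toy heuristic in place of learned preferences

def rlhf_step(policy, reward_model, prompts):
    for prompt in prompts:
        a, b = policy.generate(prompt), policy.generate(prompt)
        # Keep whichever candidate the (stand-in) human feedback prefers.
        preferred = a if reward_model.score(prompt, a) >= reward_model.score(prompt, b) else b
        policy.reinforce(prompt, preferred)

rlhf_step(ToyPolicy(), ToyRewardModel(), ["What is RLHF?"])
```

The key design point the sketch captures is that the feedback signal comes from a learned model of human preferences rather than from hand-written rules, which is why the system can keep improving without significant programming changes.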

The new age of autonomous AI has begun. In the past, we had machines that could understand various concepts to a degree, but only in highly specific domains and industries. For example, industry-specific AI software has been used in medicine for some time. But the search for autonomous or general AI, meaning AI that could function on its own to perform a wide variety of tasks in various fields with a degree of human-like intelligence, finally produced globally noteworthy results in 2022, when ChatGPT handily and decisively passed the Turing test.

Understandably, some people are starting to fear that their expertise, jobs, and even uniquely human qualities may get replaced by intelligent AI systems like ChatGPT. On the other hand, passing the Turing test isn't an ideal indicator of how human-like a particular AI system may be.

For example, Roger Penrose, who won the Nobel Prize in Physics in 2020, argues that passing the Turing test does not necessarily indicate true intelligence or consciousness. He argues that there is a fundamental difference between the way that computers and humans process information and that machines will never be able to replicate the type of human thought processes that give rise to consciousness.

On this view, passing the Turing test is not a true measure of intelligence, because it merely tests a machine's ability to imitate human behavior rather than its ability to truly understand and reason about the world. True intelligence, the argument goes, requires consciousness and the ability to understand the nature of reality, which cannot be replicated by a machine. If so, then far from replacing us, ChatGPT and other similar software will simply provide tools to help us improve and increase efficiency in a variety of fields.

So, machines will be able to complete many tasks autonomously, in ways we never thought possible, from understanding and writing content to securing vast amounts of information, performing delicate surgeries, and driving our cars. But for now, at least in this current age of technology, capable workers needn't fear for their jobs. Even autonomous AI systems don't have human intelligence; they simply perform better than humans at certain tasks. They aren't more intelligent than us overall, and they don't pose a significant threat to our way of life, at least not in this wave of AI development.

Read the original:

What Can ChatGPT Tell Us About the Evolution of Artificial ... - Unite.AI

Posted in Artificial Intelligence | Comments Off on What Can ChatGPT Tell Us About the Evolution of Artificial … – Unite.AI

New artificial intelligence feature comes to Snapchat – Inklings News

Posted: at 2:53 pm

"I'm here to chat with you and keep you company! Is there anything you'd like to talk about?" my artificial intelligence asked me. The second I questioned its arrival, I didn't enjoy the idea of this robot keeping me company. The first thing I did was try to get rid of this pest. That's when I realized I couldn't. This bot was stuck at the top of my feed, the genetically colorful, alien-like Bitmoji always glaring back at me.

Artificial intelligence, or AI, used to be a feature only for Snapchat+ users, who pay either $3.99 per month or $29.99 per year for access to extra features. However, beginning on April 19, all Snapchat users were greeted with a new user at the top of the screen as they opened the app. Users never got the option to add this AI as a friend, yet there it was, automatically pinned to the top of the screen.

You are able to communicate with your AI in the same ways you can with your friends, like sending pictures that appear to delete immediately. Once you send an image, the AI will send a chat attempting to guess what your image is of. Sending a picture of a car window would result in the message, "Looks like you're on the move! Hope you're having a safe and fun journey."


Staples students have mixed feelings regarding the new Snapchat feature.

"It's very futuristic," Avery Johnson '25 said. "I feel like I'm being stalked or hacked by Snapchat."

Though many would like to eliminate it, some feel it is not worth the hassle.

"I don't think I would consider paying to remove this feature," Noah Wolff '25 said. "It doesn't bother me enough."

Elijah Debrito '25 originally enjoyed the new feature, but after a couple of days, it got old. "I don't like it anymore," Debrito said. "It's creepy. It says it can't see your photos, but then you send it a snap and it can tell what you're doing."

Students also have recommendations to improve the functionality of the AI for everyone.

"I would recommend it develops more diverse responses," Johnson said. "Sometimes I just want to strangle the AI from the screen when it doesn't understand what I'm saying."

According to Snapchat's support page, "Just like real friends, the more you interact with My AI the better it gets to know you, and the more relevant the responses will be." The same page also states, "You should also avoid sharing confidential or sensitive information with My AI."

Overall, many would prefer having the option to decline this new friend.

"I think that they should make it so you can get rid of it," Julia Coda '25 said.

Original post:

New artificial intelligence feature comes to Snapchat - Inklings News

Posted in Artificial Intelligence | Comments Off on New artificial intelligence feature comes to Snapchat – Inklings News

Artificial intelligence poised to hinder, not help, access to justice – Reuters

Posted: at 2:53 pm

April 25 (Reuters) - The advent of ChatGPT, the fastest-growing consumer application in history, has sparked enthusiasm and concern about the potential for artificial intelligence to transform the legal system.

From chatbots that conduct client intake, to tools that assist with legal research, document management, even writing legal briefs, AI has been touted for its potential to increase efficiency in the legal industry. It's also been recognized for its ability to help close the access-to-justice gap by making legal help and services more broadly accessible to marginalized groups.

Most low-income U.S. households deal with at least one civil legal problem a year, concerning matters like housing, healthcare, child custody and protection from abuse, according to the Legal Services Corp. They don't receive legal help for 92% of those problems.

Moreover, our poorly funded public defense system for criminal matters has been broken for decades.

AI and similar technologies show promise in their ability to democratize legal services, including applications such as online dispute resolution and automated document preparation.

For example, A2J Author uses decision trees, a simple kind of AI, to build document preparation tools for complex filings in housing law, public benefits law and more (a toy sketch of the approach follows below). The non-profit JustFix provides online tools that help with a variety of landlord-tenant issues. And apps have been developed to help people with criminal expungement, to prepare for unemployment hearings, and even to get divorced.
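To make the decision-tree idea concrete, here is a minimal, hypothetical sketch of a guided interview that routes a user to a document. The questions and document names are invented for illustration and are not A2J Author's actual content.

```python
# Hypothetical guided-interview decision tree for a document-preparation
# tool, loosely in the style the article attributes to A2J Author.
# Questions and outcomes are invented for illustration only.

TREE = {
    "question": "Are you a tenant facing eviction?",
    "yes": {
        "question": "Did you receive a written eviction notice?",
        "yes": {"document": "Answer to eviction complaint"},
        "no": {"document": "Request for proper written notice"},
    },
    "no": {"document": "General housing-rights information sheet"},
}

def run_interview(node):
    # Walk the tree until a leaf names the document to prepare.
    while "document" not in node:
        answer = input(node["question"] + " (yes/no) ").strip().lower()
        node = node["yes"] if answer.startswith("y") else node["no"]
    return node["document"]

if __name__ == "__main__":
    print("Suggested filing:", run_interview(TREE))
```

The appeal of this design for legal aid is its auditability: every path through the tree can be reviewed by a lawyer, unlike the opaque outputs of a large language model.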

Still, there's more reason to be wary than optimistic about AI's potential effects on access to justice.

Much of the existing technology and breakneck momentum in the industry is simply not geared toward the interests of underserved populations, according to several legal industry analysts and experts on the intersection of law and technology. Despite the technology's potential, some warned that the current trajectory actually runs the risk of exacerbating existing disparities.

Rashida Richardson, an assistant professor at Northeastern University School of Law, told me that AI has lots of potential, while stressing that there hasn't been enough public discussion of the many limitations of AI and of data itself. Richardson has served as technology adviser to the White House and Federal Trade Commission.

"Fundamentally, problems of access to justice are about deeper structural inequities, not access to technology," Richardson said.

It's critical to recognize that the development of AI technology is overwhelmingly unregulated and is driven by market forces, which categorically favor powerful, wealthy actors. After all, tech companies are not developing AI for free, and their interest is in creating a product attractive to those who can pay for it.

"Your ability to enjoy the benefits of any new technology corresponds directly to your ability to access that technology," said Jordan Furlong, a legal industry analyst and consultant, noting that ChatGPT Plus costs $20 a month, for example.

Generative AI has fueled a new tech gold rush in "big law" and other industries, and those projects can sometimes cost millions, Reuters reported on April 4.

Big law firms and legal service providers are integrating AI search tools into their workflows and some have partnered with tech companies to develop applications in-house.

Global law firm Allen & Overy announced in February that its lawyers are now using chatbot-based AI technology from a startup called Harvey to automate some legal document drafting and research, for example. Harvey received a $5 million investment last year in a funding round, Reuters reported in February. Last month, PricewaterhouseCoopers said 4,000 of its legal professionals will also begin using the generative AI tool.

Representatives of PricewaterhouseCoopers and Allen & Overy did not respond to requests for comment.

But legal aid organizations, public defenders and civil rights lawyers who serve minority and low-income groups simply don't have the funds to develop or co-develop AI technology, nor to contract for AI applications at scale.

The resources problem is reflected in the contours of the legal market itself, which is essentially two distinct sectors: one that represents wealthy organizational clients, and another that works for consumers and individuals, said William Henderson, a professor at the Indiana University Maurer School of Law.

Americans spent about $84 billion on legal services in 2021, according to Henderson's research and U.S. Census Bureau data. By contrast, businesses spent $221 billion, generating nearly 70% of legal services industry revenue.

Those disparities seem to be reflected in the development of legal AI thus far.

A 2019 study of digital legal technologies in the U.S. by Rebecca Sandefur, a sociologist at Arizona State University, identified more than 320 digital technologies that assist non-lawyers with justice problems. But Sandefur's research also determined that the applications don't make a significant difference in terms of improving access to legal help for low-income and minority communities. Those groups were less likely to be able to use the tools due to fees charged, limited internet access, language or literacy barriers, and poor technology design.

Sandefur's report identified other hurdles to innovation, including the challenges of coordination among innumerable county, state and federal court systems, and "the legal profession's robust monopoly on the provision of legal advice," referring to laws and rules restricting non-lawyer ownership of businesses that engage in the practice of law.

Drew Simshaw, a Gonzaga University School of Law professor, told me that many non-lawyers are "highly-motivated" to develop in this area but are concerned about crossing the line into unauthorized practice of law. And there isn't a uniform definition of what constitutes unauthorized practice across jurisdictions, Simshaw said.

On balance, it's clear that AI certainly has great potential to disrupt and improve access to justice. But it's much less clear that we have the infrastructure or political will to make that happen.

Our Standards: The Thomson Reuters Trust Principles.

Opinions expressed are those of the author. They do not reflect the views of Reuters News, which, under the Trust Principles, is committed to integrity, independence, and freedom from bias.

Thomson Reuters

Hassan Kanu writes about access to justice, race, and equality under law. Kanu, who was born in Sierra Leone and grew up in Silver Spring, Maryland, worked in public interest law after graduating from Duke University School of Law. After that, he spent five years reporting on mostly employment law. He lives in Washington, D.C. Reach Kanu at hassan.kanu@thomsonreuters.com

Original post:

Artificial intelligence poised to hinder, not help, access to justice - Reuters

Posted in Artificial Intelligence | Comments Off on Artificial intelligence poised to hinder, not help, access to justice – Reuters

How to play the artificial intelligence boom – Investors Chronicle

Posted: at 2:53 pm

A general-purpose technology is one that impacts the whole economy. The invention of the steam engine, for example, changed what people consumed, how they travelled and where they lived. Centuries later, after a variety of modern technologies have had significant impacts of their own, today it is artificial intelligence (AI) for which the biggest promises are being made.

Strange as it may sound, there are potential parallels, in terms of both efficiency gains and investment manias, between steam power and AI. In 1698, the original steam pump was patented by Thomas Savery, but it wasn't until James Watt patented the improved Boulton-Watt engine in 1769 that steam started to transform the economy.

Factories had previously been powered by horses, wind or water, which imposed physical restrictions on where they could be located. The clustering enabled by the use of steam power, combined with the invention of the train, allowed the efficient transfer of materials, goods and ideas between hubs. A flywheel effect was set in motion. Between 1800 and 1900, UK gross domestic product (GDP) per capita rose 109 per cent, from £2,331 to £4,930 (the figures are adjusted for 2013 prices).

Visit link:

How to play the artificial intelligence boom - Investors Chronicle

Posted in Artificial Intelligence | Comments Off on How to play the artificial intelligence boom – Investors Chronicle

Artificial Intelligence: Use and misuse – WSPA 7News

Posted: at 2:53 pm

(WSPA) Artificial Intelligence is now at your fingertips like never before.

From ChatGPT to Microsoft's Bing, to Midjourney or Snapchat's new My AI, the number of interactive AI programs is growing.

With that, the average user is starting to understand the endless possibilities of what AI can do.

Still, along with that comes some serious words of warning.

7NEWS looked into how the technology is already being misused and how it could affect everything from safety online to the job market.

Darren Hick, a professor at Furman University, has seen firsthand how the technology comes with some cautionary tales.

"We always thought it was just over the horizon, and always just over the horizon, and last year it arrived, and we weren't ready for it," he said.

As an Assistant Professor of Philosophy, Hick was one of the first to catch a new type of plagiarism using AI, two weeks after ChatGPT was launched to the public, when one of his students turned in a final paper.

"Normally when a student plagiarizes a paper, this is a last-minute panic and so it sort of screams that it has been thrown together in the last second," according to Hick. "This wasn't that. This was clean, this was really nicely put together, but had all these other factors. And I had heard about ChatGPT, and it finally dawned on me that maybe this was that."

That's when Hick started testing out ChatGPT.

The free, publicly available program interacts a lot like a confident, well-spoken human.

You can ask it any question, like "Describe AI so a 5-year-old can understand," and it spits out appropriate answers like: "It's like having a smart robot that can learn and think like a human."

No matter how many times you ask the question, the answer is always slightly different, again, just like a human.

You can even ask it to write in different styles, from Shakespeare to poetry.
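For the technically curious, here is a minimal sketch of how such a question can be posed programmatically, using OpenAI's Python library as it existed in early 2023. The nonzero temperature setting, which samples among likely words rather than always picking the single most likely one, is one reason repeated questions produce slightly different answers. The API key and model name are placeholders.

```python
# Minimal sketch of querying a ChatGPT-style model via the OpenAI Python
# library (pre-1.0 interface, circa early 2023). Assumes `pip install openai`.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; supply your own key

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user", "content": "Describe AI so a 5-year-old can understand."},
    ],
    temperature=0.8,  # nonzero temperature samples among likely words,
                      # so the same question yields slightly different answers
)

print(response["choices"][0]["message"]["content"])
```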

However, with the technology still in its infancy, ChatGPT is full of inaccuracies.

When we asked the AI to tell us about Diane Lee from WSPA, it said she is a former journalist who may have left the station.

Kylan Cleveland, with the IT firm Cyber Solutions in Anderson, is quick to point out that right now programs like ChatGPT don't scour the web for information, which is why they are not up to date.

Programs like ChatGPT are only working with what is fed into them and have limited knowledge past 2021, as is stated on ChatGPT's home page.

Cleveland embraces the technology but also is leery of the day these programs gain access to up-to-date information.

"When we can get to the point where AI has current data, I think that's when we should really take a step back and see what type of security measures we can put in place to prevent it from being almost predictive," he said.

AI could also pose a major shakeup to the job market, with some experts who study the technology, like Thomas Fellows, predicting many white-collar jobs from accounting to marketing will be displaced.

"If you don't have a job that has true human judgment and value, it could be taken away, plain and simple," Fellows said.

Fellows has worked in software jobs where the main goal was to automate tasks. He believes AI will be akin to what machines have been to some factory jobs.

The jobs he said are most at risk are white-collar roles like those in accounting and marketing.

Still, no matter the warnings, educators and businesses alike said not embracing the many benefits of the technology would be like rejecting the internet in the '90s.

AI is a huge time saver, turning tasks that used to take hours into ones that take only minutes or even seconds.

"If you're scared of something, then you're likely more dangerous with it than someone who is educated on it," according to Cleveland.

Fellows added that those who don't embrace the technology, from educators to companies, will lose out on a tool that is changing virtually every industry.

Fortunately, with the development of AI come AI detectors, which are one way Hick was able to identify plagiarism.

Despite a warning in his syllabus this spring semester, the first case wasn't the last.

"I went through exactly the same process. My first thought was not 'oh, well, this is ChatGPT'; my first thought was 'well, that's a weird way to put this,' and eventually it clicked: 'Oh, it's AI again.'"

Two students, two semesters, two Fs for the final grade.

Hick has a warning for all educators, no matter the school, no matter the grade level.

"If we don't get used to the way plagiarism looks like now, then more and more of it is going to sneak by."

Follow this link:

Artificial Intelligence: Use and misuse - WSPA 7News

Posted in Artificial Intelligence | Comments Off on Artificial Intelligence: Use and misuse – WSPA 7News

U.S. Patent filed by BYND Cannasoft to Expand its Artificial Intelligence and Machine Learning Algorithm to a Male Treatment Device – Yahoo Finance

Posted: at 2:53 pm

BYND Cannasoft Enterprises, Inc.

BYND Cannasoft Subsidiary Zigi Carmel Initiatives & Investments LTD. filed U.S. Provisional Patent Application 63461609 on April 25, 2023 covering the mechanical structure, operation, and controlling aspects of a treatment device monitored by sensors and capable of stimulating the male sexual organs based on user preferences

ASHKELON, Israel and VANCOUVER, British Columbia, April 27, 2023 (GLOBE NEWSWIRE) -- BYND Cannasoft Enterprises Inc. (Nasdaq: BCAN) (CSE: BYND) ("BYND Cannasoft" or the "Company") announced today that its Zigi Carmel Initiatives & Investments LTD. subsidiary filed U.S. Provisional Patent Application 63461609 on April 25, 2023, covering the mechanical structure, operation, and controlling aspects of a male treatment device for external use capable of gathering information and creating custom programs according to the collected data from the sensors and uploading the data to the cloud. This U.S. Provisional Patent Application marks BYND Cannasoft's third potential candidate that could introduce new advanced haptic experiences to the fast-growing sexual wellness and sextech market.

The male treatment device utilizes artificial intelligence and machine learning algorithms to control its operational parameters based on the user's physiological parameters. The user, or a partner, can control the device with a smartphone app. Data collected by the device's sensors can be uploaded to the cloud where it will be stored to remember user preferences to create a custom experience for the user.

BYND Cannasoft announced on March 8, 2023, that its Zigi Carmel Initiatives & Investments LTD. subsidiary filed U.S. Provisional Patent Application number 63450503 covering the mechanical structure, operation, and controlling aspects of its smart female treatment device. On April 25, 2023, the company announced it received a positive opinion from the Patent Cooperation Treaty (PCT) for its A.I.-based Female Treatment Device. The Patent Cooperation Treaty (PCT) assists applicants in seeking patent protection internationally for their inventions and currently has 157 contracting states. BYND Cannasoft intends to file a similar application with the PCT for its male treatment device.


An April 2023 industry report by Market Research Future projects the Sexual Wellness Market size could grow to $115.92 billion by 2030 from $84.89 billion in 2022. The report cites the growing prevalence of Sexually Transmitted Diseases (STDs), HIV infection, increasing government initiatives, and NGOs promoting contraceptives as the key market drivers dominating the market growth. According to Forbes, the Sextech Market is expected to grow to $52.7 billion by 2026 from its current $30 billion as online sales continue to grow. BYND Cannasoft plans to develop this A.I.-based smart treatment device for men, its A.I.-based smart treatment device for women, and its EZ-G device.

Yftah Ben Yaackov, CEO and Director of BYND Cannasoft, said, "As the multi-billion-dollar sexual wellness and sextech market continues to grow, the industry is undergoing tremendous changes in consumer preferences as devices are increasingly connected online and enabled with interactive content. In this market, A.I., machine learning, and haptic technology have the potential to personalize the operational parameters of sexual wellness devices based on the physiological parameters of the user." Mr. Ben Yaackov continued, "As a corporate lawyer, I recognize the value of licensing our potential A.I. and machine learning patent portfolio to customers in the sexual wellness market and producing innovative new products. The Board of BYND Cannasoft is committed to protecting the company's I.P. covering this potentially lucrative market and bringing this innovative technology to market."

About BYND Cannasoft Enterprises Inc.

BYND Cannasoft Enterprises is an Israeli-based integrated software and cannabis company. BYND Cannasoft owns and markets "Benefit CRM," a proprietary customer relationship management (CRM) software product enabling small and medium-sized businesses to optimize their day-to-day business activities such as sales management, personnel management, marketing, call center activities, and asset management. Building on our 20 years of experience in CRM software, BYND Cannasoft is developing an innovative new CRM platform to serve the needs of the medical cannabis industry by making it a more organized, accessible, and price-transparent market. The Cannabis CRM System will include a Job Management system (BENEFIT) and a module system (CANNASOFT) for managing farms and greenhouses with varied crops. BYND Cannasoft owns the patent-pending intellectual property for the EZ-G device. This therapeutic device uses proprietary software to regulate the flow of low concentrations of CBD oil, hemp seed oil, and other natural oils into the soft tissues of the female reproductive system to potentially treat a wide variety of women's health issues. The EZ-G device includes technological advancements as a sex toy with a more realistic experience, and the prototype utilizes sensors to determine what enhances the users' pleasure. The user can control the device through a Bluetooth app installed on a smartphone or other portable device. The data will be transmitted and received from the device to and from the secure cloud using artificial intelligence (AI). The data is combined with other anonymous user preferences to improve its operation by increasing sexual satisfaction.

For further information, please refer to information available on the Company's website: http://www.cannasoft-crm.com, the CSE's website: http://www.thecse.com/en/listings/life-sciences/bynd-cannasoft-enterprises-inc and on SEDAR: http://www.sedar.com.

Gabi Kabazo
Chief Financial Officer
Tel: (604) 833-6820
Email: ir@cannasoft-crm.com

For Media and Investor Relations, please contact:

David L. Kugelman
(866) 692-6847 Toll Free - U.S. & Canada
(404) 281-8556 Mobile and WhatsApp
dk@atlcp.com
Skype: kugsusa

Cautionary Note Regarding Forward-Looking Statements

This press release contains forward-looking statements within the meaning of the Private Securities Litigation Reform Act of 1995 involving risks and uncertainties, which may cause results to differ materially from the statements made. We intend such forward-looking statements to be covered by the safe harbor provisions for forward-looking statements contained in Section 27A of the Securities Act of 1933, as amended, and Section 21E of the Securities Exchange Act of 1934, as amended. When used in this document, the words "may," "would," "could," "will," "intend," "plan," "anticipate," "believe," "estimate," "expect," "potential," "continue," "strategy," "future," "project," "target," and similar expressions are intended to identify forward-looking statements, though not all forward-looking statements use these words or expressions. All statements contained in this press release other than statements of historical fact, including, without limitation, statements regarding our male treatment device, our Cannabis CRM platform, our expanded EZ-G patent application, our market growth, and our objectives for future operations, are forward-looking statements. Additional regulatory standards may be required, including FDA approval or any other approval for the purpose of manufacturing, marketing, and selling the devices under therapeutic indications. There is no certainty that the aforementioned approvals will be received, and all the information in this release is forward-looking. Such statements reflect the company's current views with respect to future events and are subject to such risks and uncertainties. Many factors could cause actual results to differ materially from the statements made, including unanticipated regulatory requests and delays, final patents approval, and those factors discussed in filings made by the company with the Canadian securities regulatory authorities, including (without limitation) in the company's management's discussion and analysis for the year ended December 31, 2022 and annual information form dated March 31, 2023, which are available under the company's profile at www.sedar.com, and in filings made with the U.S. Securities and Exchange Commission. Should one or more of these factors occur, or should assumptions underlying the forward-looking statements prove incorrect, actual results may vary materially from those described herein as intended, planned, anticipated, or expected. We do not intend and do not assume any obligation to update these forward-looking statements, except as required by law. Any such forward-looking statements represent management's estimates as of the date of this press release. While we may elect to update such forward-looking statements at some point in the future, we disclaim any obligation to do so, even if subsequent events cause our views to change. Shareholders are cautioned not to put undue reliance on such forward-looking statements.

Link:

U.S. Patent filed by BYND Cannasoft to Expand its Artificial Intelligence and Machine Learning Algorithm to a Male Treatment Device - Yahoo Finance

Posted in Artificial Intelligence | Comments Off on U.S. Patent filed by BYND Cannasoft to Expand its Artificial Intelligence and Machine Learning Algorithm to a Male Treatment Device – Yahoo Finance

US states most and least likely to use Artificial Intelligence revealed – Digital Journal

Posted: at 2:53 pm

Artificial intelligence, or AI, has been increasingly present in everyday life for decades, but the launch of the conversational robot ChatGPT marked a turning point in its perception. (AFP/File: Camille Laffont)

An assessment of public attitudes to artificial intelligence (AI) has revealed Utah to be the U.S. state most likely to use AI. Oregon and Washington are the second and third states most interested in using AI.

At the other end of the scale, Mississippi is the state least likely to use AI (followed by Louisiana and Alabama).

The research was carried out by AI-driven website builder YACSS, which examined Google Keywords data on search terms frequently used by people interested in AI over the past 12 months. These terms were combined to find each state's average monthly search volume for AI-related terms per 100,000 people, as well as each state's most common uses.

Utah's place at the top was based on 202.9 searches per 100,000 people for AI and AI-related tools.
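As a back-of-the-envelope illustration of that normalization, the per-capita figure is just raw search volume scaled by population. The numbers below are invented to show the arithmetic and are not YACSS's underlying data.

```python
# Hypothetical illustration of normalizing monthly search volume per
# 100,000 residents, as in the YACSS methodology. Numbers are invented.
monthly_ai_searches = {"Utah": 6_900, "Mississippi": 1_200}
population = {"Utah": 3_400_000, "Mississippi": 2_940_000}

for state, searches in monthly_ai_searches.items():
    per_100k = searches / population[state] * 100_000
    print(f"{state}: {per_100k:.1f} searches per 100,000 people")
```

With these made-up inputs, Utah works out to roughly 202.9 searches per 100,000 people, matching the shape of the reported figure.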


Although it is in its infancy, artificial intelligence is becoming more common in computer systems. In terms of the main applications, AI is being used for art most commonly across all fifty states, with voice generation being its second most popular use.

The fourth most AI-interested state is Vermont, which uses AI the most for art, voice generation, music, text-to-video and animation. Searches for AI and AI-related terms there average 173.08 per 100,000 people per month. Colorado is the fifth state most interested in AI, with an average of 170.03 searches for AI and AI-powered tools per month per 100,000 people. The state uses AI the most for art, followed by voice generation, music, animation and resume writing.

In a message sent to Digital Journal, YACSS commented: "The use of artificial intelligence in the U.S. is on the rise, and it's clear to see why. It is frequently used to reduce time spent on tedious tasks as well as provide users with endless creative possibilities, and this is all available at the touch of a button."

The rest is here:

US states most and least likely to use Artificial Intelligence revealed - Digital Journal

Posted in Artificial Intelligence | Comments Off on US states most and least likely to use Artificial Intelligence revealed – Digital Journal

Artificial intelligence is infiltrating health care. We shouldn't let it make all the decisions. – MIT Technology Review

Posted: at 2:53 pm

This article is from The Checkup, MIT Technology Review's weekly biotech newsletter. To receive it in your inbox every Thursday, sign up here.

Would you trust medical advice generated by artificial intelligence? It's a question I've been thinking over this week, in view of yet more headlines proclaiming that AI technologies can diagnose a range of diseases. The implication is often that they're better, faster, and cheaper than medically trained professionals.

Many of these technologies have well-known problems. They're trained on limited or biased data, and they often don't work as well for women and people of color as they do for white men. Not only that, but some of the data these systems are trained on are downright wrong.


There's another problem. As these technologies begin to infiltrate health-care settings, researchers say we're seeing a rise in what's known as AI paternalism. Paternalism in medicine has been problematic since the dawn of the profession. But now, doctors may be inclined to trust AI at the expense of a patient's own lived experiences, as well as their own clinical judgment.

AI is already being used in health care. Some hospitals use the technology to help triage patients. Some use it to aid diagnosis, or to develop treatment plans. But the true extent of AI adoption is unclear, says Sandra Wachter, a professor of technology and regulation at the University of Oxford in the UK.

"Sometimes we don't actually know what kinds of systems are being used," says Wachter. But we do know that their adoption is likely to increase as the technology improves and as health-care systems look for ways to reduce costs, she says.

Research suggests that doctors may already be putting a lot of faith in these technologies. In a study published a few years ago, oncologists were asked to compare their diagnoses of skin cancer with the conclusions of an AI system. Many of them accepted the AI's results, even when those results contradicted their own clinical opinion.

There's a very real risk that we'll come to rely on these technologies to a greater extent than we should. And here's where paternalism could come in.

Paternalism is captured by the idiom "the doctor knows best," write Melissa McCradden and Roxanne Kirsch of the Hospital for Sick Children in Ontario, Canada, in a recent scientific journal paper. The idea is that medical training makes a doctor the best person to make a decision for the person being treated, regardless of that person's feelings, beliefs, culture, and anything else that might influence the choices any of us make.

"Paternalism can be recapitulated when AI is positioned as the highest form of evidence, replacing the all-knowing doctor with the all-knowing AI," McCradden and Kirsch continue. They say there is a rising trend toward algorithmic paternalism. This would be problematic for a whole host of reasons.

For a start, as mentioned above, AI isn't infallible. These technologies are trained on historical data sets that come with their own flaws. "You're not sending an algorithm to med school and teaching it how to learn about the human body and illnesses," says Wachter.

As a result, AI "cannot understand, only predict," write McCradden and Kirsch. An AI could be trained to learn which patterns in skin cell biopsies have been associated with a cancer diagnosis in the past, for example. But the doctors who made those past diagnoses and collected that data might have been more likely to miss cases in people of color.

And identifying past trends won't necessarily tell doctors everything they need to know about how a patient's treatment should continue. Today, doctors and patients should collaborate in treatment decisions. Advances in AI use shouldn't diminish patient autonomy.

So how can we prevent that from happening? One potential solution involves designing new technologies that are trained on better data. An algorithm could be trained on information about the beliefs and wishes of various communities, as well as diverse biological data, for instance. Before we can do that, we need to actually go out and collect that data, an expensive endeavor that probably won't appeal to those who are looking to use AI to cut costs, says Wachter.

Designers of these AI systems should carefully consider the needs of the people who will be assessed by them. And they need to bear in mind that technologies that work for some groups won't necessarily work for others, whether that's because of their biology or their beliefs. "Humans are not the same everywhere," says Wachter.

The best course of action might be to use these new technologies in the same way we use well-established ones. X-rays and MRIs are used to help inform a diagnosis, alongside other health information. People should be able to choose whether they want a scan, and what they would like to do with their results. We can make use of AI without ceding our autonomy to it.

Philip Nitschke, otherwise known as Dr. Death, is developing an AI that can help people end their own lives. My colleague Will Douglas Heaven explored the messy morality of letting AI make life-and-death decisions in this feature from the mortality issue of our magazine.

In 2020, hundreds of AI tools were developed to aid the diagnosis of covid-19 or predict how severe specific cases would be. None of them worked, as Will reported a couple of years ago.

Will has also covered how AI that works really well in a lab setting can fail in the real world.

My colleague Melissa Heikkilä has explored whether AI systems need to come with cigarette-pack-style health warnings in a recent edition of her newsletter, The Algorithm.

Tech companies are keen to describe their AI tools as ethical. Karen Hao put together a list of the top 50 or so words companies can use to show they care without incriminating themselves.

Scientists have used an imaging technique to reveal the long-hidden contents of six sealed ancient Egyptian animal coffins. They found broken bones, a lizard skull, and bits of fabric. (Scientific Reports)

Genetic analyses can suggest targeted treatments for people with colorectal cancer, but people with African ancestry have mutations that are less likely to benefit from these treatments than those with European ancestry. The finding highlights how important it is for researchers to use data from diverse populations. (American Association for Cancer Research)

Sri Lanka is considering exporting 100,000 endemic monkeys to a private company in China. A cabinet spokesperson has said the monkeys are destined for Chinese zoos, but conservationists are worried that the animals will end up in research labs. (Reuters)

Would you want to have electrodes inserted into your brain if they could help treat dementia? Most people who have a known risk of developing the disease seem to be open to the possibility, according to a small study. (Brain Stimulation)

A gene therapy for a devastating disease that affects the muscles of some young boys could be approved following a decision due in the coming weeks, despite not having completed clinical testing. (STAT)

See original here:

Artificial intelligence is infiltrating health care. We shouldn't let it make all the decisions. - MIT Technology Review

Posted in Artificial Intelligence | Comments Off on Artificial intelligence is infiltrating health care. We shouldn't let it make all the decisions. – MIT Technology Review

Landmark Supreme Court case could have ‘far reaching implications’ for artificial intelligence, experts say – Fox News

Posted: at 2:53 pm

An impending Supreme Court ruling focusing on whether legal protections given to Big Tech extend to their algorithms and recommendation features could have significant implications for future cases surrounding artificial intelligence, according to experts.

In late February, the Supreme Court heard oral arguments examining the extent of legal immunity given to tech companies that allow third-party users to publish content on their platforms.

One of two cases, Gonzalez v. Google, focuses on recommendations and algorithms used by sites like YouTube, allowing accounts to arrange and promote content to users.


Section 230, which allows online platforms significant leeway regarding responsibility for users' speech, has been challenged multiple times in the Supreme Court. (AP Photo/Patrick Semansky, File)

Nohemi Gonzalez, a 23-year-old U.S. citizen studying abroad in France, was killed by ISIS terrorists who fired into a crowded bistro in Paris in 2015. Her family filed suit against Google, arguing that YouTube, which Google owns, aided and abetted the ISIS terrorists by allowing and promoting ISIS material on the platform with algorithms that helped to recruit ISIS radicals.

Marcus Fernandez, an attorney and co-owner of KFB Law, said the outcome of the case could have "far-reaching implications" for tech companies, noting it remains to be seen whether the decision will establish new legal protections for content or if it will open up more avenues for lawsuits against tech companies.

He added that it is important to remember that the ruling could determine the level of protection given to companies and how courts could interpret such protections when it comes to AI-generated content and algorithmic recommendations.

"The decision is likely to be a landmark one, as it will help define what kind of legal liability companies can expect when they use algorithms to target their users with recommendations, as well as what kind of content and recommendations are protected. In addition to this, it will also set precedent for how courts deal with AI-generated content," he said.

According to Section 230 of the Communications Decency Act, tech companies are immune to lawsuits based on content curated or posted by platform users. Much of the discussion from the justices in February waded into whether the posted content was a form of free speech and questioned the extent to which recommendations or algorithms played a role in promoting the content.


Artificial Intelligence words are seen in this illustration taken March 31, 2023. (REUTERS/Dado Ruvic/Illustration)

At one point, the plaintiff's attorney, Eric Schnapper, detailed how YouTube presents thumbnail images and links to various online videos. He argued that while users create the content itself, the thumbnails and links are joint creations of the user and YouTube, thereby exceeding the scope of YouTube's legal protections.

Google attorney Lisa Blatt said the argument was inadmissible because it was not a part of the plaintiff's original complaint filed to the court.

Justice Sonia Sotomayor expressed concern that such a perspective would create a "world of lawsuits." Throughout the proceedings, she remained skeptical that a tech company should be liable for such speech.

Attorney Joshua Lastine, the owner of Lastine Entertainment Law, told Fox News Digital he would be "very surprised" if the justices found some "nexus" between what the algorithms generate and push onto users and other types of online harm, such as somebody telling another person to commit suicide. He said that until such a nexus is established, he does not believe a tech company would face legal repercussions.

Lastine, citing the story of the Hulu drama "The Girl From Plainville," said it is already extremely difficult to establish one-on-one liability and bringing in a third party, like a social media site or tech company, would only increase the difficulty of winning a case.

In 2014, Michelle Carter fell under the national spotlight after it was discovered that she sent text messages to her boyfriend, Conrad Roy III, urging him to kill himself. Though she was charged with involuntary manslaughter and faced up to 20 years in prison, Carter was only sentenced to 15 months behind bars.


Google headquarters in Mountain View, California, US, on Monday, Jan. 30, 2023. Alphabet Inc. is expected to release earnings figures on February 2. (Photographer: Marlena Sloss/Bloomberg via Getty Images)

"It was hard enough to find the girl who was sending the text messages liable, let alone the cell phone that was sending those messages," Lastine said. "Once algorithms and computers start telling people to start inflicting harm on other humans, we have bigger problems when machines start doing that."

Ari Lightman, a Distinguished Service Professor at the Carnegie Mellon Heinz College of Information Systems and Policy, told Fox News Digital that a change to Section 230 could open a "Pandora's box" of litigation against tech companies.

"If this opens up the floodgate of lawsuits for people to start suing all of these platforms for harms that have been perpetrated, as they perceive, toward them, that could really stifle down innovation considerably," he said.

However, Lightman also said the case reaffirmed the importance of consumer protection and noted that if a digital platform can recommend things to users with immunity, they need to design more accurate, usable, and safer products.

Lightman added that what constitutes harm in a particular case against a tech company is very subjective; for example, an AI chatbot making someone wait too long or giving erroneous information. According to Lightman, a standard in which lawyers attempt to tie harm to a platform could be "very problematic," leading to a sort of "open season" for lawyers.

"It's going to be litigated and debated for a long period of time," Lightman said.


Lightman noted that AI has many legal issues associated with it, not just liability and erroneous information but also IP issues specific to the content. He said that greater transparency about where the model acquired its data, why it presented such data, and the ability to audit would be an important mechanism for an argument against tech companies' immunity from grievances filed by users unhappy with the AI's output.

Throughout the oral arguments for the case, Schnapper reaffirmed his stance that YouTube's algorithm, which helps to present content to users, is in and of itself a form of speech on the part of YouTube and should therefore be considered separately from content posted by a third party.

Blatt claimed the company was not responsible because all search engines leverage user information to present results. For example, she noted that someone searching for "football" would be provided different results depending on whether they were in the U.S. or somewhere in Europe.

U.S. Deputy Solicitor General Malcolm Stewart compared the conundrum to a hypothetical situation where a bookstore clerk directs a customer to a specific table where a book is located. In this case, Stewart claimed the clerk's suggestion would be speech about the book and would be separate from any speech contained inside the book.


The justices are expected to rule on the case by the end of June to determine whether YouTube could be sued over its algorithms used to push video recommendations.

Fox News' Brianna Herlihy contributed to this report.

Go here to read the rest:

Landmark Supreme Court case could have 'far reaching implications' for artificial intelligence, experts say - Fox News

Posted in Artificial Intelligence | Comments Off on Landmark Supreme Court case could have ‘far reaching implications’ for artificial intelligence, experts say – Fox News

Student Showcase Preview: Customizing the ChatGPT Artificial … – CMUnow

Posted: at 2:53 pm

Discussions about the benefits and risks associated with artificial intelligence (AI) are everywhere right now, and college campuses are grappling with how to address the rise of chat-based AI software like ChatGPT. At this year's Student Showcase, several research projects related to machine learning and AI will be on display. The Student Showcase is a celebration of the creativity, research, innovation, entrepreneurship and artistic performance of Colorado Mesa University and Western Colorado Community College students at both the undergraduate and graduate level. This year will mark the 14th anniversary of the event and there will be 377 sessions, a near record number. Those curious to learn more about how students in the Computer Science Program are working with cutting-edge AI technologies are invited to come learn about their work and ask questions during one of the many sessions focused on AI.

One of these groups, made up of CMU computer science students Sullivan Frazier, Zackary Mason and Axel Garces, is going under the hood to develop its own machine learning software while also experimenting with ways to make the popular ChatGPT chatbot platform more user-friendly and approachable. A chatbot is a computer program that simulates human conversation and allows humans to engage with digital devices as if they were speaking with a real person.

Working from the premise that many people find AI intimidating, the group has collaborated to build an interactive web application that allows users to customize the characteristics of the chatbot they interact with. For example, you can choose to have your chatbot assume the characteristics, speech patterns and knowledge of Yoda from Star Wars. In addition to making the chatbot experience more playful and fun, this feature allows users to select a chatbot based on their personal language and culture preferences, creating a chatbot experience that reflects the individual using it.
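A common way to implement this kind of persona customization is to prepend a system message to the conversation. The sketch below is a hypothetical illustration of that pattern using OpenAI's 2023-era Python client; the persona texts, model choice, and function names are invented for the example and are not taken from the students' project.

```python
# Hypothetical persona-customization wrapper in the spirit of the students'
# web app. Personas and model are assumptions; the ChatCompletion call is
# OpenAI's pre-1.0 Python interface (circa 2023).
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

PERSONAS = {
    "yoda": "Speak as Yoda from Star Wars: wise, terse, inverted word order.",
    "formal": "Respond in polite, formal English suitable for a business memo.",
}

def ask(persona: str, question: str) -> str:
    # The system message steers the model's character and speech patterns,
    # while the user message carries the actual question.
    reply = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": PERSONAS[persona]},
            {"role": "user", "content": question},
        ],
    )
    return reply["choices"][0]["message"]["content"]

print(ask("yoda", "How do I stay motivated while studying?"))
```

Because the persona lives entirely in the system message, adding a new one is a matter of adding a dictionary entry, which is one plausible way an app could offer language- and culture-specific chatbots.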

Through their work the group has grappled with some of the deeper issues that AI presents. "Machine learning has been around since the '90s, but now we have the computing power to make products that people find useful and it's not behind closed doors anymore," Mason explained. "ChatGPT isn't creating new things, but it is quickly and accurately sorting through the huge repository of human knowledge that people have put into it, which is something new."


The team is specifically concerned about AI applications in which the programs are forced to make tough decisions where serious tradeoffs have to be considered. They believe that AI is great at collecting and organizing data, but the group argues there still needs to be a human element when the stakes are so high. "Sometimes you need an ethical line, you need a moral line, you need a human with a heartbeat making those big decisions," said Frazier. Mason agreed: "I don't think AI is going to take all our jobs, but we need to find the balance between humans and technology." The group is excited about the future of computer science, and they are optimistic that humanity will be resilient in the face of the changes and challenges that AI presents.

Frazier is excited to present their research and bring this discussion to the larger CMU community at the Student Showcase. "Sometimes it feels like I'm a bit cooped up in Confluence Hall in my daily life. I don't talk to a lot of people outside of computer science, and a lot of people don't have a clue as to what we're doing and what's going on in here. Going to showcase allows people to come see what you're up to and you get to learn about things happening in totally different fields," said Frazier.


Frazier, Mason and Garces received guidance and support from their faculty mentor, Ram Basnet, PhD, Associate Professor of Computer Science and Co-Director of the Cyber Security Center. Basnet, along with other CMU computer science faculty, is looking to expand the AI program offerings in coming years, and the department currently offers professional certificates in cybersecurity, data science and web application development for students pursuing a degree in computer science.

This year's Student Showcase will kick off at 12pm on Friday, Apr. 28 at the Love Recital Hall in the Moss Performing Arts Center. Presentations, performances, demonstrations and exhibits will then take place throughout the day across campus. The day will wrap up with a celebration event at 4:30pm in the University Center Meyer Ballroom.

This event is free and open to the public, and more information about this year's sessions and parking details is available on the Student Showcase website.

More here:

Student Showcase Preview: Customizing the ChatGPT Artificial ... - CMUnow

Posted in Artificial Intelligence | Comments Off on Student Showcase Preview: Customizing the ChatGPT Artificial … – CMUnow
