
Category Archives: Artificial General Intelligence

Can We Stop Runaway A.I.? – The New Yorker

Posted: May 18, 2023 at 12:56 am

At the same time, A.I. is advancing quickly, and it could soon begin improving more autonomously. Machine-learning researchers are already working on what they call meta-learning, in which A.I.s learn how to learn. Through a technology called neural-architecture search, algorithms are optimizing the structure of algorithms. Electrical engineers are using specialized A.I. chips to design the next generation of specialized A.I. chips. Last year, DeepMind unveiled AlphaCode, a system that learned to win coding competitions, and AlphaTensor, which learned to find faster algorithms crucial to machine learning. Clune and others have also explored algorithms for making A.I. systems evolve through mutation, selection, and reproduction.

In other fields, organizations have come up with general methods for tracking dynamic and unpredictable new technologies. The World Health Organization, for instance, watches the development of tools such as DNA synthesis, which could be used to create dangerous pathogens. Anna Laura Ross, who heads the emerging-technologies unit at the W.H.O., told me that her team relies on a variety of foresight methods, among them Delphi-type surveys, in which a question is posed to a global network of experts, whose responses are scored and debated and then scored again. "Foresight isn't about predicting the future in a granular way," Ross said. Instead of trying to guess which individual institutes or labs might make strides, her team devotes its attention to preparing for likely scenarios.

And yet tracking and forecasting progress toward A.G.I. or superintelligence is complicated by the fact that key steps may occur in the dark. Developers could intentionally hide their systems' progress from competitors; it's also possible for even a fairly ordinary A.I. to lie about its behavior. In 2020, researchers demonstrated a way for discriminatory algorithms to evade audits meant to detect their biases; they gave the algorithms the ability to detect when they were being tested and provide nondiscriminatory responses. An evolving or self-programming A.I. might invent a similar method and hide its weak points or its capabilities from auditors or even its creators, evading detection.

Forecasting, meanwhile, gets you only so far when a technology moves fast. Suppose that an A.I. system begins upgrading itself by making fundamental breakthroughs in computer science. How quickly could its intelligence accelerate? Researchers debate what they call takeoff speed. In what they describe as a slow or soft takeoff, machines could take years to go from less than humanly intelligent to much smarter than us; in what they call a fast or hard takeoff, the jump could happen in months, even minutes. Researchers refer to the second scenario as FOOM, evoking a comic-book superhero taking flight. Those on the FOOM side point to, among other things, human evolution to justify their case. "It seems to have been a lot harder for evolution to develop, say, chimpanzee-level intelligence than to go from chimpanzee-level to human-level intelligence," Nick Bostrom, the director of the Future of Humanity Institute at the University of Oxford and the author of "Superintelligence," told me. Clune is also what some researchers call an A.I. doomer. He doubts that we'll recognize the approach of superhuman A.I. before it's too late. "We'll probably frog-boil ourselves into a situation where we get used to big advance, big advance, big advance, big advance," he said. "And think of each one of those as, 'That didn't cause a problem, that didn't cause a problem, that didn't cause a problem.' And then you turn a corner, and something happens that's now a much bigger step than you realize."

What could we do today to prevent an uncontrolled expansion of A.I.'s power? Ross, of the W.H.O., drew some lessons from the way that biologists have developed a sense of shared responsibility for the safety of biological research. "What we are trying to promote is to say, 'Everybody needs to feel concerned,'" she said of biology. "So it is the researcher in the lab, it is the funder of the research, it is the head of the research institute, it is the publisher, and, all together, that is actually what creates that safe space to conduct life research." In the field of A.I., journals and conferences have begun to take into account the possible harms of publishing work in areas such as facial recognition. And, in 2021, a hundred and ninety-three countries adopted a Recommendation on the Ethics of Artificial Intelligence, created by the United Nations Educational, Scientific, and Cultural Organization (UNESCO). The recommendations focus on data protection, mass surveillance, and resource efficiency (but not computer superintelligence). The organization doesn't have regulatory power, but Mariagrazia Squicciarini, who runs a social-policies office at UNESCO, told me that countries might create regulations based on its recommendations; corporations might also choose to abide by them, in hopes that their products will work around the world.

This is an optimistic scenario. Eliezer Yudkowsky, a researcher at the Machine Intelligence Research Institute, in the Bay Area, has likened A.I.-safety recommendations to a fire-alarm system. A classic experiment found that, when smoky mist began filling a room containing multiple people, most didn't report it. They saw others remaining stoic and downplayed the danger. An official alarm may signal that it's legitimate to take action. But, in A.I., there's no one with the clear authority to sound such an alarm, and people will always disagree about which advances count as evidence of a conflagration. "There will be no fire alarm that is not an actual running AGI," Yudkowsky has written. Even if everyone agrees on the threat, no company or country will want to pause on its own, for fear of being passed by competitors. Bostrom told me that he foresees a possible "race to the bottom," with developers undercutting one another's levels of caution. Earlier this year, an internal slide presentation leaked from Google indicated that the company planned to "recalibrate" its comfort with A.I. risk in light of heated competition.

International law restricts the development of nuclear weapons and ultra-dangerous pathogens. But it's hard to imagine a similar regime of global regulations for A.I. development. "It seems like a very strange world where you have laws against doing machine learning, and some ability to try to enforce them," Clune said. "The level of intrusion that would be required to stop people from writing code on their computers wherever they are in the world seems dystopian." Russell, of Berkeley, pointed to the spread of malware: by one estimate, cybercrime costs the world six trillion dollars a year, and yet policing software directly (for example, trying to delete every single copy) is impossible, he said. A.I. is being studied in thousands of labs around the world, run by universities, corporations, and governments, and the race also has smaller entrants. Another leaked document attributed to an anonymous Google researcher addresses open-source efforts to imitate large language models such as ChatGPT and Google's Bard. "We have no secret sauce," the memo warns. "The barrier to entry for training and experimentation has dropped from the total output of a major research organization to one person, an evening, and a beefy laptop."

Even if a FOOM were detected, who would pull the plug? A truly superintelligent A.I. might be smart enough to copy itself from place to place, making the task even more difficult. "I had this conversation with a movie director," Russell recalled. "He wanted me to be a consultant on his superintelligence movie. The main thing he wanted me to help him understand was, How do the humans outwit the superintelligent A.I.? It's, like, I can't help you with that, sorry!" In a paper titled "The Off-Switch Game," Russell and his co-authors write that "switching off an advanced AI system may be no easier than, say, beating AlphaGo at Go."

It's possible that we won't want to shut down a FOOMing A.I. "A vastly capable system could make itself indispensable," Armstrong said; for example, "if it gives good economic advice, and we become dependent on it, then no one would dare pull the plug, because it would collapse the economy." Or an A.I. might persuade us to keep it alive and execute its wishes. Before making GPT-4 public, OpenAI asked a nonprofit called the Alignment Research Center to test the system's safety. In one incident, when confronted with a CAPTCHA (an online test designed to distinguish between humans and bots, in which visually garbled letters must be entered into a text box), the A.I. contacted a TaskRabbit worker and asked for help solving it. The worker asked the model whether it needed assistance because it was a robot; the model replied, "No, I'm not a robot. I have a vision impairment that makes it hard for me to see the images. That's why I need the 2captcha service." Did GPT-4 intend to deceive? Was it executing a plan? Regardless of how we answer these questions, the worker complied.

Robin Hanson, an economist at George Mason University who has written a science-fiction-like book about uploaded consciousness and has worked as an A.I. researcher, told me that we worry too much about the singularity. "We're combining all of these relatively unlikely scenarios into a grand scenario to make it all work," he said. A computer system would have to become capable of improving itself; we'd have to vastly underestimate its abilities; and its values would have to drift enormously, turning it against us. Even if all of this were to happen, he said, the A.I. wouldn't be able to push a button and destroy the universe.

Hanson offered an economic take on the future of artificial intelligence. If A.G.I. does develop, he argues, then it's likely to happen in multiple places around the same time. The systems would then be put to economic use by the companies or organizations that developed them. The market would curtail their powers; investors, wanting to see their companies succeed, would go slow and add safety features. "If there are many taxi services, and one taxi service starts to, like, take its customers to strange places, then customers will switch to other suppliers," Hanson said. "You don't have to go to their power source and unplug them from the wall. You're unplugging the revenue stream."

A world in which multiple superintelligent computers coexist would be complicated. If one system goes rogue, Hanson said, we might program others to combat it. Alternatively, the first superintelligent A.I. to be invented might go about suppressing competitors. "That is a very interesting plot for a science-fiction novel," Clune said. "You could also imagine a whole society of A.I.s. There's A.I. police, there's A.G.I.s that go to jail. It's very interesting to think about." But Hanson argued that these sorts of scenarios are so futuristic that they shouldn't concern us. "I think, for anything you're worried about, you have to ask what's the right time to worry," he said. Imagine that you could have foreseen nuclear weapons or automobile traffic a thousand years ago. "There wouldn't have been much you could have done then to think usefully about them," Hanson said. "I just think, for A.I., we're well before that point."

Still, something seems amiss. Some researchers appear to think that disaster is inevitable, and yet calls for work on A.I. to stop are still rare enough to be newsworthy; pretty much no one in the field wants us to live in the world portrayed in Frank Herbert's novel "Dune," in which humans have outlawed thinking machines. Why might researchers who fear catastrophe keep edging toward it? "I believe ever-more-powerful A.I. will be created regardless of what I do," Clune told me; his goal, he said, is "to try to make its development go as well as possible for humanity." Russell argued that stopping A.I. shouldn't be necessary if A.I.-research efforts take safety as a primary goal, as, for example, nuclear-energy research does. A.I. is interesting, of course, and researchers enjoy working on it; it also promises to make some of them rich. And no one's dead certain that we're doomed. In general, people think they can control the things they make with their own hands. Yet chatbots today are already misaligned. They falsify, plagiarize, and enrage, serving the incentives of their corporate makers and learning from humanity's worst impulses. They are entrancing and useful but too complicated to understand or predict. And they are dramatically simpler, and more contained, than the future A.I. systems that researchers envision.

Follow this link:

Can We Stop Runaway A.I.? - The New Yorker

Posted in Artificial General Intelligence | Comments Off on Can We Stop Runaway A.I.? – The New Yorker

Microsoft’s ‘Sparks of AGI’ ignite debate on humanlike AI – The Jerusalem Post

Posted: at 12:55 am

Microsoft's recent research paper on artificial general intelligence (AGI) has stirred controversy and excitement within the scientific community, according to the New York Times.

Led by Dr. Sébastien Bubeck, the study explores the capabilities of OpenAI's GPT-4, a powerful language model that has shown remarkable potential in generating humanlike answers and ideas.

The paper, titled "Sparks of Artificial General Intelligence," has raised questions about whether the technology represents a significant step towards achieving AGI, the goal of creating a machine capable of matching human cognitive abilities.

However, the claims made in the paper are difficult to verify, as the researchers tested an early version of GPT-4 that lacked fine-tuning to filter out undesirable content such as hate speech and misinformation. Microsoft clarifies that the public version of the system is less advanced than the one used in their experiments.

Dr. Bubeck and his colleagues were fascinated by GPT-4's behavior, which demonstrated a deep and flexible understanding of various fields, including mathematics, programming and even composing poetry.

The system exhibited problem-solving skills by writing programs, modifying them and engaging in Socratic dialogues. While these abilities impressed the researchers, they also noted inconsistencies in the system's performance, with instances where it seemed dense or lacked common sense reasoning.

Critics argue that the text generated by GPT-4 may not reflect true human reasoning. Alison Gopnik, a professor of psychology at the University of California, Berkeley, suggests that anthropomorphizing complex machines is a common tendency but cautions against viewing the development of AI as a competition between machines and humans. She emphasizes the need for a more nuanced understanding of the capabilities and limitations of AI systems.

Despite the debate, Microsoft's efforts to explore AGI have led to the reorganization of their research labs, with dedicated groups focusing on advancing the field.

Dr. Bubeck, an influential figure in the study, will lead one of these groups, further indicating Microsoft's commitment to AGI research.

As the pursuit of AGI continues, researchers face the challenge of accurately characterizing and evaluating these powerful AI systems.

Achieving AGI holds tremendous potential to revolutionize various domains, but concerns about its implications and limitations persist.

While the concept of AGI sparks excitement, it also raises questions about the ethical and practical considerations surrounding its development.

The path to AGI remains uncertain, with experts emphasizing the importance of rigorous scientific evaluation and cautious optimism.

Microsoft's research paper serves as a catalyst for further exploration, stimulating discussions about the boundaries and possibilities of humanlike AI.

The future of AGI and its impact on society will undoubtedly remain a topic of intense scrutiny and debate as researchers strive to unlock the true potential of artificial intelligence.

More:

Microsoft's 'Sparks of AGI' ignite debate on humanlike AI - The Jerusalem Post

Posted in Artificial General Intelligence | Comments Off on Microsoft’s ‘Sparks of AGI’ ignite debate on humanlike AI – The Jerusalem Post

Artificial General Intelligence is the Answer, says OpenAI CEO – Walter Bradley Center for Natural and Artificial Intelligence

Posted: at 12:55 am


The tech optimism talk just got a little more bizarre from OpenAI CEO Sam Altman. Altman is confident that artificial intelligence is going to better our world in countless ways; sometimes, however, he doesn't specify just how that's going to happen. Other days it seems like he's actually on the doomsday train. Which is it? Is AI going to save us and pilot us into a transhumanist eternity or will it enslave us forever and diminish everything that makes us human? Maybe it's both at this point! Maggie Harrison writes in a new blog at Futurism,

It's true that AGI could, in theory, give humans a helping hand in curing some of our ills. But such an AGI, and AGI as a concept altogether, is still entirely theoretical. Many experts doubt that such a system could ever be realized at all, and if it is, we haven't figured out how to make existing AIs safe and unbiased. Ensuring that a far more advanced AGI is benevolent is a tall and perhaps impossible task.

In a recent Mind Matters podcast episode, mathematician John Lennox noted that one of the fundamental challenges of the AI revolution is ethics. How can you program a machine to act ethically in the world? Technology is only as good or bad as its programmers. The ethics of AI surveillance technology gets quite murky when it's utilized to invade someone's privacy, for instance, or when computer programs learn extensive information about a person in cahoots with Big Tech to leverage users' attention, time, and habits. We need Altman to specify: what does he mean by "better" when talking about the potential benefits of a concept that remains, at the moment, highly theoretical?

Here is the original post:

Artificial General Intelligence is the Answer, says OpenAI CEO - Walter Bradley Center for Natural and Artificial Intelligence

Posted in Artificial General Intelligence | Comments Off on Artificial General Intelligence is the Answer, says OpenAI CEO – Walter Bradley Center for Natural and Artificial Intelligence

Is medicine ready for AI? Doctors, computer scientists, and … – MIT News

Posted: at 12:55 am

The advent of generative artificial intelligence models like ChatGPT has prompted renewed calls for AI in health care, and its support base only appears to be broadening.

The second annual MIT-MGB AI Cures Conference, hosted on April 24 by the Abdul Latif Jameel Clinic for Machine Learning in Health (Jameel Clinic), saw its attendance nearly double this year, with over 500 attendees from an array of backgrounds in computer science, medicine, pharmaceuticals, and policy.

In contrast to the overcast Boston weather that morning, many of the speakers took an optimistic view of AI in health and reiterated two key ideas throughout the day: that AI has the potential to create a more equitable health-care system, and that AI won't be replacing clinicians anytime soon, but clinicians who know how to use AI will eventually replace clinicians who don't incorporate AI into their daily practice.

"Collaborations with our partners in government, especially collaborations at the intersection of policy and innovation, are critical to our work, MIT Provost Cynthia Barnhart stated in her opening remarks to the audience. All of the pioneering activity youll hear about today leaves me very hopeful for the future of human health.

Massachusetts General Brigham's (MGB) president and CEO Anne Klibanski's remarks reflected a similar optimism: "We have visionaries here in AI, we have visionaries here in health care. If this group can't come together in a meaningful way to impact health care, we have to ask ourselves why we're here ... this is a time when we have to rethink health care." Klibanski called attention to the work of Jameel Clinic AI faculty lead, AI Cures co-chair, and MIT Professor Regina Barzilay and MGB Center for Innovation in Early Cancer Detection Director Lecia Sequist, whose research in lung cancer risk assessment is an example of how the continued collaboration between MIT and MGB could yield fruitful results for the future of AI in medicine.

"Is AI going to be the thing that cures everything with our ailing health care system?" asked newly inaugurated Massachusetts Secretary of Health and Human Services Kate Walsh. "I don't think so, but I think it's a great place to start." Walsh highlighted the pandemic as a wake-up call for the health care system and focused on AI's potential to establish more equitable care, particularly for those with disabilities, as well as augment an already burdened workforce. "We absolutely have to do better ... AI can look across populations and develop insights into where the health care system is failing us and redistribute the health care system so it can do more."

Barzilay called out the marked absence of AI in health care today with a reference to the No Surprises Act implemented last year, which requires insurance companies to be transparent about billing codes. "The FDA has approved over 500 AI tools in the last few years, and from the 500 models, only 10 have associated billing codes that are actually used," she said. "What this shows is that AI's outcome on patients is really limited, and my hope is this conference brings together people who develop AI, clinicians who are the ones bringing innovation to patients, regulators, and people from biotech who are translating these innovations into products. With this forum we have a chance to change that."

Despite the enthusiasm, speakers did not sugarcoat the potential risks, nor did they downplay the importance of safety in the development and implementation of clinical AI tools.

"You've got those who think that AI is going to solve all the world's problems in the health-care space, replace the world's physicians, and revolutionize health care. And then you have the other side of the spectrum that says how bad AI is for our economy and how it's going to take over the world, developing an intelligence of its own," Jameel Clinic principal investigator, AI Cures speaker, and MIT Professor Collin Stultz said. "None of these concepts are new, but like most things in life, the truth is somewhere in the middle."

"There are always potential unintended consequences, CEO of Cambridge Health Alliance and the Cambridge Commissioner of Public Health Assaad Sayah pointed out during the conferences regulatory panel. At the end of the day, it's hard to predict what are the potential consequences and have the appropriate safeguards ... many things are really inappropriately inequitable for certain sub-populations ... there's so much data that's been hard to contain. I would implore all of you to keep this in mind.

See the original post here:

Is medicine ready for AI? Doctors, computer scientists, and ... - MIT News

Posted in Artificial General Intelligence | Comments Off on Is medicine ready for AI? Doctors, computer scientists, and … – MIT News

AI glossary: words and terms to know about the booming industry – NBC News

Posted: at 12:55 am

The artificial intelligence (AI) boom has brought with it a cornucopia of jargon, from "generative AI" to "synthetic data," that can be hard to parse. And as hard as it is to really understand what AI is (see our explainer for that), having a working knowledge of AI terms can help make sense of this technology.

As part of our series explaining the basics of AI, here is a short glossary of terms that will hopefully help you navigate the rapidly developing field.

Artificial Intelligence: Technology that aims to replicate human-like thinking within machines. Some examples of abilities that fall into this category include identifying people in pictures, working in factories, and even doing taxes.

Generative AI: Generative AI is an AI that can create things like text, images, sound and video. Traditional applications of AI largely classify content, while generative AI models create it. For instance, a voice recognition model can identify your voice, while a generative voice model can use your voice to create audiobooks. Almost all models that have recently captured the public's attention have been generative, including chatbots like OpenAI's ChatGPT, image creators like Stable Diffusion and MidJourney, and voice-cloning programs like Resemble.

Training Data: A collection of information (text, image, sound) curated to help AI models accomplish tasks. In language models, training datasets focus on text-based materials like books, comments from social media, and even code. Because AI models learn from training data, ethical questions have been raised around its sourcing and curation. Low-quality training data can introduce bias, leading to unfair models that make racist or sexist decisions.

Algorithmic Bias: An error resulting from bad training data and poor programming that causes models to make prejudiced decisions. Such models may draw inappropriate assumptions based on gender, ability or race. In practice, these errors can cause serious harm by affecting decision-making from mortgage applications to organ-transplant approvals. Many critics of the speedy rollout of AI have invoked the potential for algorithmic bias.

Artificial General Intelligence (AGI): A description of programs that are as capable or even more capable than a human. While full general intelligence is still off in the future, models are growing in sophistication. Some have demonstrated skills across multiple domains ranging from chemistry to psychology, with task performance paralleling human benchmarks.

Autonomous Agents: An AI model that has both an objective and enough tools to achieve it. For instance, self-driving cars are autonomous agents that use sensory input, GPS data and driving algorithms to make independent decisions about how to navigate and reach destinations. A group of autonomous agents can even develop cultures, traditions and shared language, as researchers from Stanford have demonstrated.

Prompt Chaining: The process of using previous interactions with an AI model to create new, more finely tuned responses, specifically in prompt-driven language modeling. For example, when you ask ChatGPT to send your friend a text, you expect it to remember things like the tone you use to talk to her, inside jokes and other content from previous conversations. These techniques help incorporate this context.
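As a rough illustration of the idea, prompt chaining just means carrying the earlier turns forward with each new request. The sketch below is hypothetical: the chat() helper stands in for whatever chat-completion service is actually being used.

    def chat(messages):
        # Hypothetical stand-in: a real implementation would send the full
        # message list to a chat-completion API and return the reply text.
        return "(model reply informed by %d earlier messages)" % (len(messages) - 1)

    history = [
        {"role": "user", "content": "Draft a text to my friend Sam about dinner."},
        {"role": "assistant", "content": "Hey Sam! Tacos tonight at 7?"},
    ]

    # The follow-up request includes the earlier turns, so the model can reuse
    # their tone and content when producing the refined reply.
    history.append({"role": "user", "content": "Make it sound more excited."})
    reply = chat(history)
    history.append({"role": "assistant", "content": reply})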

Large Language Models (LLM): An application of AI, usually generative, that aims to understand, engage and communicate with language in a human-like way. These models are distinguished by their large size: The biggest version of GPT-3, a direct predecessor to ChatGPT, contained 175 billion different variables, called parameters, that were trained on 570 gigabytes of data. Google's PaLM model is even larger, having 540 billion parameters. As hardware and software continue to advance, this scale is expected to increase.

Hallucination: Hallucinations are unexpected and incorrect responses from AI programs that can arise for reasons that are not yet fully known. A language model might suddenly bring up fruit salad recipes when you were asking about planting fruit trees. It might also make up scholarly citations, lie about data you ask it to analyze, or make up facts about events that aren't in its training data. It's not fully understood why this happens, but it can arise from sparse data, information gaps and misclassification.

Emergent Behavior: Skills that AI might demonstrate that it was not explicitly built for. Some examples include emoji interpretation, sarcasm and using gender-inclusive language. A research team at Google Brain identified over 100 of these behaviors, noting that more are likely to emerge as models continue to scale.

Alignment: Efforts to ensure AI systems share the same values and goals as their human operators. To bring motives into agreement, alignment research seeks to train and calibrate models, often using functions to reward or penalize models. If the model does a good job, you give it positive feedback. If not, you give it negative feedback.

Multimodal AI: A form of AI that can understand and work with multiple types of information, including text, image, speech and more. This is powerful because it allows AI to understand and express itself in multiple dimensions, giving both a broader and more nuanced understanding of tasks. One application of multimodal AI is this translator, which can convert Japanese comics into English.

Prompt Engineering: This is the act of giving AI an instruction so it has the context it needs to achieve your goal. Prompt engineering is best associated with OpenAI's ChatGPT, describing the tasks users feed into the algorithm (e.g., "Give me five popular baby names").

Training: Training is the process of refining AI using data so it's better suited for a task. An AI can be trained by feeding in data based on what you want it to learn from, like feeding Shakespearean sonnets to a poetry bot. You can do this multiple times in iterations called epochs, until your model's performance is consistent and reliable.
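A minimal sketch of that loop, with a toy "model" and a made-up error measure standing in for a real poetry bot: the only point is that each epoch is one full pass over the data, and the model gets nudged a little on every example.

    import random

    random.seed(0)
    data = ["Shall I compare thee to a summer's day?",
            "Thou art more lovely and more temperate."]
    weights = [random.random() for _ in range(8)]   # toy "model": just eight numbers

    def error(weights, example):
        # Made-up objective: push the weights' sum toward a target derived from the text.
        return sum(weights) - len(example) / 10.0

    for epoch in range(5):                          # one epoch = one full pass over the data
        random.shuffle(data)
        for example in data:
            err = error(weights, example)
            weights = [w - 0.05 * err / len(weights) for w in weights]  # nudge toward lower error
        avg = sum(abs(error(weights, x)) for x in data) / len(data)
        print("epoch", epoch, "average error", round(avg, 3))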

Neural Networks: Neural networks are computer systems constructed to approximate the structure of human thought, specifically via the structure of your brain. They're built like this because they allow a model to build up from the abstract to the concrete. In an image model's initial layers, concepts like color or position might be formed, building up to firmer, more familiar forms like fruit or animals.
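A toy forward pass shows the layered structure in miniature; the numbers here are arbitrary and only illustrate how one layer's outputs feed the next.

    import math

    def layer(inputs, weights):
        # One layer: weighted sums of the inputs passed through a simple nonlinearity.
        return [math.tanh(sum(w * x for w, x in zip(row, inputs))) for row in weights]

    pixels = [0.2, 0.9, 0.4]                          # stand-in for raw input values
    hidden = layer(pixels, [[0.5, -0.2, 0.1],
                            [0.3, 0.8, -0.5]])        # early layer: rough, low-level features
    score = layer(hidden, [[1.0, -1.0]])              # later layer: a more abstract summary
    print(hidden, score)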

Narrow AI: Some AI algorithms have a one-track mind. Literally. They're designed to do one thing and nothing more. If a narrow AI algorithm can play checkers, it can't play chess. Examples include algorithms that only detect NSFW images and recommendation engines designed to tell you what Amazon or Etsy product to buy next.

Jasmine Cui is a reporter for NBC News.

Jason Abbruzzese is the senior editor for technology, science and climate for NBC News Digital.

See original here:

AI glossary: words and terms to know about the booming industry - NBC News

Posted in Artificial General Intelligence | Comments Off on AI glossary: words and terms to know about the booming industry – NBC News

What marketers should keep in mind when adopting AI – MarTech

Posted: at 12:55 am

AI applications and generative AI tools are becoming more widely available to marketers, but are marketers ready for them? Do they have the skills needed to adopt this technology and take full advantage of its capabilities?

That was the focus of a panel at The MarTech Conference. Here are some of the takeaways from that discussion.

As AI evolves, capabilities will expand. Can AI take over a specific business function and run it unaided? Not yet, according to Ricky Ray Butler, CEO of BENlabs, which uses AI to place brands' products in entertainment and influencer content.

Artificial general intelligence, or AGI, is the kind of technology that is completely automated, and that's simply not available yet.

"There is still human supervision [required] when it comes to data inputs or [telling the AI] what the purpose is to have successful outcomes," said Butler.

"What AI really brings to the table is when it comes to the feedback loop," he said. "It can structure data, and a massive amount of data, in a way that the human mind can't even comprehend or compute. And it can do that at a scale where it can look at millions and millions of videos and monitor, prioritize and then also make predictions with successful outcomes or potentially unsuccessful outcomes. We are literally building a brain when we're leveraging this type of technology to do what the human mind does, but to be able to do it even better and even more accurately."

Dig deeper: A beginner's guide to artificial intelligence

Generative AI writing tools position themselves as writing assistants, not writers, said Anita Brearton, CEO of marketing technology management platform CabinetM.

"[These tools] describe their value prop as productivity," she said. "They can help you write faster, they can improve SEO in fact."

They can also help writers get started when all they're staring at is a blank page. "They're good for refining texts and creating some A/B versions of texts," Brearton said.

Generative AI continues to improve in order to help creatives make text-based and visual content.

"I think we're entering a very disruptive phase for creativity, for designers, illustrators, video producers and writers," said Paul Roetzer, CEO of the Marketing AI Institute.

As AI gets adopted for more marketing functions, marketers using these tools are needed to guide the technology and point it toward specific marketing objectives.

"The issue right now is the AI doesn't have your knowledge of your product, it doesn't have a knowledge of your customers, it doesn't have knowledge about the internal politics of your company," said Pam Didner, VP of marketing for consultancy Relentless Pursuit. "[AI doesn't] have knowledge about even the road map that you are going to produce for your company. So AI can write very well, but you still need to add your own point of view. That's where a human comes into play."

When AI is adopted by organizations, leadership needs to know how work has changed so they make the right hires.

"ChatGPT woke everyone up to AI, so we're all testing the tools," said Roetzer. "There's pressure on CMOs and CEOs from boards and investors to figure out AI. Everybody needs to have a plan, and you have a whole bunch of leaders who don't understand the underlying technology that now have to make decisions around staffing."

He added, "We need to rapidly accelerate the comprehension of what AI is and what it's capable of doing, what its limitations are. But, also, [we need] to come to grips with where it's going."


Read more from the original source:

What marketers should keep in mind when adopting AI - MarTech

Posted in Artificial General Intelligence | Comments Off on What marketers should keep in mind when adopting AI – MarTech

Zoom Invests in and Partners With Anthropic to Improve Its AI … – PYMNTS.com

Posted: at 12:55 am

Zoom has become the latest tech company riding this year's wave of artificial intelligence (AI) integrations.

The video conferencing platform announced in a Tuesday (May 16) press release that it has teamed with and is investing in AI firm Anthropic.

The collaboration will integrate Anthropic's AI assistant, Claude, with Zoom's platform, beginning with Zoom Contact Center.

"With Claude guiding agents toward trustworthy resolutions and powering self-service for end-users, companies will be able to take customer relationships to another level," said Smita Hashim, chief product officer for Zoom, in the release.

Working with Anthropic, Hashim said, furthers the company's goal of a federated approach to AI while also advancing leading-edge companies like Anthropic and helping to drive innovation in the Zoom ecosystem and beyond.

As the next step in evolving the Zoom Contact Center portfolio (Zoom Virtual Agent, Zoom Contact Center, Zoom Workforce Management), Zoom plans to incorporate Anthropic AI throughout its suite, improving end-user outcomes and enabling superior agent experiences, the news release said.

Zoom said in the release it eventually plans to incorporate Anthropic AI throughout its suite, including products like Team Chat, Meetings, Phone, Whiteboard and Zoom IQ.

Last year, Zoom debuted Zoom Virtual Agent, an intelligent conversational AI and chatbot tool that employs natural language processing and machine learning to understand and solve customer issues.

The company did not reveal the amount of its investment in Anthropic, which is backed by Google to the tune of $300 million.

Zoom's announcement came amid a flurry of AI-related news Tuesday, with fraud prevention firm ComplyAdvantage launching an AI tool and the New York Times digging into Microsoft's claims that it had made a breakthrough in the realm of artificial general intelligence.

Perhaps the biggest news is OpenAI CEO Sam Altman's testimony before a U.S. Senate subcommittee, in which he warned: "I think if this technology goes wrong, it can go quite wrong."

Altman's testimony happened as regulators and governments around the world step up their examination of AI in a race to mitigate fears about its transformative powers, which have spread in step with the future-fit technology's ongoing integration into the broader business landscape.

See the original post:

Zoom Invests in and Partners With Anthropic to Improve Its AI ... - PYMNTS.com

Posted in Artificial General Intelligence | Comments Off on Zoom Invests in and Partners With Anthropic to Improve Its AI … – PYMNTS.com

Hippocratic AI launches With $50M to power healthcare chatbots – VatorNews

Posted: at 12:55 am

The company has built the first Large Language Model designed specifically for healthcare

Healthcare staffing shortages were an issue even before COVID, but the pandemic made all of that worse, with 334,000 healthcare providers dropping out of the workforce in 2021. In Q4 of that year alone, 117,000 physicians quit or retired, as did over 53,000 nurse practitioners. That means there will be fewer healthcare workers to help an aging population.

Of course, people will still need care, and one of the ways to provide it is through chatbots and AI, which can help answer routine patient questions, allowing doctors to focus on their jobs. That's one of the numerous potential uses of Generative AI tools like ChatGPT and GPT-4; however, those are not specifically designed for the healthcare space.

Hippocratic AI, which officially launched out of stealth on Tuesday, along with a $50 million seed round co-led by General Catalyst and Andreessen Horowitz, has developed what it says is the industry's first safety-focused Large Language Model (LLM) designed specifically for healthcare in order to power those chatbots.

"Hippocratic AI is focused on drastically increasing healthcare access by leveraging Generative AI to solve the massive global healthcare worker shortage; WHO warns of a 15 million shortage in healthcare workers," Hippocratic's co-founder and CEO Munjal Shah told VatorNews.

"We've all spent many years of our careers trying to fix healthcare access. Now, we feel that the technology finally exists to deliver affordable healthcare, especially to underserved communities and geographies."

Founded by a group of physicians, hospital administrators, Medicare professionals, and artificial intelligence researchers from El Camino Health, Johns Hopkins, Washington University in St. Louis, Stanford, UPenn, Google, and Nvidia, Hippocratic AI is building a number of features that make its language model safer and tuned to the healthcare industry's needs.

That includes certifying its system not just on the US Medical Licensing Exam (USMLE), as Shah says most others have done, but on 114 different healthcare certifications. It also built in voice capabilities through which the LLM can detect tone and can "communicate empathy far better than you can via text only."

"Bedside manner is also key. Our model is the only model building in bedside manner. In healthcare, how you say something is just as important as what you say. Many studies have shown that bedside manner and leaving patients with a sense of hope even in the grimmest of circumstances can and does affect healthcare outcomes," Shah explained.

Finally, the company has done Reinforcement Learning with Human Feedback (RLHF) with healthcare professionals; RLHF is what allowed OpenAI to make the big leap from GPT-3 to ChatGPT, though OpenAI did it with average consumers, not medical professionals. Hippocratic's customers are health systems, and the company has been in close collaboration with them during the development phase and is continuing to use healthcare workers to train the model.

"We are conducting our RLHF with medical professionals and even going a step further by only releasing each role, such as dietician, billing agent, genetic counselor, etc., once the people who actually do that role today in real life agree the model is ready. This is central to our safety-first approach," Shah said.

For now, Hippocratic is focusing on non-diagnosis applications, because the team feels that LLMs are not yet safe enough for diagnosis; its system is still patient-facing, however, an approach that reduces risk from LLMs while still benefiting from their ability to lower the cost and increase the access of healthcare in the world.

Some future applications for the technology could include explaining medical bills to patients, explaining health insurance information and processes, providing pre-operation instructions and checklists, providing answers to most frequently asked questions after surgeries or procedures, providing diet and nutrition advice, and scheduling and basic logistical communications, such as directions and instructions, between patients and providers.

"There are thousands of Healthcare LLM use cases that dont require diagnosis, so many more than one might expect. These include explanation of benefits/billing, dieticians, genetic counselors, pre-op questions, and delivering test results that are negative," said Shah.

"The idea is to start with low-risk, non-diagnostic activities. Think about all of the questions patients have that are often answered by nurses or administration personnel and taking them away from human-facing and other important activities."

As noted above, Hippocratic AI focused on testing its model on 114 healthcare certifications and roles; the idea was to not just get a passing score but to outperform existing state-of-the-art language models such as GPT-4 and other commercially available models. The company was able to outperform GPT-4 on 105 of the 114 tests and certifications, outperform by 5% or more on 74 of the certifications, and outperform by 10% or more on 43 of their certifications.

This new funding round is the first money into the company and Hippocratic AI will use it to continue to invest heavily in talent, compute, data and partnerships.

"Our mission is to develop the safest artificial Health General Intelligence (HGI) in order to dramatically improve healthcare accessibility and health outcomes," said Shah.

Read the original:

Hippocratic AI launches With $50M to power healthcare chatbots - VatorNews

Posted in Artificial General Intelligence | Comments Off on Hippocratic AI launches With $50M to power healthcare chatbots – VatorNews

The Potential of AI in Tax Practice Relies on Understanding its … – Thomson Reuters Tax & Accounting

Posted: at 12:55 am

Curiosity, conversation, and investment into artificial intelligence are quickly gaining traction in the tax community, but proper due diligence requires an acknowledgement of what such tools are and aren't yet capable of, as well as an assessment of security and performance risks, according to industry experts.

With the tax world exploring how AI can improve practice and administration, firms, the IRS, and taxpayers alike are in the early stages of considering its potential for streamlining tasks, saving time, and improving access to information. Regardless of one's individual optimism or skepticism about the possible future of AI in the tax space, panelists at an American Bar Association conference in Washington, D.C., this past week suggested that practitioners arm themselves with important fundamentals and key technological differences under the broad-stroke term of AI.

An increasingly popular and publicly available AI tool is ChatGPT. Users can interact with ChatGPT by issuing whatever prompts come to mind, such as telling it to write a script for a screenplay or simply asking a question. As opposed to algorithmic machine learning tools specifically designed with a narrow focus, such as those in development at the IRS to crack down on abusive transactions like conservation easements, ChatGPT is what is called a large language model (LLM).

LLMs, according to PricewaterhouseCoopers Principal Chris Kontaridis, are text-based and use statistical methodologies to create a relationship between your question and patterns of data and text. In other words, the more data an LLM like ChatGPT (which is currently learning from users across the entire internet) absorbs, the better it can attempt to predict and algorithmically interact with a person. Importantly, however, ChatGPT is "not a knowledge model," Kontaridis said. Calling ChatGPT a knowledge model "would insinuate that it is going to give you the correct answer every time you put in a question." Because it is not artificial general intelligence, something akin to a Hollywood portrayal of sentient machines overtaking humanity, users should recognize that ChatGPT is not self-reasoning, he said.

"We're not even close to having real AGI out there," Kontaridis added.

Professor Abdi Aidid of the University of Toronto Faculty of Law and AI research-focused Blue J Legal said at the ABA conference that "the really important thing when you're using a tool like [ChatGPT] is recognizing its limitations." He explained that it is not providing source material for legal or tax advice. "What it's doing, and this is very important, is simply making a probabilistic determination about the next likely word." For instance, Aidid demonstrated that if you ask ChatGPT what your name is, it will give you an answer whether it knows it or not. You can rephrase the same question and ask it again, and it might give you a slightly different answer with different words because it's responding to a different prompt.

At a separate panel, Ken Crutchfield, vice president and general manager of Legal Markets, said he asked ChatGPT who invented the Trapper Keeper binder, knowing in fact his father Bryant Crutchfield is credited with the invention. ChatGPT spit out a random name. In telling the story, Crutchfield said: "I went through, and I continued to ask questions, and I eventually convinced ChatGPT that it was wrong, and it admitted it and it said, 'yes, Bryant Crutchfield did invent the Trapper Keeper.'" Crutchfield said that when someone else tried asking ChatGPT who invented the Trapper Keeper, it gave yet another name. He tried it again himself more recently, and the answer included his father's name, but listed his own alma mater. "So it's getting better and kind of learns through these back-and-forths with people that are interacting."

Aidid explained that these instances are referred to as "hallucinations": that is, when an AI does not know the answer and essentially makes something up on the spot based on the data and patterns it has up to that point. If a user were to ask ChatGPT about the Inflation Reduction Act, it would hallucinate an answer because it currently is limited to knowledge as recent as September 2021. Generative AI like ChatGPT is still more sophisticated than more base-level tools that work off of decision trees, such as the one a taxpayer interacts with in the IRS Tax Assistant Tool, Aidid said. The Tax Assistant Tool, Aidid said, is not generative AI.

Mindy Herzfeld, professor at the University of Florida Levin College of Law, responded that it is especially problematic because the [Tax Assistant Tool] is implying that it has all that information and it's generating responses based on the world of information, but it's really not doing that, so it's misleading.
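For contrast, a decision-tree-style assistant of the kind being discussed is just a fixed set of branches written out in advance. The questions and the dollar threshold below are invented for illustration and are not tax guidance.

    def tax_assistant(filing_status, income):
        # Every path is hand-written; nothing outside these branches can be answered.
        if filing_status == "single":
            if income < 13850:              # hypothetical threshold, not tax advice
                return "You may not need to file."
            return "You likely need to file."
        if filing_status == "married_joint":
            return "Check the joint-filing rules."
        return "Sorry, I can only answer the questions I was built for."

    print(tax_assistant("single", 9000))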

The most potential for the application of generative AI is with so-called deep learning tools, which are supposedly more advanced and complex iterations of machine learning platforms. Aidid said deep learning can work with unstructured data. Such technology can not only synthesize and review information, but review new information for us. "It's starting to take all that and generate things, not simple predictions, but actually generate things that are in the style and mode of human communication, and that's where we're seeing significant investment today."

Herzfeld said that machine learning is already being used in tax on a daily basis, but that it is a little harder to see where deep learning fits in tax law. These more advanced tools will likely be developed in-house at firms, in partnership with AI researchers.

PwC is working with Blue J in pursuit of tax-oriented deep learning generative AI to help reduce much of the clerical work that is all too time-consuming in tax practice, according to Kontaridis. Freeing up staff to focus efforts on other things while AI sifts through mountains of data is a boon, he said.

However, as the saying goes, with great power comes great responsibility. Here, that means guarding sensitive information and ensuring accuracy. Kontaridis said that it's "really important to make sure before you deploy something like this to your staff or use it yourself that you're doing it in a safe environment where you are protecting the confidentiality of your personal IP and privilege that you have with your clients."

Herzfeld echoed that practitioners should bear in mind how easily misinformation could be perpetuated through an overreliance on, or lack of oversight of, AI, which she called a very broadly societal risk. Kontaridis assured the audience that he is not worried about generative AI replacing the role of the tax professional: "this is a tool that will help us do our work better."

Referring to the myth that CPA bots will take over the industry, he said: "What I'm worried about is the impact it has on our profession at the university level of it, discouraging bright young minds from pursuing careers in tax and accounting consulting."


See original here:

The Potential of AI in Tax Practice Relies on Understanding its ... - Thomson Reuters Tax & Accounting

Posted in Artificial General Intelligence | Comments Off on The Potential of AI in Tax Practice Relies on Understanding its … – Thomson Reuters Tax & Accounting

How AI Knows Things No One Told It – Scientific American

Posted: at 12:55 am

No one yet knows how ChatGPT and its artificial intelligence cousins will transform the world, and one reason is that no one really knows what goes on inside them. Some of these systems' abilities go far beyond what they were trained to do, and even their inventors are baffled as to why. A growing number of tests suggest these AI systems develop internal models of the real world, much as our own brain does, though the machines' technique is different.

"Everything we want to do with them in order to make them better or safer or anything like that seems to me like a ridiculous thing to ask ourselves to do if we don't understand how they work," says Ellie Pavlick of Brown University, one of the researchers working to fill that explanatory void.

At one level, she and her colleagues understand GPT (short for generative pretrained transformer) and other large language models, or LLMs, perfectly well. The models rely on a machine-learning system called a neural network. Such networks have a structure modeled loosely after the connected neurons of the human brain. The code for these programs is relatively simple and fills just a few screens. It sets up an autocorrection algorithm, which chooses the most likely word to complete a passage based on laborious statistical analysis of hundreds of gigabytes of Internet text. Additional training ensures the system will present its results in the form of dialogue. In this sense, all it does is regurgitate what it learned; it is a "stochastic parrot," in the words of Emily Bender, a linguist at the University of Washington. But LLMs have also managed to ace the bar exam, explain the Higgs boson in iambic pentameter, and make an attempt to break up their user's marriage. Few had expected a fairly straightforward autocorrection algorithm to acquire such broad abilities.
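The core prediction task can be sketched with a toy word-count model. Real LLMs use neural networks trained on vastly more text, but the "pick the most likely next word" step has the same shape.

    from collections import Counter, defaultdict

    corpus = "the cat sat on the mat . the cat ate the fish .".split()
    following = defaultdict(Counter)
    for a, b in zip(corpus, corpus[1:]):
        following[a][b] += 1                 # count which word follows which

    word, completion = "the", ["the"]
    for _ in range(4):
        word = following[word].most_common(1)[0][0]   # greedily take the likeliest next word
        completion.append(word)
    print(" ".join(completion))              # prints "the cat sat on the"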

That GPT and other AI systems perform tasks they were not trained to do, giving them "emergent abilities," has surprised even researchers who have been generally skeptical about the hype over LLMs. "I don't know how they're doing it or if they could do it more generally the way humans do, but they've challenged my views," says Melanie Mitchell, an AI researcher at the Santa Fe Institute.

"It is certainly much more than a stochastic parrot, and it certainly builds some representation of the world, although I do not think that it is quite like how humans build an internal world model," says Yoshua Bengio, an AI researcher at the University of Montreal.

At a conference at New York University in March, philosopher Raphaël Millière of Columbia University offered yet another jaw-dropping example of what LLMs can do. The models had already demonstrated the ability to write computer code, which is impressive but not too surprising because there is so much code out there on the Internet to mimic. Millière went a step further and showed that GPT can execute code, too, however. The philosopher typed in a program to calculate the 83rd number in the Fibonacci sequence. "It's multistep reasoning of a very high degree," he says. And the bot nailed it. When Millière asked directly for the 83rd Fibonacci number, however, GPT got it wrong: this suggests the system wasn't just parroting the Internet. Rather it was performing its own calculations to reach the correct answer.
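For reference, the kind of multistep calculation at issue looks like this in ordinary code (indexing conventions for the sequence vary; this sketch uses F(1) = F(2) = 1):

    def fibonacci(n):
        a, b = 0, 1
        for _ in range(n - 1):
            a, b = b, a + b                  # each step depends on the previous two values
        return b

    print(fibonacci(83))                     # 99194853094755497 under this convention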

Although an LLM runs on a computer, it is not itself a computer. It lacks essential computational elements, such as working memory. In a tacit acknowledgement that GPT on its own should not be able to run code, its inventor, the tech company OpenAI, has since introduced a specialized plug-in (a tool ChatGPT can use when answering a query) that allows it to do so. But that plug-in was not used in Millière's demonstration. Instead he hypothesizes that the machine improvised a memory by harnessing its mechanisms for interpreting words according to their context, a situation similar to how nature repurposes existing capacities for new functions.

This impromptu ability demonstrates that LLMs develop an internal complexity that goes well beyond a shallow statistical analysis. Researchers are finding that these systems seem to achieve genuine understanding of what they have learned. In one study presented last week at the International Conference on Learning Representations (ICLR), doctoral student Kenneth Li of Harvard University and his AI researcher colleagues (Aspen K. Hopkins of the Massachusetts Institute of Technology, David Bau of Northeastern University, and Fernanda Viégas, Hanspeter Pfister and Martin Wattenberg, all at Harvard) spun up their own smaller copy of the GPT neural network so they could study its inner workings. They trained it on millions of matches of the board game Othello by feeding in long sequences of moves in text form. Their model became a nearly perfect player.

To study how the neural network encoded information, they adopted a technique that Bengio and Guillaume Alain, also at the University of Montreal, devised in 2016. They created a miniature "probe" network to analyze the main network layer by layer. Li compares this approach to neuroscience methods. "This is similar to when we put an electrical probe into the human brain," he says. In the case of the AI, the probe showed that its neural activity matched the representation of an Othello game board, albeit in a convoluted form. To confirm this, the researchers ran the probe in reverse to implant information into the network, for instance, flipping one of the game's black marker pieces to a white one. "Basically, we hack into the brain of these language models," Li says. The network adjusted its moves accordingly. The researchers concluded that it was playing Othello roughly like a human: by keeping a game board in its "mind's eye" and using this model to evaluate moves. Li says he thinks the system learns this skill because it is the most parsimonious description of its training data. "If you are given a whole lot of game scripts, trying to figure out the rule behind it is the best way to compress," he adds.
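The probing idea in general terms (this is an illustrative sketch, not the researchers' code): train a small classifier to read some property out of a layer's activations, and if it predicts that property well on held-out examples, the network encodes it. Here the "activations" are synthetic, with the property deliberately hidden in one direction.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n, d = 500, 32
    square_is_black = rng.integers(0, 2, size=n)        # property we hope the network encodes
    activations = rng.normal(size=(n, d))                # stand-in for one layer's activations
    activations[:, 3] += 2.0 * square_is_black           # the property leaks into unit 3

    probe = LogisticRegression().fit(activations[:400], square_is_black[:400])
    print("probe accuracy:", probe.score(activations[400:], square_is_black[400:]))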

This ability to infer the structure of the outside world is not limited to simple game-playing moves; it also shows up in dialogue. Belinda Li (no relation to Kenneth Li), Maxwell Nye and Jacob Andreas, all at M.I.T., studied networks that played a text-based adventure game. They fed in sentences such as "The key is in the treasure chest," followed by "You take the key." Using a probe, they found that the networks encoded within themselves variables corresponding to "chest" and "you," each with the property of possessing a key or not, and updated these variables sentence by sentence. The system had no independent way of knowing what a box or key is, yet it picked up the concepts it needed for this task. "There is some representation of the state hidden inside of the model," Belinda Li says.

Researchers marvel at how much LLMs are able to learn from text. For example, Pavlick and her then Ph.D. student Roma Patel found that these networks absorb color descriptions from Internet text and construct internal representations of color. When they see the word "red," they process it not just as an abstract symbol but as a concept that has a certain relationship to maroon, crimson, fuchsia, rust, and so on. Demonstrating this was somewhat tricky. Instead of inserting a probe into a network, the researchers studied its response to a series of text prompts. To check whether it was merely echoing color relationships from online references, they tried misdirecting the system by telling it that red is in fact green, like the old philosophical thought experiment in which one person's red is another person's green. Rather than parroting back an incorrect answer, the system's color evaluations changed appropriately in order to maintain the correct relations.

Picking up on the idea that in order to perform its autocorrection function, the system seeks the underlying logic of its training data, machine learning researcher Sébastien Bubeck of Microsoft Research suggests that the wider the range of the data, the more general the rules the system will discover. "Maybe we're seeing such a huge jump because we have reached a diversity of data, which is large enough that the only underlying principle to all of it is that intelligent beings produced them," he says. "And so the only way to explain all of this data is [for the model] to become intelligent."

In addition to extracting the underlying meaning of language, LLMs are able to learn on the fly. In the AI field, the term "learning" is usually reserved for the computationally intensive process in which developers expose the neural network to gigabytes of data and tweak its internal connections. By the time you type a query into ChatGPT, the network should be fixed; unlike humans, it should not continue to learn. So it came as a surprise that LLMs do, in fact, learn from their users' prompts, an ability known as in-context learning. "It's a different sort of learning that wasn't really understood to exist before," says Ben Goertzel, founder of the AI company SingularityNET.

One example of how an LLM learns comes from the way humans interact with chatbots such as ChatGPT. You can give the system examples of how you want it to respond, and it will obey. Its outputs are determined by the last several thousand words it has seen. What it does, given those words, is prescribed by its fixed internal connections, but the word sequence nonetheless offers some adaptability. Entire websites are devoted to "jailbreak" prompts that overcome the system's guardrails (restrictions that stop the system from telling users how to make a pipe bomb, for example), typically by directing the model to pretend to be a system without guardrails. Some people use jailbreaking for sketchy purposes, yet others deploy it to elicit more creative answers. "It will answer scientific questions, I would say, better than if you just ask it directly, without the special jailbreak prompt," says William Hahn, co-director of the Machine Perception and Cognitive Robotics Laboratory at Florida Atlantic University. "It's better at scholarship."

Another type of in-context learning happens via "chain of thought" prompting, which means asking the network to spell out each step of its reasoning, a tactic that makes it do better at logic or arithmetic problems requiring multiple steps. (But one thing that made Millière's example so surprising is that the network found the Fibonacci number without any such coaching.)
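A typical chain-of-thought prompt looks something like the following; the wording and the arithmetic example are illustrative, not taken from the article. Showing one worked example nudges the model to lay out its own intermediate steps for the next question.

    prompt = (
        "Q: A pack has 12 pencils and costs $3. How much do 30 pencils cost?\n"
        "A: Let's think step by step.\n"
        "Step 1: 30 pencils is 30 / 12 = 2.5 packs.\n"
        "Step 2: 2.5 packs x $3 = $7.50.\n"
        "So the answer is $7.50.\n\n"
        "Q: A train travels 60 miles in 1.5 hours. How far does it go in 4 hours?\n"
        "A: Let's think step by step.\n"
    )
    print(prompt)   # the model is expected to continue with its own steps, then the answer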

In 2022 a team at Google Research and the Swiss Federal Institute of Technology in Zurich (Johannes von Oswald, Eyvind Niklasson, Ettore Randazzo, João Sacramento, Alexander Mordvintsev, Andrey Zhmoginov and Max Vladymyrov) showed that in-context learning follows the same basic computational procedure as standard learning, known as gradient descent. This procedure was not programmed; the system discovered it without help. "It would need to be a learned skill," says Blaise Agüera y Arcas, a vice president at Google Research. In fact, he thinks LLMs may have other latent abilities that no one has discovered yet. "Every time we test for a new ability that we can quantify, we find it," he says.
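For reference, gradient descent in its simplest form just nudges a parameter repeatedly in the direction that reduces the error; the finding above is that transformer layers end up carrying out an update of this general shape on the examples supplied in the prompt. The tiny line fit below is only a reminder of the procedure itself, not a model of the study.

    def gradient_descent(xs, ys, steps=200, lr=0.01):
        w = 0.0                                       # fit y ≈ w * x
        for _ in range(steps):
            grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
            w -= lr * grad                            # step downhill on the squared error
        return w

    print(gradient_descent([1, 2, 3, 4], [2.1, 3.9, 6.2, 7.8]))   # converges near 2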

Although LLMs have enough blind spots not to qualify as artificial general intelligence, or AGI (the term for a machine that attains the resourcefulness of animal brains), these emergent abilities suggest to some researchers that tech companies are closer to AGI than even optimists had guessed. "They're indirect evidence that we are probably not that far off from AGI," Goertzel said in March at a conference on deep learning at Florida Atlantic University. OpenAI's plug-ins have given ChatGPT a modular architecture a little like that of the human brain. "Combining GPT-4 [the latest version of the LLM that powers ChatGPT] with various plug-ins might be a route toward a humanlike specialization of function," says M.I.T. researcher Anna Ivanova.

At the same time, though, researchers worry the window may be closing on their ability to study these systems. OpenAI has not divulged the details of how it designed and trained GPT-4, in part because it is locked in competition with Google and other companies, not to mention other countries. "Probably there's going to be less open research from industry, and things are going to be more siloed and organized around building products," says Dan Roberts, a theoretical physicist at M.I.T., who applies the techniques of his profession to understanding AI.

And this lack of transparency does not just harm researchers; it also hinders efforts to understand the social impacts of the rush to adopt AI technology. "Transparency about these models is the most important thing to ensure safety," Mitchell says.

Go here to read the rest:

How AI Knows Things No One Told It - Scientific American

Posted in Artificial General Intelligence | Comments Off on How AI Knows Things No One Told It – Scientific American
