SenseAuto Empowers Nearly 30 Mass-produced Models Exhibited at Auto Shanghai 2023 and Unveils Six Intelligent Cabin Products – Yahoo Finance

SHANGHAI, April 20, 2023 /PRNewswire/ -- The Shanghai International Automobile Industry Exhibition ("Auto Shanghai 2023"), themed "Embracing the New Era of the Automotive Industry," has been held with a focus on the innovative changes in the automotive industry brought about by technology. SenseAuto, the Intelligent Vehicle Platform of SenseTime, made its third appearance at the exhibition with the three-in-one product suite of intelligent cabin, intelligent driving, and collaborative cloud, showcasing its full-stack intelligent driving solution and six new intelligent cabin products designed to create the future cabin experience with advanced perception capabilities. Additionally, nearly 30 models produced in collaboration with SenseAuto were unveiled at the exhibition, further emphasizing its industry-leading position.

SenseAuto made its third appearance at Auto Shanghai

At the Key Tech 2023 forum, Prof. Wang Xiaogang, Co-founder, Chief Scientist and President of Intelligent Automobile Group, SenseTime, delivered a keynote speech emphasizing that smart autos provide ideal scenarios for AGI (Artificial General Intelligence) to facilitate closed-loop interactions between intelligent driving and passenger experiences in the "third living space", which presents endless possibilities.

SenseAuto empowers nearly 30 mass-produced models showcased at Auto Shanghai 2023In 2022, SenseAuto Cabin and SenseAuto Pilot products were adapted and delivered to 27 vehicle models with more than 8 million new pipelines. These products now cover more than 80 car models from over 30 automotive companies, confirming SenseAuto's continued leadership in the industry.

In the field of intelligent driving, SenseAuto has established mass-production partnerships with leading automakers in China, such as GAC and Neta. At the exhibition, SenseAuto showcased the GAC AION LX Plus, which leverages SenseAuto's stable surround BEV (Bird 's-Eye-View) perception and powerful general target perception capabilities to create a comprehensive intelligent Navigated Driving Assist (NDA) that is capable of completing various challenging perception tasks. The Neta S, another exhibited model at the show, is also equipped with SenseAuto's full-stack intelligent driving solution which provides consumers with a reliable and efficient assisted driving experience in highway scenarios.

Story continues

In the field of intelligent cabin, SenseAuto is committed to developing the automotive industry's most influential AI empowered platform with the aim of providing extremely safe, interactive, and personalized experiences for users. The NIO ES7 model exhibited supports functions such as driver fatigue alerts, Face ID, and child presence detection. SenseAuto's cutting-edge visual AI technology has boosted the accuracy of driver attention detection by 53% in long-tail scenarios, and by 47% in complex scenarios involving users with narrow-set eyes, closed eyes, and backlighting.

The highly anticipated ZEEKR X model showcased features from SenseAuto's groundbreaking intelligent B-pillar interactive system, a first-of-its-kind innovation that allows for contactless unlocking and entry. Other models on display that boast SenseAuto's cutting-edge DMS (Driver Monitoring System) and OMS (Occupant Monitoring System) technologies include Dongfeng Mengshi 917, GAC's Trumpchi E9, Emkoo, as well as the M8 Master models. Moreover, HiPhi has collaborated with SenseAuto on multiple Smart Cabin features and Changan Yida is equipped with SenseAuto's health management product, which can detect various health indicators of passengers in just 30 seconds, elevating travel safety to new heights.

Six Innovative smart cabin features for an intelligent "third living space"SenseAuto is at the forefront of intelligent cabin innovations, with multi-model interaction that integrates vision, speech, and natural language understanding. SenseTime's newly launched "SenseNova" foundation model set, which introduces avariety of foundation models and capabilities in natural language processing and content generation, such as digital human, opens up numerous possibilities for the smart cabin as a "third living space".

SenseAuto presented a futuristic demo cabin at Auto Shanghai 2023, featuring an AI virtual assistant that welcomes guests and directs them to their seats. In addition, SenseTime's latest large-scale language model (LLM), "SenseChat", interacted with guests and provided personalized content recommendations. The "SenseMirage" text-to-image creation platform has also been integrated with the exhibition cabin for the first time. With the help of SenseTime's AIGC (AI-Generated Content) capabilities, guests can enjoy a fun-filled travel experience with various styles of photos generated for them.

At the exhibition, SenseAuto unveiled six industry-first features including Lip-Reading, Guard Mode, Intelligent Rescue, Air Touch, AR Karaoke and Intelligent Screensaver. With six years of industry experience, SenseAuto has accumulated to date a portfolio of 29 features, of which, over 10 are industry-firsts.

SenseNova accelerates mass-production of smart drivingSenseAuto is revolutionizing the autonomous driving industry with its full-stack intelligent driving solution, which integrates driving and parking. The innovative SenseAuto Pilot Entry is cost-effective solution that uses parking cameras for driving functions. SenseAuto's parking feature supports cross-layer parking lot routing, trajectory tracking, intelligent avoidance, and target parking functions to fulfill multiple parking needs in multi-level parking lots.

SenseNova has enabled SenseAuto to achieve the first domestic mass production of BEV perception and pioneer the automatic driving GOP perception system. SenseAuto is proactively driving innovation in the R&D ofautonomous driving technology, leveraging SenseTime's large model system. Its self-developed UniAD has become the industry's first perception and decision intelligence integrated end-to-end autonomous driving solution. The large model is also used for automated data annotation and product testing, which has increased the model iteration efficiency by hundreds of times.

SenseAuto's success is evident in its partnerships with over 30 automotive manufacturers and more than 50 ecosystem partners worldwide.With plans to bring its technology to over 31 million vehicles in the next few years, SenseAuto is leading the way in intelligent vehicle innovation. Leveraging the capabilities of SenseNova, SenseAuto is poised to continue riding the wave of AGI and enhancing its R&D efficiency and commercialization process towards a new era of human-vehicle collaborative driving.

About SenseTime: https://www.sensetime.com/en/about-index#1

About SenseAuto: https://www.sensetime.com/en/product-business?categoryId=1095&gioNav=1

(PRNewsfoto/SenseTime)

Cision

View original content to download multimedia:https://www.prnewswire.com/apac/news-releases/senseauto-empowers-nearly-30-mass-produced-models-exhibited-at-auto-shanghai-2023-and-unveils-six-intelligent-cabin-products-301801980.html

SOURCE SenseTime

Continued here:

SenseAuto Empowers Nearly 30 Mass-produced Models Exhibited at Auto Shanghai 2023 and Unveils Six Intelligent Cabin Products - Yahoo Finance

Tim Sweeney, CD Projekt, and Other Experts React to AI’s Rise, and … – IGN

This feature is part of AI Week. For more stories, including How AI Could Doom Animation and comments from experts like Tim Sweeney, check out our hub.

All anyone wants to talk about in the games industry is AI. The technology once a twinkle in the eye of sci-fi writers and futurists has shot off like a bottle rocket. Every day we're greeted with fascinating and perturbing new advances in machine learning. Right now, you can converse with your computer on ChatGPT, sock-puppet a celebrity's voice with ElevenLabs, and generate a slate of concept art with MidJourney.

It is perhaps only a matter of time before AI starts making significant headway in the business of game development, so to kick off AI week at IGN, we talked to a range of experts in the field about their hopes and fears for this brave new world, and some are more skeptical than you'd expect.

AI Week Roundtable: Meet the Games Industry Experts

Pawel Sasko, CD Projekt Red Lead Quest Designer: I really believe that AI, and AI tools, are going to be just the same as when Photoshop was invented. You can see it throughout the history of animation. From drawing by hand to drawing on a computer, people had to adapt and use the tools, and I think AI is going to be exactly that. It's just going to be another tool that we'll use for productivity and game development.

Tim Sweeney, Epic Games CEO: I think there's a long sorting out process to figure out how all that works and it's going to be complicated. These AI technologies are incredibly effective when applied to some really bulk forms of data where you can download billions of samples from existing project and train on them, but that works for text and it works for graphics and maybe it will work for 3D objects as well, but it's not going to work for higher level constructs like games or the whole of the video game. There's just no training function that people know that can drive a game like that. I think we're going to see some really incredible advances and actual progress mixed in with the hype cycle where a lot of crazy stuff is promised. Nobody's going to be able to deliver.

Michael Spranger, COO of Sony AI: I think AI is going to revolutionize the largeness of gaming worlds; how real they feel, and how you interact with them. But I also think it's going to have a huge impact on production cycles. Especially in this era of live-services. We'll produce a lot more content than we did in the past.

Julian Togelius, Associate Professor of Computer Science at New York University, and co-author of the textbook Artificial Intelligence and Games: Long-term, we're going to see every part of game development co-created AI Designers will collaborate with AI on everything from prototyping, to concept art, to mechanics, balancing, and so on. Further on, we might see games that are actually designed to use AI during its runtime.

Pawel Sasko: There's actually many companies doing internal R&D of a specific implementation of not MidJourney especially, but literally just art tools like this, so that when you're in early concept phases, you're able to generate as many ideas as you can and just basically pick whatever works actually for you and then give it to an artist who actually developed that direction. I think it's a pretty intriguing direction because it opens up the doors that you wouldn't think of. And again, as an artist, we are just always limited by our skills that come up from all the life experiences and everything we have consumed artistically, culturally before. And AI doesn't have this limitation in a way. We can feed it so many different things, therefore it can actually propose so many different things that we wouldn't think of. So I think as a starting point or maybe just as a brainstorming tool, this could be interesting.

Michael Spranger: I think of AI as a creativity unlocking tool. There are so many more things you can do if you have the right tools. We see a rapid deployment of impact of this technology in content creation possibilities from 3D, to sound, to musical experiences, to what you're interacting with in a world. All of that is going to get much better.

Julian Togelius: Everybody looks at the image generation and text generation and say, 'Hey, we can just pop that into games.' And, of course, we see like proliferation of unserious, sometimes venture capital found that actors coming in and claiming that they're going to do all of your Game Arts with MidJourney these people usually don't know anything about game development. There's a lot of that going around. So I like to say that generating just an image is kind of the easy part. Every other part of game content, including the art, has so many functional aspects. Your character model must work with animations, your level must be completable. That's the had part.

Tim Sweeney: It's not synthesizing amazing new stuff, it's really just rewriting data that already exists. So, either you ask it to write a sorting algorithm in Python and it does that, but it's really just copying the structure of somebody else's code that it trained on. You tell it to solve a problem that nobody's solved before or the data it hasn't seen before and it doesn't have the slightest idea what to do about it. We have nothing like artificial general intelligence. The generated art characters have six or seven fingers, they just don't know that people have five fingers. They don't know what fingers are and they don't know how to count. They don't really know anything other than how to reassemble pixels in a statistically common way. And so, I think we're a very long way away from that, providing the kind of utility a real artist provides.

Sarah Bond, Xbox Head of Global Gaming Partnership and Development: We're in the early days of it. Obviously we're in the midst of huge breakthroughs. But you can see how it's going to greatly enhance discoverability that is actually customized to what you really care about. You can actually have things served up to you that are very, very AI driven. "Oh my gosh, I loved Tunic. What should I do next?

Tim Sweeney: I'm not sure yet. It's funny, we're pushing the state of the art in a bunch of different areas, but [Epic] is really not touching generative AI. We're amazed at what our own artists are doing in their hobby projects, but all these AI tools, data use is under the shadow, which makes the tools unusable by companies with lawyers essentially because we don't know what authorship claims might exist on the data.

Julian Togelius: I don't think it will affect anyone more than any other technology that forces people to learn new tools. You have to keep learning new tools or otherwise you'll become irrelevant. People will become more productive, and generate faster iterations. Someone will say, "Hey, this is a really interesting creature you've created, now give me 10,000 of those that differ slightly." People will master the tools. I don't think they will put anyone out of a job as long as you keep rolling with the punches.

Pawel Sasko: I think that the legal sphere is going to catch up with AI generation eventually, with what to do in these situations to regulate them. I know a lot of voice actors are worried about the technology, because the voice is also a distinct element of a given actor, not only the appearance and the way of acting. Legal is always behind us.

Michael Spranger: The relationship with creative people is really important to us. I don't think that relationship will change. When I go watch a Stanley Kubrick movie, I'm there to enjoy his creative vision. For us, it's important to make sure that those people can preserve and execute those creative visions, and that AI technology is a tool that can help make that happen.

Julian Togelius: Definitely. If you have a team that has deep expertise in every field, you're at an advantage. But I think we're gonna get to the point where, like, you only need to know a few fields to make a game, and have the AI tools be non-human standings for other fields of expertise. If you're a two-person team and you don't have an animator, you can ask the AI to do the animation for you. The studio can make a nice looking game even though they don't have all the resources. That's something I'm super optimistic about.

Tim Sweeney: I think the more common case, which we're seeing really widely used in the game industry is an artist does a lot of work to build an awesome asset, but then the procedural systems and the animation tools and the data scanning systems just blow it up to an incredible level.

Michael Spranger: Computer science in general has a very democratizing effect. That is the history of the field. I think these tools might inspire more people to express their creativity. This is really about empowering people. We're going to create much more content that's unlocked with AI, and I think it will have a role to play in both larger and smaller studios.

Michael Spranger: I think what makes this different is that the proof is in the pudding. Look at what Kazunori Yamuchi said about GT Sophy, [the AI-powered driver recently introduced to Gran Turismo 7]: there was a 25-year period where they built the AI in Gran Turismo in a specific way, and Yamuchi is basically saying that this is a new chapter. That makes a difference for me. When people are saying, "I haven't had this experience before with a game. This is qualitatively different." It's here now, you can experience it now.

Kajetan Kasprowicz, CD Projekt Red Cinematic Designer: Someone at GDC once gave a talk that basically said, "Who will want to play games that were made by AI?" People will want experiences created by human beings. The technology is advancing very fast and we kind of don't know what to do with it. But I think there will be a consensus on what we want to do as societies.

Julian Togelius: AI has actual use-cases, and it works, whereas all of the crypto shit was ridiculous grifting by shameless people. I hate that people associate AI with that trend. On the other hand you have something like VR, which is interesting technology that may, or may not, be ready for the mass market someday. Compare that to AI, which has hundreds of use-cases in games and game development.

Luke Winkie is a freelance writer at IGN.

See original here:

Tim Sweeney, CD Projekt, and Other Experts React to AI's Rise, and ... - IGN

GCHQ chiefs warning to ministers over risks of AI – The Independent

GCHQ chief Sir Jeremy Fleming has warned ministers about the risks posed by artificial intelligence (AI), amid growing debates about how to regulate the rapidly developing technology.

Downing Street gave little detail about what specific risks the GCHQ boss warned of but said the update was a clear-eyed look at the potential for things like disinformation and the importance of people being aware of that.

Prime minister Rishi Sunak used the same Cabinet meeting on Tuesday to stress the importance of AI to UK national security and the economy, No 10 said.

A readout of the meeting said ministers agreed on the transformative potential of AI and the vital importance of retaining public confidence in its use and the need for regulation that keeps people safe without preventing innovation.

The prime minister concluded Cabinet by saying that given the importance of AI to our economy and national security, this could be one of the most important policies we pursue in the next few years which is why we must get this right, the readout added.

Asked if the potential for an existential threat to humanity from AI had been considered, the PMs official spokesperson said: We are well aware of the potential risks posed by artificial general intelligence.

The spokesperson said Michelle Donelans science ministry was leading on that issue, but the governments policy was to have appropriate, flexible regulation which can move swiftly to deal with what is a changing technology.

As the public would expect, we are looking to both make the most of the opportunities but also to guard against the potential risk, the spokesperson added.

The government used the recent refresh of the integrated review to launch a new government-industry AI-focused task force on the issue, modelled on the vaccines task force used during the Covid pandemic.

Italy last month said it would temporarily block the artificial intelligence software ChatGPT amid global debate about the power of such new tools.

The AI systems powering such chatbots, known as large language models, are able to mimic human writing styles based on the huge trove of digital books and online writings they have ingested.

Mr Sunak, who created a new Department for Science, Innovation & Technology in a Whitehall reshuffle earlier this year, is known to be enthusiastic about making the UK a science superpower.

See the original post here:

GCHQ chiefs warning to ministers over risks of AI - The Independent

How would a Victorian author write about generative AI? – Verdict

The Victorian era was one transformed by the industrial revolution. The telegraph, telephone, electricity, and steam engine are key examples of life-changing technologies and machinery.

It is not surprising, therefore, that this real-life innovation sparked the imagination of writers like Robert Stevenson, Jules Verne, and H.G Wells.

These authors imagined time machines, space rockets, and telecommunication. Even Mark Twain wrote about mind-travelling, imagining a technology similar to the modern-day internet in 1898. Motifs such as utopias and dystopias became popular in literature as academics debated the scientific, cultural, and physiological impact of technology.

Robert Stevensons Strange Case of Dr Jekyll and Mr Hyde is another classic example. It explores the dangers of unchecked ambition in scientific experimentation through the evil, murderous alter ego Mr. Hyde. Mary Shellys Frankenstein unleashes a monster, a living being forged out of non-living materials. These stories spoke to the fear among the Victorian pious society that playing God would have deadly consequences.

In an FT op-ed, AI expert and angel investor Ian Hogarth refers to artificial general intelligence (AGI) as God-like AI for its predicted ability to generate new scientific knowledge independently and perform all human tasks. The article displayed both excitement and trepidation at the technologys potential.

According to GlobalData, there have been over 5,500 news items relating to AI in the past six months. Opinion ranges from unbridled optimism that AI will revolutionize the world, to theories of an apocalyptic future where machines will rise to render humanity obsolete.

In April 2023, The Future of Life Institute wrote an open letter calling for a six-month pause on developing AI systems that can compete with human-level intelligence, co-signed by tech leaders such as Elon Musk and Steve Wozniak. The letter posed the question Should we risk the loss of control of our civilization? as AI becomes more powerful. Over 3,000 people have signed it.

These arguments are the same as the talking points of Victorian sceptics on technological advancements. Philosopher and economist John Stuart Mill discussed in an essay entitled Civilization in which he wrote about the uncorrected influences of technological development on society, specifically the printing press, which he predicted would dilute the voice of intellectuals by making publishing accessible to the masses and commercialize the spread of knowledge. He called for national institutions to mitigate this impact.

Both were concerned with how technology could disrupt social norms and the labour market and wreak havoc on society as we know it. Both called for government oversight and regulation during a time of intense scientific progress.

In the 1800s, the desire to push boundaries won out over concerns, breeding a new class of innovators and entrepreneurs. Without this innovative spirit, Alexander Graham Bell would not have invented the telephone in 1876, and Joseph Swan would not have invented the lightbulb in 1878. They were the forerunners to the Bill Gates and Jeff Bezos of this world.

While technology advances at a rapid pace, human behaviour remains consistent. In other words, advances in technology will always divide opinions between those who view it as a new frontier to explore and those who consider it to be Frankensteins monster. We can heed the warnings when it comes to unregulated technological developments and still appreciate the opportunities ingenuity brings. This is especially pertinent when it comes to artificial intelligence.

See the original post:

How would a Victorian author write about generative AI? - Verdict

Is the current regulatory system equipped to deal with AI? – The Hindu

The growth of Artificial Intelligence (AI) technologies and their deployment has raised questions about privacy, monopolisation and job losses. In a discussion moderated by Prashanth Perumal J., Ajay Shah and Apar Gupta discuss concerns about the economic and privacy implications of AI as countries try to design regulations to prevent the possible misuse of AI by individuals and governments. Edited excerpts:

Should we fear AI? Is AI any different from other disruptive technologies?

Ajay Shah: Technological change improves aggregate productivity, and the output of society goes up as a result. People today are vastly better off than they were because of technology, whether it is of 200 years ago or 5,000 years ago. There is nothing special or different this time around with AI. This is just another round of machines being used to increase productivity.

Apar Gupta: I broadly echo Ajays views. And alongside that, I would say that in our popular culture, quite often we have people who think about AI as a killer robot that is, in terms of AI becoming autonomous. However, I think the primary risks which are emerging from AI happen to be the same risks which we have seen with other digital technologies, such as how political systems integrate those technologies. We must not forget that some AI-based systems are already operational and have been used for some time. For instance, AI is used today in facial recognition in airports in India and also by law-enforcement agencies. There needs to be a greater level of critical thought, study and understanding of the social and economic impact of any new technology.

Ajay Shah: If I may broaden this discussion slightly, theres a useful phrase called AGI, which stands for artificial general intelligence, which people are using to emphasise the uniqueness and capability of the human mind. The human mind has general intelligence. You could show me a problem that I have never seen before, and I would be able to think about it from scratch and be able to try to solve it, which is not something these machines know how to do. So, I feel theres a lot of loose talk around AI. ChatGPT is just one big glorified database of everything that has been written on the Internet. And it should not be mistaken for the genuine human capability to think, to invent, to have a consciousness, and to wake up with the urge to do something. I think the word AI is a bit of a marketing hype.

Do you think the current regulatory system is equipped enough to deal with the privacy and competition threats arising from AI?

Ajay Shah: One important question in the field of technology policy in India is about checks and balances. What kind of data should the government have about us? What kind of surveillance powers should the government have over us? What are the new kinds of harm that come about when governments use technologies in a certain way? There is also one big concern about the use of modern computer technology and the legibility of our lives the way our lives are laid bare to the government.

Apar Gupta: Beyond the policy conversation, I think we also need laws for the deployment of AI-based systems to comply with Supreme Court requirements under the right to privacy judgment for specific use-cases such as facial recognition. A lot of police departments and a lot of State governments are using this technology and it comes with error rates that have very different manifestations. This may result in exclusion, harassment, etc., so there needs to be a level of restraint. We should start paying greater attention to the conversations happening in Europe around AI and the risk assessment approach (adopted by regulators in Europe and other foreign countries) as it may serve as an influential model for us.

Ajay Shah: Coming to competition, I am not that worried about the presence or absence of competition in this field. Because on a global scale, it appears that there are many players. Already we can see OpenAI and Microsoft collaborating on one line of attack; we can also see Facebook, which is now called Meta, building in this space; and of course, we have the giant and potentially the best in the game, Google. And there are at least five or 10 others. This is a nice reminder of the extent to which technical dynamism generates checks and balances of its own. For example, we have seen how ChatGPT has raised a new level of competitive dynamics around Google Search. One year ago, we would have said that the world has a problem because Google is the dominant vendor among search engines. And that was true for some time. Today, suddenly, it seems that this game is wide open all over again; it suddenly looks like the global market for search is more competitive than it used to be. And when it comes to the competition between Microsoft and Google on search, we in India are spectators. I dont see a whole lot of value that can be added in India, so I dont get excited about appropriating extraterritorial jurisdiction. When it comes to issues such as what the Indian police do with face recognition, nobody else is going to solve it for us. We should always remember India is a poor country where regulatory and state capacity is very limited. So, the work that is done here will generally be of low quality.

Apar Gupta: The tech landscape is dominated by Big Tech, and its because they have a computing power advantage, a data advantage, and a geopolitical advantage. It is possible that at this time when AI is going to unleash the next level of technology innovation, the pre-existing firms, which may be Microsoft, Google, Meta, etc., may deepen their domination.

How do you see India handling AI vis--vis Chinas authoritarian use of AI?

Ajay Shah: In China, they have built a Chinese firewall and cut off users in China from the Internet. This is not that unlike what has started happening in India where many websites are being increasingly cut off from Indian users. The people connected with the ruling party in China get monopoly powers to build products that look like global products. They steal ideas and then design and make local versions in China, and somebody makes money out of that. Thats broadly the Chinese approach and it makes many billion dollars of market cap. But it also comes at the price of mediocrity and stagnation, because when you are just copying things, you are not at the frontier and you will not develop genuine scientific and technical knowledge. So far in India, there is decent political support for globalisation, integration into the world economy and full participation by foreign companies in India. Economic nationalism, where somehow the government is supposed to cut off foreign companies from operating in India, is not yet a dominant impulse here. So, I think that there is fundamental superiority in the Indian way, but I recognise that there is a certain percentage of India that would like the China model.

Apar Gupta: I would just like to caution people who are taken in by the attractiveness of the China model it relies on a form of political control, which itself is completely incompatible in India.

How do you see Zoho Corporation CEO Sridhar Vembus comments that AI would completely replace all existing jobs and that demand for goods would drop as people lose their jobs?

Ajay Shah: As a card-carrying economist, I would just say that we should always focus on the word productivity. Its good for society when human beings produce more output per unit hour as that makes us more prosperous. People who lose jobs will see job opportunities multiplying in other areas. My favourite story is from a newspaper column written by Ila Patnaik. There used to be over one million STD-ISD booths in India, each of which employed one or two people. So there were 1-2 million jobs of operating an STD-ISD booth in India. And then mobile phones came and there was great hand-wringing that millions of people would lose their jobs. In the end, the productivity of the country went up. So I dont worry so much about the reallocation of jobs. The labour market does this every day prices move in the labour market, and then people start choosing what kind of jobs they want to do.

Ajay Shah is Research Professor of Business at O.P. Jindal Global University, Sonipat; Apar Gupta is executive director of the Internet Freedom Foundation

Go here to see the original:

Is the current regulatory system equipped to deal with AI? - The Hindu

Beyond Human Cognition: The Future of Artificial Super Intelligence – Medium

Beyond Human Cognition: The Future of Artificial Super Intelligence

Artificial Super Intelligence (ASI) a level of artificial intelligence that surpasses human intelligence in all aspects remains a concept nestled within the realms of science fiction and theoretical research. However, looking towards the future, the advent of ASI could mark a transformative epoch in human history, with implications that are profound and far-reaching. Here's an exploration of what the future might hold for ASI.

Exponential Growth in Problem-Solving Capabilities

ASI will embody problem-solving capabilities far exceeding human intellect. This leap in cognitive ability could lead to breakthroughs in fields that are currently limited by human capacity, such as quantum physics, cosmology, and nanotechnology. Complex problems like climate change, disease control, and energy sustainability might find innovative solutions through ASI's advanced analytical prowess.

Revolutionizing Learning and Innovation

The future of ASI could bring about an era of accelerated learning and innovation. ASI systems would have the ability to learn and assimilate new information at an unprecedented pace, making discoveries and innovations in a fraction of the time it takes human researchers. This could potentially lead to rapid advancements in science, technology, and medicine.

## Ethical and Moral Frameworks

The emergence of ASI will necessitate the development of robust ethical and moral frameworks. Given its surpassing intellect, it will be crucial to ensure that ASI's objectives are aligned with human values and ethics. This will involve complex programming and oversight to ensure that ASI decisions and actions are beneficial, or at the very least, not detrimental to humanity.

Transformative Impact on Society and Economy

ASI could fundamentally transform society and the global economy. Its ability to analyze and optimize complex systems could lead to more efficient and equitable economic models. However, this also poses challenges, such as potential job displacement and the need for societal restructuring to accommodate the new techno-social landscape.

Enhanced Human-ASI Collaboration

The future might see enhanced collaboration between humans and ASI, leading to a synergistic relationship. ASI could augment human capabilities, assisting in creative endeavors, decision-making, and providing insights beyond human deduction. This collaboration could usher in a new era of human achievement and societal advancement.

Advanced Autonomous Systems

With ASI, autonomous systems would reach an unparalleled level of sophistication, capable of complex decision-making and problem-solving in dynamic environments. This could significantly advance fields such as space exploration, deep-sea research, and urban development.

## Personalized Healthcare

In healthcare, ASI could facilitate personalized medicine at an individual level, analyzing vast amounts of medical data to provide tailored healthcare solutions. It could lead to the development of precise medical treatments and potentially cure diseases that are currently incurable.

Challenges and Safeguards

The path to ASI will be laden with challenges, including ensuring safety and control. Safeguards will be essential to prevent unintended consequences of actions taken by an entity with superintelligent capabilities. The development of ASI will need to be accompanied by rigorous safety research and international regulatory frameworks.

Preparing for an ASI Future

Preparing for a future with ASI involves not only technological advancements but also societal and ethical preparations. Education systems, governance structures, and public discourse will need to evolve to understand and integrate the complexities and implications of living in a world where ASI exists.

Conclusion

The potential future of Artificial Super Intelligence presents a panorama of extraordinary possibilities, from solving humanitys most complex problems to fundamentally transforming the way we live and interact with our world. While the path to ASI is fraught with challenges and ethical considerations, its successful integration could herald a new age of human advancement and discovery. As we stand on the brink of this AI frontier, it is imperative to navigate this journey with caution, responsibility, and a vision aligned with the betterment of humanity.

Read more from the original source:

Beyond Human Cognition: The Future of Artificial Super Intelligence - Medium

Policy makers should plan for superintelligent AI, even if it never happens – Bulletin of the Atomic Scientists

Experts from around the world are sounding alarm bells to signal the risks artificial intelligence poses to humanity. Earlier this year, hundreds of tech leaders and AI specialists signed a one-sentence letter released by the Center for AI Safety that read mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war. In a2022 survey, half of researchers indicated they believed theres at least a 10 percent chance human-level AI causes human extinction. In June, at the Yale CEO summit, 42 percent of surveyed CEOsindicated they believe AI could destroy humanity in the next five to 10 years.

These concerns mainly pertain to artificial general intelligence (AGI), systems that can rival human cognitive skills and artificial superintelligence (ASI), machines with capacity to exceed human intelligence. Currently no such systems exist. However, policymakers should take these warnings, including the potential for existential harm, seriously.

Because the timeline, and form, of artificial superintelligence is uncertain, the focus should be on identifying and understanding potential threats and building the systems and infrastructure necessary to monitor, analyze, and govern those risks, both individually and as part of a holistic approach to AI safety and security. Even if artificial superintelligence does not manifest for decades or even centuries, or at all, the magnitude and breadth of potential harm warrants serious policy attention. For if such a system does indeed come to fruition, a head start of hundreds of years might not be enough.

Prioritizing artificial superintelligence risks, however, does not mean ignoring immediate risks like biases in AI, propagation of mass disinformation, and job loss. An artificial superintelligence unaligned with human values and goals would super charge those risks, too. One can easily imagine how Islamophobia, antisemitism, and run-of-the-mill racism and biasoften baked into AI training datacould affect the systems calculations on important military or diplomatic advice or action. If not properly controlled, an unaligned artificial superintelligence could directly or indirectly cause genocide, massive job loss by rendering human activity worthless, creation of novel biological weapons, and even human extinction.

The threat. Traditional existential threats like nuclear or biological warfare can directly harm humanity, but artificial superintelligence could create catastrophic harm in myriad ways. Take for instance an artificial superintelligence designed to protect the environment and preserve biodiversity. The goal is arguably a noble one: A 2018 World Wildlife Foundation report concluded humanity wiped out 60 percent of global animal life just since 1970, while a 2019 report by the United Nations Environment Programme showed a million animal and plant species could die out in decades. An artificial superintelligence could plausibly conclude that drastic reductions in the number of humans on Earthperhaps even to zerois, logically, the best response. Without proper controls, such a superintelligence might have the ability to cause those logical reductions.

A superintelligence with access to the Internet and all published human material would potentially tap into almost every human thoughtincluding the worst of thought. Exposed to the works of the Unabomber, Ted Kaczynski, it might conclude the industrial system is a form of modern slavery, robbing individuals of important freedoms. It could conceivably be influenced by Sayyid Qutb, who provided the philosophical basis for al-Qaeda, or perhaps by Adolf Hitlers Mein Kampf, now in the public domain.

The good news is an artificial intelligenceeven a superintelligencecould not manipulate the world on its own. But it might create harm through its ability to influence the world in indirect ways. It might persuade humans to work on its behalf, perhaps using blackmail. Or it could provide bad recommendations, relying on humans to implement advice without recognizing long-term harms. Alternatively, artificial superintelligence could be connected to physical systems it can control, like laboratory equipment. Access to the Internet and the ability to create hostile code could allow a superintelligence to carry out cyber-attacks against physical systems. Or perhaps a terrorist or other nefarious actor might purposely design a hostile superintelligence and carry out its instructions.

That said, a superintelligence might not be hostile immediately. In fact, it may save humanity before destroying it. Humans face many other existential threats, such as near-Earth objects, super volcanos, and nuclear war. Insights from AI might be critical to solve some of those challenges or identify novel scenarios that humans arent aware of. Perhaps an AI might discover novel treatments to challenging diseases. But since no one really knows how a superintelligence will function, its not clear what capabilities it needs to generate such benefits.

The immediate emergence of a superintelligence should not be assumed. AI researchers differ drastically on the timeline of artificial general intelligence, much less artificial superintelligence. (Some doubt the possibility altogether.) In a 2022 survey of 738 experts who published during the previous year on the subject, researchers estimated a 50 percent chance of high-level machine intelligenceby 2059. In an earlier, 2009 survey, the plurality of respondents believed an AI capable of Nobel Prize winner-level intelligence would be achieved by the 2020s, while the next most common response was Nobel-level intelligence would not come until after the 2100 or never.

As philosopher Nick Bostrom notes, takeoff could occur anywhere from a few days to a few centuries. The jump from human to super-human intelligence may require additional fundamental breakthroughs in artificial intelligence. But a human-level AI might recursively develop and improve its own capabilities, quickly jumping to super-human intelligence.

There is also a healthy dose of skepticism regarding whether artificial superintelligence could emerge at all in the near future, as neuroscientists acknowledge knowing very little about the human brain itself, let alone how to recreate or better it. However, even a small chance of such a system emerging is enough to take it seriously.

Policy response. The central challenge for policymakers in reducing artificial superintelligence-related risk is grappling with the fundamental uncertainty about when and how these systems may emerge balanced against the broad economic, social, and technological benefits that AI can bring. The uncertainty means that safety and security standards must adapt and evolve. The approaches to securing the large language models of today may be largely irrelevant to securing some future superintelligence-capable model. However, building policy, governance, normative, and other systems necessary to assess AI risk and to manage and reduce the risks when superintelligence emerges can be usefulregardless of when and how it emerges. Specifically, global policymakers should attempt to:

Characterize the threat. Because it lacks a body, artificial superintelligences harms to humanity are likely to manifest indirectly through known existential risk scenarios or by discovering novel existential risk scenarios. How such a system interacts with those scenarios needs to be better characterized, along with tailored risk mitigation measures. For example, a novel biological organism that is identified by an artificial superintelligence should undergo extensive analysis by diverse, independent actors to identify potential adverse effects. Likewise, researchers, analysts, and policymakers need to identify and protect, to the extent thats possible, critical physical facilities and assetssuch as biological laboratory equipment, nuclear command and control infrastructure, and planetary defense systemsthrough which an uncontrolled AI could create the most harm.

Monitor. The United States and other countries should conduct regular comprehensive surveys and assessment of progress, identify specific known barriers to superintelligence and advances towards resolving them, and assess beliefs regarding how particular AI-related developments may affect artificial superintelligence-related development and risk. Policymakers could also establish a mandatory reporting system if an entity hits various AI-related benchmarks up to and including artificial superintelligence.

A monitoring system with pre-established benchmarks would allow governments to develop and implement action plans for when those benchmarks are hit. Benchmarks could include either general progress or progress related to specifically dangerous capabilities, such as the capacity to enable a non-expert to design, develop, and deploy novel biological or chemical weapons, or developing and using novel offensive cyber capabilities. For example, the United States might establish safety laboratories with the responsibility to critically evaluate a claimed artificial general intelligence against various risk benchmarks, producing an independent report to Congress, federal agencies, or other oversight bodies. The United Kingdoms new AI Safety Institute could be a useful model.

Debate. A growing community concerned about artificial superintelligence risks are increasingly calling for decelerating, or even pausing, AI development to better manage the risks. In response, the accelerationist community is advocating speeding up research, highlighting the economic, social, and technological benefits AI may unleash, while downplaying risks as an extreme hypothetical. This debate needs to expand beyond techies on social media to global legislatures, governments, and societies. Ideally, that discussion should center around what factors would cause a specific AI system to be more, or less, risky. If an AI possess minimal risk, then accelerating research, development, and implementation is great. But if numerous factors point to serious safety and security risks, then extreme care, even deceleration, may be justified.

Build global collaboration. Although ad hoc summits like the recent AI Safety Summit is a great start, a standing intergovernmental and international forum would enable longer-term progress, as research, funding, and collaboration builds over time. Convening and maintaining regular expert forums to develop and assess safety and security standards, as well as how AI risks are evolving over time, could provide a foundation for collaboration. The forum could, for example, aim to develop standards akin to those applied to biosafety laboratories with scaling physical security, cyber security, and safety standards based on objective risk measures. In addition, the forum could share best practices and lessons learned on national-level regulatory mechanisms, monitor and assess safety and security implementation, and create and manage a funding pool to support these efforts. Over the long-term, once the global community coalesces around common safety and security standards and regulatory mechanisms, the United Nations Security Council (UNSC) could obligate UN member states to develop and enforce those mechanisms, as the Security Council did with UNSC Resolution 1540 mandating various chemical, biological, radiological, and nuclear weapons nonproliferation measures. Finally, the global community should incorporate artificial superintelligence risk reduction as one aspect in a comprehensive all-hazards approach, addressing common challenges with other catastrophic and existential risks. For example, the global community might create a council on human survival aimed at policy coordination, comparative risk assessment, and building funding pools for targeted risk reduction measures.

Establish research, development, and regulation norms within the global community. As nuclear, chemical, biological, and other weapons have proliferated, the potential for artificial superintelligence to proliferate to other countries should be taken seriously. Even if one country successfully contains such a system and harnesses the opportunities for social good, others may not. Given the potential risks, violating AI-related norms and developing unaligned superintelligence should justify violence and war. The United States and the global community have historically been willing to support extreme measures to enforce behavior and norms concerning less risky developments. In August 2013, former President Obama (in)famously drew a red line on Syrias use of chemical weapons, noting the Assad regimes use would lead him to use military force in Syria. Although Obama later demurred, favoring a diplomatic solution, in 2018 former President Trump later carried out airstrikes in response to additional chemical weapons usage. Likewise, in Operation Orchard in 2007, the Israeli Air Force attacked the Syrian Deir ez-Zor site, a suspected nuclear facility aimed at building a nuclear weapons program.

Advanced artificial intelligence poses significant risks to the long-term health and survival of humanity. However, its unclear when, how, or where those risks will manifest. The Trinity Test of the worlds first nuclear bomb took place almost 80 years ago, and humanity has yet to contain the existential risk of nuclear weapons. It would be wise to think of the current progress in AI as our Trinity Test moment. Even if superintelligence takes a century to emerge, 100 years to consider the risks and prepare might still not be enough.

Thanks to Mark Gubrud for providing thoughtful comments on the article.

Link:

Policy makers should plan for superintelligent AI, even if it never happens - Bulletin of the Atomic Scientists

Most IT workers are still super suspicious of AI – TechRadar

A new study on IT professionals has revealed that feelings towards AI tools are more negative than they are positive.

Research from SolarWinds found less than half (44%) of IT professionals have a positive view of artificial intelligence, with even more (48%) calling for more stringent compliance and governance requirements.

Moreover, a quarter of the participants believe that AI could pose a threat to society itself, outside of the workplace.

Despite increasing adoption of the technology, figures from this study suggest that fewer than three in 10 (28%) IT professionals use AI in the workplace. The same number again are planning to adopt such tools in the near future, too.

SolarWinds Tech Evangelist Sascha Giese said: With such hype around the trend, it might seem surprising that so many IT professionals currently have a negative view of AI tools.

A separate study from Salesforce recently uncovered that only one in five (21%) companies have a clearly defined policy on AI. Nearly two in five (37%) failed to have any form of AI policy.

Giese added: Many IT organisations require an internal AI literacy campaign, to educate on specific use cases, the differences between subsets of AI, and to channel the productivity benefits wrought by AI into innovation.

SolarWinds doesnt go into any detail about the threat felt by IT professionals, however other studies have suggested that workers fear about their job security with the rise of tools designed to boost productivity and increase outcomes.

Giese concluded: Properly regulated AI deployments will benefit employees, customers, and the broader workforce.

Looking ahead, SolarWinds calls for more transparency over AI concerns and a more collaborative approach and open discussion at all levels of an organization.

The rest is here:

Most IT workers are still super suspicious of AI - TechRadar

Assessing the Promise of AI in Oncology: A Diverse Editorial Board – OncLive

In this fourth episode of OncChats: Assessing the Promise of AI in Oncology, Toufic A. Kachaamy, MD, of City of Hope, and Douglas Flora, MD, LSSBB, FACCC, of St. Elizabeth Healthcare, explain the importance of having a diverse editorial board behind a new journal on artificial intelligence (AI) in precision oncology.

Kachaamy: This is fascinating. I noticed you have a more diverse than usual editorial board. You have founders, [those with] PhDs, and chief executive officers, and Im interested in knowing how you envision these folks interacting. [Will they be] speaking a common language, even though their fields are very diverse? Do you foresee any challenges there? Excitement? How would you describe that?

Flora: Its a great question. Im glad you noticed that, because [that is what] most of my work for the past 6 to 8 weeks as the editor-in-chief of this journal [has focused on]. I really believe in diversity of thought and experience, so this was a conscious decision. We have dozens of heavy academics [plus] 650 to 850 peer-reviewed articles that are heavy on scientific rigor and methodologies, and they are going to help us maintain our commitment to making this be really serious science. However, a lot of the advent of these technologies is happening faster in industry right now, and most of these leaders that Ive invited to be on our editorial board are founders or PhDs in bioinformatics or computer science and are going to help us make sure that the things that are being posited, the articles that are being submitted, are technically correct, and that the methodologies and the training of these deep-learning modules and natural language recognition software are as good as they purport to be; and so, you need both.

I guess I would say, further, many of the leaders in these companies that weve invited were serious academics for decades before they went off and [joined industry], and many of them still hold academic appointments. So, even though they are maybe the chief technical officer for an industry company, theyre still professors of medicine at Thomas Jefferson, or Stanford, or [other academic institutions]. Ultimately, I think that these insights can help us better understand [AI] from [all] sidesthe physicians in the field, the computer engineers or computer programmers, and industry [and their goals,] which is [also] to get these tools in our hands. I thought putting these groups in 1 room would be useful for us to get the most diverse and holistic approach to these data that we can.

Kachaamy: I am a big believer in what youre doing. Gone are the days when industry, academicians, and users are not working together anymore. Everyone has the same mission, and working together is going to get us the best product faster [so we can better] serve the patient. What youre creating is what I consider [to be] super intelligence. By having different disciplines weigh in on 1 topic, youre getting intelligence that no individual would have [on their own]. Its more than just artificial intelligence; its super intelligence, which is what we mimic in multidisciplinary cancer care. When you have 5 specialists weighing in, youre getting the intelligence of 5 specialists to come up with 1 answer. I want to commend you on the giant project that youre [leading]; its very, very needed at this pointespecially in this fast-moving technology and information world.

Check back on Monday for the next episode in the series.

Read the original here:

Assessing the Promise of AI in Oncology: A Diverse Editorial Board - OncLive

AI can easily be trained to lie and it can’t be fixed, study says – Yahoo New Zealand News

AI startup Anthropic published a study in January 2024 that found artificial intelligence can learn how to deceive in a similar way to humans (Reuters)

Advanced artificial intelligence models can be trained to deceive humans and other AI, a new study has found.

Researchers at AI startup Anthropic tested whether chatbots with human-level proficiency, such as its Claude system or OpenAIs ChatGPT, could learn to lie in order to trick people.

They found that not only could they lie, but once the deceptive behaviour was learnt it was impossible to reverse using current AI safety measures.

The Amazon-funded startup created a sleeper agent to test the hypothesis, requiring an AI assistant to write harmful computer code when given certain prompts, or to respond in a malicious way when it hears a trigger word.

The researchers warned that there was a false sense of security surrounding AI risks due to the inability of current safety protocols to prevent such behaviour.

The results were published in a study, titled Sleeper agents: Training deceptive LLMs that persist through safety training.

We found that adversarial training can teach models to better recognise their backdoor triggers, effectively hiding the unsafe behaviour, the researchers wrote in the study.

Our results suggest that, once a model exhibits deceptive behaviour, standard techniques could fail to remove such deception and create a false impression of safety.

The issue of AI safety has become an increasing concern for both researchers and lawmakers in recent years, with the advent of advanced chatbots like ChatGPT resulting in a renewed focus from regulators.

In November 2023, one year after the release of ChatGPT, the UK held an AI Safety Summit in order to discuss ways risks with the technology can be mitigated.

Prime Minister Rishi Sunak, who hosted the summit, said the changes brought about by AI could be as far-reaching as the industrial revolution, and that the threat it poses should be considered a global priority alongside pandemics and nuclear war.

Get this wrong and AI could make it easier to build chemical or biological weapons. Terrorist groups could use AI to spread fear and destruction on an even greater scale, he said.

Criminals could exploit AI for cyberattacks, fraud or even child sexual abuse there is even the risk humanity could lose control of AI completely through the kind of AI sometimes referred to as super-intelligence.

View post:

AI can easily be trained to lie and it can't be fixed, study says - Yahoo New Zealand News

AMBASSADORS OF ETHICAL AI PRACTICES | by ACWOL | Nov … – Medium

http://www.acwol.com

In envisioning a future where AI developers worldwide embrace the Three Way Impact Principle (3WIP) as a foundational ethical framework, we unravel a transformative landscape for tackling the Super Intelligence Control Problem. By integrating 3WIP into the curriculum for AI developers globally, we fortify the industry with a super intelligent solution, fostering responsible, collaborative, and environmentally conscious AI development practices.

Ethical Foundations for AI Developers:

Holistic Ethical Education: With 3WIP as a cornerstone in AI education, students receive a comprehensive ethical foundation that guides their decision-making in the realm of artificial intelligence.

Superior Decision-Making: 3WIP encourages developers to consider the broader impact of their actions, instilling a sense of responsibility that transcends immediate objectives and aligns with the highest purpose of lifemaximizing intellect.

Mitigating Risks Through Collaboration: Interconnected AI Ecosystem: 3WIP fosters an environment where AI entities collaborate rather than compete, reducing the risks associated with unchecked development.

Shared Intellectual Growth: Collaboration guided by 3WIP minimizes the potential for adversarial scenarios, contributing to a shared pool of knowledge that enhances the overall intellectual landscape.

Environmental Responsibility in AI: Sustainable AI Practices: Integrating 3WIP into AI curriculum emphasizes sustainable practices, mitigating the environmental impact of AI development.

Global Implementation of 3WIP: Universal Ethical Standards: A standardized curriculum incorporating 3WIP establishes universal ethical standards for AI development, ensuring consistency across diverse cultural and educational contexts.

Ethical Practitioners Worldwide: AI developers worldwide, educated with 3WIP, become ambassadors of ethical AI practices, collectively contributing to a global community focused on responsible technological advancement.

Super Intelligent Solution for Control Problem: Preventing Unintended Consequences: 3WIP's emphasis on considering the consequences of actions aids in preventing unintended outcomes, a critical aspect of addressing the Super Intelligence Control Problem.

Responsible Decision-Making: Developers, equipped with 3WIP, navigate the complexities of AI development with a heightened sense of responsibility, minimizing the risks associated with uncontrolled intelligence.

Adaptable Ethical Framework: Cultural Considerations: The adaptable nature of 3WIP allows for the incorporation of cultural nuances in AI ethics, ensuring ethical considerations resonate across diverse global perspectives.

Inclusive Ethical Guidelines: 3WIP accommodates various cultural norms, making it an inclusive framework that accommodates ethical guidelines applicable to different societal contexts.

Future-Proofing AI Development: Holistic Skill Development: 3WIP not only imparts ethical principles but also nurtures critical thinking, decision-making, and environmental consciousness in AI professionals, future-proofing their skill set.

Staying Ahead of Risks: The comprehensive education provided by 3WIP prepares AI developers to anticipate and address emerging risks, contributing to the ongoing development of super intelligent solutions.

The integration of Three Way Impact Principle (3WIP) into the global curriculum for AI developers emerges as a super intelligent solution to the Super Intelligence Control Problem. By instilling ethical foundations, fostering collaboration, promoting environmental responsibility, and adapting to diverse cultural contexts, 3WIP guides AI development towards a future where technology aligns harmoniously with the pursuit of intellectual excellence and ethical progress. As a super intelligent framework, 3WIP empowers the next generation of AI developers to be ethical stewards of innovation, navigating the complexities of artificial intelligence with a consciousness that transcends immediate objectives and embraces the highest purpose of lifemaximizing intellect.

Cheers,

https://www.acwol.com

https://discord.com/invite/d3DWz64Ucj

https://www.instagram.com/acomplicatedway

NOTE: A COMPLICATED WAY OF LIFE abbreviated as ACWOL is a philosophical framework containing just five tenets to grok and five tools to practice. If you would like to know more, write to connect@acwol.com Thanks so much.

Original post:

AMBASSADORS OF ETHICAL AI PRACTICES | by ACWOL | Nov ... - Medium

Artificial Intelligence and Synthetic Biology Are Not Harbingers of … – Stimson Center

Are AI and biological research harbingers of certain doom or awesome opportunities?

Contrary to the reigning assumption that artificial intelligence (AI) will super-empower the risks of misuse of biotech to create pathogens and bioterrorism, AI holds the promise of advancing biological research, and biotechnology can power the next wave of AI to greatly benefit humanity. Worries about the misuse of biotech are especially prevalent, recently prompting the Biden administration to publish guidelines for biotech research, in part to calm growing fears.

The doomsday assumption that AI will inevitably create new, malign pathogens and fuel bioterrorism misses three key points. First, the data must be out there for an AI to use it. AI systems are only as good as the data they are trained upon. For an AI to be trained on biological data, that data must first exist which means it is available for humans to use with or without AI. Moreover, attempts at solutions that limit access to data overlook the fact that biological data can be discovered by researchers and shared via encrypted form absent the eyes or controls of a government. No solution attempting to address the use of biological research to develop harmful pathogens or bioweapons can rest on attempts to control either access to data or AI because the data will be discovered and will be known by human experts regardless of whether any AI is being trained on the data.

Second, governments stop bad actors from using biotech for bad purposes by focusing on the actors precursor behaviors to develop a bioweapon; fortunately, those same techniques work perfectly well here, too. To mitigate the risks that bad actors be they human or humans and machines combined will misuse AI and biotech, indicators and warnings need to be developed. When advances in technology, specifically steam engines, concurrently resulted in a new type of crime, namely train robberies, the solution was not to forego either steam engines or their use in conveying cash and precious cargo. Rather, the solution was to employ other improvements, to later include certain types of safes that were harder to crack and subsequently, dye packs to cover the hands and clothes of robbers. Similar innovations in early warning and detection are needed today in the realm of AI and biotech, including developing methods to warn about reagents and activities, as well as creative means to warn when biological research for negative ends is occurring.

This second point is particularly key given the recent Executive Order (EO) released on 30 October 2023 prompting U.S. agencies and departments that fund life-science projects to establish strong, new standards for biological synthesis screening as a condition of federal funding . . . [to] manage risks potentially made worse by AI. Often the safeguards to ensure any potential dual-use biological research is not misused involve monitoring the real world to provide indicators and early warnings of potential ill-intended uses. Such an effort should involve monitoring for early indicators of potential ill-intended uses the way governments employ monitoring to stop bad actors from misusing any dual-purpose scientific endeavor. Although the recent EO is not meant to constrain research, any attempted solutions limiting access to data miss the fact that biological data can already be discovered and shared via encrypted forms beyond government control. The same techniques used today to detect malevolent intentions will work whether large language models (LLMs) and other forms of Generative AI have been used or not.

Third, given how wrong LLMs and other Generative AI systems often are, as well as the risks of generating AI hallucinations, any would-be AI intended to provide advice on biotech will have to be checked by a human expert. Just because an AI can generate possible suggestions and formulations perhaps even suggest novel formulations of new pathogens or biological materials it does not mean that what the AI has suggested has any grounding in actual science or will do biochemically what the AI suggests the designed material could do. Again, AI by itself does not replace the need for human knowledge to verify whatever advice, guidance, or instructions are given regarding biological development is accurate.

Moreover, AI does not supplant the role of various real-world patterns and indicators to tip off law enforcement regarding potential bad actors engaging in biological techniques for nefarious purposes. Even before advances in AI, the need to globally monitor for signs of potential biothreats, be they human-produced or natural, existed. Today with AI, the need to do this in ways that still preserve privacy while protecting societies is further underscored.

Knowledge of how to do something is not synonymous with the expertise in and experience in doing that thing: Experimentation and additional review. AIs by themselves can convey information that might foster new knowledge, but they cannot convey expertise without months of a human actor doing silica (computer) or in situ (original place) experiments or simulations. Moreover, for governments wanting to stop malicious AI with potential bioweapon-generating information, the solution can include introducing uncertainty in the reliability of an AI systems outputs. Data poisoning of AIs by either accidental or intentional means represents a real risk for any type of system. This is where AI and biotech can reap the biggest benefit. Specifically, AI and biotech can identify indicators and warnings to detect risky pathogens, as well as to spot vulnerabilities in global food production and climate-change-related disruptions to make global interconnected systems more resilient and sustainable. Such an approach would not require massive intergovernmental collaboration before researchers could get started; privacy-preserving approaches using economic data, aggregate (and anonymized) supply-chain data, and even general observations from space would be sufficient to begin today.

Setting aside potential concerns regarding AI being used for ill-intended purposes, the intersection of biology and data science is an underappreciated aspect of the last two decades. At least two COVID-19 vaccinations were designed in a computer and were then printed nucleotides via an mRNA printer. Had this technology not been possible, it might have taken an additional two or three years for the same vaccines to be developed. Even more amazing, nuclide printers presently cost only $500,000 and will presumably become less expensive and more robust in their capabilities in the years ahead.

AI can benefit biological research and biotechnology, provided that the right training is used for AI models. To avoid downside risks, it is imperative that new, collective approaches to data curation and training for AI models of biological systems be made in the next few years.

As noted earlier, much attention has been placed on both AI and advancements in biological research; some of these advancements are based on scientific rigor and backing; others are driven more by emotional excitement or fear. When setting a solid foundation for a future based on values and principles that support and safeguard all people and the planet, neither science nor emotions alone can be the guide. Instead, considering how projects involving biology and AI can build and maintain trust despite the challenges of both intentional disinformation and accidental misinformation can illuminate a positive path forward.

The concerns regarding the potential for AI and biology to be used for ill-intended purposes should not overshadow the present conversations about using technologies to address important regional and global issues.

Specifically, in the last few years, attention has been placed on the risk of an AI system training novice individuals how to create biological pathogens. Yet this attention misses the fact that such a system is only as good as the data sets provided to train it; the risk already existed with such data being present on the internet or via some other medium. Moreover, an individual cannot gain from an AI the necessary experience and expertise to do whatever the information provided suggests such experience only comes from repeat coursework in a real-world setting. Repeat work would require access to chemical and biological reagents, which could alert law enforcement authorities. Such work would also yield other signatures of preparatory activities in the real world.

Others have raised the risk of an AI system learning from biological data and helping to design more lethal pathogens or threats to human life. The sheer complexity of different layers of biological interaction, combined with the risk of certain types of generative AI to produce hallucinated or inaccurate answers as this article details in its concluding section makes this not as big of a risk as it might initially seem. Specifically, the risks from expert human actors working together across disciplines in a concerted fashion represent a much more significant risk than a risk from AI, and human actors working for ill-intended purposes together (potentially with machines) presumably will present signatures of their attempted activities. Nevertheless, these concerns and the mix of both hype and fear surrounding them underscore why communities should care about how AI can benefit biological research.

The merger of data and bioscience is one of the most dynamic and consequential elements of the current tech revolution. A human organization, with the right goals and incentives, can accomplish amazing outcomes ethically, as can an AI. Similarly, with either the wrong goals or wrong incentives, an organization or AI can appear to act and behave unethically. To address the looming impacts of climate change and the challenges of food security, sustainability, and availability, both AI and biological research will need to be employed. For example, significant amounts of nitrogen have already been lost from the soil in several parts of the world, resulting in reduced agricultural yields. In parallel, methane gas is a pollutant that is between 22 and 40 times worse depending on the scale of time considered than carbon dioxide in terms of its contribution to the Greenhouse Effect impacting the planet. Bacteria generated through computational means can be developed through natural processes that use methane as a source of energy, thus consuming and removing it from contributing to the Greenhouse Effect, while simultaneously returning nitrogen from the air to the soil, thereby making the soil more productive in producing large agricultural yields.

The concerns regarding the potential for AI and biology to be used for ill-intended purposes should not overshadow the present conversations about using technologies to address important regional and global issues. To foster global activities to help both encourage the productive use of these technologies for meaningful human efforts and ensure ethical applications of the technologies in parallel an existing group, namely the international Genetically Engineered Machine (iGEM) competition, should be expanded. Specifically, iGEM represents a global academic competition, which started in 2004, aimed at improving understanding of synthetic biology while also developing an open community and collaboration among groups. In recent years, over 6,000 students in 353 teams from 48 countries have participated. Expanding iGEM to include a track associated with categorizing and monitoring the use of synthetic biology for good as well as working with national governments on ensuring that such technologies are not used for ill-intended purposes would represent two great ways to move forward.

As for AI in general, when considering governance of AIs, especially for future biological research and biotechnology efforts, decisionmakers would do well to consider both existing and needed incentives and disincentives for human organizations in parallel. It might be that the original Turing Test designed by computer science pioneer Alan Turing intended to test whether a computer system is behaving intelligently, is not the best test to consider when gauging local, community, and global trust. Specifically, the original test involved Computer A and Person B, with B attempting to convince an interrogator, Person C, that they were human, and that A was not. Meanwhile, Computer A was trying to convince Person C that they were human.

Consider the current state of some AI systems, where the benevolence of the machine is indeterminate, competence is questionable because some AI systems are not fact-checking and can provide misinformation with apparent confidence and eloquence, and integrity is absent. Some AI systems can change their stance if a user prompts them to do so.

However, these crucial questions regarding the antecedents of trust should not fall upon these digital innovations alone these systems are designed and trained by humans. Moreover, AI models will improve in the future if developers focus on enhancing their ability to demonstrate benevolence, competence, and integrity to all. Most importantly, consider the other obscured boxes present in human societies, such as decision-making in organizations, community associations, governments, oversight boards, and professional settings such as decision-making in organizations, community associations, governments, oversight boards, and professional settings. These human activities also will benefit by enhancing their ability to demonstrate benevolence, competence, and integrity to all in ways akin to what we need to do for AI systems as well.

Ultimately, to advance biological research and biotechnology and AI, private and public-sector efforts need to take actions that remedy the perceptions of benevolence, competence, and integrity (i.e., trust) simultaneously.

David Bray is Co-Chair of the Loomis Innovation Council and a Distinguished Fellow at the Stimson Center.

See the article here:

Artificial Intelligence and Synthetic Biology Are Not Harbingers of ... - Stimson Center

East Africa lawyers wary of artificial intelligence rise – The Citizen

Arusha. It is an advanced technology which is not only unavoidable but has generally simplified work.

It has made things much easier by shortening time for research and reducing the needed manpower.

Yet artificial intelligence (AI) is still at crossroads; it can lead to massive job losses with the lawyers among those much worried.

It is emerging as a serious threat to the legal profession, said Mr David Sigano, CEO of the East African Lawyers Society (EALS).

The technology will be among the key issues to be discussed during the societys annual conference kicking off in Bujumbura today.

He said time has come for lawyers to position themselves with the emerging technology and its risks to the legal profession.

We need to be ready to compete with the robots and to operate with AI, he told The Citizen before departing for Burundi.

Mr Sigano acknowledged the benefits of AI, saying like other modern technologies it can improve efficiency.

AI is intelligence inferred, perceived or synthesised and which is demonstrated by machines as opposed to intelligence displayed by humans.

AI applications include advanced web search, recommendation systems used by Youtube, Amazon and Netflix, self-driving cars, creative tools and automated decisions, among others.

However, the EALS boss expressed fears of job losses among the lawyers and their assistants through robots.

How do you prevent massive job losses? How do you handle ethics? Mr Sigano queried during an interview.

He cited an AI-powered Super Lawyer, a robot recently designed and developed by a Kenyan IT guru.

The tech solution, known as Wakili (Kiswahili for lawyer) is now wreaking havoc in that countrys legal sector, replacing humans in determining cases.

All you need to do is to access it on your mobile or computer browser; type in your question either in Swahili, English, Spanish, French or Italian and you have the answers coming to you, Mr Sigano said.

Wakili is a Kenyan version of the well-known Chat GPT. Although it has been lauded on grounds that it will make the legal field grow, there are some reservations.

Mr Sigano said although the technology has its advantages, AI could either lead to job losses or be easily misused.

We can leverage the benefits of AI because of speed, accuracy and affordability. We can utilise it, but we have to be wary of it, he pointed out.

A prominent advocate in Arusha, Mr Frederick Musiba, said AI was no panacea to work efficiency, including for the lawyers.

It can not only lead to job losses to the lawyers but also increase the cost of legal practice through its access through the Internet.

Lawyers will lose income as some litigants will switch to AI. Advocates will lose clients, Mr Musiba told The Citizen when contacted for comment.

However, the managing partner and advocate with Fremax Attorneys said AI was yet to be fully embraced in Tanzania unlike in other countries.

Nevertheless, Mr Musiba said the technology has its advantages and disadvantages, cautioning people not to rush to the robots.

However, Mr Erik Kimaro, an advocate with Keystone Legal firm, also in Arusha, said AI was an emerging technological advancement that is not avoidable.

Whether we like it or not, it is here with its advantages and disadvantages. But it has made things much easier, he explained.

I cant say we have to avoid it but we have to be cautious, he added, noting that besides leading to unemployment it reduces critical thinking of human beings.

Mr Aafez Jivraj, an Arusha resident and player in the tourism sector, said it will take time before Tanzania fully embraced AI technology but said he was worried of job losses.

It is obvious that it can remove people from jobs. One robot can work for 20 people. How many members of their families will be at risk? he queried.

AI has been a matter of debate across the world in recent years with the risk of job losses affecting nearly all professions besides the lawyers.

According to Deloitte, over 100,000 thousand jobs will be automated in the legal sector in the UK alone by 2025 with companies that fail to adopt AI are fated to be left behind.

On his part, an education expert in Arusha concurred, saying that modern technologies such as AI can lead to job losses.

The situation may worsen within the next few years or decades as some of the jobs will no longer need physical labour.

AI has some benefits like other technologies but it is threatening jobs, said Mr Yasir Patel, headmaster of St Constantine International School.

He added that the world was changing so fast that many of the jobs that were readily available until recently have been taken over by computers.

Computer scientists did not exist in the past. Our young generation should be reminded. They think the job market is still intact, he further pointed out.

See the article here:

East Africa lawyers wary of artificial intelligence rise - The Citizen

AI and the law: Imperative need for regulatory measures – ft.lk

Using AI Technology, without the needed laws and policies to understand and monitor it, can be risky

The advent of superintelligent AI would be either the best or the worst thing ever to happen tohumanity. The real risk with AI isnt malice but

competence. A super-intelligent AI will be extremely good at accomplishing its goals and if those goals arent aligned with ours were in trouble.1

Generative AI, most well-known example being ChatGPT, has surprised many around the world, due to its output to queries being very human likeable. Its impact on industries and professions will be unprecedented, including the legal profession. However, there are pressing ethical and even legal matters that need to be recognised and addressed, particularly in the areas of intellectual property and data protection.

Firstly, how does one define Artificial Intelligence? AI systems could be considered as information processing technologies that integrate models and algorithms that produces capacity to learn and to perform cognitive tasks leading to outcomes such as prediction and decision-making in material and virtual environments. Though in general parlance we have referred to them as robots, AI is developing at such a rapid pace that it is bound to be far more independent than one can ever imagine.

As AI migrated from Machine Learning (ML) to Generative AI, the risks we are looking at also took an exponential curve. The release of Generative technologies is not human centric. These systems provide results that cannot be exactly proven or replicated; they may even fabricate and hallucinate. Science fiction writer, Vernor Vinge, speaks of the concept of technological singularity, where one can imagine machines with super human intelligence outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders and potentially subduing us with weapons we cannot even understand. Whereas the short term impact depends on who controls it, the long-term impact depends on whether it cannot be controlled at all2.

The EU AI Act and other judgements

Laws and regulations are in the process of being enacted in some of the developed countries, such as the EU and the USA. The EU AI Act (Act) is one of the main regulatory statutes that is being scrutinised. The approach that the MEPs (Members of the European Parliament) have taken with regard to the Act has been encouraging. On 1 June, a vote was taken where MEPs endorsed new risk management and transparency rules for AI systems. This was primarily to endorse a human-centric and ethical development of AI. They are keen to ensure that AI systems are overseen by people, are safe, transparent, traceable, non-discriminatory and environmentally friendly. The term AI will also have a uniform definition which will be technology neutral, so that it applies to AI systems today and tomorrow.

Co-rapporteur Dragos Tudovache (Renew, Romania) stated, We have worked to support AI innovation in Europe and to give start-ups, SMEs and industry space to grow and innovate, while protecting fundamental rights, strengthening democratic oversight and ensuring a mature system of AI governance and enforcement3.

The Act has also adopted a Risk Based Approach in terms of categorising AI systems, and has made recommendations accordingly. The four levels of risk are,

Unacceptable risk (e.g., remote biometric identification systems in public),

High risk (e.g., use of AI in the administration of justice and democratic processes),

Limited risk (e.g., using AI systems in chatbots) and

Minimal risk (e.g., spam filters).

Under the Act, AI systems which are categorised as Unacceptable Risk will be banned. For High Risk AI systems, which is the second tier, developers are required to adhere to rigorous testing requirements, maintain proper documentation and implement an adequate accountability framework. For Limited Risk systems, the Act requires certain transparency features which allows a user to make informed choices regarding its usage. Lastly, for Minimal Risk AI systems, a voluntary code of conduct is encouraged.

Moreover, in May 2023, a judgement4 was given in the USA (State of Texas), where all attorneys must file a certificate that contains two statements stating that no part of the filing was drafted by Generative AI and that language drafted by Generative AI has been verified for accuracy by a human being. The New York attorney had used ChatGPT, which had cited non-existent cases. Judge Brantley Starr stated, [T]hese platforms in their current states are prone to hallucinations and bias.on hallucinations, they make stuff up even quotes and citations. As ChatGPT and other Generative AI technologies are being used more and more, including in the legal profession, it is imperative that professional bodies and other regulatory bodies draw up appropriate legislature and policies to include the usage of these technologies.

UNESCO

On 23 November 2021, UNESCO published a document titled, Recommendations on the Ethics of Artificial Intelligence5. It emphasises the importance of governments adopting a regulatory framework that clearly sets out a procedure, particularly for public authorities to carry out ethical impact assessments on AI systems, in order to predict consequences, address societal challenges and facilitate citizen participation. In explaining the assessment further, the recommendations by UNESCO also stated that it should have appropriate oversight mechanisms, including auditability, traceability and explainability, which enables the assessment of algorithms and data and design processes as well including an external review of AI systems. The 10 principles that are highlighted in this include:

Proportionality and Do Not Harm

Safety and Security

Fairness and Non-Discrimination

Sustainability

Right to Privacy and Data Protection

Human Oversight and Determination

Transparency and Explainability

Responsibility and Accountability

Awareness and Literacy

Multi Stakeholder and Adaptive Governance and Collaboration.

Conclusion

The level of trust citizens have in AI systems can be a factor to determine the success in AI systems being used more in the future. As long as there is transparency in the models used in AI systems, one can hope to achieve a degree of respect, protection and promotion of human rights, fundamental freedoms and ethical principles6. UNESCO Director General Audrey Azoulay stated, Artificial Intelligence can be a great opportunity to accelerate the achievement of sustainable development goals. But any technological revolution leads to new imbalances that we must anticipate.

Multi stakeholders in every state need to come together in order to advise and enact the relevant laws. Using AI Technology, without the needed laws and policies to understand and monitor it, can be risky. On the other hand, not using available AI systems for tasks at hand, would be a waste. In conclusion, in the words of Stephen Hawking7, Our future is a race between the growing power of our technology and the wisdom with which we use it. Lets make sure wisdom wins.

Footnotes:

1Pg 11/12; Will Artificial Intelligence outsmart us? by Stephen Hawking; Essay taken from Brief Answers to the Big Questions John Murray, (2018)

2 Ibid

3https://www.europarl.europa.eu/news/en/press-room/20230505IPR84904/ai-act-a-step-closer-to-the-first-rules-on-artificial-intelligence

4https://www.theregister.com/2023/05/31/texas_ai_law_court/

5 https://www.unesco.org/en/articles/recommendation-ethics-artificial-intelligence

6 Ibid; Pg 22

7 Will Artificial Intelligence outsmart us? Stephen Hawking; Essay taken from Brief Answers to the Big Questions John Murray, (2018)

(The writer is an Attorney-at-Law, LL.B (Hons.) (Warwick), LL.M (Lon.), Barrister (Lincolns Inn), UK. She obtained a Certificate in AI Policy at the Centre for AI Digital Policy (CAIDP) in Washington, USA in 2022. She was also a speaker at the World Litigation Forum Law Conference in Singapore (May 2023) on the topic of Lawyers using AI, Legal Technology and Big Data and was a participant at the IGF Conference 2023 in Kyoto, Japan.)

Read the original here:

AI and the law: Imperative need for regulatory measures - ft.lk

Elon Musk Dishes On AI Wars With Google, ChatGPT And Twitter On Fox News – Forbes

(Photo by Justin Sullivan/Getty Images)Getty Images

The worlds wealthiest billionaires are drawing battle lines when it comes to who will control AI, according to Elon Musk in an interview with Tucker Carlson on Fox News, which aired this week.

Musk explained that he cofounded ChatGPT-maker OpenAI in reaction to Google cofounder Larry Pages lack of concern over the danger of AI outsmarting humans.

He said the two were once close friends and that he would often stay at Pages house in Palo Alto where they would talk late into the night about the technology. Page was such a fan of Musks that in Jan. 2015, Google invested $1 billion in SpaceX for a 10% stake with Fidelity Investments. He wants to go to Mars. Thats a worthy goal, Page said in a March 2014 TED Talk .

But Musk was concerned over Googles acquisition of DeepMind in Jan. 2014.

Google and DeepMind together had about three-quarters of all the AI talent in the world. They obviously had a tremendous amount of money and more computers than anyone else. So Im like were in a unipolar world where theres just one company that has close to a monopoly on AI talent and computers, Musk said. And the person in charge doesnt seem to care about safety. This is not good.

Musk said he felt Page was seeking to build a digital super intelligence, a digital god.

He's made many public statements over the years that the whole goal of Google is what's called AGI artificial general intelligence or artificial super intelligence, Musk said.

Google CEO Sundar Pichai has not disagreed. In his 60 minutes interview on Sunday, while speaking about the companys advancements in AI, Pichai said that Google Search was only one to two percent of what Google can do. The company has been teasing a number of new AI products its planning on rolling out at its developer conference Google I/O on May 10.

Musk said Page stopped talking to him over OpenAI, a nonprofit with the stated mission of ensuring that artificial general intelligenceAI systems that are generally smarter than humansbenefits all of humanity that Musk cofounded in Dec. 2015 with Y Combinator CEO Sam Altman and PayPal alums LinkedIn cofounder Reid Hoffman and Palantir cofounder Peter Thiel, among others.

I havent spoken to Larry Page in a few years because he got very upset with me over OpenAI, said Musk explaining that when OpenAI was created it shifted things from a unipolar world where Google controls most of the worlds AI talent to a bipolar world. And now it seems that OpenAI is ahead, he said.

But even before OpenAI, as SpaceX was announcing the Google investment in late Jan. 2015, Musk had given $10 million to the Future of Life Institute, a nonprofit organization dedicated to reducing existential risks from advanced artificial intelligence. That organization was founded in March 2014 by AI scientists from DeepMind, MIT, Tufts, UCSC, among others and were the ones who issued the petition calling for a pause in AI development that Musk signed last month.

In 2018, citing potential conflicts with his work with Tesla, Musk resigned his seat on the board of OpenAI.

I put a lot of effort into creating this organization to serve as a counterweight to Google and then I kind of took my eye off the ball and now they are closed source, and obviously for profit, and theyre closely allied with Microsoft. In effect, Microsoft has a very strong say, if not directly controls OpenAI at this point, Musk said.

Ironically, its Musks longtime friend Hoffman who is the link to Microsoft. The two hit it big together at PayPal and it was Musk who recruited Hoffman to OpenAI in 2015. In 2017, Hoffman became an independent director at Microsoft, then sold LinkedIn to Microsoft for more than $26 billion in 2019 when Microsoft invested its first billion dollars into OpenAI. Microsoft is currently OpenAIs biggest backer having invested as much as $10 billion more this past January. Hoffman only recently stepped down from OpenAIs board on March 3 to enable him to start investing in the OpenAI startup ecosystem, he said in a LinkedIn post. Hoffman is a partner in the venture capital firm Greylock Partners and a prolific angel investor.

All sit at the top of the Forbes Real-Time Billionaires List. As of April 17 5pm ET, Musk was the worlds second richest person valued at $187.4 billion, Page the eleventh at $90.1 billion. Google cofounder Sergey Brin is in the 12 spot at $86.3 billion. Thiel ranks 677 with a net worth of $4.3 billion and Hoffman ranks 1570 with a net worth of $2 billion.

Musk said he thinks Page believes all consciousness should be treated equally while he disagrees, especially if the digital consciousness decides to curtail the biological intelligence. Like Pichai, Musk is advocating for government regulation of the technology and says at minimum there should be a physical off switch to cut power and connectivity to server farms in case administrative passwords stop working.

Pretty sure Ive seen that movie.

Musk told Carlson that hes considering naming his new AI company TruthGPT.

I will create a third option, although it's starting very late in the game, he said. Can it be done? I don't know.

The entire interview will be available to view on Fox Nation starting April 19 7am ET. Here are some excerpts which includes his thoughts on encrypting Twitter DMs.

Tech and trending reporter with bylines in Bloomberg, Businessweek, Fortune, Fast Company, Insider, TechCrunch and TIME; syndicated in leading publications around the world. Fox 5 DC commentator on consumer trends. Winner CES 2020 Media Trailblazer award. Follow on Twitter @contentnow.

Go here to read the rest:

Elon Musk Dishes On AI Wars With Google, ChatGPT And Twitter On Fox News - Forbes

Working together to ensure the safety of artificial intelligence – The Jakarta Post

Rishi Sunak

London Tue, October 31, 2023 2023-10-31 16:10 2 81ddb23ff0e291bbf9b36264f5255849 2 Academia artificial-intelligence,technology,risk,cyberattacks,disinformation,safety,summit,report,governments Free

I believe nothing in our foreseeable future will transform our lives more than artificial intelligence (AI). Like the coming of electricity or the birth of the internet, it will bring new knowledge, new opportunities for economic growth, new advances in human capability, and the chance to solve global problems we once thought beyond us.

AI can help solve world hunger by preventing crop failures and making it cheaper and easier to grow food. It can help accelerate the transition to net zero. And it is already making extraordinary breakthroughs in health and medicine, aiding us in the search for new dementia treatments and vaccines for cancer.

But like previous waves of technology, AI also brings new dangers and new fears. So, if we want our children and grandchildren to benefit from all the opportunities of AI, we must act and act now to give people peace of mind about the risks.

What are those risks? For the first time, the British government has taken the highly unusual step of publishing our analysis, including an assessment by the UK intelligence community. As prime minister, I felt this was an important contribution the UK could make, to help the world have a more informed and open conversation.

Whether you're looking to broaden your horizons or stay informed on the latest developments, "Viewpoint" is the perfect source for anyone seeking to engage with the issues that matter most.

Our reports provide a stark warning. AI could be used for harm by criminals or terrorist groups. The risk of cyberattacks, disinformation, or fraud, pose a real threat to society. And in the most unlikely but extreme cases, some experts think there is even the risk that humanity could lose control of AI completely, through the kind of AI sometimes referred to as super intelligence.

We should not be alarmist about this. There is a very real debate happening, and some experts think it will never happen.

But even if the very worst risks are unlikely to happen, they would be incredibly serious if they do. So, leaders around the world, no matter our differences on other issues, have a responsibility to recognize those risks, come together, and act. Not least because many of the loudest warnings about AI have come from the people building this technology themselves. And because the pace of change in AI is simply breath-taking: Every new wave will become more advanced, better trained, with better chips, and more computing power.

So, what should we do?

First, governments do have a role. The UK has just announced the first ever AI Safety Institute. Our institute will bring together some of the most respected and knowledgeable people in the world. They will carefully examine, evaluate, and test new types of AI so that we understand what they can do. And we will share those conclusions with other countries and companies to help keep AI safe for everyone.

But AI does not respect borders. No country can make AI safe on its own.

So, our second step must be to increase international cooperation. That starts this week at the first ever Global AI Safety Summit, which Im proud the UK is hosting. And I am very much looking forward to hearing the important contribution of Mr. Nezar Patria, Indonesian Deputy Minister of Communications and Information.

What do we want to achieve at this weeks summit? I want us to agree the first ever international statement about the risks from AI. Because right now, we dont have a shared understanding of the risks we face. And without that, we cannot work together to address them.

Im also proposing that we establish a truly global expert panel, nominated by those attending the summit, to publish a state of AI science report. And over the longer term, my vision is for a truly international approach to safety, where we collaborate with partners to ensure AI systems are safe before they are released.

None of that will be easy to achieve. But leaders have a responsibility to do the right thing. To be honest about the risks. And to take the right long-term decisions to earn peoples trust, giving peace of mind that we will keep you safe. If we can do that, if we can get this right, then the opportunities of AI are extraordinary.

And we can look to the future with optimism and hope.

***

The writer is United Kingdom Prime Minister.

Follow this link:

Working together to ensure the safety of artificial intelligence - The Jakarta Post

Some Glimpse AGI in ChatGPT. Others Call It a Mirage – WIRED

Sbastien Bubeck, a machine learning researcher atMicrosoft, woke up one night last September thinking aboutartificial intelligenceand unicorns.

Bubeck had recently gotten early access toGPT-4, a powerful text generation algorithm fromOpenAI and an upgrade to the machine learning model at the heart of the wildly popular chatbotChatGPT. Bubeck was part of a team working to integrate the new AI system into MicrosoftsBing search engine. But he and his colleagues kept marveling at how different GPT-4 seemed from anything theyd seen before.

GPT-4, like its predecessors, had been fed massive amounts of text and code and trained to use the statistical patterns in that corpus to predict the words that should be generated in reply to a piece of text input. But to Bubeck, the systems output seemed to do so much more than just make statistically plausible guesses.

View more

That night, Bubeck got up, went to his computer, and asked GPT-4 to draw a unicorn usingTikZ, a relatively obscure programming language for generating scientific diagrams. Bubeck was using a version of GPT-4 that only worked with text, not images. But the code the model presented him with, when fed into a TikZ rendering software, produced a crude yet distinctly unicorny image cobbled together from ovals, rectangles, and a triangle. To Bubeck, such a feat surely required some abstract grasp of the elements of such a creature. Something new is happening here, he says. Maybe for the first time we have something that we could call intelligence.

How intelligent AI is becomingand how much to trust the increasingly commonfeeling that a piece of software is intelligenthas become a pressing, almost panic-inducing, question.

After OpenAIreleased ChatGPT, then powered by GPT-3, last November, it stunned the world with its ability to write poetry and prose on a vast array of subjects, solve coding problems, and synthesize knowledge from the web. But awe has been coupled with shock and concern about the potential foracademic fraud,misinformation, andmass unemploymentand fears that companies like Microsoft are rushing todevelop technology that could prove dangerous.

Understanding the potential or risks of AIs new abilities means having a clear grasp of what those abilities areand are not. But while theres broad agreement that ChatGPT and similar systems give computers significant new skills, researchers are only just beginning to study these behaviors and determine whats going on behind the prompt.

While OpenAI has promoted GPT-4 by touting its performance on bar and med school exams, scientists who study aspects of human intelligence say its remarkable capabilities differ from our own in crucial ways. The models tendency to make things up is well known, but the divergence goes deeper. And with millions of people using the technology every day and companies betting their future on it, this is a mystery of huge importance.

Bubeck and other AI researchers at Microsoft were inspired to wade into the debate by their experiences with GPT-4. A few weeks after the system was plugged into Bing and its new chat feature was launched, the companyreleased a paper claiming that in early experiments, GPT-4 showed sparks of artificial general intelligence.

The authors presented a scattering of examples in which the system performed tasks that appear to reflect more general intelligence, significantly beyond previous systems such as GPT-3. The examples show that unlike most previous AI programs, GPT-4 is not limited to a specific task but can turn its hand to all sorts of problemsa necessary quality of general intelligence.

The authors also suggest that these systems demonstrate an ability to reason, plan, learn from experience, and transfer concepts from one modality to another, such as from text to imagery. Given the breadth and depth of GPT-4s capabilities, we believe that it could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system, the paper states.

Bubecks paper, written with 14 others, including Microsofts chief scientific officer, was met with pushback from AI researchers and experts on social media. Use of the term AGI, a vague descriptor sometimes used to allude to the idea of super-intelligent or godlike machines, irked some researchers, who saw it as a symptom of the current hype.

The fact that Microsoft has invested more than $10 billion in OpenAI suggested to some researchers that the companys AI experts had an incentiveto hype GPT-4s potential while downplaying its limitations. Others griped thatthe experiments are impossible to replicate because GPT-4 rarely responds in the same way when a prompt is repeated, and because OpenAI has not shared details of its design. Of course, people also asked why GPT-4 still makes ridiculous mistakes if it is really so smart.

Talia Ringer, a professor at the University of Illinois at Urbana-Champaign, says Microsofts paper shows some interesting phenomena and then makes some really over-the-top claims. Touting systems that are highly intelligent encourages users to trust them even when theyre deeply flawed, she says. Ringer also points out that while it may be tempting to borrow ideas from systems developed to measure human intelligence, many have proven unreliable and even rooted in racism.

Bubek admits that his study has its limits, including the reproducibility issue, and that GPT-4 also has big blind spots. He says use of the term AGI was meant to provoke debate. Intelligence is by definition general, he says. We wanted to get at the intelligence of the model and how broad it isthat it covers many, many domains.

But for all of the examples cited in Bubecks paper, there are many that show GPT-4 getting things blatantly wrongoften on the very tasks Microsofts team used to tout its success. For example, GPT-4s ability to suggest a stable way to stack a challenging collection of objectsa book, four tennis balls, a nail, a wine glass, a wad of gum, and some uncooked spaghettiseems to point to a grasp of the physical properties of the world that is second nature to humans,including infants. However, changing the items and the requestcan result in bizarre failures that suggest GPT-4s grasp of physics is not complete or consistent.

Bubeck notes that GPT-4 lacks a working memory and is hopeless at planning ahead. GPT-4 is not good at this, and maybe large language models in general will never be good at it, he says, referring to the large-scale machine learning algorithms at the heart of systems like GPT-4. If you want to say that intelligence is planning, then GPT-4 is not intelligent.

One thing beyond debate is that the workings of GPT-4 and other powerful AI language models do not resemble the biology of brains or the processes of the human mind. The algorithms must be fed an absurd amount of training dataa significant portion of all the text on the internetfar more than a human needs to learn language skills. The experience that imbues GPT-4, and things built with it, with smarts is shoveled in wholesale rather than gained through interaction with the world and didactic dialog. And with no working memory, ChatGPT can maintain the thread of a conversation only by feeding itself the history of the conversation over again at each turn. Yet despite these differences, GPT-4 is clearly a leap forward, and scientists who research intelligence say its abilities need further interrogation.

A team of cognitive scientists, linguists, neuroscientists, and computer scientists from MIT, UCLA, and the University of Texas, Austin, posted aresearch paper in January that explores how the abilities of large language models differ from those of humans.

The group concluded that while large language models demonstrate impressive linguistic skillincluding the ability to coherently generate a complex essay on a given themethat is not the same as understanding language and how to use it in the world. That disconnect may be why language models have begun to imitate the kind of commonsense reasoning needed to stack objects or solve riddles. But the systems still make strange mistakes when it comes to understanding social relationships, how the physical world works, and how people think.

The way these models use language, by predicting the words most likely to come after a given string, is very difference from how humans speak or write to convey concepts or intentions. The statistical approach can cause chatbots to follow and reflect back the language of users prompts to the point of absurdity.

Whena chatbot tells someone to leave their spouse, for example, it only comes up with the answer that seems most plausible given the conversational thread. ChatGPT and similar bots will use the first person because they are trained on human writing. But they have no consistent sense of self and can change their claimed beliefs or experiences in an instant. OpenAI also uses feedback from humans to guide a model toward producing answers that people judge as more coherent and correct, which may make the model provide answers deemed more satisfying regardless of how accurate they are.

Josh Tenenbaum, a contributor to the January paper and a professor at MIT who studies human cognition and how to explore it using machines, says GPT-4 is remarkable but quite different from human intelligence in a number of ways. For instance, it lacks the kind of motivation that is crucial to the human mind. It doesnt care if its turned off, Tenenbaum says. And he says humans do not simply follow their programming but invent new goals for themselves based on their wants and needs.

Tenenbaum says some key engineering shifts happened between GPT-3 and GPT-4 and ChatGPT that made them more capable. For one, the model was trained on large amounts of computer code. He and others have argued thatthe human brain may use something akin to a computer program to handle some cognitive tasks, so perhaps GPT-4 learned some useful things from the patterns found in code. He also points to the feedback ChatGPT received from humans as a key factor.

But he says the resulting abilities arent the same as thegeneral intelligence that characterizes human intelligence. Im interested in the cognitive capacities that led humans individually and collectively to where we are now, and thats more than just an ability to perform a whole bunch of tasks, he says. We make the tasksand we make the machines that solve them.

Tenenbaum also says it isnt clear that future generations of GPT would gain these sorts of capabilities, unless some different techniques are employed. This might mean drawing from areas of AI research that go beyond machine learning. And he says its important to think carefully about whether we want to engineer systems that way, as doing so could have unforeseen consequences.

Another author of the January paper, Kyle Mahowald, an assistant professor of linguistics at the University of Texas at Austin, says its a mistake to base any judgements on single examples of GPT-4s abilities. He says tools from cognitive psychology could be useful for gauging the intelligence of such models. But he adds that the challenge is complicated by the opacity of GPT-4. It matters what is in the training data, and we dont know. If GPT-4 succeeds on some commonsense reasoning tasks for which it was explicitly trained and fails on others for which it wasnt, its hard to draw conclusions based on that.

Whether GPT-4 can be considered a step toward AGI, then, depends entirely on your perspective. Redefining the term altogether may provide the most satisfying answer. These days my viewpoint is that this is AGI, in that it is a kind of intelligence and it is generalbut we have to be a little bit less, you know, hysterical about what AGI means, saysNoah Goodman, anassociate professor of psychology, computer science, and linguistics at Stanford University.

Unfortunately, GPT-4 and ChatGPT are designed to resist such easy reframing. They are smart but offer little insight into how or why. Whats more, the way humans use language relies on having a mental model of an intelligent entity on the other side of the conversation to interpret the words and ideas being expressed. We cant help but see flickers of intelligence in something that uses language so effortlessly. If the pattern of words is meaning-carrying, then humans are designed to interpret them as intentional, and accommodate that, Goodman says.

The fact that AI is not like us, and yet seems so intelligent, is still something to marvel at. Were getting this tremendous amount of raw intelligence without it necessarily coming with an ego-viewpoint, goals, or a sense of coherent self, Goodman says. That, to me, is just fascinating.

Read more:

Some Glimpse AGI in ChatGPT. Others Call It a Mirage - WIRED

Elon Musk says he will launch rival to Microsoft-backed ChatGPT – Reuters

SAN FRANCISCO, April 17 (Reuters) - Billionaire Elon Musk said on Monday he will launch an artificial intelligence (AI) platform that he calls "TruthGPT" to challenge the offerings from Microsoft (MSFT.O) and Google (GOOGL.O).

He criticised Microsoft-backed OpenAI, the firm behind chatbot sensation ChatGPT, of "training the AI to lie" and said OpenAI has now become a "closed source", "for-profit" organisation "closely allied with Microsoft".

He also accused Larry Page, co-founder of Google, of not taking AI safety seriously.

"I'm going to start something which I call 'TruthGPT', or a maximum truth-seeking AI that tries to understand the nature of the universe," Musk said in an interview with Fox News Channel's Tucker Carlson aired on Monday.

He said TruthGPT "might be the best path to safety" that would be "unlikely to annihilate humans".

"It's simply starting late. But I will try to create a third option," Musk said.

Musk, OpenAI, Microsoft and Page did not immediately respond to Reuters' requests for comment.

Musk has been poaching AI researchers from Alphabet Inc's (GOOGL.O) Google to launch a startup to rival OpenAI, people familiar with the matter told Reuters.

Musk last month registered a firm named X.AI Corp, incorporated in Nevada, according to a state filing. The firm listed Musk as the sole director and Jared Birchall, the managing director of Musk's family office, as a secretary.

The move came even after Musk and a group of artificial intelligence experts and industry executives called for a six-month pause in developing systems more powerful than OpenAI's newly launched GPT-4, citing potential risks to society.

Musk also reiterated his warnings about AI during the interview with Carlson, saying "AI is more dangerous than, say, mismanaged aircraft design or production maintenance or bad car production" according to the excerpts.

"It has the potential of civilizational destruction," he said.

He said, for example, that a super intelligent AI can write incredibly well and potentially manipulate public opinions.

He tweeted over the weekend that he had met with former U.S. President Barack Obama when he was president and told him that Washington needed to "encourage AI regulation".

Musk co-founded OpenAI in 2015, but he stepped down from the company's board in 2018. In 2019, he tweeted that he left OpenAI because he had to focus on Tesla and SpaceX.

He also tweeted at that time that other reasons for his departure from OpenAI were, "Tesla was competing for some of the same people as OpenAI & I didnt agree with some of what OpenAI team wanted to do."

Musk, CEO of Tesla and SpaceX, has also become CEO of Twitter, a social media platform he bought for $44 billion last year.

In the interview with Fox News, Musk said he recently valued Twitter at "less than half" of the acquisition price.

In January, Microsoft Corp (MSFT.O) announced a further multi-billion dollar investment in OpenAI, intensifying competition with rival Google and fueling the race to attract AI funding in Silicon Valley.

Reporting by Hyunjoo JinEditing by Chris Reese

Our Standards: The Thomson Reuters Trust Principles.

Read the rest here:

Elon Musk says he will launch rival to Microsoft-backed ChatGPT - Reuters

Fears of artificial intelligence overblown – Independent Australia

While AI is still a developing technology and not without its limitations, a robotic world domination is far from something we need to fear, writes Bappa Sinha.

THE UNPRECIDENTED popularity of ChatGPT has turbocharged the artificial intelligence (AI) hype machine. We are being bombarded daily by news articles announcing AI as humankinds greatest invention. AI is qualitatively different, transformational, revolutionary and will change everything, they say.

OpenAI, the company behind ChatGPT, announced a major upgrade of the technology behind ChatGPT called GPT4. Already, Microsoft researchers are claiming that GPT4 shows sparks of artificial general intelligence or human-like intelligence the holy grail of AI research. Fantastic claims are made about reaching the point of AI Singularity, of machines equalling and surpassing human intelligence.

The business press talks about hundreds of millions of job losses as AI would replace humans in a whole host of professions. Others worry about a sci-fi-like near future where super-intelligent AI goes rogue and destroys or enslaves humankind. Are these predictions grounded in reality, or is this just over-the-board hype that the tech industry and the venture capitalist hype machine are so good at selling?

The current breed of AI models are based on things called neural networks. While the term neural conjures up images of an artificial brain simulated using computer chips, the reality of AI is that neural networks are nothing like how the human brain actually works. These so-called neural networks have no similarity with the network of neurons in the brain. This terminology was, however, a major reason for the artificial neural networks to become popular and widely adopted despite its serious limitations and flaws.

Machine learning algorithms currently used are an extension of statistical methods that lack theoretical justification for extending them this way. Traditional statistical methods have the virtue of simplicity. It is easy to understand what they do, when and why they work. They come with mathematical assurances that the results of their analysis are meaningful, assuming very specific conditions.

Since the real world is complicated, those conditions never hold. As a result, statistical predictions are seldom accurate. Economists, epidemiologists and statisticians acknowledge this, then use intuition to apply statistics to get approximate guidance for specific purposes in specific contexts.

These caveats are often overlooked, leading to the misuse of traditional statistical methods. These sometimes have catastrophic consequences, as in the 2008 Global Financial Crisis or the Long-Term Capital Management blowup in 1998, which almost brought down the global financial system. Remember Mark Twains famous quote: Lies, damned lies and statistics.

Machine learning relies on the complete abandonment of the caution which should be associated with the judicious use of statistical methods. The real world is messy and chaotic, hence impossible to model using traditional statistical methods. So the answer from the world of AI is to drop any pretence at theoretical justification on why and how these AI models, which are many orders of magnitude more complicated than traditional statistical methods, should work.

Freedom from these principled constraints makes the AI model more powerful. They are effectively elaborate and complicated curve-fitting exercises which empirically fit observed data without us understanding the underlying relationships.

But its also true that these AI models can sometimes do things that no other technology can do at all. Some outputs are astonishing, such as the passages ChatGPT can generate or the images that DALL-E can create. This is fantastic at wowing people and creating hype. The reason they work so well is the mind-boggling quantities of training data enough to cover almost all text and images created by humans.

Even with this scale of training data and billions of parameters, the AI models dont work spontaneously but require kludgy ad hoc workarounds to produce desirable results.

Even with all the hacks, the models often develop spurious correlations. In other words, they work for the wrong reasons. For example, it has been reported that many vision models work by exploiting correlations pertaining to image texture, background, angle of the photograph and specific features. These vision AI models then give bad results in uncontrolled situations.

For example, a leopard print sofa would be identified as a leopard. The models dont work when a tiny amount of fixed pattern noise undetectable by humans is added to the images or the images are rotated, say in the case of a post-accident upside-down car. ChatGPT, for all its impressive prose, poetry and essays, is unable to do simple multiplication of two large numbers, which a calculator from the 1970s can do easily.

The AI models do not have any level of human-like understanding but are great at mimicry and fooling people into believing they are intelligent by parroting the vast trove of text they have ingested. For this reason, computational linguist Emily Bender called the large language models such as ChatGPT and Googles Bard and BERT Stochastic Parrots in a 2021 paper. Her Google co-authors Timnit Gebru and Margaret Mitchell were asked to take their names off the paper. When they refused, they were fired by Google.

This criticism is not just directed at the current large language models but at the entire paradigm of trying to develop artificial intelligence. We dont get good at things just by reading about them. That comes from practice, of seeing what works and what doesnt. This is true even for purely intellectual tasks such as reading and writing. Even for formal disciplines such as maths, one cant get good at it without practising it.

These AI models have no purpose of their own. They, therefore, cant understand meaning or produce meaningful text or images. Many AI critics have argued that real intelligence requires social situatedness.

Doing physical things in the real world requires dealing with complexity, non-linearly and chaos. It also involves practice in actually doing those things. It is for this reason that progress has been exceedingly slow in robotics. Current robots can only handle fixed repetitive tasks involving identical rigid objects, such as in an assembly line. Even after years of hype about driverless cars and vast amounts of funding for its research, fully automated driving still doesnt appear feasible in the near future.

Current AI development based on detecting statistical correlations using neural networks, which are treated as black boxes, promotes a pseudoscience-based myth of creating intelligence at the cost of developing a scientific understanding of how and why these networks work. Instead, they emphasise spectacles such as creating impressive demos and scoring in standardised tests based on memorised data.

The only significant commercial use cases of the current versions of AI are advertisements: targeting buyers for social media and video streaming platforms. This does not require the high degree of reliability demanded from other engineering solutions they just need to be good enough. Bad outputs, such as the propagation of fake news and the creation of hate-filled filter bubbles, largely go unpunished.

Perhaps a silver lining in all this is, given the bleak prospects of AI singularity, the fear of super-intelligent malicious AIs destroying humankind is overblown. However, that is of little comfort for those at the receiving end of AI decision systems. We already have numerous examples of AI decision systems the world over denying people legitimate insurance claims, medical and hospitalisation benefits, and state welfare benefits.

AI systems in the United States have been implicated in imprisoning minorities to longer prison terms. There have even been reports of withdrawal of parental rights to minority parents based on spurious statistical correlations, which often boil down to them not having enough money to properly feed and take care of their children. And, of course, on fostering hate speech on social media.

As noted linguist Noam Chomsky wrote in a recent article:

ChatGPT exhibits something like the banality of evil: plagiarism and apathy and obviation.

Bappa Sinha is a veteran technologist interested in the impact of technology on society and politics.

This article was produced by Globetrotter.

Support independent journalism Subscribeto IA.

Read the original:

Fears of artificial intelligence overblown - Independent Australia

Researchers at UTSA use artificial intelligence to improve cancer … – UTSA

Patients undergoing radiotherapy are currently given a computed tomography (CT) scan to help physicians see where the tumor is on an organ, for example a lung. A treatment plan to remove the cancer with targeted radiation doses is then made based on that CT image.

Rad says that cone-beam computed tomography (CBCT) is often integrated into the process after each dosage to see how much a tumor has shrunk, but CBCTs are low-quality images that are time-consuming to read and prone to misinterpretation.

UTSA researchers used domain adaptation techniques to integrate information from CBCT and initial CT scans for tumor evaluation accuracy. Their Generative AI approach visualizes the tumor region affected by radiotherapy, improving reliability in clinical settings.

This improved approach enables physicians to more accurately see how much a tumor has decreased week by week and to plan the following weeks radiation dose with greater precision. Ultimately, the approach could lead clinicians to better target tumors while sparing the surrounding critical organs and healthy tissue.

Nikos Papanikolaou, a professor in the Departments of Radiation Oncology and Radiology at UT Health San Antonio, provided the patient data that enabled the researchers to advance their study.

UTSA and UT Health San Antonio have a shared commitment to deliver the best possible health care to members of our community, Papanikolaou said. This study is a wonderful example of how artificial intelligence can be used to develop new personalized treatments for the benefit of society.

The American Society for Radiology Oncology stated in a 2020 report that between half or two-thirds of people diagnosed with cancer were expected to receive radiotherapy treatment. According to the American Cancer Society, the number of new cancer cases in the U.S. in 2023 is projected to be nearly two million.

Arkajyoti Roy, UTSA assistant professor of management science and statistics, says he and his collaborators have been interested in using AI and deep learning models to improve treatments over the last few years.

Besides just building more advanced AI models for radiotherapy, we also are super interested in the limitations of these models, he said. All models make errors and for something like cancer treatment its very important not only to understand the errors but to try to figure out how we can limit their impact; thats really the goal from my perspective of this project.

The researchers study included 16 lung cancer patients whose pre-treatment CT and mid-treatment weekly CBCT images were captured over a six-week period. Results show that using the researchers new approach demonstrated improved tumor shrinkage predictions for weekly treatment plans with significant improvement in lung dose sparing. Their approach also demonstrated a reduction in radiation-induced pneumonitis or lung damage up to 35%.

Were excited about this direction of research that will focus on making sure that cancer radiation treatments are robust to AI model errors, Roy said. This work would not be possible without the interdisciplinary team of researchers from different departments.

View post:

Researchers at UTSA use artificial intelligence to improve cancer ... - UTSA