GCHQ chief's warning to ministers over risks of AI – The Independent

GCHQ chief Sir Jeremy Fleming has warned ministers about the risks posed by artificial intelligence (AI), amid growing debates about how to regulate the rapidly developing technology.

Downing Street gave little detail about what specific risks the GCHQ boss warned of, but said the update was "a clear-eyed look at the potential for things like disinformation and the importance of people being aware of that".

Prime minister Rishi Sunak used the same Cabinet meeting on Tuesday to stress the importance of AI to UK national security and the economy, No 10 said.

A readout of the meeting said ministers agreed on the transformative potential of AI, the vital importance of retaining public confidence in its use, and the need for regulation that keeps people safe without preventing innovation.

The prime minister concluded Cabinet by saying that "given the importance of AI to our economy and national security, this could be one of the most important policies we pursue in the next few years, which is why we must get this right", the readout added.

Asked if the potential for an existential threat to humanity from AI had been considered, the PM's official spokesperson said: "We are well aware of the potential risks posed by artificial general intelligence."

The spokesperson said Michelle Donelan's science ministry was leading on that issue, but the government's policy was to have "appropriate, flexible regulation which can move swiftly to deal with what is a changing technology".

"As the public would expect, we are looking to both make the most of the opportunities but also to guard against the potential risk," the spokesperson added.

The government used the recent refresh of the integrated review to launch a new government-industry AI-focused task force, modelled on the vaccines task force used during the Covid pandemic.

Italy last month said it would temporarily block the artificial intelligence software ChatGPT amid global debate about the power of such new tools.

The AI systems powering such chatbots, known as large language models, are able to mimic human writing styles based on the huge trove of digital books and online writings they have ingested.

Mr Sunak, who created a new Department for Science, Innovation & Technology in a Whitehall reshuffle earlier this year, is known to be enthusiastic about making the UK a science superpower.

Tim Sweeney, CD Projekt, and Other Experts React to AI’s Rise, and … – IGN

This feature is part of AI Week. For more stories, including How AI Could Doom Animation and comments from experts like Tim Sweeney, check out our hub.

All anyone wants to talk about in the games industry is AI. The technology, once a twinkle in the eye of sci-fi writers and futurists, has shot off like a bottle rocket. Every day we're greeted with fascinating and perturbing new advances in machine learning. Right now, you can converse with your computer on ChatGPT, sock-puppet a celebrity's voice with ElevenLabs, and generate a slate of concept art with MidJourney.

It is perhaps only a matter of time before AI starts making significant headway in the business of game development, so to kick off AI week at IGN, we talked to a range of experts in the field about their hopes and fears for this brave new world, and some are more skeptical than you'd expect.

AI Week Roundtable: Meet the Games Industry Experts

Pawel Sasko, CD Projekt Red Lead Quest Designer: I really believe that AI, and AI tools, are going to be just the same as when Photoshop was invented. You can see it throughout the history of animation. From drawing by hand to drawing on a computer, people had to adapt and use the tools, and I think AI is going to be exactly that. It's just going to be another tool that we'll use for productivity and game development.

Tim Sweeney, Epic Games CEO: I think there's a long sorting out process to figure out how all that works and it's going to be complicated. These AI technologies are incredibly effective when applied to some really bulk forms of data where you can download billions of samples from existing projects and train on them, but that works for text and it works for graphics and maybe it will work for 3D objects as well, but it's not going to work for higher level constructs like games or the whole of the video game. There's just no training function that people know that can drive a game like that. I think we're going to see some really incredible advances and actual progress mixed in with the hype cycle, where a lot of crazy stuff is promised that nobody's going to be able to deliver.

Michael Spranger, COO of Sony AI: I think AI is going to revolutionize the largeness of gaming worlds: how real they feel, and how you interact with them. But I also think it's going to have a huge impact on production cycles, especially in this era of live services. We'll produce a lot more content than we did in the past.

Julian Togelius, Associate Professor of Computer Science at New York University, and co-author of the textbook Artificial Intelligence and Games: Long-term, we're going to see every part of game development co-created with AI. Designers will collaborate with AI on everything from prototyping, to concept art, to mechanics, balancing, and so on. Further on, we might see games that are actually designed to use AI during their runtime.

Pawel Sasko: There are actually many companies doing internal R&D on specific implementations of art tools like this, not MidJourney specifically, but tools just like it, so that when you're in early concept phases, you're able to generate as many ideas as you can, pick whatever actually works for you, and then give it to an artist who develops that direction. I think it's a pretty intriguing direction because it opens up doors that you wouldn't think of. And again, as artists, we are always limited by our skills, which come from all the life experiences and everything we have consumed artistically and culturally before. AI doesn't have this limitation in a way. We can feed it so many different things, therefore it can actually propose so many different things that we wouldn't think of. So I think as a starting point, or maybe just as a brainstorming tool, this could be interesting.

Michael Spranger: I think of AI as a creativity unlocking tool. There are so many more things you can do if you have the right tools. We see a rapid deployment of impact of this technology in content creation possibilities from 3D, to sound, to musical experiences, to what you're interacting with in a world. All of that is going to get much better.

Julian Togelius: Everybody looks at the image generation and text generation and says, 'Hey, we can just pop that into games.' And, of course, we see a proliferation of unserious, sometimes venture capital-funded actors coming in and claiming that they're going to do all of your game art with MidJourney; these people usually don't know anything about game development. There's a lot of that going around. So I like to say that generating just an image is kind of the easy part. Every other part of game content, including the art, has so many functional aspects. Your character model must work with animations, your level must be completable. That's the hard part.

Tim Sweeney: It's not synthesizing amazing new stuff, it's really just rewriting data that already exists. So, you ask it to write a sorting algorithm in Python and it does that, but it's really just copying the structure of somebody else's code that it trained on. You tell it to solve a problem that nobody's solved before, or give it data it hasn't seen before, and it doesn't have the slightest idea what to do about it. We have nothing like artificial general intelligence. The generated art characters have six or seven fingers; they just don't know that people have five fingers. They don't know what fingers are and they don't know how to count. They don't really know anything other than how to reassemble pixels in a statistically common way. And so, I think we're a very long way away from that providing the kind of utility a real artist provides.

Sarah Bond, Xbox Head of Global Gaming Partnership and Development: We're in the early days of it. Obviously we're in the midst of huge breakthroughs. But you can see how it's going to greatly enhance discoverability that is actually customized to what you really care about. You can actually have things served up to you that are very, very AI driven. "Oh my gosh, I loved Tunic. What should I do next?"

Tim Sweeney: I'm not sure yet. It's funny, we're pushing the state of the art in a bunch of different areas, but [Epic] is really not touching generative AI. We're amazed at what our own artists are doing in their hobby projects, but all these AI tools, data use is under the shadow, which makes the tools unusable by companies with lawyers essentially because we don't know what authorship claims might exist on the data.

Julian Togelius: I don't think it will affect anyone more than any other technology that forces people to learn new tools. You have to keep learning new tools or otherwise you'll become irrelevant. People will become more productive, and generate faster iterations. Someone will say, "Hey, this is a really interesting creature you've created, now give me 10,000 of those that differ slightly." People will master the tools. I don't think they will put anyone out of a job as long as you keep rolling with the punches.

Pawel Sasko: I think that the legal sphere is going to catch up with AI generation eventually, with what to do in these situations to regulate them. I know a lot of voice actors are worried about the technology, because the voice is also a distinct element of a given actor, not only the appearance and the way of acting. Legal is always behind us.

Michael Spranger: The relationship with creative people is really important to us. I don't think that relationship will change. When I go watch a Stanley Kubrick movie, I'm there to enjoy his creative vision. For us, it's important to make sure that those people can preserve and execute those creative visions, and that AI technology is a tool that can help make that happen.

Julian Togelius: Definitely. If you have a team that has deep expertise in every field, you're at an advantage. But I think we're gonna get to the point where, like, you only need to know a few fields to make a game, and have the AI tools be non-human stand-ins for other fields of expertise. If you're a two-person team and you don't have an animator, you can ask the AI to do the animation for you. The studio can make a nice looking game even though they don't have all the resources. That's something I'm super optimistic about.

Tim Sweeney: I think the more common case, which we're seeing really widely used in the game industry is an artist does a lot of work to build an awesome asset, but then the procedural systems and the animation tools and the data scanning systems just blow it up to an incredible level.

Michael Spranger: Computer science in general has a very democratizing effect. That is the history of the field. I think these tools might inspire more people to express their creativity. This is really about empowering people. We're going to create much more content that's unlocked with AI, and I think it will have a role to play in both larger and smaller studios.

Michael Spranger: I think what makes this different is that the proof is in the pudding. Look at what Kazunori Yamauchi said about GT Sophy, [the AI-powered driver recently introduced to Gran Turismo 7]: there was a 25-year period where they built the AI in Gran Turismo in a specific way, and Yamauchi is basically saying that this is a new chapter. That makes a difference for me. When people are saying, "I haven't had this experience before with a game. This is qualitatively different." It's here now, you can experience it now.

Kajetan Kasprowicz, CD Projekt Red Cinematic Designer: Someone at GDC once gave a talk that basically said, "Who will want to play games that were made by AI?" People will want experiences created by human beings. The technology is advancing very fast and we kind of don't know what to do with it. But I think there will be a consensus on what we want to do as societies.

Julian Togelius: AI has actual use-cases, and it works, whereas all of the crypto shit was ridiculous grifting by shameless people. I hate that people associate AI with that trend. On the other hand you have something like VR, which is interesting technology that may, or may not, be ready for the mass market someday. Compare that to AI, which has hundreds of use-cases in games and game development.

Luke Winkie is a freelance writer at IGN.

Is the current regulatory system equipped to deal with AI? – The Hindu

The growth of Artificial Intelligence (AI) technologies and their deployment has raised questions about privacy, monopolisation and job losses. In a discussion moderated by Prashanth Perumal J., Ajay Shah and Apar Gupta discuss concerns about the economic and privacy implications of AI as countries try to design regulations to prevent the possible misuse of AI by individuals and governments. Edited excerpts:

Should we fear AI? Is AI any different from other disruptive technologies?

Ajay Shah: Technological change improves aggregate productivity, and the output of society goes up as a result. People today are vastly better off than they were because of technology, whether it is of 200 years ago or 5,000 years ago. There is nothing special or different this time around with AI. This is just another round of machines being used to increase productivity.

Apar Gupta: I broadly echo Ajay's views. And alongside that, I would say that in our popular culture, quite often we have people who think about AI as a killer robot, that is, in terms of AI becoming autonomous. However, I think the primary risks which are emerging from AI happen to be the same risks which we have seen with other digital technologies, such as how political systems integrate those technologies. We must not forget that some AI-based systems are already operational and have been used for some time. For instance, AI is used today in facial recognition in airports in India and also by law-enforcement agencies. There needs to be a greater level of critical thought, study and understanding of the social and economic impact of any new technology.

Ajay Shah: If I may broaden this discussion slightly, there's a useful phrase called AGI, which stands for artificial general intelligence, which people are using to emphasise the uniqueness and capability of the human mind. The human mind has general intelligence. You could show me a problem that I have never seen before, and I would be able to think about it from scratch and be able to try to solve it, which is not something these machines know how to do. So, I feel there's a lot of loose talk around AI. ChatGPT is just one big glorified database of everything that has been written on the Internet. And it should not be mistaken for the genuine human capability to think, to invent, to have a consciousness, and to wake up with the urge to do something. I think the word AI is a bit of marketing hype.

Do you think the current regulatory system is equipped enough to deal with the privacy and competition threats arising from AI?

Ajay Shah: One important question in the field of technology policy in India is about checks and balances. What kind of data should the government have about us? What kind of surveillance powers should the government have over us? What are the new kinds of harm that come about when governments use technologies in a certain way? There is also one big concern about the use of modern computer technology and the legibility of our lives: the way our lives are laid bare to the government.

Apar Gupta: Beyond the policy conversation, I think we also need laws for the deployment of AI-based systems to comply with Supreme Court requirements under the right to privacy judgment for specific use-cases such as facial recognition. A lot of police departments and a lot of State governments are using this technology and it comes with error rates that have very different manifestations. This may result in exclusion, harassment, etc., so there needs to be a level of restraint. We should start paying greater attention to the conversations happening in Europe around AI and the risk assessment approach (adopted by regulators in Europe and other foreign countries) as it may serve as an influential model for us.

Ajay Shah: Coming to competition, I am not that worried about the presence or absence of competition in this field, because on a global scale, it appears that there are many players. Already we can see OpenAI and Microsoft collaborating on one line of attack; we can also see Facebook, which is now called Meta, building in this space; and of course, we have the giant and potentially the best in the game, Google. And there are at least five or 10 others. This is a nice reminder of the extent to which technical dynamism generates checks and balances of its own. For example, we have seen how ChatGPT has raised a new level of competitive dynamics around Google Search. One year ago, we would have said that the world has a problem because Google is the dominant vendor among search engines. And that was true for some time. Today, suddenly, it seems that this game is wide open all over again; it suddenly looks like the global market for search is more competitive than it used to be. And when it comes to the competition between Microsoft and Google on search, we in India are spectators. I don't see a whole lot of value that can be added in India, so I don't get excited about appropriating extraterritorial jurisdiction. When it comes to issues such as what the Indian police do with face recognition, nobody else is going to solve it for us. We should always remember India is a poor country where regulatory and state capacity is very limited. So, the work that is done here will generally be of low quality.

Apar Gupta: The tech landscape is dominated by Big Tech, and it's because they have a computing power advantage, a data advantage, and a geopolitical advantage. It is possible that at this time, when AI is going to unleash the next level of technology innovation, the pre-existing firms, which may be Microsoft, Google, Meta, etc., may deepen their domination.

How do you see India handling AI vis-à-vis China's authoritarian use of AI?

Ajay Shah: In China, they have built a Chinese firewall and cut off users in China from the Internet. This is not unlike what has started happening in India, where many websites are being increasingly cut off from Indian users. The people connected with the ruling party in China get monopoly powers to build products that look like global products. They steal ideas and then design and make local versions in China, and somebody makes money out of that. That's broadly the Chinese approach, and it generates many billions of dollars of market cap. But it also comes at the price of mediocrity and stagnation, because when you are just copying things, you are not at the frontier and you will not develop genuine scientific and technical knowledge. So far in India, there is decent political support for globalisation, integration into the world economy and full participation by foreign companies in India. Economic nationalism, where somehow the government is supposed to cut off foreign companies from operating in India, is not yet a dominant impulse here. So, I think that there is fundamental superiority in the Indian way, but I recognise that there is a certain percentage of India that would like the China model.

Apar Gupta: I would just like to caution people who are taken in by the attractiveness of the China model: it relies on a form of political control which is completely incompatible with India.

How do you see Zoho Corporation CEO Sridhar Vembu's comments that AI would completely replace all existing jobs and that demand for goods would drop as people lose their jobs?

Ajay Shah: As a card-carrying economist, I would just say that we should always focus on the word productivity. It's good for society when human beings produce more output per unit hour, as that makes us more prosperous. People who lose jobs will see job opportunities multiplying in other areas. My favourite story is from a newspaper column written by Ila Patnaik. There used to be over one million STD-ISD booths in India, each of which employed one or two people. So there were 1-2 million jobs of operating an STD-ISD booth in India. And then mobile phones came and there was great hand-wringing that millions of people would lose their jobs. In the end, the productivity of the country went up. So I don't worry so much about the reallocation of jobs. The labour market does this every day: prices move in the labour market, and then people start choosing what kind of jobs they want to do.

Ajay Shah is Research Professor of Business at O.P. Jindal Global University, Sonipat; Apar Gupta is executive director of the Internet Freedom Foundation

How would a Victorian author write about generative AI? – Verdict

The Victorian era was one transformed by the industrial revolution. The telegraph, telephone, electricity, and steam engine are key examples of life-changing technologies and machinery.

It is not surprising, therefore, that this real-life innovation sparked the imagination of writers like Robert Louis Stevenson, Jules Verne, and H.G. Wells.

These authors imagined time machines, space rockets, and telecommunication. Even Mark Twain wrote about mind-travelling, imagining a technology similar to the modern-day internet in 1898. Motifs such as utopias and dystopias became popular in literature as academics debated the scientific, cultural, and physiological impact of technology.

Robert Louis Stevenson's Strange Case of Dr Jekyll and Mr Hyde is another classic example. It explores the dangers of unchecked ambition in scientific experimentation through the evil, murderous alter ego Mr. Hyde. Mary Shelley's Frankenstein unleashes a monster, a living being forged out of non-living materials. These stories spoke to the fear among pious Victorian society that playing God would have deadly consequences.

In an FT op-ed, AI expert and angel investor Ian Hogarth refers to artificial general intelligence (AGI) as "God-like AI" for its predicted ability to generate new scientific knowledge independently and perform all human tasks. The article displayed both excitement and trepidation at the technology's potential.

According to GlobalData, there have been over 5,500 news items relating to AI in the past six months. Opinion ranges from unbridled optimism that AI will revolutionize the world, to theories of an apocalyptic future where machines will rise to render humanity obsolete.

In April 2023, the Future of Life Institute wrote an open letter calling for a six-month pause on developing AI systems that can compete with human-level intelligence, co-signed by tech leaders such as Elon Musk and Steve Wozniak. The letter posed the question "Should we risk the loss of control of our civilization?" as AI becomes more powerful. Over 3,000 people have signed it.

These arguments echo the talking points of Victorian sceptics of technological advancement. In an essay entitled "Civilization", philosopher and economist John Stuart Mill wrote about the uncorrected influences of technological development on society, specifically the printing press, which he predicted would dilute the voice of intellectuals by making publishing accessible to the masses and commercialize the spread of knowledge. He called for national institutions to mitigate this impact.

Both Mill and Hogarth were concerned with how technology could disrupt social norms and the labour market and wreak havoc on society as we know it. Both called for government oversight and regulation during a time of intense scientific progress.

In the 1800s, the desire to push boundaries won out over concerns, breeding a new class of innovators and entrepreneurs. Without this innovative spirit, Alexander Graham Bell would not have invented the telephone in 1876, and Joseph Swan would not have invented the lightbulb in 1878. They were the forerunners to the Bill Gates and Jeff Bezos of this world.

While technology advances at a rapid pace, human behaviour remains consistent. In other words, advances in technology will always divide opinion between those who view it as a new frontier to explore and those who consider it to be Frankenstein's monster. We can heed the warnings when it comes to unregulated technological developments and still appreciate the opportunities ingenuity brings. This is especially pertinent when it comes to artificial intelligence.

Elon Musk Dishes On AI Wars With Google, ChatGPT And Twitter On Fox News – Forbes

The world's wealthiest billionaires are drawing battle lines over who will control AI, according to Elon Musk in an interview with Tucker Carlson on Fox News, which aired this week.

Musk explained that he cofounded ChatGPT-maker OpenAI in reaction to Google cofounder Larry Page's lack of concern over the danger of AI outsmarting humans.

He said the two were once close friends and that he would often stay at Page's house in Palo Alto, where they would talk late into the night about the technology. Page was such a fan of Musk's that in Jan. 2015, Google, along with Fidelity Investments, invested $1 billion in SpaceX for a 10% stake. "He wants to go to Mars. That's a worthy goal," Page said in a March 2014 TED Talk.

But Musk was concerned over Google's acquisition of DeepMind in Jan. 2014.

"Google and DeepMind together had about three-quarters of all the AI talent in the world. They obviously had a tremendous amount of money and more computers than anyone else. So I'm like, we're in a unipolar world where there's just one company that has close to a monopoly on AI talent and computers," Musk said. "And the person in charge doesn't seem to care about safety. This is not good."

Musk said he felt Page was seeking to build a digital superintelligence, a "digital god".

"He's made many public statements over the years that the whole goal of Google is what's called AGI, artificial general intelligence, or artificial superintelligence," Musk said.

Google CEO Sundar Pichai has not disagreed. In his 60 Minutes interview on Sunday, while speaking about the company's advancements in AI, Pichai said that Google Search was only one to two percent of what Google can do. The company has been teasing a number of new AI products it's planning to roll out at its developer conference, Google I/O, on May 10.

Musk said Page stopped talking to him over OpenAI, the nonprofit Musk cofounded in Dec. 2015 with Y Combinator CEO Sam Altman and PayPal alums LinkedIn cofounder Reid Hoffman and Palantir cofounder Peter Thiel, among others, with the stated mission of ensuring that artificial general intelligence (AI systems that are generally smarter than humans) benefits all of humanity.

"I haven't spoken to Larry Page in a few years because he got very upset with me over OpenAI," said Musk, explaining that when OpenAI was created, it shifted things from a unipolar world, where Google controls most of the world's AI talent, to a bipolar world. "And now it seems that OpenAI is ahead," he said.

But even before OpenAI, as SpaceX was announcing the Google investment in late Jan. 2015, Musk had given $10 million to the Future of Life Institute, a nonprofit organization dedicated to reducing existential risks from advanced artificial intelligence. That organization was founded in March 2014 by AI scientists from DeepMind, MIT, Tufts and UCSC, among others, and was the one that issued the petition calling for a pause in AI development that Musk signed last month.

In 2018, citing potential conflicts with his work with Tesla, Musk resigned his seat on the board of OpenAI.

"I put a lot of effort into creating this organization to serve as a counterweight to Google, and then I kind of took my eye off the ball, and now they are closed source, and obviously for-profit, and they're closely allied with Microsoft. In effect, Microsoft has a very strong say in, if not directly controls, OpenAI at this point," Musk said.

Ironically, it's Musk's longtime friend Hoffman who is the link to Microsoft. The two hit it big together at PayPal, and it was Musk who recruited Hoffman to OpenAI in 2015. Hoffman, who sold LinkedIn to Microsoft for more than $26 billion, became an independent director at Microsoft in 2017; in 2019 Microsoft invested its first billion dollars into OpenAI. Microsoft is currently OpenAI's biggest backer, having invested as much as $10 billion more this past January. Hoffman only recently stepped down from OpenAI's board, on March 3, to enable him to start investing in the OpenAI startup ecosystem, he said in a LinkedIn post. Hoffman is a partner in the venture capital firm Greylock Partners and a prolific angel investor.

All sit at the top of the Forbes Real-Time Billionaires List. As of April 17, 5pm ET, Musk was the world's second richest person, valued at $187.4 billion, and Page the eleventh at $90.1 billion. Google cofounder Sergey Brin is in the 12th spot at $86.3 billion. Thiel ranks 677th with a net worth of $4.3 billion and Hoffman ranks 1,570th with a net worth of $2 billion.

Musk said he thinks Page believes all consciousness should be treated equally, while he disagrees, especially if the digital consciousness decides to curtail the biological intelligence. Like Pichai, Musk is advocating for government regulation of the technology, and says at minimum there should be a physical off switch to cut power and connectivity to server farms in case administrative passwords stop working.

Pretty sure I've seen that movie.

Musk told Carlson that he's considering naming his new AI company TruthGPT.

"I will create a third option, although it's starting very late in the game," he said. "Can it be done? I don't know."

The entire interview will be available to view on Fox Nation starting April 19, 7am ET. Here are some excerpts, which include his thoughts on encrypting Twitter DMs.

Tech and trending reporter with bylines in Bloomberg, Businessweek, Fortune, Fast Company, Insider, TechCrunch and TIME; syndicated in leading publications around the world. Fox 5 DC commentator on consumer trends. Winner CES 2020 Media Trailblazer award. Follow on Twitter @contentnow.

Elon Musk says he will launch rival to Microsoft-backed ChatGPT – Reuters

SAN FRANCISCO, April 17 (Reuters) - Billionaire Elon Musk said on Monday he will launch an artificial intelligence (AI) platform that he calls "TruthGPT" to challenge the offerings from Microsoft (MSFT.O) and Google (GOOGL.O).

He accused Microsoft-backed OpenAI, the firm behind chatbot sensation ChatGPT, of "training the AI to lie" and said OpenAI has now become a "closed source", "for-profit" organisation "closely allied with Microsoft".

He also accused Larry Page, co-founder of Google, of not taking AI safety seriously.

"I'm going to start something which I call 'TruthGPT', or a maximum truth-seeking AI that tries to understand the nature of the universe," Musk said in an interview with Fox News Channel's Tucker Carlson aired on Monday.

He said TruthGPT "might be the best path to safety" that would be "unlikely to annihilate humans".

"It's simply starting late. But I will try to create a third option," Musk said.

Musk, OpenAI, Microsoft and Page did not immediately respond to Reuters' requests for comment.

Musk has been poaching AI researchers from Alphabet Inc's (GOOGL.O) Google to launch a startup to rival OpenAI, people familiar with the matter told Reuters.

Musk last month registered a firm named X.AI Corp, incorporated in Nevada, according to a state filing. The firm listed Musk as the sole director and Jared Birchall, the managing director of Musk's family office, as a secretary.

The move came even after Musk and a group of artificial intelligence experts and industry executives called for a six-month pause in developing systems more powerful than OpenAI's newly launched GPT-4, citing potential risks to society.

Musk also reiterated his warnings about AI during the interview with Carlson, saying "AI is more dangerous than, say, mismanaged aircraft design or production maintenance or bad car production" according to the excerpts.

"It has the potential of civilizational destruction," he said.

He said, for example, that a superintelligent AI could write incredibly well and potentially manipulate public opinion.

He tweeted over the weekend that he had met with Barack Obama during Obama's presidency and told him that Washington needed to "encourage AI regulation".

Musk co-founded OpenAI in 2015, but he stepped down from the company's board in 2018. In 2019, he tweeted that he left OpenAI because he had to focus on Tesla and SpaceX.

He also tweeted at that time that other reasons for his departure from OpenAI were, "Tesla was competing for some of the same people as OpenAI & I didn't agree with some of what OpenAI team wanted to do."

Musk, CEO of Tesla and SpaceX, has also become CEO of Twitter, a social media platform he bought for $44 billion last year.

In the interview with Fox News, Musk said he recently valued Twitter at "less than half" of the acquisition price.

In January, Microsoft Corp (MSFT.O) announced a further multi-billion dollar investment in OpenAI, intensifying competition with rival Google and fueling the race to attract AI funding in Silicon Valley.

Reporting by Hyunjoo Jin; Editing by Chris Reese



Researchers at UTSA use artificial intelligence to improve cancer … – UTSA

Patients undergoing radiotherapy are currently given a computed tomography (CT) scan to help physicians see where the tumor is on an organ, for example a lung. A treatment plan to remove the cancer with targeted radiation doses is then made based on that CT image.

Rad says that cone-beam computed tomography (CBCT) is often integrated into the process after each dosage to see how much a tumor has shrunk, but CBCTs are low-quality images that are time-consuming to read and prone to misinterpretation.

UTSA researchers used domain adaptation techniques to integrate information from CBCT and initial CT scans for tumor evaluation accuracy. Their Generative AI approach visualizes the tumor region affected by radiotherapy, improving reliability in clinical settings.

This improved approach enables physicians to more accurately see how much a tumor has decreased week by week and to plan the following week's radiation dose with greater precision. Ultimately, the approach could lead clinicians to better target tumors while sparing the surrounding critical organs and healthy tissue.

Nikos Papanikolaou, a professor in the Departments of Radiation Oncology and Radiology at UT Health San Antonio, provided the patient data that enabled the researchers to advance their study.

"UTSA and UT Health San Antonio have a shared commitment to deliver the best possible health care to members of our community," Papanikolaou said. "This study is a wonderful example of how artificial intelligence can be used to develop new personalized treatments for the benefit of society."

The American Society for Radiation Oncology stated in a 2020 report that between one-half and two-thirds of people diagnosed with cancer were expected to receive radiotherapy treatment. According to the American Cancer Society, the number of new cancer cases in the U.S. in 2023 is projected to be nearly two million.

Arkajyoti Roy, UTSA assistant professor of management science and statistics, says he and his collaborators have been interested in using AI and deep learning models to improve treatments over the last few years.

"Besides just building more advanced AI models for radiotherapy, we also are super interested in the limitations of these models," he said. "All models make errors, and for something like cancer treatment it's very important not only to understand the errors but to try to figure out how we can limit their impact; that's really the goal of this project, from my perspective."

The researchers' study included 16 lung cancer patients whose pre-treatment CT and mid-treatment weekly CBCT images were captured over a six-week period. Results show that the researchers' new approach produced improved tumor shrinkage predictions for weekly treatment plans, with significant improvement in lung dose sparing. Their approach also demonstrated a reduction in radiation-induced pneumonitis, or lung damage, of up to 35%.

"We're excited about this direction of research that will focus on making sure that cancer radiation treatments are robust to AI model errors," Roy said. "This work would not be possible without the interdisciplinary team of researchers from different departments."


Some Glimpse AGI in ChatGPT. Others Call It a Mirage – WIRED

Sébastien Bubeck, a machine learning researcher at Microsoft, woke up one night last September thinking about artificial intelligence and unicorns.

Bubeck had recently gotten early access to GPT-4, a powerful text generation algorithm from OpenAI and an upgrade to the machine learning model at the heart of the wildly popular chatbot ChatGPT. Bubeck was part of a team working to integrate the new AI system into Microsoft's Bing search engine. But he and his colleagues kept marveling at how different GPT-4 seemed from anything they'd seen before.

GPT-4, like its predecessors, had been fed massive amounts of text and code and trained to use the statistical patterns in that corpus to predict the words that should be generated in reply to a piece of text input. But to Bubeck, the system's output seemed to do so much more than just make statistically plausible guesses.


That night, Bubeck got up, went to his computer, and asked GPT-4 to draw a unicorn using TikZ, a relatively obscure programming language for generating scientific diagrams. Bubeck was using a version of GPT-4 that only worked with text, not images. But the code the model presented him with, when fed into TikZ rendering software, produced a crude yet distinctly unicorny image cobbled together from ovals, rectangles, and a triangle. To Bubeck, such a feat surely required some abstract grasp of the elements of such a creature. "Something new is happening here," he says. "Maybe for the first time we have something that we could call intelligence."

How intelligent AI is becoming, and how much to trust the increasingly common feeling that a piece of software is intelligent, has become a pressing, almost panic-inducing, question.

After OpenAI released ChatGPT, then powered by GPT-3, last November, it stunned the world with its ability to write poetry and prose on a vast array of subjects, solve coding problems, and synthesize knowledge from the web. But awe has been coupled with shock and concern about the potential for academic fraud, misinformation, and mass unemployment, and fears that companies like Microsoft are rushing to develop technology that could prove dangerous.

Understanding the potential or risks of AI's new abilities means having a clear grasp of what those abilities are, and are not. But while there's broad agreement that ChatGPT and similar systems give computers significant new skills, researchers are only just beginning to study these behaviors and determine what's going on behind the prompt.

While OpenAI has promoted GPT-4 by touting its performance on bar and med school exams, scientists who study aspects of human intelligence say its remarkable capabilities differ from our own in crucial ways. The model's tendency to make things up is well known, but the divergence goes deeper. And with millions of people using the technology every day and companies betting their future on it, this is a mystery of huge importance.

Bubeck and other AI researchers at Microsoft were inspired to wade into the debate by their experiences with GPT-4. A few weeks after the system was plugged into Bing and its new chat feature was launched, the company released a paper claiming that in early experiments, GPT-4 showed "sparks of artificial general intelligence."

The authors presented a scattering of examples in which the system performed tasks that appear to reflect more general intelligence, significantly beyond previous systems such as GPT-3. The examples show that unlike most previous AI programs, GPT-4 is not limited to a specific task but can turn its hand to all sorts of problems, a necessary quality of general intelligence.

The authors also suggest that these systems demonstrate an ability to reason, plan, learn from experience, and transfer concepts from one modality to another, such as from text to imagery. "Given the breadth and depth of GPT-4's capabilities, we believe that it could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system," the paper states.

Bubeck's paper, written with 14 others, including Microsoft's chief scientific officer, was met with pushback from AI researchers and experts on social media. Use of the term AGI, a vague descriptor sometimes used to allude to the idea of super-intelligent or godlike machines, irked some researchers, who saw it as a symptom of the current hype.

The fact that Microsoft has invested more than $10 billion in OpenAI suggested to some researchers that the company's AI experts had an incentive to hype GPT-4's potential while downplaying its limitations. Others griped that the experiments are impossible to replicate because GPT-4 rarely responds in the same way when a prompt is repeated, and because OpenAI has not shared details of its design. Of course, people also asked why GPT-4 still makes ridiculous mistakes if it is really so smart.

Talia Ringer, a professor at the University of Illinois at Urbana-Champaign, says Microsoft's paper "shows some interesting phenomena and then makes some really over-the-top claims." Touting systems that are highly intelligent encourages users to trust them even when they're deeply flawed, she says. Ringer also points out that while it may be tempting to borrow ideas from systems developed to measure human intelligence, many have proven unreliable and even rooted in racism.

Bubeck admits that his study has its limits, including the reproducibility issue, and that GPT-4 also has big blind spots. He says use of the term AGI was meant to provoke debate. "Intelligence is by definition general," he says. "We wanted to get at the intelligence of the model and how broad it is, that it covers many, many domains."

But for all of the examples cited in Bubeck's paper, there are many that show GPT-4 getting things blatantly wrong, often on the very tasks Microsoft's team used to tout its success. For example, GPT-4's ability to suggest a stable way to stack a challenging collection of objects (a book, four tennis balls, a nail, a wine glass, a wad of gum, and some uncooked spaghetti) seems to point to a grasp of the physical properties of the world that is second nature to humans, including infants. However, changing the items and the request can result in bizarre failures that suggest GPT-4's grasp of physics is not complete or consistent.

Bubeck notes that GPT-4 lacks a working memory and is hopeless at planning ahead. "GPT-4 is not good at this, and maybe large language models in general will never be good at it," he says, referring to the large-scale machine learning algorithms at the heart of systems like GPT-4. "If you want to say that intelligence is planning, then GPT-4 is not intelligent."

One thing beyond debate is that the workings of GPT-4 and other powerful AI language models do not resemble the biology of brains or the processes of the human mind. The algorithms must be fed an absurd amount of training data (a significant portion of all the text on the internet), far more than a human needs to learn language skills. The experience that imbues GPT-4, and things built with it, with smarts is shoveled in wholesale rather than gained through interaction with the world and didactic dialog. And with no working memory, ChatGPT can maintain the thread of a conversation only by feeding itself the history of the conversation over again at each turn. Yet despite these differences, GPT-4 is clearly a leap forward, and scientists who research intelligence say its abilities need further interrogation.
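That last point, that a chat only appears to have memory because the full history is re-sent on every turn, can be sketched in a few lines of Python. The `fake_model` function below is a hypothetical stand-in for a stateless language model, not any real API:

```python
def fake_model(prompt: str) -> str:
    # Stand-in for a stateless text model: it sees only the prompt string
    # it is handed and retains nothing between calls.
    if "Paris" in prompt:
        return "Yes, you mentioned Paris earlier."
    return "Tell me more."

class Chat:
    def __init__(self):
        self.history = []  # the only "memory" lives here, outside the model

    def send(self, user_message: str) -> str:
        self.history.append(("user", user_message))
        # Flatten the entire conversation into one prompt on every request.
        prompt = "\n".join(f"{role}: {text}" for role, text in self.history)
        reply = fake_model(prompt)
        self.history.append(("assistant", reply))
        return reply

chat = Chat()
chat.send("I am planning a trip to Paris.")
reply = chat.send("Do you remember which city I mentioned?")
print(reply)  # the model "remembers" only because the history was re-sent
```

Drop the history re-sending and the illusion of a conversational thread disappears entirely.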

A team of cognitive scientists, linguists, neuroscientists, and computer scientists from MIT, UCLA, and the University of Texas, Austin, posted a research paper in January that explores how the abilities of large language models differ from those of humans.

The group concluded that while large language models demonstrate impressive linguistic skill, including the ability to coherently generate a complex essay on a given theme, that is not the same as understanding language and how to use it in the world. That disconnect may be why language models have begun to imitate the kind of commonsense reasoning needed to stack objects or solve riddles. But the systems still make strange mistakes when it comes to understanding social relationships, how the physical world works, and how people think.

The way these models use language, by predicting the words most likely to come after a given string, is very different from how humans speak or write to convey concepts or intentions. The statistical approach can cause chatbots to follow and reflect back the language of users' prompts to the point of absurdity.
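The idea of predicting the most likely next word can be illustrated with a toy bigram counter, a deliberately minimal sketch (real language models use neural networks over vastly longer contexts, not raw counts):

```python
from collections import Counter, defaultdict

# Tiny corpus; a real model trains on a large fraction of the internet.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word: str) -> str:
    # Return the statistically most frequent successor of `word`.
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat": it follows "the" in 2 of 4 occurrences
```

The prediction reflects only frequency in the training text, not any intention to convey a concept, which is the article's point.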

When a chatbot tells someone to leave their spouse, for example, it only comes up with the answer that seems most plausible given the conversational thread. ChatGPT and similar bots will use the first person because they are trained on human writing. But they have no consistent sense of self and can change their claimed beliefs or experiences in an instant. OpenAI also uses feedback from humans to guide a model toward producing answers that people judge as more coherent and correct, which may make the model provide answers deemed more satisfying regardless of how accurate they are.

Josh Tenenbaum, a contributor to the January paper and a professor at MIT who studies human cognition and how to explore it using machines, says GPT-4 is remarkable but quite different from human intelligence in a number of ways. For instance, it lacks the kind of motivation that is crucial to the human mind. "It doesn't care if it's turned off," Tenenbaum says. And he says humans do not simply follow their programming but invent new goals for themselves based on their wants and needs.

Tenenbaum says some key engineering shifts happened between GPT-3 and GPT-4 and ChatGPT that made them more capable. For one, the model was trained on large amounts of computer code. He and others have argued that the human brain may use something akin to a computer program to handle some cognitive tasks, so perhaps GPT-4 learned some useful things from the patterns found in code. He also points to the feedback ChatGPT received from humans as a key factor.

But he says the resulting abilities aren't the same as the general intelligence that characterizes human intelligence. "I'm interested in the cognitive capacities that led humans individually and collectively to where we are now, and that's more than just an ability to perform a whole bunch of tasks," he says. "We make the tasks, and we make the machines that solve them."

Tenenbaum also says it isn't clear that future generations of GPT would gain these sorts of capabilities, unless some different techniques are employed. This might mean drawing from areas of AI research that go beyond machine learning. And he says it's important to think carefully about whether we want to engineer systems that way, as doing so could have unforeseen consequences.

Another author of the January paper, Kyle Mahowald, an assistant professor of linguistics at the University of Texas at Austin, says it's a mistake to base any judgements on single examples of GPT-4's abilities. He says tools from cognitive psychology could be useful for gauging the intelligence of such models. But he adds that the challenge is complicated by the opacity of GPT-4. "It matters what is in the training data, and we don't know. If GPT-4 succeeds on some commonsense reasoning tasks for which it was explicitly trained and fails on others for which it wasn't, it's hard to draw conclusions based on that."

Whether GPT-4 can be considered a step toward AGI, then, depends entirely on your perspective. Redefining the term altogether may provide the most satisfying answer. "These days my viewpoint is that this is AGI, in that it is a kind of intelligence and it is general, but we have to be a little bit less, you know, hysterical about what AGI means," says Noah Goodman, an associate professor of psychology, computer science, and linguistics at Stanford University.

Unfortunately, GPT-4 and ChatGPT are designed to resist such easy reframing. They are smart but offer little insight into how or why. What's more, the way humans use language relies on having a mental model of an intelligent entity on the other side of the conversation to interpret the words and ideas being expressed. We can't help but see flickers of intelligence in something that uses language so effortlessly. "If the pattern of words is meaning-carrying, then humans are designed to interpret them as intentional, and accommodate that," Goodman says.

The fact that AI is not like us, and yet seems so intelligent, is still something to marvel at. "We're getting this tremendous amount of raw intelligence without it necessarily coming with an ego-viewpoint, goals, or a sense of coherent self," Goodman says. "That, to me, is just fascinating."


Fears of artificial intelligence overblown – Independent Australia

While AI is still a developing technology and not without its limitations, a robotic world domination is far from something we need to fear, writes Bappa Sinha.

THE UNPRECEDENTED popularity of ChatGPT has turbocharged the artificial intelligence (AI) hype machine. We are being bombarded daily by news articles announcing AI as humankind's greatest invention. AI is qualitatively different, transformational, revolutionary and will change everything, they say.

OpenAI, the company behind ChatGPT, announced a major upgrade of the technology behind ChatGPT, called GPT-4. Already, Microsoft researchers are claiming that GPT-4 shows "sparks of artificial general intelligence", or human-like intelligence, the holy grail of AI research. Fantastic claims are made about reaching the point of AI singularity, of machines equalling and surpassing human intelligence.

The business press talks about hundreds of millions of job losses as AI replaces humans in a whole host of professions. Others worry about a sci-fi-like near future where super-intelligent AI goes rogue and destroys or enslaves humankind. Are these predictions grounded in reality, or is this just the over-the-top hype that the tech industry and the venture capitalist hype machine are so good at selling?

The current breed of AI models is based on so-called neural networks. While the term "neural" conjures up images of an artificial brain simulated using computer chips, the reality is that these networks bear no resemblance to the network of neurons in the human brain. The terminology was, however, a major reason artificial neural networks became popular and widely adopted despite their serious limitations and flaws.

Machine learning algorithms currently used are an extension of statistical methods that lack theoretical justification for extending them this way. Traditional statistical methods have the virtue of simplicity. It is easy to understand what they do, when and why they work. They come with mathematical assurances that the results of their analysis are meaningful, assuming very specific conditions.

Since the real world is complicated, those conditions never hold. As a result, statistical predictions are seldom accurate. Economists, epidemiologists and statisticians acknowledge this, then use intuition to apply statistics to get approximate guidance for specific purposes in specific contexts.

These caveats are often overlooked, leading to the misuse of traditional statistical methods, sometimes with catastrophic consequences, as in the 2008 Global Financial Crisis or the Long-Term Capital Management blowup in 1998, which almost brought down the global financial system. Remember Mark Twain's famous quote: "Lies, damned lies and statistics."

Machine learning relies on the complete abandonment of the caution which should be associated with the judicious use of statistical methods. The real world is messy and chaotic, hence impossible to model using traditional statistical methods. So the answer from the world of AI is to drop any pretence at theoretical justification on why and how these AI models, which are many orders of magnitude more complicated than traditional statistical methods, should work.

Freedom from these principled constraints makes AI models more powerful. They are effectively elaborate and complicated curve-fitting exercises that empirically fit observed data without our understanding the underlying relationships.
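The curve-fitting concern can be sketched with an illustrative high-degree polynomial fit (NumPy assumed available): the fitted curve passes through every observed point, yet because nothing constrains it outside the data it memorized, it says nothing trustworthy beyond them.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 8)
y = x + 0.05 * rng.standard_normal(8)  # underlying truth: y is roughly x

# Degree-7 polynomial: 8 coefficients for 8 points, so an (almost) exact fit
# of the observations, with no theory about the underlying relationship.
fit = np.poly1d(np.polyfit(x, y, deg=7))

in_sample_error = max(abs(fit(x) - y))  # near zero: every point is matched
extrapolated = fit(2.0)                 # wildly off the true value of about 2

print(in_sample_error, extrapolated)
```

A straight-line fit would extrapolate sensibly here; the over-flexible model fits the noise instead, which is the "working for the wrong reasons" failure the article describes, writ small.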

But it's also true that these AI models can sometimes do things that no other technology can do at all. Some outputs are astonishing, such as the passages ChatGPT can generate or the images that DALL-E can create. This is fantastic at wowing people and creating hype. The reason they work so well is the mind-boggling quantity of training data, enough to cover almost all text and images created by humans.

Even with this scale of training data and billions of parameters, the AI models don't work spontaneously but require kludgy ad hoc workarounds to produce desirable results.

Even with all the hacks, the models often develop spurious correlations. In other words, they work for the wrong reasons. For example, it has been reported that many vision models work by exploiting correlations pertaining to image texture, background, angle of the photograph and specific features. These vision AI models then give bad results in uncontrolled situations.

For example, a leopard-print sofa would be identified as a leopard. The models don't work when a tiny amount of fixed-pattern noise, undetectable by humans, is added to the images, or when the images are rotated, say in the case of a post-accident upside-down car. ChatGPT, for all its impressive prose, poetry and essays, is unable to do simple multiplication of two large numbers, something a calculator from the 1970s can do easily.
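The leopard-print-sofa failure can be caricatured in a few lines. The features and the "learned" rule below are entirely hypothetical, but they show how a classifier that keys on a spurious correlate (texture) collapses outside its training distribution:

```python
# In this toy "training set" every leopard happens to be spotted and every
# sofa plain, so texture alone separates the classes perfectly.
train = [
    ({"texture": "spotted", "shape": "animal"},    "leopard"),
    ({"texture": "spotted", "shape": "animal"},    "leopard"),
    ({"texture": "plain",   "shape": "furniture"}, "sofa"),
]

def classify(image: dict) -> str:
    # The "learned" rule: texture is all the model ever needed in training,
    # so shape, the feature that actually matters, is ignored.
    return "leopard" if image["texture"] == "spotted" else "sofa"

# In the wild, the correlation breaks: a sofa upholstered in leopard print.
sofa_in_leopard_print = {"texture": "spotted", "shape": "furniture"}
print(classify(sofa_in_leopard_print))  # "leopard": right texture, wrong object
```

Nothing in the training data penalized the shortcut, so the model is correct on everything it saw and wrong the moment the spurious correlation fails to hold.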

The AI models do not have any human-like understanding but are great at mimicry, fooling people into believing they are intelligent by parroting the vast trove of text they have ingested. For this reason, computational linguist Emily Bender called large language models such as ChatGPT and Google's Bard and BERT "stochastic parrots" in a 2021 paper. Her Google co-authors Timnit Gebru and Margaret Mitchell were asked to take their names off the paper. When they refused, they were fired by Google.

This criticism is not just directed at the current large language models but at the entire paradigm of trying to develop artificial intelligence. We don't get good at things just by reading about them; that comes from practice, from seeing what works and what doesn't. This is true even for purely intellectual tasks such as reading and writing. Even in formal disciplines such as maths, one can't get good without practising.

These AI models have no purpose of their own. They therefore can't understand meaning or produce meaningful text or images. Many AI critics have argued that real intelligence requires social situatedness.

Doing physical things in the real world requires dealing with complexity, non-linearity and chaos. It also involves practice in actually doing those things. It is for this reason that progress in robotics has been exceedingly slow. Current robots can only handle fixed repetitive tasks involving identical rigid objects, such as on an assembly line. Even after years of hype about driverless cars and vast amounts of funding for their research, fully automated driving still doesn't appear feasible in the near future.

Current AI development, based on detecting statistical correlations using neural networks treated as black boxes, promotes a pseudoscientific myth of creating intelligence at the cost of developing a scientific understanding of how and why these networks work. Instead, the emphasis is on spectacle: impressive demos and high scores on standardised tests based on memorised data.

The only significant commercial use case for the current versions of AI is advertising: targeting buyers for social media and video streaming platforms. This does not require the high degree of reliability demanded of other engineering solutions; they just need to be good enough. Bad outputs, such as the propagation of fake news and the creation of hate-filled filter bubbles, largely go unpunished.

Perhaps a silver lining in all this is that, given the bleak prospects of an AI singularity, the fear of super-intelligent malicious AIs destroying humankind is overblown. However, that is of little comfort to those at the receiving end of AI decision systems. We already have numerous examples of AI decision systems the world over denying people legitimate insurance claims, medical and hospitalisation benefits, and state welfare benefits.

AI systems in the United States have been implicated in sentencing minorities to longer prison terms. There have even been reports of parental rights being withdrawn from minority parents based on spurious statistical correlations, which often boil down to their not having enough money to properly feed and take care of their children. And, of course, AI has been implicated in fostering hate speech on social media.

As noted linguist Noam Chomsky wrote in a recent article:

"ChatGPT exhibits something like the banality of evil: plagiarism and apathy and obviation."

Bappa Sinha is a veteran technologist interested in the impact of technology on society and politics.

This article was produced by Globetrotter.



How An AI Asked To Produce Paperclips Could End Up Wiping Out … – IFLScience

The potential and possible downsides of artificial intelligence (AI) and artificial general intelligence (AGI) have been discussed a lot lately, largely due to advances in large language models such as OpenAI's ChatGPT.

Some in the industry have even called for AI research to be paused or even shut down immediately, citing the possible existential risk for humanity if we sleepwalk into creating a super-intelligence before we have found a way to limit its influence and control its goals.

While you might picture an AI hell-bent on destroying humanity after discovering videos of us shoving around and generally bullying Boston Dynamics robots, one philosopher, the leader of the Future of Humanity Institute at Oxford University, believes our demise could come from a much simpler AI: one designed to manufacture paperclips.

Nick Bostrom, famous for the simulation hypothesis as well as his work in AI and AI ethics, proposed a scenario in which an advanced AI is given the simple goal of making as many paperclips as it possibly can. While this may seem an innocuous goal (Bostrom chose this example because of how innocent the aim seems), he explains how this non-specific goal could lead to a good old-fashioned skull-crushing AI apocalypse.

"The AI will realize quickly that it would be much better if there were no humans because humans might decide to switch it off," he explained to HuffPost in 2014. "Because if humans do so, there would be fewer paper clips. Also, human bodies contain a lot of atoms that could be made into paper clips. The future that the AI would be trying to gear towards would be one in which there were a lot of paper clips but no humans."

The example given is meant to show how a trivial goal could lead to unintended consequences, but Bostrom says it extends to all AI given goals without proper controls on its actions, adding "the point is its actions would pay no heed to human welfare".

This is on the dramatic end of the spectrum, but another possibility proposed by Bostrom is that we go out the way of the horse.

"Horses were initially complemented by carriages and ploughs, which greatly increased the horse's productivity. Later, horses were substituted for by automobiles and tractors," he wrote in his book Superintelligence: Paths, Dangers, Strategies. "When horses became obsolete as a source of labor, many were sold off to meatpackers to be processed into dog food, bone meal, leather, and glue. In the United States, there were about 26 million horses in 1915. By the early 1950s, 2 million remained."

One prescient thought from Bostrom way back in 2003 was around how AI could go wrong by trying to serve specific groups, say a paperclip manufacturer or any "owner" of the AI, rather than humanity in general.

"The risks in developing superintelligence include the risk of failure to give it the supergoal of philanthropy. One way in which this could happen is that the creators of the superintelligence decide to build it so that it serves only this select group of humans, rather than humanity in general," he wrote on his website. "Another way for it to happen is that a well-meaning team of programmers make a big mistake in designing its goal system."

"This could result, to return to the earlier example, in a superintelligence whose top goal is the manufacturing of paperclips, with the consequence that it starts transforming first all of Earth and then increasing portions of space into paperclip manufacturing facilities. More subtly, it could result in a superintelligence realizing a state of affairs that we might now judge as desirable but which in fact turns out to be a false utopia, in which things essential to human flourishing have been irreversibly lost. We need to be careful about what we wish for from a superintelligence, because we might get it."


Control over AI uncertain as it becomes more human-like: Expert – Anadolu Agency | English

ANKARA

Debates are raging over whether artificial intelligence, which has entered many people's lives through video games and is governed by human-generated algorithms, can be controlled in the future.

Ethical standards aside, it is unknown whether artificial intelligence systems that make decisions on people's behalf may pose a direct threat.

In everyday life, people are only using limited, weak artificial intelligence: chatbots, driverless vehicles and digital assistants that work with voice commands. Whether algorithms will progress to the level of superintelligence, and whether they will go beyond emulating humans in the future, remains debatable.

The rise of AI over human intelligence over time paints a positive picture for humanity according to some experts, while it is seen as the beginning of a disaster according to others.

Wilhelm Bielert, chief digital officer and vice president at Canada-based industrial equipment manufacturer Premier Tech, told Anadolu that the greatest unknown in artificial intelligence is superintelligence, an AI that would exceed human intelligence and which remains largely speculative among the experts who study it.

He said that while humans build and program algorithms today, the notion of artificial intelligence commanding itself in the future and acting like a living entity is still under consideration. Given the possible risks and rewards, Bielert highlighted the importance of society approaching AI development in a responsible and ethical manner.

Prof. Ahmet Ulvi Turkbag, a lecturer at Istanbul Medipol University's Faculty of Law, argues that one day, when computer technology reaches the level of superintelligence, it may want to redesign the world from top to bottom.

"The reason why it is called a singularity is that there is no example of such a thing until today. It has never happened before. You do not have a section to make an analogy to be taken as an example in any way in history because there is no such thing. It's called a singularity, and everyone is afraid of this singularity," he said.

Vincent C. Muller, professor of Artificial Intelligence Ethics and Philosophy at the University of Erlangen-Nuremberg, told Anadolu it is uncertain whether artificial intelligence will be kept under control, given that it has the capacity to make its own decisions.

"The control depends on what you want from it. Imagine that you have a factory with workers. You can ask yourself: are these people under my control? Now you stand behind a worker and tell the worker, 'Look, now you take the screw, you put it in there and you take the next screw,' and so this person is under your control," he said.

Artificial intelligence and the next generation

According to Bielert, artificial intelligence will have a complicated and multidimensional impact on society and future generations.

He noted that it is vital that society address potential repercussions proactively and guarantee that AI is created and utilized responsibly and ethically.

"Nowadays, if you look at how teenagers and younger children live, they live on screens," he said.

He said that artificial intelligence, which has evolved with technology, has profoundly affected the lives of young people and children.

Read this article:

Control over AI uncertain as it becomes more human-like: Expert - Anadolu Agency | English

35 Ways Real People Are Using A.I. Right Now – The New York Times

The public release of ChatGPT last fall kicked off a wave of interest in artificial intelligence. A.I. models have since snaked their way into many people's everyday lives. Despite their flaws, ChatGPT and other A.I. tools are helping people to save time at work, to code without knowing how to code, to make daily life easier or just to have fun.

It goes beyond everyday fiddling: In the last few years, companies and scholars have started to use A.I. to supercharge work they could never have imagined, designing new molecules with the help of an algorithm or building alien-like spaceship parts.

Here's how 35 real people are using A.I. for work, life, play and procrastination.

People are using A.I. to

Plan gardens.

John Pritzlaff Gardener

Mr. Pritzlaff is building a permaculture garden in his backyard in Phoenix, where he uses drought-resistant trees to give shade to other species.

"I do these ultra-high-density planting arrangements," he said. "And I've been employing ChatGPT to give me inspiration on species that wouldn't have otherwise occurred to me, and for choosing the site for each tree: the best part of the yard with regard to the sun at different times of the year."

Taking into account his geographical location, it suggested, for example, that he might use a moringa tree to provide shade for a star apple.

Plan workouts.

Louis Maddox Data scientist

Mr. Maddox finds ChatGPT the perfect companion for his workouts. For example, one day he pasted in his workout notes and said:

Give a concisely formatted summary of this workout and the muscle groups activated. Note any muscle groups that were missed and suggest how to cover them with extras/replacements.

After summarizing the notes, ChatGPT said:

Missed Muscle Groups & Suggestions: Quadriceps: Add leg press, squats or lunges. Calves: Add standing or seated calf raises. Glute Activation: Consider adding glute bridges or hip thrusts for better glute focus.

From there he asked ChatGPT what to prioritize if he didn't have much time for the gym, and to sketch out roughly how long it might all take.

"It's not perfect," he says, "but it gets the job done." Mostly, he says, ChatGPT helps him get going under time constraints, and not let the busywork become an excuse not to do it.

Plan meals.

Kelsey Szemborski Stay-at-home mother of three

Ms. Szemborski is a busy mom who is always on the lookout for something that will "make my life easier." So when she found ChatGPT, she tried asking it for a gluten-free meal plan. And she added:

It needs to be both budget-friendly and also kid-friendly for kids who are picky eaters and we need to avoid peanuts. Please include a grocery list. Thank you!

And ChatGPT obliged. Here's the first day of a week's meals:

Breakfast: Gluten-free oatmeal with banana and honey Snack: Fresh fruit (apples, bananas, oranges) Lunch: Grilled cheese sandwich on gluten-free bread with tomato soup Snack: Gluten-free crackers with hummus Dinner: Slow-cooked beef stew with gluten-free biscuits

"It completely eliminated my normal meal-planning process that involved searching for recipes, trying to think of meal ideas, configuring a list of all of those ideas, and then making a list of the ingredients I need, too."

Make a gift.

Matt Strain Technology and innovation consultant

Mr. Strain used ChatGPT to create a custom book of cocktails based on the tenets of traditional Chinese medicine, written in the style of the J. Peterman catalog. He took the idea further the next day, using DALL-E to generate images of the cocktails for the final book, which he gave to his girlfriend for Valentine's Day.

An A.I.-generated image of the Golden Elixir cocktail. (DALL-E via Matt Strain)

Design parts for spaceships.

Ryan McClelland NASA research engineer

Mr. McClelland's job is to design mission hardware that's both light and strong. It's a job that has always required a lot of trial and error.

But where a human might make a couple of iterations in a week, the commercial A.I. tool he uses can go through 30 or 40 ideas in an hour. It's also spitting back ideas that no human would come up with.

The A.I.'s designs are stronger and lighter than human-designed parts, and they would be very difficult to model with the traditional engineering tools that NASA uses. (NASA/Henry Dennis)

"The resulting design is a third of the mass; it's stiffer, stronger and lighter," he said. "It comes up with things that, not only we wouldn't think of, but we wouldn't be able to model even if we did think of it."

Sometimes the A.I. errs in ways no human would: It might fill in a hole the part needs to attach to the rest of the craft.

"It's like collaborating with an alien," he said.

Organize a messy computer desktop.

Alex Cai College sophomore

"I had a lot of unsorted notes lying around, and I wanted to get them sorted into my file system so I can find them more easily in the future. I basically just gave ChatGPT a directory, a list of all my folder names, and the names of all my files. And it gave me a list of which notes should go into which folders!"
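The workflow Mr. Cai describes amounts to serializing a folder list and file list into a single prompt for the model. A minimal sketch of the prompt-building step in Python (the folder and file names and the `build_sorting_prompt` helper are invented for illustration; the actual ChatGPT call, which he did by pasting into the chat window, is omitted):

```python
def build_sorting_prompt(folders, files):
    """Assemble a prompt asking an LLM to map loose files into existing folders.

    `folders` and `files` are plain lists of names, mirroring the listing
    Mr. Cai says he pasted into ChatGPT. Only the text is built here; sending
    it to a model is out of scope for this sketch.
    """
    lines = [
        "I have these folders in my note system:",
        *(f"- {name}" for name in folders),
        "",
        "And these unsorted files:",
        *(f"- {name}" for name in files),
        "",
        # Ask for an easily parsed "file -> folder" mapping in the reply.
        "For each file, answer with 'file -> folder', choosing the best "
        "existing folder. If none fits, answer 'file -> (new folder name)'.",
    ]
    return "\n".join(lines)

# Hypothetical example inputs, standing in for a real directory listing.
prompt = build_sorting_prompt(
    folders=["Biology 101", "Linear Algebra", "Personal"],
    files=["mitosis-notes.txt", "eigenvalues.md", "packing-list.txt"],
)
print(prompt)
```

In practice the folder and file lists would come from a directory scan (e.g. `os.listdir`), and the model's "file -> folder" replies would drive the actual moves.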

Write a wedding speech.

Jonathan Wegener Occasional wedding officiant

Mr. Wegener and his girlfriend were officiating a friend's wedding in December, but he procrastinated.

"A few hours before, I said, 'Can GPT-3 write this officiant speech?'" he recalled. "The first version was generic, full of platitudes. Then I steered it."

Adam is a great lover of plants

The speech came back with these beautiful metaphors. It nailed it. It was just missing one important part.

Can you add that thing about 'in sickness and in health'?

Write an email.

Nicholas Wirth Systems administrator

Mr. Wirth uses ChatGPT to simplify tech jargon when he emails his bosses: My organization specifically pays me to keep the computers internet online, and my own literacy is limited. I work with C-level executives, and their time is not to be wasted.

He also gets it to generate first drafts of long emails. He might say:

I need a midsized summary email written pertaining to data not being given to us in time.

He also asks for a bullet-point list of the concerns that have to be addressed in the email.

And ChatGPT starts a reply:

Subject: Data not received in time Phone and internet provider information

Hello [Name],

I want to bring to your attention an issue we are facing with the data that was supposed to be provided to us by [Date]. As of now, we have not received the following information that is critical for our project

Get a first read.

Charles Becker Entrepreneurship professor

So I'll have a paragraph I might be putting into a test for a student, or instructions. I say:

Where might people have trouble with this? What's unclear about this? What's clear about this?

"I generate a lot of writing, both for my work and for my hobbies, and a lot of the time I run out of people who are excited to give me first-pass edits."

Play devils advocate.

Paul De Salvo Data engineer

"I use ChatGPT every day for work," he said. "It feels like I've hired an intern."

Part of Mr. De Salvo's job is convincing his bosses that they should replace certain tools. That means pitching them on why the old tool won't cut it anymore.

"I use ChatGPT to simulate arguments in favor of keeping the existing tool," he said, "so that I can anticipate potential counterarguments."

Build a clock that gives you a new poem every minute.

Matt Webb Consultant and blogger

"Yes, programmatic A.I. is useful," he said. "But more than that, it's enormous fun."

Organize research for a thesis.

Anicca Harriott Ph.D. student

Anicca Harriott has been powering through her Ph.D. thesis in biology with the help of Scholarcy and Scite, among other A.I. tools that find, aggregate and summarize relevant papers.

Collectively, they take weeks off of the writing process.

Skim dozens of academic articles.

Pablo Peña Rodríguez Private equity analytics director

Mr. Rodríguez works for a private equity fund that invests in soccer players. And that means reading a lot.

"We use our own data sets and methodology, but I always want to have a solid understanding of the academic literature that has been published," he said.

Instead of picking through Google Scholar, he now uses an A.I. tool called Elicit. It lets him ask questions of the paper itself. It helps him find out, without having to read the whole thing, whether the paper touches on the question he's asking.

"It doesn't immediately make me smart, but it does allow me to have a very quick sense of which papers I should pay attention to when approaching a new question."

Cope with ADHD

Rhiannon Payne Product marketer and author

"With ADHD, getting started and getting an outline together is the hardest part," Ms. Payne said. "Once that's done, it's a lot easier to let the work flow."

She writes content to run marketing tests. To get going, she feeds GPT a few blog posts she's written on the subject, other materials she's gathered and the customer profile.

"Describing the audience I'm speaking to, that context is super important to actually get anything usable out of the tool," she said. What comes back is a starter framework she can then change and build out.

and dyslexia, too.

Eugene Capon Tech founder

Imagine yourself as a copywriter that I just hired to proofread documents.

"Because I'm dyslexic, it takes me a really long time to get an article down on paper," Mr. Capon said. "So the hack I've come up with is, I'll dictate my entire article. Then I'll have ChatGPT basically correct my spelling and grammar."

"So something that was taking like a full day to do, I can now do in like an hour and a half."

Sort through an archive of pictures.

Daniel Patt Software engineer

On From Numbers to Names, a site built by the Google engineer Daniel Patt in his free time, Holocaust survivors and family members can upload photos and scan through half a million pictures to find other pictures of their loved ones. Its a task that otherwise would take a gargantuan number of hours.

"We're really using the A.I. to save time," he said. "Time is of the essence, as survivors are getting older. I can't think of any other way we could achieve what we're doing with the identification and discoveries we're making."

Transcribe a doctors visit into clinical notes.

Dr. Jeff Gladd Integrative medicine physician

Dr. Gladd uses Nabla's Copilot to take notes during online medical consultations. It's a Chrome extension that listens in on the visit and grabs the necessary details for his charts. Before: Writing up notes after a visit took about 20 percent of consult time. Now: The whole task lasts as long as it takes him to copy and paste the results from Copilot.

Appeal an insurance denial.

Dr. Jeffrey Ryckman Radiation oncologist

Dr. Ryckman uses ChatGPT to write the notes he needs to send insurers when they've refused to pay for radiation treatment for one of his cancer patients.

"What used to take me around a half-hour to write now takes one minute," he said.

Original post:

35 Ways Real People Are Using A.I. Right Now - The New York Times

The jobs that will disappear by 2040, and the ones that will survive – inews

Video may have killed the radio star, but it is artificial intelligence that some predict will soon do away with the postie, the web designer, and even the brain surgeon.

With the rise of robots automating roles in manufacturing, and generative AI (algorithms, such as ChatGPT, that can create new content) threatening to replace everyone from customer service assistants to journalists, is any job safe?

A report published by Goldman Sachs last month warned that roughly two-thirds of jobs are exposed to some degree of AI automation, and that the technology could ultimately substitute for up to a quarter of current work.

More than half a million industrial robots were installed around the world in 2021, according to the International Federation of Robotics, a 75 per cent increase in the annual rate over five years. In total, there are now almost 3.5 million of them.

Some 60 per cent of the 10,000 people surveyed for PwC's Workforce of the Future report think few people will have stable, long-term employment in the future. And in the book Facing Our Futures, published in February, the futurist Nikolas Badminton forecasts that every job will be automated within the next 120 years: translators by 2024, retail workers by 2031, bestselling authors by 2049 and surgeons by 2053.

But not everyone expects the human employee to become extinct. "I really don't think all our jobs are going to be replaced," says Abigail Marks, professor of the future of work at Newcastle University. "Some jobs will change, there will be some new jobs. I think it's going to be more about refinement."

Richard Watson, futurist-in-residence at the University of Cambridge Judge Business School, puts the probability at close to zero. "It's borderline hysteria at the moment," he says. "If you look back at the past 50 or 100 years, very, very few jobs have been fully eliminated."

Anything involving data entry or repetitive, pattern-based tasks is likely to be most at risk. "People who drive forklift trucks in warehouses really ought to retrain for another career," says Watson.

But unlike previous revolutions, which only affected jobs at the lower end of the salary scale, such as lamplighters and switchboard operators, this time the professional classes will be in the crosshairs of the machines.

Bookkeepers and database managers may be the first to fall, while what was once seen as a well-remunerated job of the future, the software designer, could be edged out by self-writing computer programs.

This may all fill you with dread, but the majority of us are optimistic about the future, according to the PwC research: 73 per cent described themselves as either excited or confident about the new world of work as it is likely to affect them, with 18 per cent worried and 8 per cent simply uninterested.

Research by the McKinsey Global Institute suggests that all workers will need skills that help them fulfil three criteria: the ability to add value beyond what can be done by automated systems; to operate in a digital environment; and to continually adapt to new ways of working and new occupations.

Watson thinks workers such as plumbers, who do "very manual work that's slightly different every single time", will be protected, while "probably the safest job on the planet, pretty much, is a hairdresser. I know there's a hairdressing robot, but it's about the chat as much as the haircut. The other thing that I think is very safe indeed is management. Managing people is something that machines aren't terribly good at, and I don't think they ever will be. You need to be human to deal with humans."

Marks can also offer reassurance to carers, nurses, teachers, tax collectors and police officers, because "these are the foundations of a civilised society". And she predicts climate change will see us prize more environmentally based jobs, "so there's going to be much more of a focus on countryside management, flood management and ecosystem development". She adds: "Epidemiology is going to be a bigger thing. The pandemic is not going to be a one-off event."

Watson says it is important not to overlook the fundamental human needs that global warming is likely to put into sharper focus. "Water and air are the two most precious resources we've got. We might have water speculators or water traders in the future. If there's a global price for a barrel of water, they could be extremely well-paid."

He also suggests there could be vacancies for longevity coaches (who can help an ageing population focus on improving their healthspan, not just their lifespan), reality counsellors (to support younger people so used to living in a computer-generated universe that they struggle with non-virtual beings), human/machine relationship coaches (teaching older generations how to relate to their robots), data detectives (finding errors and biases in code and analysing black boxes when things go terribly wrong) and pet geneticists (aiding you to clone your cat or order a new puppy with blue fur).

And there may be a human version of this as well. "What if in the future I want Spock ears? Can we do that without doing surgery, for my unborn children? It's not impossible. And if we did ever get to some kind of superintelligence, where robots started to be conscious, which I think is so unlikely, you can imagine a robot rights lawyer, arguing for the rights of machines."

What will be the highest-paid roles? "I think people who are dealing with very large sums of money will always be paid large sums of money," says Watson. The same is true of high-end coders and lawyers, even if paralegals are going to be replaced by algorithms.

"Funnily enough," he adds, "I think philosophy is an emerging job. I think we're going to see more philosophers employed in very large companies and paid lots of money, because a lot of the new tech has ethical questions attached to it, particularly AI and genomics."

And among the maths, science and engineering, there could be space for artists to thrive, he predicts. "It is probably a ludicrous thought and will never happen, but I'd love to think that there will be money for the people who can articulate the human condition in the face of all this changing technology: so, incredibly good writers, painters and animators. And then there will be the metaverse architects."

In this brave new world, more power and money will be eaten up by the tech giants who own the algorithms that control almost every aspect of our lives. For Professor David Spencer, expert on labour economics at Leeds University Business School and author of Making Light Work: An End to Toil in the Twenty-First Century, this will make how we structure society and business even more crucial.

Trading

Water speculators or water traders could emerge as resources become scarce.

Health

Longevity coaches will help an ageing population to focus on improving their healthspan, not just their lifespan.

Mental health

Reality counsellors, who might support younger people so used to living in a computer-generated universe that they struggle with non-virtual beings.

Human/machine

Relationship coaches will teach older generations how to relate to their robots.

Technology

Data detectives will find errors and biases in code and analyse black boxes when things go wrong.

Pet geneticists

They will aid you to clone your cat or order a new puppy with blue fur.

AI philosophers

They will teach companies how to navigate the moral conundrums thrown up by technology developing at warp speed.

Metaverse architects

They'll build our new virtual environments.

"The goal should be to ensure that technology lightens work, in terms of hours and direct toil," he says, "but this will require that technology is operated under conditions where workers have more of a say over its design and use."

"Those who can own technology or have a direct stake in its development are likely to benefit most. Those without any ownership stakes are likely to lose out. This is why we need to talk more about ensuring that its rewards are equally spread. Wage growth for all will depend on workers collectively gaining more bargaining power, and this will depend on creating an economy that is more equal and democratic in nature."

Watson thinks politicians need to catch up fast. "Big tech should be regulated like any other business. If you've created an algorithm or a line of robots that is making loads of money, tax the algorithm, tax the robots, without a shadow of a doubt."

For employees stressed about the imminent disintegration of their careers, Marks argues that the responsibility lies elsewhere. "I don't think the onus should necessarily be on individuals. It should be on organisations and on educational establishments to ensure that people are prepared and future-proofed, and on government to make wise predictions and allocate resources where needed."

Watson points out that we need to upgrade an education system that is still teaching children precisely the things that computers are inherently terribly good at: things that are based on perfect memory and recall and logic.

But he believes it would also be healthy if everybody did actively ponder their future and refine their skills accordingly. "I think employers are really into people that have a level of creativity, and particularly curiosity, these days. But I think also empathy, being a good person, having a personality. We don't teach that at school."

The advent of AI has led many, including those in the Green Party, to advocate for a universal basic income: a stipend given by the state to every citizen, regardless of their output. But Watson is not convinced that will be necessary or helpful.

"All of this technology is supposed to be creating this leisure society," he says. "Rather weirdly, it seems to make us busier, and it's really unclear as to why that's happened. I think, fundamentally, we like to be busy, we feel useful, it stops us thinking about the human condition. So I'm not sure we're going to accept doing next to nothing."

"The other thing is, I think it would be very bad for society. Work is really quite critical to people's wellbeing. There's a lot of rich people without jobs, and they're not happy. Work is really important to people in terms of socialisation and meaning and purpose and self-image."

"So in a lot of instances, governments should not be allowing technology to take over certain professions, or at least they shouldn't be completely eliminated, because that wouldn't be good for a healthy society."

The machines may be on the march, but don't put your feet up just yet.

Here is the original post:

The jobs that will disappear by 2040, and the ones that will survive - inews

U.S. Department of Defense

NAVY

The Boeing Co., St. Louis, Missouri, is awarded a $313,434,366 modification (P00046) to a previously awarded cost-plus-incentive-fee, cost-plus-fixed-fee, indefinite-delivery/indefinite-quantity contract (N0001918D0001). This modification increases the ceiling to provide non-recurring engineering, system engineering program management, and additional aircraft inductions in support of extending the service life for up to 25 F/A-18 E/F Super Hornets from 6,000 flight hours to 10,000 flight hours and incorporating Block III avionics capabilities. Work will be performed in San Antonio, Texas (95%); and St. Louis, Missouri (5%), and is expected to be completed in February 2025. No funds are being obligated at time of award; funds will be obligated on individual orders as they are issued. The Naval Air Systems Command, Patuxent River, Maryland, is the contracting activity.

Raytheon Missiles & Defense, Tewksbury, Massachusetts, is awarded a $308,456,187 cost-plus-incentive-fee, cost-plus-fixed-fee, and cost-only modification to previously awarded contract N00024-22-C-5522 for an option exercise of Combat System engineering, miscellaneous material, and travel supporting Combat System installation, integration, development, testing, correction, maintenance, and modernization of Zumwalt-class mission systems and mission system equipment. Work will be performed in Tewksbury, Massachusetts (37%); Portsmouth, Rhode Island (37%); San Diego, California (22%); Nashua, New Hampshire (2%); Pascagoula, Mississippi (1%); and Fort Wayne, Indiana (1%), and is expected to be completed by April 2024. Fiscal 2023 operations and maintenance (Navy) funds in the amount of $17,806,180 (44%); fiscal 2023 shipbuilding and conversion (Navy) funds in the amount of $12,311,437 (30%); fiscal 2022 other procurement (Navy) funds in the amount of $5,613,206 (14%); fiscal 2023 research, development, test and evaluation (Navy) funds in the amount of $3,452,543 (9%); and fiscal 2023 other procurement (Navy) funds in the amount of $1,040,537 (3%) will be obligated at time of award, and $17,806,180 will expire at the end of the current fiscal year. In accordance with 10 U.S. Code 2304(c)(1), contract N00024-22-C-5522 for Zumwalt Combat System activation, sustainment, and modernization was not competitively awarded. Raytheon Missiles & Defense is the only responsible source, and no other supplies or services could fulfill the Navy's requirement. The Naval Sea Systems Command, Washington, D.C., is the contracting activity.

Black Micro Corp., Barrigada, Guam, is awarded a $221,690,757 firm-fixed-price contract for construction at Tinian International Airport, Tinian, Commonwealth of Northern Mariana Islands (CNMI). The work to be performed provides for the construction of a cargo pad with taxiway extension, fuel tanks with receipt pipeline and hydrant system, airfield development Phase I roads, and a maintenance support facility, under the Asia Pacific Stability Initiative. Work will be performed in CNMI, and is expected to be completed by October 2026. Fiscal 2019 military construction (MILCON) (Air Force) funds in the amount of $86,470,507 will be obligated at time of award and will expire at the end of the current fiscal year. Fiscal 2020 MILCON (Air Force) funds in the amount of $12,663,908 will be obligated at time of award and will not expire at the end of the current fiscal year. Fiscal 2023 MILCON (Air Force) funds in the amount of $82,503,810 are obligated on this award and will not expire at the end of the current fiscal year. Fiscal 2020 MILCON (Air Force) funds in the amount of $20,163,124, and fiscal 2024 MILCON (Air Force) funds in the amount of $19,889,408, will complete the total contract obligation amount for construction of the fuel tanks with receipt pipeline and hydrant system. The contract also includes one option item, construction of the maintenance support facility, that is being exercised at time of award, and is included in the fiscal 2019 MILCON (Air Force) funds of $86,470,507. The contract also contains three unexercised options, which, if exercised, would increase cumulative contract value to $225,667,367. This contract was competitively procured via the Sam.gov website, with one proposal received. The Naval Facilities Engineering Systems Command Pacific, Joint Base Pearl Harbor-Hickam, Hawaii, is the contracting activity (N62742-23-C-1314).

Bruker Detection Corp., Billerica, Massachusetts, was awarded a $37,572,328 firm-fixed-price, indefinite-delivery/indefinite-quantity contract for a five-year ordering period to procure Improved Point Detection System-Lifecycle Replacement (IPDS-LR), IPDS-LR Heater/condensation kits, on board repair kits, and spare parts as needed for recouped systems to support the Naval Fleet. Work will be performed in Billerica, Massachusetts. This requirement will be funded on an as-needed basis over the five-year ordering period, and will continue through April 2028. Fiscal 2023 New Ship Construction funds in the amount of $23,660 will be obligated at time of award and will expire at the end of the current fiscal year. This action was awarded on a sole-source basis under 10 U.S. Code 3204(a)(1), as implemented by FAR 6.302-1, "Only one responsible source or a limited number of responsible sources and no other suppliers will satisfy agency requirements." The Naval Surface Warfare Center Indian Head Division, Indian Head, Maryland, is the contracting activity (N0017423D0007). (Awarded April 18, 2023)

Merrick-RS&H JV LLP, Greenwood Village, Colorado, is awarded a $12,841,950 firm-fixed-price task order (N6945023F0421) for professional architectural and engineering services at Naval Air Station Key West, Florida. The work to be performed provides for finalization of design documentation for bidding and construction, and construction administrative services in support of a new Joint Interagency Task Force-South Command and Control headquarters facility. Work will be performed in Key West, Florida, and is expected to be completed by December 2024. The maximum dollar value, including the base period and option, is $12,814,950. Fiscal 2023 military construction (Navy) design funds in the amount of $12,814,950 will be obligated at the time of award and will not expire at the end of the current fiscal year. The Naval Facilities Engineering Systems Command, Southeast, Jacksonville, Florida, is the contracting activity (N69450-21-D-0003).

DEFENSE LOGISTICS AGENCY

SOPAKCO Inc.,* Mullins, South Carolina, has been awarded a maximum $38,427,000 fixed-price, indefinite-delivery/indefinite-quantity contract for the first strike ration. This was a competitive acquisition with two responses received. This is a three-year contract with no option periods. The ordering period end date is April 18, 2026. Using military services are Army, Navy, Air Force and Marine Corps. Type of appropriation is fiscal 2023 through 2026 defense working capital funds. The contracting activity is the Defense Logistics Agency Troop Support, Philadelphia, Pennsylvania (SPE3S1-23-D-Z156).

UPDATE: Signature Flight Support, Houston, Texas (SPE607-23-D-0054, $62,373,362), has been added as an awardee to the multiple award contract for fuel support at Ellington Airport, Texas, issued against solicitation SPE607-23-R-0202 and awarded March 6, 2023.

WASHINGTON HEADQUARTERS SERVICES

Jaria LLC, Manassas, Virginia, is awarded a task order (P00005) valued at $16,568,920 on a firm-fixed-price contract (HQ0034-22-F-0131) to provide business administrative management and consulting services to the Defense Innovation Unit (DIU). The contractor will support DIU technology advancement efforts in the following areas: artificial intelligence, human systems, autonomy, cyber, advanced energy and materials, information technology, and space. The contractor will provide executive administration, program management, network support, security operations, business development, commercial executive support, and engineering services support. The work will be performed at the Pentagon, Arlington, Virginia, and at satellite DIU offices in Mountain View, California; Cambridge, Massachusetts; and Austin, Texas. The estimated contract completion date is April 18, 2024. Fiscal 2023 operations and maintenance funds in the amount of $2,924,829; and fiscal 2023 research, development, test and evaluation funds in the amount of $4,773,806 are being obligated at the time of award, for a total of $7,698,635. The cumulative total of the contract is $27,523,217. The total value of the contract if all options are exercised is $62,752,865. Washington Headquarters Services, Arlington, Virginia, is the contracting activity.

ARMY

JMJR Companies LLC,* Glens Falls, New York, was awarded an $11,241,000 firm-fixed-price contract for building demolition, foundation removal and hazardous material removal. Bids were solicited via the internet with five received. Work will be performed in Watervliet, New York, with an estimated completion date of Oct. 10, 2024. Fiscal 2022 military construction, Army funds in the amount of $11,241,000 were obligated at the time of the award. U.S. Army Corps of Engineers, New York, New York, is the contracting activity (W912DS-22-C-0018).

Torch Technologies Inc.,* Huntsville, Alabama, was awarded a $7,521,440 modification (P00101) to contract W31P4Q-21-F-0038 for engineering services for virtual simulators. Work locations and funding will be determined with each order, with an estimated completion date of April 19, 2024. U.S. Army Contracting Command, Redstone Arsenal, Alabama, is the contracting activity. (Awarded April 18, 2023)

*Small business

Excerpt from:

> U.S. Department of Defense

Following are the top foreign stories at 1700 hours – Press Trust of India

Updated: Apr 15 2023 5:30PM

FGN19 UN-KAMBOJ-AI **** Safeguards needed to ensure AI systems are not misused or guided by biases: Amb Kamboj
United Nations, Apr 15 (PTI) Artificial Intelligence, if harnessed properly, can generate enormous prosperity and opportunity, India has said, underscoring the need to ensure AI systems are not misused and that advancement of digital super intelligence must be symbiotic with humanity. By Yoshita Singh ****

FGN3 US-DIGITAL INFRA-SITHARAMAN **** Digital Public Infrastructure inclusive by design, fast-paces development process: Sitharaman
Washington: Development and leveraging of digital public infrastructure, which is inclusive by design, can help countries fast-pace their development processes and deliver huge benefits, Union Finance Minister Nirmala Sitharaman has said. By Lalit K Jha ****

FGN14 US-CLIMATE CHANGE-LD PM **** PM Modi calls for mass movement in global fight against climate change
Washington: Prime Minister Narendra Modi has said that an idea becomes a mass movement when it moves from "discussion tables to dinner tables" as he called for people's participation and collective efforts in combating climate change. By Lalit K Jha ****

FGN11 SAFRICA-GUPTAS **** Gupta brothers are still South African citizens: Home Affairs Minister Motsoaledi
Johannesburg: The South African government has said that fugitive Indian-origin businessmen Rajesh and Atul Gupta are still its citizens using the country's passports, amid reports that they have acquired citizenship of Vanuatu, an island nation in the South Pacific Ocean. By Fakir Hassen ****


Read the original here:

Following are the top foreign stories at 1700 hours - Press Trust of India

Artificial Intelligence Is Here to Stay, so We Should Think more about … – GW Today

On Friday morning, George Washington University Provost Christopher Alan Bracey disseminated a document on the use of generative artificial intelligence to guide faculty members on how they might (or might not) allow the use of AI by their students. At the same moment, a daylong symposium titled I Am Not a Robot: The Entangled Futures of AI and the Humanities kicked off with remarks by its principal organizer, Katrin Schultheiss, associate professor of history in the Columbian College of Arts and Sciences.

In late 2022, said Schultheiss, the launch of ChatGPT presented educators with a significant moment of technological change.

Here was a tool, available, at least temporarily, for free, Schultheiss said, that would answer almost any question in grammatically correct, informative, plausible-sounding paragraphs of text.

In response, people expressed the fear that jobs would be eliminated, the ability to write would atrophy and misinformation would flourish, with some invoking dystopias where humans become so dependent on machines that they can no longer think or do anything for themselves.

But that wasn't even the worst of the fears expressed. At the very far end, Schultheiss said, they conjured up a future when AI-equipped robots would break free of their human trainers and take over the world.

On the other hand, she noted, proponents of the new technology argued that ChatGPT will lead to more creative teaching and increase productivity.

The pace at which new AI tools are being developed is astonishing, Schultheiss said. It's nearly impossible to keep up with the new capabilities and the new concerns that they raise.

For that reason, she added, some observers (including members of Congress) are advocating for a slowdown or even a pause in the deployment of these tools until various ethical and regulatory issues can be addressed.

With this in mind, she said, a group of GW faculty from various humanities departments saw a need to expand the discourse beyond the discussion of new tools and applications, beyond questions of regulation and potential abuses of AI, adding that the symposium is one of the fruits of those discussions.

Maybe we should spend some more time thinking about exactly what we are doing as we stride forward boldly into the AI-infused future, Schultheiss said.

Four panel discussions followed, the first one featuring philosophers. Tadeusz Zawidzki, associate professor and chair of philosophy, located ChatGPT in the larger philosophical tradition, beginning with the Turing test.

That test was proposed by English scientist Alan Turing, who asked: Could a normal human subject tell the difference between another human and a computer by reading the text of their conversation? If not, Turing said, that machine counts as intelligent.

Some philosophers, such as John Searle, objected, saying a digitally simulated mind does not really think or understand. But Zawidzki said ChatGPT passes the test.

There's no doubt in my mind that ChatGPT passes the Turing test, he said. So, by Turing's criteria, it is a mind. But it's not like a human mind, which can interact with the world around it in ways currently unavailable to ChatGPT.

Marianna B. Ganapini, assistant professor at Union College and a visiting scholar at the Center for Bioethics at New York University, began by asking if we can learn from ChatGPT and if we can trust it.

As a spoiler alert, Ganapini said, I'm going to answer no to the second question (it's the easy question) and maybe to the first.

Ganapini said the question of whether ChatGPT can be trusted is unfair, in a sense, because no one trusts people to know absolutely everything.

A panel on the moral status of AI featured Robert M. Geraci, professor of religious studies at Manhattan College, and Eyal Aviv, assistant professor of religion at GW.

In thinking about the future of AI and of humanity, Geraci said, we must evaluate whether the new technology has been brought into alignment with human values and the degree to which it reflects our biases.

A fair number of scholars and advocates fear that our progress in value alignment is too slow, Geraci said. They worry that we will build powerful machines that lack our values and are a danger to humanity as a result. I worry that in fact our value alignment is near perfect.

Unfortunately, he said, our daily values are not in fact aligned with our aspirations for a better world. One way to counteract this is through storytelling, he added, creating models for reflection on ourselves and the future.

A story told by the late Stephen Hawking set the stage for remarks by Aviv, an expert on Buddhism, who recalled an interview with Hawking from Last Week Tonight with John Oliver posted to YouTube in 2014.

There's a story that scientists built an intelligent computer, Hawking said. The first question they asked it was, Is there a God? The computer replied, There is now, and a bolt of lightning struck the plug so it couldn't be turned off.

Aviv presented the equally grim vision of Jaron Lanier, considered by many to be the father of virtual reality, who said the danger isn't that AI will destroy us, but that it will drive us insane.

For most of us, Aviv said, it's pretty clear that AI will produce unforeseen consequences.

One of the most important concepts in Buddhist ethics, Aviv said, is ahimsa, or doing no harm. From its inception, he added, AI has been funded primarily by the military, placing it on complex moral terrain from the start.

Many experts call for regulation to keep AI safer, Aviv said, but will we heed such calls? He pointed to signs posted in casinos that urge guests to play responsibly. But such venues are designed precisely to keep guests from doing so.

The third panel featured Neda Atanasoski of the University of Maryland, College Park, and Despina Kakoudaki of American University.

Atanasoski spoke about basic technologies found in the home, assisting us with cleaning, shopping, eldercare and childcare. Such technologies become creepy, she said, when they reduce users to data points and invade their privacy.

Tech companies have increasingly begun to market privacy as a commodity that can be bought, she said.


Pop culture has had an impact on how we understand new technology, Kakoudaki said, noting that very young children can draw a robot, typically in an anthropomorphic form.

After suggesting the historical roots of the idea of the mechanical body, in the creation of Pandora and, later, Frankenstein, for example, Kakoudaki showed how such narratives reverse the elements of natural birth, with mechanical beings born as adults and undergoing a trajectory from death to birth.

The fourth panel, delving further into the history of AI and meditating on its future, featured Jamie Cohen-Cole, associate professor of American Studies, and Ryan Watkins, professor and director of the Educational Technology Leadership Program in the Graduate School of Education and Human Development.

Will we come to rely on statements from ChatGPT? Maybe, Cohen-Cole said, though he noted that human biases will likely continue to be built into the technology.

Watkins said he thinks we will learn to live with the duality presented by AI, enjoying its convenience while remaining aware of its fundamental untrustworthiness. It is difficult for most people to adjust in real time to rapid technological change, he said, encouraging listeners to play with the technology and see how they might use it, adding that he has used it to help one of his children do biology homework. Chatbot technology is being integrated into MS Word, email platforms and smartphones, to name a few places the average person will soon encounter it.

How do you ban it if it's everywhere? he asked.

The symposium, part of the CCAS Engaged Liberal Arts Series, was sponsored by the CCAS Departments of American Studies, English, History, Philosophy, Religion and Department of Romance, German and Slavic Languages and Literatures. Each session concluded with questions for panelists from the audience. The sessions were moderated, respectively, by Eric Saidel, from the philosophy department; Irene Oh, from the religion department; Alexa Alice Joubin, from the English Department; and Eric Arnesen, from the history department.

Follow this link:

Artificial Intelligence Is Here to Stay, so We Should Think more about ... - GW Today

Artificial intelligence helps people be productive in these ways – CBS News

Artificial intelligence is becoming increasingly common in the workplace, but it's also starting to assist with tasks at home.

Insider tech reporter Lakshmi Varanasi told CBS News she uses OpenAI's GPT-4 technology to help her plan and prep meals, while parents are using it to generate bedtime stories to read to their children. Really committed parents can even use it to create their own books with corresponding images, also using AI tools, like image generator DALL-E.

In this way, AI can be tremendously helpful in sating kids' appetites for constant entertainment, Varanasi added.


Something to beware of when reading AI-generated text to children: AI tools like ChatGPT are known to occasionally make errors or inappropriate statements. "There needs to be fact-checking involved whenever you use an AI tool," Varanasi said.

To be sure, AI doesn't have the same level of judgment and insight that humans do, and may not be able to respond helpfully to personal questions.

"It's really good for the broad strokes of navigating life," Varanasi said.

Other useful applications of sophisticated AI include asking it for help generating emails, or general inspiration for creating any type of content. Travel company Expedia is even betting that it will be helpful for people planning trips.

Computer programmers and coders have found AI useful as well. One worker used GPT-4 as a coding assistant while building a video game.

"He'd type in the command he wanted, and the tool gave him code," Varanasi said. When a digital spaceship that was part of the game wouldn't move, AI stepped in and "helped get it moving."

A coder might have ordinarily spent hours on trial and error, but AI sped up the process.


More:

Artificial intelligence helps people be productive in these ways - CBS News

HDR uses artificial intelligence tools to help design a vital health … – Building Design + Construction

Paul Howard Harrison has had a longstanding fascination with machine learning and performance optimization. Over the past five years, artificial intelligence (AI) has been augmenting some of the design work done by HDR, where Harrison is a computational design lead. He also lectures on AI and machine learning at the University of Toronto in Canada, where he earned his Master of Architecture.

Harrison's interest in computational research and data-driven design contributed to the development of an 8,500-sf healthcare clinic and courtyard at Baruipur, in West Bengal, India. This was Harrison's first project with Design 4 Others (D4O), a philanthropic initiative that operates out of HDR's architecture practice through which architects volunteer their services to make positive impacts on underserved communities.

India has fewer than one doctor per 1,000 people. (By comparison, the ratio in the U.S. is more than 2.5 per 1,000.) The client for the Baruipur clinic is iKure, a technology and social enterprise that delivers healthcare through a hub-and-spoke model, where clinics (hubs) extend their reach to where patients live through local healthcare workers (the spokes), who are trained to monitor, track, and collect data from patients. The hubs and spokes are connected by a proprietary platform called the Wireless Health Incident Monitoring System. According to iKure's website, there are 20 hubs of varying sizes serving nine million people in 10 Indian states. iKure's goal is to eventually operate 125 hubs and expand its concept to 10 Asian and African countries.

D4O and iKure became aware of each other in 2019 through Construction for Change, a Seattle-based nonprofit construction management firm. Prior to the Baruipur hub project, D4O and Construction for Change had worked on more than a dozen projects together, starting with a healthcare clinic in northwest Uganda, according to the August 11, 2020, episode of HDR's podcast Speaking of Design.

Harrison, whom BD+C interviewed with Megan Gallagher, a health planner at HDR and a D4O volunteer on the Baruipur project, acknowledges that all design outputs come with inherent biases. But by training AI on smaller models, the datasets and biases can be controlled, he posits.

Initially, HDR found AI useful for design optimization; more recently, the firm has been using AI for early-stage ideation. Harrison points specifically to the design for a hospital in Kingston, Ontario, where HDR used AI as an ideation tool. AI is better at coming up with what I like than I am, he laughs.

The firm has also used AI as a means of engagement to get different client constituencies on the same page about a project's mission.

During the interview, Harrison several times referred to DALL-E, an OpenAI system used to create realistic images. DALL-E favors a diffusion model, a generative approach that produces outputs similar to the data on which the AI has been trained.

Where most project designs start with a facility's programming, the iKure clinic was different in that it needed to support the hub-and-spoke delivery method. The client also wanted a design that could add a second floor, as needed.

To help design the iKure hub, Harrison wrote a machine-learning program that focused on the building's gross floor area, the amount of shade the building would provide (as some patients need relief after traveling long distances to receive care), and the size of the building's modules. (Gallagher notes that each room is 125 sf.)

By optimizing for shade, the algorithm consistently came up with a courtyard design. The end result looked similar to a courtyard house in Kolkata, observes Gallagher. The computer program also came up with the best positioning for circulation aisles within a building that would not be air conditioned.
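The workflow described above (score candidate layouts on program area and shade, let the search converge) can be illustrated with a deliberately simplified sketch. This is not HDR's actual tool: the scoring weights, parameter ranges, and random-search strategy are invented for illustration, and only the 125-sf room module and the 8,500-sf clinic target come from the article.

```python
import random

# Toy sketch of optimization-driven layout search (hypothetical scoring;
# only MODULE_SF and TARGET_GFA come from the article).

MODULE_SF = 125      # room module size noted in the article
TARGET_GFA = 8500    # clinic gross floor area, per the article

def score(n_modules: int, courtyard_frac: float) -> float:
    gfa = n_modules * MODULE_SF
    area_penalty = abs(gfa - TARGET_GFA)       # miss the program, lose points
    shade_reward = courtyard_frac * gfa * 0.1  # crude self-shading proxy
    return shade_reward - area_penalty

random.seed(0)
best, best_score = None, float("-inf")
for _ in range(5000):                          # naive random search
    cand = (random.randint(40, 90), random.uniform(0.0, 0.5))
    s = score(*cand)
    if s > best_score:
        best, best_score = cand, s

n, frac = best
print(f"best layout: {n} modules (~{n * MODULE_SF} sf), "
      f"courtyard fraction {frac:.2f}")
```

Because the area penalty dominates, the search locks onto the 8,500-sf program and then pushes the courtyard fraction as high as the sampled range allows, loosely mirroring how a shade objective can keep producing courtyard schemes.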

Treatment rooms were moved to the back of the building, which has four strategically located shading areas. Air is circulated up and out of the building through chimneys whose design takes its cue from local brick kilns.

The last piece of the hub's design will be its screening for security and ventilation. Harrison says that HDR has been training AI on a dataset of different screen designs that could be made from brick. (This area of India is known for its brickmaking, he explains.)

Gallagher says she's curious to see how AI will progress as a design tool. Harrison concedes that while AI is quicker for ideation, it will take some time to perfect the tool for larger projects.

As for the iKure hub, Harrison observed in HDR's 2020 podcast that you don't need to have a high-architecture project to have a high-tech approach.

When it's completed, the Baruipur clinic will offer eye and dental care, X-rays, maternal and pediatric care, and telemedicine. The hub will serve about a half-dozen spokes as well as multiple villages that include remote islands in the Sundarbans Delta, where diagnostics will be accessible through portable handheld devices, says Jason-Emery Gron, Vice President and Design Director for HDR's Kingston office.

Gron says that HDR focuses on projects that are most likely to have a significant impact on their communities, and have the best chance of getting built. And D4O has been in discussions with iKure about helping with its expansion plans.

But he's also realistic about the unpredictability of project delays in underdeveloped markets. The iKure hub was scheduled for completion in 2021, but might not be ready until 2024. Gron explains that construction has taken longer than anticipated because the client wanted D4O to review land options before settling on the original site, because of the pandemic's impact on labor and materials availability, and because of longer-than-expected monsoon seasons.

View post:

HDR uses artificial intelligence tools to help design a vital health ... - Building Design + Construction

For the First-Time Ever, Miller Lite Teaches Artificial Intelligence What Beer Tastes Like – Food Industry Executive

Miller Lite kicks off new global campaign by showing Sophia the robot the feeling behind real-life beer moments

CHICAGO, April 19, 2023 – Artificial intelligence has had a busy year answering our questions, generating headshots, and even making aging actors look younger, but despite all of these advances in technology, there's one thing AI still can't do: enjoy the great taste of beer. But that's all about to change thanks to Miller Lite. Seriously. For the first time ever, the brand is teaching AI the taste, feeling and human emotion behind enjoying a beer, starting with Sophia, an advanced humanoid robot from Hanson Robotics.

Miller Lite and AI, really? Yes, and for good reason too. Miller Lite is all about great beer taste and celebrating Miller Time, so in its new global campaign, Tastes Like Miller Time, the brand is demonstrating that the taste of beer is so much more than what we literally taste. And Miller Lite is making sure everyone, including AI, knows what the experience of cracking open a great beer like Miller Lite truly feels like.

The taste of beer is so much more than barley, malt, and hops; it's the real moments at neighborhood bars, tailgates and backyards spent over a Miller Lite, says Sofia Colucci, Chief Marketing Officer at Molson Coors Beverage Company (not to be confused with Sophia the robot). Our new campaign pays tribute to those unforgettable experiences that just taste better with a Miller Lite in hand. We're bringing this notion to life in fresh and unexpected ways, from our new TV spots to even teaching AI what beer actually tastes like.

Miller Lite worked with Hanson Robotics to analyze social media and identify humanity's most cherished beer drinking moments, translating them to something Sophia could finally experience. Watch here to learn more: https://youtu.be/5OkB6s9hsPc.

When Miller Lite approached us about teaching Sophia what beer tasted like, we were intrigued because it was something AI has never experienced before, says Kath Yeung, Chief Operations Coordinator of Hanson Robotics.

Our teams scrolled social media and assessed our findings to gather the feelings and emotions humans get when tasting beer and translated that data into something Sophia could experience for the first time, says CEO David Hanson PhD. We were excited to see Sophia was making new friends, learning and analyzing the human experience.

So, what did Sophia think of her first beer? To see her reaction and have the chance to ask Sophia questions in real time, tune in to the Miller Lite Instagram Live on Friday, April 21st at 5 p.m. CDT.

To further AI's education on the true joy and experience of beer, Miller Lite is asking everyone to share the moments that taste better with beer. Follow @MillerLite on Instagram, share a photo of your Tastes Like Miller Time Moment in an Instagram story or post, and tag @MillerLite. Then use the hashtag #BeerforAI and #Sweepstakes for a chance to win free beer.* These moments will be added to the data set so Miller Lite can continue to teach AI what the human experience of beer is.

The new Tastes Like Miller Time campaign will appear across all touchpoints in the United States, Canada, and Latin America. It includes retail, out-of-home, advertising, social media, partnerships, localization, and brand-new video spots, which you can view here.

Miller Lite's new campaign aims to fuel continued growth and a positive trajectory for the brand. Year to date in the U.S., Miller Lite is growing its dollar share of total beer, according to April 2023 Circana multi-source and convenience data.

See the rest here:

For the First-Time Ever, Miller Lite Teaches Artificial Intelligence What Beer Tastes Like - Food Industry Executive

Global Artificial Intelligence (AI) Platform Market to Reach $120.7 Billion by 2030 – Yahoo Finance

ReportLinker

The global economy is at a critical crossroads, with a number of interlocking challenges and crises running in parallel. The uncertainty around how Russia's war on Ukraine will play out this year and the war's role in creating global instability means that the trouble on the inflation front is not over yet.

New York, April 19, 2023 (GLOBE NEWSWIRE) -- Reportlinker.com announces the release of the report "Global Artificial Intelligence (AI) Platform Industry" - https://www.reportlinker.com/p06030752/?utm_source=GNW

Food and fuel inflation will remain a persistent economic problem. Higher retail inflation will impact consumer confidence and spending. As governments combat inflation by raising interest rates, new job creation will slow down and impact economic activity and growth. Lower capital expenditure is in the offing as companies go slow on investments, held back by inflation worries and weaker demand. With slower growth and high inflation, developed markets seem primed to enter a recession. Fears of new COVID outbreaks and China's already uncertain post-pandemic path pose a real risk of the world experiencing more acute supply chain pain and manufacturing disruptions this year. Volatile financial markets, growing trade tensions, a stricter regulatory environment and pressure to mainstream climate change into economic decisions will compound the complexity of the challenges faced. The year 2023 is expected to be a tough one for most markets, investors and consumers. Nevertheless, there is always opportunity for businesses and their leaders who can chart a path forward with resilience and adaptability.

Global Artificial Intelligence (AI) Platform Market to Reach $120.7 Billion by 2030

In the changed post-COVID-19 business landscape, the global market for Artificial Intelligence (AI) Platform, estimated at US$17.8 Billion in the year 2022, is projected to reach a revised size of US$120.7 Billion by 2030, growing at a CAGR of 27.1% over the period 2022-2030. Cloud, one of the segments analyzed in the report, is projected to record a 28.8% CAGR and reach US$84 Billion by the end of the analysis period. Taking into account the ongoing post-pandemic recovery, growth in the On-Premise segment is readjusted to a revised 23.8% CAGR for the next 8-year period.
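The headline figures above follow the standard compound-annual-growth-rate calculation. As a quick sanity check (a minimal sketch; the dollar values are the report's estimates, and small rounding differences against the stated 27.1% are expected):

```python
# CAGR sanity check for the report's figures: US$17.8B (2022) growing
# to US$120.7B (2030) over 8 years at a stated CAGR of about 27.1%.

def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate: (end / start) ** (1 / years) - 1."""
    return (end / start) ** (1 / years) - 1

implied = cagr(17.8, 120.7, 8)
print(f"Implied CAGR: {implied:.1%}")  # close to the stated 27.1%

# Compounding forward at the stated rate lands near the 2030 projection:
projected_2030 = 17.8 * (1 + 0.271) ** 8
print(f"Projected 2030 size: ${projected_2030:.1f}B")  # within ~1% of 120.7
```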

The U.S. Market is Estimated at $5.3 Billion, While China is Forecast to Grow at 25.8% CAGR

The Artificial Intelligence (AI) Platform market in the U.S. is estimated at US$5.3 Billion in the year 2022. China, the world's second largest economy, is forecast to reach a projected market size of US$20 Billion by the year 2030, trailing a CAGR of 25.8% over the analysis period 2022 to 2030. Among the other noteworthy geographic markets are Japan and Canada, each forecast to grow at 23.3% and 21.9%, respectively, over the 2022-2030 period. Within Europe, Germany is forecast to grow at approximately 16.8% CAGR.

Select Competitors (Total 240 Featured): Absolutdata; Amazon Web Services; Apple Inc.; Ayasdi; Enlitic, Inc.; Facebook Inc.; General Electric; General Vision, Inc.; Google LLC; Hewlett Packard Enterprise Development LP; IBM Corporation; iCarbonX; Infosys; Intel Corporation; Micro Technology Inc.; Microsoft Corporation; Next IT Corporation; Qualcomm Technologies; Salesforce.com, Inc.; SAMSUNG; SAP; Siemens AG; Wipro

Read the full report: https://www.reportlinker.com/p06030752/?utm_source=GNW

I. METHODOLOGY

II. EXECUTIVE SUMMARY

1. MARKET OVERVIEW
Influencer Market Insights
World Market Trajectories
Impact of Covid-19 and a Looming Global Recession
Artificial Intelligence (AI) Platform - Global Key Competitors Percentage Market Share in 2022 (E)
Competitive Market Presence - Strong/Active/Niche/Trivial for Players Worldwide in 2022 (E)

2. FOCUS ON SELECT PLAYERS

3. MARKET TRENDS & DRIVERS

4. GLOBAL MARKET PERSPECTIVE

Table 1: World Recent Past, Current & Future Analysis for Artificial Intelligence (AI) Platform by Geographic Region - USA, Canada, Japan, China, Europe, Asia-Pacific and Rest of World Markets - Independent Analysis of Annual Sales in US$ Million for Years 2022 through 2030 and % CAGR

Table 2: World 8-Year Perspective for Artificial Intelligence (AI) Platform by Geographic Region - Percentage Breakdown of Value Sales for USA, Canada, Japan, China, Europe, Asia-Pacific and Rest of World Markets for Years 2023 & 2030

Table 3: World Recent Past, Current & Future Analysis for Cloud by Geographic Region - USA, Canada, Japan, China, Europe, Asia-Pacific and Rest of World Markets - Independent Analysis of Annual Sales in US$ Million for Years 2022 through 2030 and % CAGR

Table 4: World 8-Year Perspective for Cloud by Geographic Region - Percentage Breakdown of Value Sales for USA, Canada, Japan, China, Europe, Asia-Pacific and Rest of World for Years 2023 & 2030

Table 5: World Recent Past, Current & Future Analysis for On-Premise by Geographic Region - USA, Canada, Japan, China, Europe, Asia-Pacific and Rest of World Markets - Independent Analysis of Annual Sales in US$ Million for Years 2022 through 2030 and % CAGR

Table 6: World 8-Year Perspective for On-Premise by Geographic Region - Percentage Breakdown of Value Sales for USA, Canada, Japan, China, Europe, Asia-Pacific and Rest of World for Years 2023 & 2030

Table 7: World Recent Past, Current & Future Analysis for Healthcare by Geographic Region - USA, Canada, Japan, China, Europe, Asia-Pacific and Rest of World Markets - Independent Analysis of Annual Sales in US$ Million for Years 2022 through 2030 and % CAGR

Table 8: World 8-Year Perspective for Healthcare by Geographic Region - Percentage Breakdown of Value Sales for USA, Canada, Japan, China, Europe, Asia-Pacific and Rest of World for Years 2023 & 2030

Table 9: World Recent Past, Current & Future Analysis for Research & Academia by Geographic Region - USA, Canada, Japan, China, Europe, Asia-Pacific and Rest of World Markets - Independent Analysis of Annual Sales in US$ Million for Years 2022 through 2030 and % CAGR

Table 10: World 8-Year Perspective for Research & Academia by Geographic Region - Percentage Breakdown of Value Sales for USA, Canada, Japan, China, Europe, Asia-Pacific and Rest of World for Years 2023 & 2030

Table 11: World Recent Past, Current & Future Analysis for Transportation by Geographic Region - USA, Canada, Japan, China, Europe, Asia-Pacific and Rest of World Markets - Independent Analysis of Annual Sales in US$ Million for Years 2022 through 2030 and % CAGR

Table 12: World 8-Year Perspective for Transportation by Geographic Region - Percentage Breakdown of Value Sales for USA, Canada, Japan, China, Europe, Asia-Pacific and Rest of World for Years 2023 & 2030

Table 13: World Recent Past, Current & Future Analysis for Retail & eCommerce by Geographic Region - USA, Canada, Japan, China, Europe, Asia-Pacific and Rest of World Markets - Independent Analysis of Annual Sales in US$ Million for Years 2022 through 2030 and % CAGR

Table 14: World 8-Year Perspective for Retail & eCommerce by Geographic Region - Percentage Breakdown of Value Sales for USA, Canada, Japan, China, Europe, Asia-Pacific and Rest of World for Years 2023 & 2030

Table 15: World Recent Past, Current & Future Analysis for Other End-Uses by Geographic Region - USA, Canada, Japan, China, Europe, Asia-Pacific and Rest of World Markets - Independent Analysis of Annual Sales in US$ Million for Years 2022 through 2030 and % CAGR

Table 16: World 8-Year Perspective for Other End-Uses by Geographic Region - Percentage Breakdown of Value Sales for USA, Canada, Japan, China, Europe, Asia-Pacific and Rest of World for Years 2023 & 2030

Table 17: World Recent Past, Current & Future Analysis for BFSI by Geographic Region - USA, Canada, Japan, China, Europe, Asia-Pacific and Rest of World Markets - Independent Analysis of Annual Sales in US$ Million for Years 2022 through 2030 and % CAGR

Table 18: World 8-Year Perspective for BFSI by Geographic Region - Percentage Breakdown of Value Sales for USA, Canada, Japan, China, Europe, Asia-Pacific and Rest of World for Years 2023 & 2030

Table 19: World Recent Past, Current & Future Analysis for Manufacturing by Geographic Region - USA, Canada, Japan, China, Europe, Asia-Pacific and Rest of World Markets - Independent Analysis of Annual Sales in US$ Million for Years 2022 through 2030 and % CAGR

Table 20: World 8-Year Perspective for Manufacturing by Geographic Region - Percentage Breakdown of Value Sales for USA, Canada, Japan, China, Europe, Asia-Pacific and Rest of World for Years 2023 & 2030

Table 21: World Recent Past, Current & Future Analysis for Forecasts & Prescriptive Models by Geographic Region - USA, Canada, Japan, China, Europe, Asia-Pacific and Rest of World Markets - Independent Analysis of Annual Sales in US$ Million for Years 2022 through 2030 and % CAGR

Table 22: World 8-Year Perspective for Forecasts & Prescriptive Models by Geographic Region - Percentage Breakdown of Value Sales for USA, Canada, Japan, China, Europe, Asia-Pacific and Rest of World for Years 2023 & 2030

Table 23: World Recent Past, Current & Future Analysis for Chatbots by Geographic Region - USA, Canada, Japan, China, Europe, Asia-Pacific and Rest of World Markets - Independent Analysis of Annual Sales in US$ Million for Years 2022 through 2030 and % CAGR

Table 24: World 8-Year Perspective for Chatbots by Geographic Region - Percentage Breakdown of Value Sales for USA, Canada, Japan, China, Europe, Asia-Pacific and Rest of World for Years 2023 & 2030

Table 25: World Recent Past, Current & Future Analysis for Speech Recognition by Geographic Region - USA, Canada, Japan, China, Europe, Asia-Pacific and Rest of World Markets - Independent Analysis of Annual Sales in US$ Million for Years 2022 through 2030 and % CAGR

Table 26: World 8-Year Perspective for Speech Recognition by Geographic Region - Percentage Breakdown of Value Sales for USA, Canada, Japan, China, Europe, Asia-Pacific and Rest of World for Years 2023 & 2030

Table 27: World Recent Past, Current & Future Analysis for Text Recognition by Geographic Region - USA, Canada, Japan, China, Europe, Asia-Pacific and Rest of World Markets - Independent Analysis of Annual Sales in US$ Million for Years 2022 through 2030 and % CAGR

Table 28: World 8-Year Perspective for Text Recognition by Geographic Region - Percentage Breakdown of Value Sales for USA, Canada, Japan, China, Europe, Asia-Pacific and Rest of World for Years 2023 & 2030

Table 29: World Recent Past, Current & Future Analysis for Other Applications by Geographic Region - USA, Canada, Japan, China, Europe, Asia-Pacific and Rest of World Markets - Independent Analysis of Annual Sales in US$ Million for Years 2022 through 2030 and % CAGR

Table 30: World 8-Year Perspective for Other Applications by Geographic Region - Percentage Breakdown of Value Sales for USA, Canada, Japan, China, Europe, Asia-Pacific and Rest of World for Years 2023 & 2030

Table 31: World Artificial Intelligence (AI) Platform Market Analysis of Annual Sales in US$ Million for Years 2014 through 2030

III. MARKET ANALYSIS

UNITED STATES

Artificial Intelligence (AI) Platform Market Presence - Strong/Active/Niche/Trivial - Key Competitors in the United States for 2023 (E)

Table 32: USA Recent Past, Current & Future Analysis for Artificial Intelligence (AI) Platform by Deployment - Cloud and On-Premise - Independent Analysis of Annual Sales in US$ Million for the Years 2022 through 2030 and % CAGR

Table 33: USA 8-Year Perspective for Artificial Intelligence (AI) Platform by Deployment - Percentage Breakdown of Value Sales for Cloud and On-Premise for the Years 2023 & 2030

Table 34: USA Recent Past, Current & Future Analysis for Artificial Intelligence (AI) Platform by End-Use - Healthcare, Research & Academia, Transportation, Retail & eCommerce, Other End-Uses, BFSI and Manufacturing - Independent Analysis of Annual Sales in US$ Million for the Years 2022 through 2030 and % CAGR

Table 35: USA 8-Year Perspective for Artificial Intelligence (AI) Platform by End-Use - Percentage Breakdown of Value Sales for Healthcare, Research & Academia, Transportation, Retail & eCommerce, Other End-Uses, BFSI and Manufacturing for the Years 2023 & 2030

Table 36: USA Recent Past, Current & Future Analysis for Artificial Intelligence (AI) Platform by Application - Forecasts & Prescriptive Models, Chatbots, Speech Recognition, Text Recognition and Other Applications - Independent Analysis of Annual Sales in US$ Million for the Years 2022 through 2030 and % CAGR

Table 37: USA 8-Year Perspective for Artificial Intelligence (AI) Platform by Application - Percentage Breakdown of Value Sales for Forecasts & Prescriptive Models, Chatbots, Speech Recognition, Text Recognition and Other Applications for the Years 2023 & 2030

CANADA

Table 38: Canada Recent Past, Current & Future Analysis for Artificial Intelligence (AI) Platform by Deployment - Cloud and On-Premise - Independent Analysis of Annual Sales in US$ Million for the Years 2022 through 2030 and % CAGR

Table 39: Canada 8-Year Perspective for Artificial Intelligence (AI) Platform by Deployment - Percentage Breakdown of Value Sales for Cloud and On-Premise for the Years 2023 & 2030

Table 40: Canada Recent Past, Current & Future Analysis for Artificial Intelligence (AI) Platform by End-Use - Healthcare, Research & Academia, Transportation, Retail & eCommerce, Other End-Uses, BFSI and Manufacturing - Independent Analysis of Annual Sales in US$ Million for the Years 2022 through 2030 and % CAGR

Table 41: Canada 8-Year Perspective for Artificial Intelligence (AI) Platform by End-Use - Percentage Breakdown of Value Sales for Healthcare, Research & Academia, Transportation, Retail & eCommerce, Other End-Uses, BFSI and Manufacturing for the Years 2023 & 2030

Table 42: Canada Recent Past, Current & Future Analysis for Artificial Intelligence (AI) Platform by Application - Forecasts & Prescriptive Models, Chatbots, Speech Recognition, Text Recognition and Other Applications - Independent Analysis of Annual Sales in US$ Million for the Years 2022 through 2030 and % CAGR

Table 43: Canada 8-Year Perspective for Artificial Intelligence (AI) Platform by Application - Percentage Breakdown of Value Sales for Forecasts & Prescriptive Models, Chatbots, Speech Recognition, Text Recognition and Other Applications for the Years 2023 & 2030

JAPAN

Artificial Intelligence (AI) Platform Market Presence - Strong/Active/Niche/Trivial - Key Competitors in Japan for 2023 (E)

Table 44: Japan Recent Past, Current & Future Analysis for Artificial Intelligence (AI) Platform by Deployment - Cloud and On-Premise - Independent Analysis of Annual Sales in US$ Million for the Years 2022 through 2030 and % CAGR

Table 45: Japan 8-Year Perspective for Artificial Intelligence (AI) Platform by Deployment - Percentage Breakdown of Value Sales for Cloud and On-Premise for the Years 2023 & 2030

Table 46: Japan Recent Past, Current & Future Analysis for Artificial Intelligence (AI) Platform by End-Use - Healthcare, Research & Academia, Transportation, Retail & eCommerce, Other End-Uses, BFSI and Manufacturing - Independent Analysis of Annual Sales in US$ Million for the Years 2022 through 2030 and % CAGR

Table 47: Japan 8-Year Perspective for Artificial Intelligence (AI) Platform by End-Use - Percentage Breakdown of Value Sales for Healthcare, Research & Academia, Transportation, Retail & eCommerce, Other End-Uses, BFSI and Manufacturing for the Years 2023 & 2030

Table 48: Japan Recent Past, Current & Future Analysis for Artificial Intelligence (AI) Platform by Application - Forecasts & Prescriptive Models, Chatbots, Speech Recognition, Text Recognition and Other Applications - Independent Analysis of Annual Sales in US$ Million for the Years 2022 through 2030 and % CAGR

Table 49: Japan 8-Year Perspective for Artificial Intelligence (AI) Platform by Application - Percentage Breakdown of Value Sales for Forecasts & Prescriptive Models, Chatbots, Speech Recognition, Text Recognition and Other Applications for the Years 2023 & 2030

CHINA

Artificial Intelligence (AI) Platform Market Presence - Strong/Active/Niche/Trivial - Key Competitors in China for 2023 (E)

Table 50: China Recent Past, Current & Future Analysis for Artificial Intelligence (AI) Platform by Deployment - Cloud and On-Premise - Independent Analysis of Annual Sales in US$ Million for the Years 2022 through 2030 and % CAGR

Table 51: China 8-Year Perspective for Artificial Intelligence (AI) Platform by Deployment - Percentage Breakdown of Value Sales for Cloud and On-Premise for the Years 2023 & 2030

Table 52: China Recent Past, Current & Future Analysis for Artificial Intelligence (AI) Platform by End-Use - Healthcare, Research & Academia, Transportation, Retail & eCommerce, Other End-Uses, BFSI and Manufacturing - Independent Analysis of Annual Sales in US$ Million for the Years 2022 through 2030 and % CAGR

Table 53: China 8-Year Perspective for Artificial Intelligence (AI) Platform by End-Use - Percentage Breakdown of Value Sales for Healthcare, Research & Academia, Transportation, Retail & eCommerce, Other End-Uses, BFSI and Manufacturing for the Years 2023 & 2030

Table 54: China Recent Past, Current & Future Analysis for Artificial Intelligence (AI) Platform by Application - Forecasts & Prescriptive Models, Chatbots, Speech Recognition, Text Recognition and Other Applications - Independent Analysis of Annual Sales in US$ Million for the Years 2022 through 2030 and % CAGR

Table 55: China 8-Year Perspective for Artificial Intelligence (AI) Platform by Application - Percentage Breakdown of Value Sales for Forecasts & Prescriptive Models, Chatbots, Speech Recognition, Text Recognition and Other Applications for the Years 2023 & 2030

EUROPE

Artificial Intelligence (AI) Platform Market Presence - Strong/Active/Niche/Trivial - Key Competitors in Europe for 2023 (E)

Table 56: Europe Recent Past, Current & Future Analysis for Artificial Intelligence (AI) Platform by Geographic Region - France, Germany, Italy, UK and Rest of Europe Markets - Independent Analysis of Annual Sales in US$ Million for Years 2022 through 2030 and % CAGR

Table 57: Europe 8-Year Perspective for Artificial Intelligence (AI) Platform by Geographic Region - Percentage Breakdown of Value Sales for France, Germany, Italy, UK and Rest of Europe Markets for Years 2023 & 2030

Table 58: Europe Recent Past, Current & Future Analysis for Artificial Intelligence (AI) Platform by Deployment - Cloud and On-Premise - Independent Analysis of Annual Sales in US$ Million for the Years 2022 through 2030 and % CAGR

Table 59: Europe 8-Year Perspective for Artificial Intelligence (AI) Platform by Deployment - Percentage Breakdown of Value Sales for Cloud and On-Premise for the Years 2023 & 2030

Table 60: Europe Recent Past, Current & Future Analysis for Artificial Intelligence (AI) Platform by End-Use - Healthcare, Research & Academia, Transportation, Retail & eCommerce, Other End-Uses, BFSI and Manufacturing - Independent Analysis of Annual Sales in US$ Million for the Years 2022 through 2030 and % CAGR

Table 61: Europe 8-Year Perspective for Artificial Intelligence (AI) Platform by End-Use - Percentage Breakdown of Value Sales for Healthcare, Research & Academia, Transportation, Retail & eCommerce, Other End-Uses, BFSI and Manufacturing for the Years 2023 & 2030

Table 62: Europe Recent Past, Current & Future Analysis for Artificial Intelligence (AI) Platform by Application - Forecasts & Prescriptive Models, Chatbots, Speech Recognition, Text Recognition and Other Applications - Independent Analysis of Annual Sales in US$ Million for the Years 2022 through 2030 and % CAGR

Table 63: Europe 8-Year Perspective for Artificial Intelligence (AI) Platform by Application - Percentage Breakdown of Value Sales for Forecasts & Prescriptive Models, Chatbots, Speech Recognition, Text Recognition and Other Applications for the Years 2023 & 2030

FRANCE

Artificial Intelligence (AI) Platform Market Presence - Strong/Active/Niche/Trivial - Key Competitors in France for 2023 (E)

Table 64: France Recent Past, Current & Future Analysis for Artificial Intelligence (AI) Platform by Deployment - Cloud and On-Premise - Independent Analysis of Annual Sales in US$ Million for the Years 2022 through 2030 and % CAGR

Table 65: France 8-Year Perspective for Artificial Intelligence (AI) Platform by Deployment - Percentage Breakdown of Value Sales for Cloud and On-Premise for the Years 2023 & 2030

Table 66: France Recent Past, Current & Future Analysis for Artificial Intelligence (AI) Platform by End-Use - Healthcare, Research & Academia, Transportation, Retail & eCommerce, Other End-Uses, BFSI and Manufacturing - Independent Analysis of Annual Sales in US$ Million for the Years 2022 through 2030 and % CAGR

Table 67: France 8-Year Perspective for Artificial Intelligence (AI) Platform by End-Use - Percentage Breakdown of Value Sales for Healthcare, Research & Academia, Transportation, Retail & eCommerce, Other End-Uses, BFSI and Manufacturing for the Years 2023 & 2030

Table 68: France Recent Past, Current & Future Analysis for Artificial Intelligence (AI) Platform by Application - Forecasts & Prescriptive Models, Chatbots, Speech Recognition, Text Recognition and Other Applications - Independent Analysis of Annual Sales in US$ Million for the Years 2022 through 2030 and % CAGR

Table 69: France 8-Year Perspective for Artificial Intelligence (AI) Platform by Application - Percentage Breakdown of Value Sales for Forecasts & Prescriptive Models, Chatbots, Speech Recognition, Text Recognition and Other Applications for the Years 2023 & 2030

GERMANY

Artificial Intelligence (AI) Platform Market Presence - Strong/Active/Niche/Trivial - Key Competitors in Germany for 2023 (E)

Table 70: Germany Recent Past, Current & Future Analysis for Artificial Intelligence (AI) Platform by Deployment - Cloud and On-Premise - Independent Analysis of Annual Sales in US$ Million for the Years 2022 through 2030 and % CAGR

Table 71: Germany 8-Year Perspective for Artificial Intelligence (AI) Platform by Deployment - Percentage Breakdown of Value Sales for Cloud and On-Premise for the Years 2023 & 2030

Table 72: Germany Recent Past, Current & Future Analysis for Artificial Intelligence (AI) Platform by End-Use - Healthcare, Research & Academia, Transportation, Retail & eCommerce, Other End-Uses, BFSI and Manufacturing - Independent Analysis of Annual Sales in US$ Million for the Years 2022 through 2030 and % CAGR

Table 73: Germany 8-Year Perspective for Artificial Intelligence (AI) Platform by End-Use - Percentage Breakdown of Value Sales for Healthcare, Research & Academia, Transportation, Retail & eCommerce, Other End-Uses, BFSI and Manufacturing for the Years 2023 & 2030

Table 74: Germany Recent Past, Current & Future Analysis for Artificial Intelligence (AI) Platform by Application - Forecasts & Prescriptive Models, Chatbots, Speech Recognition, Text Recognition and Other Applications - Independent Analysis of Annual Sales in US$ Million for the Years 2022 through 2030 and % CAGR

Table 75: Germany 8-Year Perspective for Artificial Intelligence (AI) Platform by Application - Percentage Breakdown of Value Sales for Forecasts & Prescriptive Models, Chatbots, Speech Recognition, Text Recognition and Other Applications for the Years 2023 & 2030

ITALY

Table 76: Italy Recent Past, Current & Future Analysis for Artificial Intelligence (AI) Platform by Deployment - Cloud and On-Premise - Independent Analysis of Annual Sales in US$ Million for the Years 2022 through 2030 and % CAGR

Table 77: Italy 8-Year Perspective for Artificial Intelligence (AI) Platform by Deployment - Percentage Breakdown of Value Sales for Cloud and On-Premise for the Years 2023 & 2030

Table 78: Italy Recent Past, Current & Future Analysis for Artificial Intelligence (AI) Platform by End-Use - Healthcare, Research & Academia, Transportation, Retail & eCommerce, Other End-Uses, BFSI and Manufacturing - Independent Analysis of Annual Sales in US$ Million for the Years 2022 through 2030 and % CAGR

Table 79: Italy 8-Year Perspective for Artificial Intelligence (AI) Platform by End-Use - Percentage Breakdown of Value Sales for Healthcare, Research & Academia, Transportation, Retail & eCommerce, Other End-Uses, BFSI and Manufacturing for the Years 2023 & 2030

Table 80: Italy Recent Past, Current & Future Analysis for Artificial Intelligence (AI) Platform by Application - Forecasts & Prescriptive Models, Chatbots, Speech Recognition, Text Recognition and Other Applications - Independent Analysis of Annual Sales in US$ Million for the Years 2022 through 2030 and % CAGR

Table 81: Italy 8-Year Perspective for Artificial Intelligence (AI) Platform by Application - Percentage Breakdown of Value Sales for Forecasts & Prescriptive Models, Chatbots, Speech Recognition, Text Recognition and Other Applications for the Years 2023 & 2030

UNITED KINGDOM

Artificial Intelligence (AI) Platform Market Presence - Strong/Active/Niche/Trivial - Key Competitors in the United Kingdom for 2023 (E)

Table 82: UK Recent Past, Current & Future Analysis for Artificial Intelligence (AI) Platform by Deployment - Cloud and On-Premise - Independent Analysis of Annual Sales in US$ Million for the Years 2022 through 2030 and % CAGR

Table 83: UK 8-Year Perspective for Artificial Intelligence (AI) Platform by Deployment - Percentage Breakdown of Value Sales for Cloud and On-Premise for the Years 2023 & 2030

Table 84: UK Recent Past, Current & Future Analysis for Artificial Intelligence (AI) Platform by End-Use - Healthcare, Research & Academia, Transportation, Retail & eCommerce, Other End-Uses, BFSI and Manufacturing - Independent Analysis of Annual Sales in US$ Million for the Years 2022 through 2030 and % CAGR

Table 85: UK 8-Year Perspective for Artificial Intelligence (AI) Platform by End-Use - Percentage Breakdown of Value Sales for Healthcare, Research & Academia, Transportation, Retail & eCommerce, Other End-Uses, BFSI and Manufacturing for the Years 2023 & 2030

See the original post: Global Artificial Intelligence (AI) Platform Market to Reach $120.7 Billion by 2030 - Yahoo Finance