Writer Meghan O’Gieblyn on AI, Consciousness, and Creativity – Nautilus

These days, we're inundated with speculation about the future of artificial intelligence, and specifically how AI might take away our jobs, or steal the creative work of writers and artists, or even destroy the human species. The American writer Meghan O'Gieblyn also wonders about these things, and her essays offer pointed inquiries into the philosophical and spiritual underpinnings of this technology. She's steeped in the latest AI developments but is also well-versed in debates about linguistics and the nature of consciousness.

O'Gieblyn also writes about her own struggle to find deeper meaning in her life, which has led her down some unexpected rabbit holes. A former Christian fundamentalist, she later stumbled into transhumanism and, ultimately, plunged into the exploding world of AI. (She currently also writes an advice column for Wired magazine about tech and society.)

When I visited her at her home in Madison, Wisconsin, I was curious if I might see any traces of this unlikely personal odyssey.

I hadn't expected her to pull out a stash of old notebooks filled with her automatic writing, composed while working with a hypnotist. I asked O'Gieblyn if she would read from one of her notebooks, and she picked this passage: "In all the times we came to bed, there was never any sleep. Dawn bells and doorbells and daffodils and the side of the road glaring with their faces undone ..." And so it went: strange, lyrical, and nonsensical, tapping into some part of herself that she didn't know was there.

That led us into a wide-ranging conversation about the unconscious, creativity, the quest for transcendence, and the differences between machine intelligence and the human mind.

Why did you go to a hypnotist and try automatic writing?

I was going through a period of writer's block, which I had never really experienced before. It was during the pandemic. I was working on a book about technology, and I was reading about these new language models. GPT-3 had just been released to researchers, and the algorithmic text was just so wildly creative and poetic.

So you wanted to see if you could do this, without using an AI model?

Yeah, I became really curious about what it means to produce language without consciousness. As my own critical faculty was getting in the way of my creativity, it seemed really appealing to see what it would be like to just write without overthinking everything. I was thinking a lot about the Surrealists and different avant-garde traditions where writers or artists would do exercises either through hypnosis or some sort of random collaborative game. The point was to try to unlock some unconscious creative capacity within you. And it seemed like that was, in a way, what the large language models were doing.

You have an unusual background for a writer about technology. You grew up in a Christian fundamentalist family.

My parents were evangelical Christians. My whole extended family are born again Christians. Everybody I knew growing up believed what we did. I was homeschooled along with all my siblings, so most of our social life revolved around church. When I was 18, I went to Moody Bible Institute in Chicago to study theology. I was planning to go into full-time ministry.

But then you left your faith.

I had a faith crisis when I was in Bible school, which metastasized into a series of doubts about the validity of the Bible and the Christian God. I dropped out of Bible school after two years and pretty much left the faith. I began identifying as agnostic almost right away.

But my sense is you're still extremely interested in questions of transcendence and the spiritual life.

Absolutely. I don't think anyone who grew up in that world ever totally leaves it behind. And my interest in technology grew out of those larger questions. What does it mean to be human? What does it mean to have a soul?

A couple of years after I left Bible school, I read The Age of Spiritual Machines, Ray Kurzweil's book about the singularity and transhumanism. He had this idea that humans could use technology to further our evolution into a new species, what he called post-humanity. It was this incredible vision of transcendence. We were essentially going to become immortal.


There are some similarities to your Christian upbringing.

As a 25-year-old who was just starting to believe that I wasn't going to live forever in heaven, this was incredibly appealing to think that maybe science and technology could bring about a similar transformation. It was a secular form of transcendence. I started wondering: What does it mean to be a self or a thinking mind? Kurzweil was saying our selfhood is basically just a pattern of mental activity that you could upload into digital form.

So Kurzweil's argument was that machines could do anything that the human mind can do, and more.

Essentially. But there was a question that was always elided: Is there going to be some sort of first-person experience? And this comes into play with mind-uploading. If I transform my mind into digital form, am I still going to be me or is it just going to be an empty replica that talks and acts like me, with no subjective experience?

Nobody has a good answer for that because nobody knows what consciousness is. That's what got me really interested in AI, because that's the area in which we're playing out these questions now. What is first-person experience? How is that related to intelligence?

Isn't the assumption that AI has no consciousness or first-person experience? Isn't that the fundamental difference between artificial intelligence and the human mind?

That is definitely the consensus, but how can you prove it? We really don't know what's happening inside these models because they're black box models. They're neural networks that have many hidden layers. It's a kind of alchemy.

A sophisticated large language model like ChatGPT has accumulated a vast reservoir of language by scraping the internet, but does it have any sense of meaning?

It depends on how you define meaning. That's tricky because meaning is a concept we invented, and the definition is contested. For the past hundred years or so, linguists have determined that meaning depends on embodied reference in the real world. To know what the word "dog" means, you have to have seen a dog and belong to a linguistic community where that has some collective meaning.

Language models don't have access to the real world, so they're using language in a very different way. They're drawing on statistical probabilities to create outputs that sound convincingly human and often appear very intelligent. And some computational linguists say, "Well, that is meaning. You don't need any real-world experience to have meaning."


These language models are constructing sentences that make a lot of sense, but is it just algorithmic wordplay?

Emily Bender and some engineers at Google came up with the term "stochastic parrots." Stochastic is a statistical set of probabilities, using a certain amount of randomness, and they're parrots because they're mimicking human speech. These models were trained on an enormous amount of real-world human texts, and they're able to predict what the next word is going to be in a certain context.

To me, that feels very different than how humans use language. We typically use language when we're trying to create meaning with other people.

In that interpretation, the human mind is fundamentally different than AI.

I think it is. But there are people like Sam Altman, the CEO of OpenAI, who famously tweeted, "I am a stochastic parrot, and so r u." There are people creating this technology who believe there's really no difference between how these models use language and how humans use language.

We think we have all these original ideas, but are we just rearranging the chairs on the deck?

I recently asked a computer scientist, "What do you think creativity is?" And he said, "Oh, that's easy. It's just randomness." And if you know how these models work, there is a certain amount of correlation between randomness and creativity. A lot of the models have what's called a temperature gauge. If you turn up the temperature, the output becomes more random and it seems much more creative. My feeling is that there's a certain amount of randomness in human creativity, but I don't think that's all there is.
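As a concrete illustration of the two ideas she mentions here, next-word prediction and the temperature setting, the following is a minimal Python sketch. The word scores are invented for the example, and the softmax-plus-sampling loop is only a toy stand-in for what a real language model does, not anyone's actual implementation.

```python
import math
import random

def sample_next_word(scores, temperature=1.0):
    """Turn raw next-word scores into probabilities (softmax) and sample one.

    Lower temperature concentrates probability on the top word, so the output
    is predictable; higher temperature flattens the distribution, so unlikely
    words get picked more often and the text feels more "creative".
    """
    scaled = {word: s / temperature for word, s in scores.items()}
    biggest = max(scaled.values())
    exps = {word: math.exp(s - biggest) for word, s in scaled.items()}
    total = sum(exps.values())
    probs = {word: e / total for word, e in exps.items()}
    return random.choices(list(probs), weights=list(probs.values()), k=1)[0]

# Invented scores for candidate words that might follow "Dawn bells and ..."
scores = {"doorbells": 2.0, "daffodils": 1.6, "silence": 0.8, "spreadsheets": -1.2}

print(sample_next_word(scores, temperature=0.2))  # almost always "doorbells"
print(sample_next_word(scores, temperature=1.8))  # far more varied choices
```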

As a writer, how do you think about creativity and originality?

I think about modernist writers like James Joyce or Virginia Woolf, who completely changed literature. They created a form of consciousness on the page that felt nothing like what had come before in the history of the novel. That's not just because they randomly recombined everything they had read. The nature of human experience was changing during that time, and they found a way to capture what that felt like. I think creativity has to have that inner subjective quality. It comes back to the idea of meaning, which is created between two minds.

It's commonly assumed that AI has no thinking mind or subjective experience, but how would we even know if these AI models are conscious?

I have no idea. My intuition is that it would have to say something convincing enough to show that it has experience, which includes emotion but also self-awareness. But we've already had instances where the models have spoken in very convincing terms about having an inner life. There was a Google engineer, Blake Lemoine, who was convinced that the chatbot he was working on was sentient. This is going to be fiercely debated.


A lot of these chatbots do seem to have self-awareness.

They're designed to appear that way. There's been so much money poured into emotional AI. This is a whole subfield of AI: creating chatbots that can convincingly emote and respond to human emotion. It's about maximizing engagement with the technology.

Do you think a very advanced AI would have godlike capacities? Will machines become so sophisticated that we can't distinguish between them and more conventional religious ideas of God?

That's certainly the goal for a lot of people developing this technology. Sam Altman, Elon Musk: they've all absorbed the Kurzweil idea of the singularity. They are essentially trying to create a god with AGI, artificial general intelligence. It's AI that can do everything we can and surpass human intelligence.

But isn't intelligence, no matter how advanced, different than God?

The thinking is that once it gets to the level of human intelligence, it can start doing what we're doing, modifying and improving itself. At that point it becomes a recursive process where there's going to be some sort of intelligence explosion. This is the belief.

But there's another question: What are we trying to design? If you want to create a tool that helps people solve cancer or find solutions to climate change, you can do that with a very narrowly trained AI. But the fact that we are now working toward artificial general intelligence is different. That's creating something that's essentially going to be like a god.

Why do you think Elon Musk and Sam Altman want to create this?

I think they read a lot of sci-fi as kids. [Laughs] I mean, I don't know. There's something very deeply human in this idea of, "Well, we have this capacity, so we're going to do it." It's scary, though. That's why it's called the singularity. You can't see beyond it. It's an event horizon. Once you create something like that, there's really no way to tell what it will look like until it's in the world.

I do feel like people are trying to create a system that's going to give answers that are difficult to come by through ordinary human thought. That's the main appeal of creating artificial general intelligence. It's some sort of godlike figure that can give us the answers to persistent political conflicts and moral debates.

If it's smart enough, can AI solve the problems that we imperfect humans cannot?

I don't think so. It's similar to what I was looking for in automatic writing, which is a source of meaning that's external to my experience. Life is infinitely complex, and every situation is different. That requires a constant process of meaning-making.

Hannah Arendt talks about thinking and then thinking again. You're constantly making and unmaking thought as you experience the world. Machines are rigid. They're trained on the whole corpus of human history. They're like a mirror, reflecting back to us a lot of our own beliefs. But I don't think they can give us that sense of meaning that we're looking for as humans. That's something that we ultimately have to create for ourselves.

This interview originally aired on Wisconsin Public Radio's nationally syndicated show To the Best of Our Knowledge. You can listen to the full interview with Meghan O'Gieblyn here.


Steve Paulson is the executive producer of Wisconsin Public Radio's nationally syndicated show To the Best of Our Knowledge. He's the author of Atoms and Eden: Conversations on Religion and Science. You can find his podcast about psychedelics, Luminous, here.


Podcast: Resisting AI and the Consolidation of Power | TechPolicy.Press – Tech Policy Press

Audio of this conversation is available via your favorite podcast service.

In an introduction to a special issue of the journal First Monday on topics related to AI and power, researchers Jenna Burrell and Jacob Metcalf argue that "what can and cannot be said inside of mainstream computer science publications appears to be constrained by the power, wealth, and ideology of a small cohort of industrialists. The result is that shaping discourse about the AI industry is itself a form of power that cannot be named inside of computer science." The papers in the journal go on to interrogate the epistemic culture of AI safety, the promise of utopia through artificial general intelligence, how to debunk robot rights, and more.

To learn more about some of the ideas in the special issue, Justin Hendrix spoke to Burrell, Metcalf, and two of the other authors of papers included in it: Shazeda Ahmed and Émile P. Torres.

A transcript of the discussion is forthcoming.


JPMorgan Chase Unveils AI-Powered Tool for Thematic Investing – PYMNTS.com

J.P. Morgan Chase reportedly unveiled an artificial intelligence-powered tool designed to facilitate thematic investing.

The tool, called IndexGPT, delivers thematic investment baskets created with the assistance of OpenAI's GPT-4 model, Bloomberg reported Friday (May 3).

IndexGPT creates these thematic indexes by generating a list of keywords associated with a particular theme that are then analyzed using a natural language processing model that scans news articles to identify companies involved in that space, according to the report.
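The report describes that pipeline only at a high level. As a rough illustration of the two-step idea, theme keywords generated by a language model and then a scan of news text to surface companies, here is a hedged Python sketch; the generate_keywords function merely stands in for a GPT-4 call, and the article data and company names are invented, so none of this reflects JPMorgan's actual implementation.

```python
from collections import Counter

def generate_keywords(theme: str) -> list[str]:
    # Placeholder for the language-model step: in the reported tool,
    # GPT-4 expands a theme into a list of associated keywords.
    canned = {"electric vehicles": ["EV", "battery", "charging network", "lithium"]}
    return canned.get(theme, [theme])

def companies_for_theme(theme: str, articles: list[dict]) -> list[str]:
    """Rank companies by how often theme keywords appear in news articles
    that mention them, a crude stand-in for the NLP scan described in the report."""
    keywords = [k.lower() for k in generate_keywords(theme)]
    scores = Counter()
    for article in articles:
        text = article["text"].lower()
        hits = sum(text.count(k) for k in keywords)
        if hits:
            for company in article["companies"]:
                scores[company] += hits
    return [company for company, _ in scores.most_common()]

# Invented sample data, for illustration only.
articles = [
    {"companies": ["Acme Motors"], "text": "Acme Motors expands its EV charging network."},
    {"companies": ["Ore Corp"], "text": "Ore Corp reports record lithium output for battery makers."},
]
print(companies_for_theme("electric vehicles", articles))
```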

The tool allows for the selection of a broader range of stocks, "going beyond the obvious choices that are already well-known," Rui Fernandes, J.P. Morgan's head of markets trading structuring, told Bloomberg.

Thematic investing, which focuses on emerging trends rather than traditional industry sectors or company fundamentals, has gained popularity in recent years, the report said.

Thematic funds experienced a surge in popularity in 2020 and 2021, with retail investors spending billions of dollars on products based on various themes. However, interest in these strategies waned amid poor performance and higher interest rates, per the report.

J.P. Morgan Chase's IndexGPT aims to reignite interest in thematic investing by providing a more accurate and efficient approach, according to the report.

While AI has been widely used in the financial industry for functions such as trading, risk management and investment research, the rise of generative AI tools has opened new possibilities for banks and financial institutions, the report said.

Fernandes said he sees IndexGPT as a first step in a long-term process of integrating AI across the bank's index offering, per the report. J.P. Morgan Chase aims to continuously improve its offerings, from equity volatility products to commodity momentum products, gradually and thoughtfully.

In another deployment of this technology in the investment space, Morgan Stanley said in September that it was launching an AI-powered assistant for financial advisers and their support staff. This tool, the AI @ Morgan Stanley Assistant, facilitates access to 100,000 research reports and documents.

In the venture capital world, AI has become a tool for making savvy investment decisions. VC firms are using the technology to analyze vast amounts of data on startups and market trends, helping the firms identify the most promising opportunities and aiding them in making better-informed decisions about where to allocate their funds.


iOS 18: Here are the new AI features in the works – 9to5Mac

2024 is shaping up to be the Year of AI for Apple, with big updates planned for iOS 18 and more. The rumors and Tim Cook himself make it clear that there are new AI features for Apple's platforms in the works. Here's everything we know about the ways Apple is exploring AI features.

There have been a number of rumors about the various AI features in the works inside Apple. Bloomberg has reported that Apple thinks iOS 18 will be one of the biggest iOS updates ever, headlined by a number of new AI features.

Mark Gurman reported last July that Apple created its own Large Language Model (LLM) system, which has been dubbed Apple GPT. The project uses a framework called Ajax that Apple started building in 2022 to base various machine learning projects on a shared foundation. This Ajax framework will serve as the basis for Apple's forthcoming AI features across all of its platforms.

9to5Mac found evidence of Apple's work on new AI and large language model technology in iOS 17.4. We reported that Apple is relying on OpenAI's ChatGPT API for internal testing to help the development of its own AI models.

Bloomberg has reported that Apple's iOS 18 features will be powered by an entirely on-device large language model, which offers a number of privacy and speed benefits.

Here are some of the rumors about new AI features coming to iOS 18:

Did you know that Apple has actually already launched a number of powerful AI frameworks and models? Here's a recap of those:

During a recent Apple earnings call, Tim Cook offered a rare teaser for a future product announcement. According to Cook, Apple is spending "a tremendous amount of time and effort" on artificial intelligence technologies, and the company is excited to "share the details of our ongoing work in that space later this year."

It's extraordinarily rare for Cook to even remotely hint at Apple's plans for future product announcements. Why did he do it this time? Likely to ease the concerns of investors and analysts worried about Apple falling behind the likes of OpenAI, Google, and Microsoft. Whether the teaser is enough to calm those fears until an actual product announcement materializes remains to be seen.

Also during an earnings call recently, Cook touted the advantages that Apple has which will set its AI apart from the competition:

"We believe in the transformative power and promise of AI, and we believe we have advantages that will differentiate us in this new era, including Apple's unique combination of seamless hardware, software, and services integration, groundbreaking Apple Silicon with our industry-leading neural engines, and our unwavering focus on privacy, which underpins everything we create."

In a surprising twist, Bloomberg has reported that Apple is in active negotiations with Google about potentially licensing Gemini, which is Google's set of generative AI models. The report explains that Apple is specifically looking to partner on cloud-based generative AI models.

In this scenario, Apple would rely on a partner such as Google for its cloud-based features. Other features would still be powered on-device by Apples own technology.

The generative AI features under discussion would theoretically be baked into Siri and other apps. New AI capabilities based on Apple's homegrown models, meanwhile, would still be woven into the operating system. They'll be focused on proactively providing users with information and conducting tasks on their behalf in the background, people familiar with the matter said.

While Apple is said to be in active negotiations for this partnership with Google, the company has also reportedly held talks with OpenAI as well.

In fact, most recently, it was reported that Apple had resumed talks with OpenAI about a partnership. According to reports, Apple would use OpenAI's technology to power an AI-based chatbot in iOS 18.

At this point, the question is which of the many rumors will come to fruition this year.

I'd be surprised if all of these rumored AI features are ready for this year. My assumption is that Apple is working on all of this stuff (and more), but will pare down the final list of features included in iOS 18. Features that don't make the cut will likely come in a later update to iOS 18 or with iOS 19 in 2025.

Apple has officially set WWDC for June 10 this year, and that's where we expect the bulk of its AI announcements to be made.

Where do you want to see Apple direct its attention toward for new AI features this year? Let us know down in the comments.


Google urges US to update immigration rules to attract more AI talent – The Verge

The US could lose out on valuable AI and tech talent if some of its immigration policies are not modernized, Google says in a letter sent to the Department of Labor.

Google says policies like Schedule A, a list of occupations the government pre-certified as not having enough American workers, have to be more flexible and move faster to meet demand in technologies like AI and cybersecurity. The company says the government must update Schedule A to include AI and cybersecurity and do so more regularly.

"There's wide recognition that there is a global shortage of talent in AI, but the fact remains that the US is one of the harder places to bring talent from abroad, and we risk losing out on some of the most highly sought-after people in the world," Karan Bhatia, head of government affairs and public policy at Google, tells The Verge. He noted that the occupations in Schedule A have not been updated in 20 years.

Companies can apply for permanent residencies, colloquially known as green cards, for employees. The Department of Labor requires companies to get a permanent labor certification (PERM) proving there is a shortage of workers in that role. That process may take time, so the government pre-certified some jobs through Schedule A.

The US Citizenship and Immigration Services lists Schedule A occupations as physical therapists, professional nurses, or immigrants of exceptional ability in the sciences or arts. While the wait time for a green card isn't reduced, Google says Schedule A cuts down the processing time by about a year.

Google says Schedule A is not currently serving its intended purpose, especially as demand for new technologies like generative AI has grown, so AI and cybersecurity must be included on the list. Google says the government should also consider multiple data sources, including accepting public feedback, to regularly update Schedule A so the process is more transparent and to really reflect workforce gaps.

Since the rise of generative AI, US companies have struggled to find engineers and researchers in the AI space. While the US produces a large cohort of AI talent, there is a shortage of AI specialists in the country, Bhatia says. However, the US's strict immigration policies have made attracting people to work in American companies to build AI platforms difficult. He adds Google employees have often had to leave the US while waiting for the PERM process to finish and for their green cards to be approved.

Competition for AI talent has been intense, with companies often poaching engineers and researchers. The Information reported AI developers like Meta have resorted to hiring AI talent without interviews. Wages for AI specialists soared, with OpenAI allegedly paying researchers up to $10 million. President Joe Bidens executive order on AI mandates federal agencies to help increase AI talent in the country.


Microsoft announces significant commitments to enable a cloud and AI-powered future for Thailand – Microsoft Stories … – Microsoft

Microsoft Chairman and CEO Satya Nadella announces a new data center region in Thailand during Microsoft Build: AI Day on May 01, 2024 in Bangkok, Thailand. Photo by Graham Denholm/Getty Images for Microsoft.


Commitments include new cloud and AI infrastructure, AI skilling opportunities, and support for Thailands growing developer community

Bangkok, May 1, 2024 – Today, Microsoft announced significant commitments to build new cloud and AI infrastructure in Thailand, provide AI skilling opportunities for over 100,000 people, and support the nation's growing developer community.

The commitments build on Microsoft's memorandum of understanding (MoU) with the Royal Thai Government to envision the nation's digital-first, AI-powered future.

Microsoft Chairman and Chief Executive Officer Satya Nadella made the announcement in front of approximately 2,000 developers and business and technology leaders at the Microsoft Build: AI Day in Bangkok on Wednesday. The event was also attended by Thai Prime Minister Srettha Thavisin, who delivered a special address.

"Our Ignite Thailand vision for 2030 aims to achieve the goal of developing the country's stature as a regional digital economy hub that significantly enhances our innovation and R&D capabilities while also strengthening our tech workforce," said Prime Minister Thavisin. "Today's announcement with Microsoft is a significant milestone in the journey of our Ignite Thailand vision, one that promises new opportunities for growth, innovation, and prosperity for all Thais."

"Thailand has an incredible opportunity to build a digital-first, AI-powered future," said Satya Nadella, Chairman and CEO, Microsoft. "Our new datacenter region, along with the investments we are making in cloud and AI infrastructure, as well as AI skilling, build on our long-standing commitment to the country and will help Thai organizations across the public and private sector drive new impact and growth."

Dhanawat Suthumpun, Managing Director of Microsoft Thailand, said: "Microsoft is dedicated to helping Thailand excel as a digital economy, ensuring that the benefits of cloud and AI technologies are widespread and contribute to the prosperity and wellbeing of Thais. Together, we are laying the foundations for a future that is not only technologically advanced but also inclusive and sustainable."

Growing capacity to thrive in the AI era

Microsoft's digital infrastructure commitment includes establishing a new datacenter region in Thailand. The datacenter region will expand the availability of Microsoft's hyperscale cloud services, facilitating enterprise-grade reliability, performance, and compliance with data residency and privacy standards.

It follows growing demand for cloud computing services in Thailand from enterprises, local businesses, and public sector organizations. It will also allow Thailand to capitalize on the significant economic and productivity opportunities presented by the latest AI technology.

According to research by Kearney, AI could contribute nearly US$1 trillion to Southeast Asia's gross domestic product by 2030, with Thailand poised to capture US$117 billion of this amount.

Ensuring a skilled, AI-ready workforce

On Tuesday, Microsoft announced a broader commitment to provide AI skilling opportunities for 2.5 million people in the Association of Southeast Asian Nations (ASEAN) member states by 2025. This training and support will be delivered in partnership with governments, nonprofit and corporate organizations, and communities in Thailand, Indonesia, Malaysia, the Philippines, and Vietnam.

Microsoft's skilling commitment is expected to benefit more than 100,000 individuals in Thailand.

It will enhance the AI proficiency of those involved in the nation's tourism sector through the AI Skills for the AI-enabled Tourism Industry program. The initiative is a partnership between Microsoft and Thailand's Ministry of Digital Economy and Society, Ministry of Tourism and Sports, Ministry of Labour, and the nation's Technology Vocational Education Training Institute. It aims to empower young entrepreneurs and youths involved in tourism businesses across minor-tier geographic provinces in all five regions of Thailand.

The program will focus on enhancing the capabilities of 500 trainers from technology vocational education training institutes in AI for Thailand's tourism sector. These trainers will then equip young individuals in tourism and hospitality with AI skills. The learning module will be accessible through partners' learning platforms to ensure sustainability and scalability.

The tourism initiative builds on other Microsoft-supported skilling initiatives in Thailand, including Accelerating Thailand, the ASEAN Cyber Security Programme, Code; Without Barriers, and the Junior Software Developer Program.

Microsoft will also enable the Royal Thai Government to adopt a cloud-first policy with an AI skill development program for developers and government IT personnel.

Enabling developers to harness AI's potential

Nadella highlighted the important role developers play in shaping Thailand's digital-first, AI-powered future.

Microsoft will continue to help foster the growth of the country's developer community through new initiatives such as AI Odyssey, which is expected to help 6,000 Thai developers become AI subject matter experts by learning new skills and earning Microsoft credentials.

Thailand is a rapidly growing market on GitHub, the Microsoft-owned software development, collaboration, and innovation platform. More than 900,000 Thailand-based developers used GitHub in 2023, representing 24 percent year-on-year growth.

Furthermore, many Thai organizations are boosting their productivity and accelerating innovation using Microsoft's generative AI-powered solutions. For example:

Several other organizations in Thailand are working with Microsoft to explore new possibilities with AI. They include the nation's largest privately held company, Charoen Pokphand Group, and leading petrochemical and refining business, PTT Global Chemical Public Company Limited.

Microsoft also collaborates with Thailand's National Cyber Security Agency to provide information on internet safety, cyber threats and vulnerabilities, and other related guidance to enhance the nation's cybersecurity posture in the AI era. The Ministry of Finance, meanwhile, is using the power of AI to enhance cross-agency data collaboration, which will unlock deeper insights that support policy development towards a more financially inclusive economy for Thailand.

To learn more about Satya Nadella's visit and how Microsoft empowers organizations in the ASEAN region with AI, visit news.microsoft.com/thailand-visit-2024.

About Microsoft

Microsoft (Nasdaq "MSFT" @microsoft) creates platforms and tools powered by AI to deliver innovative solutions that meet the evolving needs of our customers. The technology company is committed to making AI available broadly and doing so responsibly, with a mission to empower every person and every organization on the planet to achieve more.


Microsoft announces US$2.2 billion investment to fuel Malaysia’s cloud and AI transformation – Microsoft Stories Asia – Microsoft

Microsoft Chairman and CEO Satya Nadella announces a $2.2 billion investment to advance new cloud and AI infrastructure in Malaysia during the Microsoft Build: AI Day on May 02, 2024 in Kuala Lumpur, Malaysia. Photo by Graham Denholm/Getty Images for Microsoft.


Investment includes building digital infrastructure, creating AI skilling opportunities, establishing a national AI Centre of Excellence, and enhancing the nation's cybersecurity capabilities

Kuala Lumpur, May 2, 2024 – Today, Microsoft announced it will invest US$2.2 billion over the next four years to support Malaysia's digital transformation, the single largest investment in its 32-year history in the country.

Microsoft's investment includes:

The investment demonstrates Microsoft's commitment to developing Malaysia as a hub for cloud computing and related advanced technologies, including generative AI. This will support the nation's productivity, competitiveness, resilience, and economic growth.

"We are committed to supporting Malaysia's AI transformation and ensure it benefits all Malaysians," said Satya Nadella, Chairman and CEO, Microsoft. "Our investments in digital infrastructure and skilling will help Malaysian businesses, communities, and developers apply the latest technology to drive inclusive economic growth and innovation across the country."

YB Senator Tengku Datuk Seri Utama Zafrul Abdul Aziz, Malaysia's Minister of Investment, Trade & Industry said, "Microsoft's 32-year presence in Malaysia showcases a deep partnership built on trust. Indeed, Malaysia's position as a vibrant tech investment destination is increasingly being recognized by world-recognized names due to our well-established semiconductor ecosystem, underscored by our value proposition that this is where global starts.

"Microsoft's development of essential cloud and AI infrastructure, together with AI skilling opportunities, will significantly enhance Malaysia's digital capacity and further elevate our position in the global tech landscape. Together with Microsoft, we look forward to creating more opportunities for our SMEs and better-paying jobs for our people, as we ride the AI revolution to fast-track Malaysia's digitally empowered growth journey."

"We are honored to collaborate with the government to support their National AI Framework, which enhances the country's global competitiveness. This strategic emphasis on AI not only boosts economic growth but also promotes inclusivity by bridging the digital divide and ensuring everyone gets a seat at the table, so every Malaysian can thrive in this new digital world. As a result, Malaysia is steadily establishing itself as a regional hub for digital innovation and smart technologies, embodying a forward-thinking approach that prioritizes sustainable development and societal well-being through digital transformation," said Andrea Della Mattea, President of Microsoft ASEAN.

Expanding Malaysia's digital capacity to seize AI opportunities

The digital infrastructure investment builds on Microsoft's Bersama Malaysia (Together with Malaysia) initiative, announced in April 2021, to support inclusive economic growth. This included plans to establish the company's first datacenter region in the country.

The investment announced today will enable Microsoft to meet the growing demand for cloud computing services in Malaysia. It will also allow Malaysia to capitalize on the significant economic and productivity opportunities presented by the latest AI technology.

According to research by Kearney, AI could contribute nearly US$1 trillion to Southeast Asia's gross domestic product (GDP) by 2030, with Malaysia poised to capture US$115 billion of this amount.

Equipping people with skills to thrive in the AI era

On Tuesday, Microsoft announced a broader commitment to provide AI skilling opportunities for 2.5 million people in the Association of Southeast Asian Nations (ASEAN) member states by 2025. This training and support will be delivered in partnership with governments, nonprofit and business organizations, and communities in Malaysia, Indonesia, the Philippines, Thailand, and Vietnam.

Microsoft's skilling commitment is expected to benefit 200,000 people in Malaysia by providing:

The commitment builds on Microsoft's other recent skilling activities in Malaysia, including its success in providing digital skills to more than 1.53 million Malaysians as part of the Bersama Malaysia initiative.

Partnering with government to strengthen AI and cybersecurity capabilities

Microsoft will continue to partner with the Government of Malaysia to enhance the nation's digital ecosystem through several initiatives. These include establishing a national AI Centre of Excellence in collaboration with agencies in Malaysia's Ministry of Digital to drive AI adoption across key industries, while ensuring AI governance and regulatory compliance. They also include pioneering AI adoption in the public sector through projects with:

Microsoft will also collaborate with the National Cyber Security Agency of Malaysia (NACSA) through the Perisai Siber (Cyber Shield) initiative to enhance the country's cybersecurity capabilities. The collaboration will focus on promoting security and resilience in the public sector through security assessments and capacity building.

In addition, Microsoft will look to support NACSA in its role as Malaysia's lead agency for cybersecurity matters, as it formulates the next stage of the nation's cybersecurity strategy. The two organizations will also explore deeper collaborations in developing cybersecurity skills through initiatives such as Microsoft's Ready4AI&Security program.

Empowering developers to harness AI's potential

Microsoft will continue to help foster the growth of Malaysia's developer community through new initiatives such as AI Odyssey, which is expected to help 2,000 Malaysian developers become AI subject matter experts by learning new skills and earning Microsoft credentials.

Malaysia is a rapidly growing market on GitHub, the Microsoft-owned software development, collaboration, and innovation platform. Almost 680,000 of the nation's developers used GitHub in 2023, representing 28 percent year-on-year growth.

Furthermore, many Malaysian organizations are boosting their productivity and accelerating innovation using Microsoft's generative AI-powered solutions. For example:

To learn more about Satya Nadella's visit and how Microsoft is empowering organizations in the ASEAN region with AI, visit news.microsoft.com/malaysia-visit-2024.

Leadership statements

YB Rafizi Ramli, Minister of Economy

"The advent of ChatGPT created a new vertical in the startup world. As more companies embrace the power of AI, having the right digital infrastructure in Malaysia is key to future-proofing our nation's economy. Microsoft's investment will help accelerate the adoption of generative AI, building a pipeline of AI-driven startups, and benefitting our economy through increased productivity and higher wages."

YB Gobind Singh Deo, Minister of Digital

"As a nation, we are focused on accelerating digitalization and fostering a culture of innovation alongside technological advancement to level the playing field for all Malaysians to prosper in an inclusive digital economy. Microsoft's investment is a significant step in our journey towards becoming a digitally inclusive society. It underscores the importance of partnership in driving nationwide digital transformation and reinforces our commitment to equipping Malaysians with the infrastructure, advanced tools, and skills they need to thrive in the digital age."

YB Fahmi Fadzil, Minister of Communications

"Microsoft's significant investment in Malaysia recognises and supports the government's efforts in building an inclusive digital ecosystem for the country. We are excited to continue partnering with technology leaders like Microsoft to foster a space where Malaysians can seamlessly connect, learn, and benefit from our nation's digital transformation."

YB Chang Lih Kang, Minister of Science, Technology & Innovation

"Today's investment by Microsoft exemplifies a dynamic public-private partnership aimed at enhancing the socio-economic status and quality of life in Malaysian communities. As we embrace AI's potential, we commend Microsoft's commitment to responsible AI, which aligns with our vision for advancing technology in Malaysia responsibly and inclusively."

Laurence Si, Managing Director, Microsoft Malaysia

"With rising demand for Cloud and AI, Microsoft's investment announced today underscores our commitment to building a robust digital ecosystem in the country. From driving more innovations born in Malaysia, to fostering an ecosystem of skilled talents and enhancing cybersecurity capabilities for Malaysian organizations, we are dedicated to our role as a trusted technology partner to the nation."

Mr. Sikh Shamsul Ibrahim Sikh Abdul Majid, Chief Executive Officer, Malaysian Investment Development Authority (MIDA)

"We are excited to deepen our partnership with Microsoft as they strengthen their commitment by establishing a cloud and AI infrastructure and supporting our vibrant developer community in Malaysia. This strategic collaboration underscores our dedication to innovation and regional industry growth. By leveraging Microsoft's expertise, we aim to accelerate economic development, create jobs, and enhance industry competitiveness through digital transformation. We believe we can achieve more together and further advance our partnership. This investment not only reinforces Malaysia's position as a leading digital hub but also marks a promising start in attracting more companies to embark on this digital journey with us, promoting inclusive growth and prosperity nationwide."

Ir. Dr. Megat Zuhairy Megat Tajuddin, Chief Executive Officer, National Cyber Security Agency (NACSA)

"Microsoft's collaboration with NACSA on Perisai Siber is pivotal as one of our strategic partnerships with industry players in establishing a secure digital infrastructure for our nation. Together, our goal is to bolster security and resilience, beginning with the public sector, to ultimately strengthen the nation's cybersecurity capabilities."

Ts. Mahadhir Aziz, Chief Executive Officer, Malaysia Digital Economy Corporation (MDEC)

"Microsoft's commitment to Malaysia demonstrates confidence in our nation's digital future. Through this investment in cloud and AI infrastructure, local organizations can tap into more opportunities to upscale and innovate, further propelling Malaysia's aspirations for regional leadership in the digital economy."

About Microsoft

Microsoft (Nasdaq "MSFT" @microsoft) creates platforms and tools powered by AI to deliver innovative solutions that meet the evolving needs of our customers. The technology company is committed to making AI available broadly and doing so responsibly, with a mission to empower every person and every organization on the planet to achieve more.


This Seemingly AI-Generated Car Article On Yahoo Is A Good Reminder That AI Is An Idiot – The Autopian

Here at The Autopian, we have some very stern rules when it comes to the use of Artificial Intelligence (AI) in the content we produce. While our crack design team may occasionally employ AI as a tool in generating images, we'll never just use AI on its own to do anything, not just for ethical reasons, but because we often want images of specific cars, and AI fundamentally doesn't understand anything. When an AI generates an image of a car, it has no idea if that car ever actually existed or not. An AI doesn't have ideas at all; in fact, it's just scraped data being assembled with a glorified assembly of if-then-else commands.

This is an even bigger factor in AI-generated copy. We'll never use it because AI has no idea what the hell it's writing about, and so has no clue if anything is actually true, and since ChatGPT has never driven a car, I don't really trust its insights into anything automotive.

These sorts of rules are hardly universal in our industry, though, so if we ever wanted confirmation that our no-AI-copy rule was the right way, we're lucky enough to be able to get such reassurance pretty easily. For example, all we have to do is read this dazzlingly shitty article re-published over on Yahoo Finance about the worst cars people have owned.

Maybe it's not AI? Maybe this Kellan Jansen is an actual writer who actually wrote this, and in that case, I feel bad both for this coming excoriation and about whatever happened to them to cause them to be in the state they seem to be in. The article is shallow and terrible and gleefully, hilariously wrong in several places.

I guess I should also note that we don't use AI because the 48K Sinclair Spectrum workstations we use here don't quite have the power to run any AI. Well, we do have one AI that we use on them, our Artificial Ignorance system that we employ to get just that special je ne sais quoi in every post we write. Oh, and our AI (Artificial Indignation) tools help with our hot takes, too. So, two.

Okay, but let's get back to the Yahoo Finance article, titled "The Worst Car I Ever Owned: 9 People Share Which Vehicles Aren't Worth Your Money," which is a conceptually lazy article that is just taking the responses to a Reddit post called "What's the worst car you have personally owned?" which makes this story basically just a re-write of a Reddit post. It seems like the Reddit post was fed into whatever AI half-assed its way through generating the article, based on these results.

The results are, predictably, shitty, but also still worthy of pointing out because come on. There's this, for example:

BMWs are a frequent source of frustration for car owners on Reddit. Just ask user Hurr1canE_.

They bought a 2023 BMW BRZ and almost immediately started experiencing problems. Their turbo started blowing white smoke within two weeks of buying the car, and the engine blew up within 5,000 miles.

The Reddit user also had these issues with the car:

Other users mention poor experiences with BMW X3s and 540i Sport Wagons. It's enough to suggest you think carefully before making one of these your next vehicle.

The fuck? What is a BMW BRZ? This is such a perfect example of why AI-generated articles are garbage: they make shit up. Maybe that's anthropomorphizing the un-sentient algorithm too much, but the point is that it's writing, with all the confidence of a drunk uncle about to belly-flop into a pool, about a car that simply does not exist.

And, if you look at the Reddit post, it's easy to see what happened:

The Redditor had their current car, a 2023 [Subaru] BRZ, in their little under-name caption (their flair), and the dumb AI processed that into the mix, and, being a dumb computer algorithm that doesn't know from cars or clams, conflated the car being talked about with the one the poster actually owns. You know, like how a drooling simpleton might.

There's more of this, too. Like this one:

Ah, yes, the F10 550i. So many of us have been burned by that F10 brand, have we not? Or, at least, we would have, if such a brand existed, which it doesn't. What seems to have happened here is the AI found a user complaining about a 2011 F10 550i but didn't know enough to realize this was a user talking about their BMW 5 Series, and yes, F10 refers to the 5 Series cars made between 2010 and 2016, but nobody would refer to this car out of context in a general-interest article on a financial site without mentioning BMW, would they? I mean, no human would, but we don't seem to be dealing with a human, just a dumb machine.

Even if we ignore the made-up car makes and models, the vague and useless issues listed, and the fact that the article is nothing more than a re-tread of a random Reddit post, there's no escaping that this entire thing is useless garbage, an unmitigated waste of time. What is learned by reading this article? What is gained? Nothing, absolutely nothing.

And it's not like this is on some no-name site; it was published on Yahoo! Finance, well, after first appearing on GOBankingRates.com, that mainstay of automotive journalism. It all just makes me angry because there are innocent normies out there, reading Yahoo! Finance, maybe with some mild interest in cars, and now their heads are getting filled with information that is simply wrong.

People deserve better than this garbage. And this was just something innocuous; what if some overpaid seat-dampener at Yahoo decides that they'll have AI write articles about actually driving or something that involves actual safety, and there's no attempt made to confirm that the text AI poops out has any basis in fact at all?

We don't need this. AI-generated crapticles like these are just going to clog Google searches and load the web up full of insipid, inaccurate garbage, and that's my job, dammit.

Seriously, though, we're at an interesting transition point right now; these kinds of articles are still new, and while I don't know if there's any way we can stop the internet from becoming polluted with this sort of crap, maybe we can at least complain about it, loudly. Then we can say we Did Something.

(Thanks, Isaac!)


Apple AI Could Produce ‘Really Really Good’ Version of Siri – PYMNTS.com

What if Apple's voice assistant Siri was really, really, really good?

That question is at the heart of much of the tech giant's artificial intelligence (AI) research, according to a report Sunday (May 5) by The Verge reviewing those efforts.

For example, a team of Apple researchers has been trying to develop a way to use Siri without having to use a wake word.

Rather than waiting for the user to say "Hey Siri" or "Siri," the voice assistant would be able to intuit whether someone was speaking to it.

"This problem is significantly more challenging than voice trigger detection," the researchers acknowledged, per the report, "since there might not be a leading trigger phrase that marks the beginning of a voice command."

The Verge report added that this could be why another research team came up with a system to more accurately detect wake words. Another paper trained a model with better understanding of rare words, which are in many cases not well understood by assistants.

Apple is also working on ways to make sure Siri understands what it hears. For example, the report said, the company developed a system called STEER (Semantic Turn Extension-Expansion Recognition) that is designed to improve users' back-and-forth communication with an AI assistant by trying to determine when the user is asking a follow-up question and when they are asking a new one.
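The report does not say how STEER makes that call, so the snippet below is only a toy illustration of the follow-up-versus-new-request decision it describes. A real system would use a trained model over the conversation history; this sketch relies on invented surface cues and is not Apple's method.

```python
import re

# Words and phrases that typically lean on the previous turn for their meaning.
ANAPHORA = {"it", "that", "those", "them", "there", "he", "she", "they"}
FOLLOW_UP_STARTS = ("what about", "how about", "and ", "also ", "then ")

def is_follow_up(utterance: str) -> bool:
    """Guess whether an utterance extends the previous request (a follow-up)
    or starts a new one, using crude lexical cues for illustration only."""
    text = utterance.strip().lower()
    if text.startswith(FOLLOW_UP_STARTS):
        return True
    words = set(re.findall(r"[a-z']+", text))
    return bool(words & ANAPHORA)

print(is_follow_up("What's the weather in Paris?"))  # False: a new request
print(is_follow_up("What about tomorrow?"))          # True: merge with the prior query
```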

The report comes at a time when Apple appears to be taking, as PYMNTS wrote last week, a measured approach to its AI efforts.

Among its projects is the ReALM (Reference Resolution As Language Modeling) system, which simplifies the complex process of understanding screen-based visual references into a language modeling task using large language models.
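Apple's paper frames that as turning what the assistant can "see" on screen into text that a language model can reason over. The sketch below is a speculative illustration of that general framing, not ReALM itself; the entity format and prompt wording are assumptions made for the example.

```python
def build_prompt(screen_entities, user_request):
    """Serialize on-screen entities into numbered lines so a language model
    can be asked which one a request like "call that number" refers to.
    This only mirrors the general idea of casting reference resolution as
    a language modeling task; it is not Apple's actual ReALM system."""
    lines = [f"{i}. {e['type']}: {e['value']}" for i, e in enumerate(screen_entities, 1)]
    return (
        "Entities currently on screen:\n"
        + "\n".join(lines)
        + f"\nUser request: {user_request}\n"
        "Answer with the number of the entity the user means."
    )

screen = [
    {"type": "business name", "value": "Luigi's Pizza"},
    {"type": "phone number", "value": "555-0134"},
    {"type": "address", "value": "12 Main St"},
]
print(build_prompt(screen, "call that number"))
# The resulting prompt would be sent to a language model, which should answer "2".
```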

"On the one hand, if we have better, faster customer experience, there's a lot of chatbots that just make customers angry," AI researcher Dan Faggella, who is not affiliated with Apple, said in an interview with PYMNTS. "But if in the future, we have AI systems that can helpfully and politely tackle the questions that are really quick and simple to tackle and can improve customer experience, it is quite likely to translate to loyalty and sales."

The voice tech sector is on the rise. According to research by PYMNTS Intelligence, there's a notable interest among consumers in this technology, with more than half (54%) saying they look forward to using it more in the future due to its rapidity.


Warren Buffett Warns of AI Use in Scams – PYMNTS.com

Berkshire Hathaway's Warren Buffett has compared the development of artificial intelligence (AI) to the atomic bomb.

Just like that invention, the multibillionaire said Saturday (May 4) at Berkshire's annual meeting, AI could produce disastrous results for civilization.

"We let a genie out of the bottle when we developed nuclear weapons," said Buffett, whose comments were reported by The Wall Street Journal (WSJ). "AI is somewhat similar; it's part way out of the bottle."

While Buffett acknowledged his understanding of AI was limited, he argued he still had cause for concern, discussing a recent sighting of a deepfake of his voice and image. This leads him to believe AI will allow scammers to more effectively pull off their crimes.

"If I was interested in investing in scamming, it's going to be the growth industry of all time," he said.

The WSJ report noted that Buffett's comments come amid a debate among business leaders about how AI will impact society. And while not everyone compares the technology to the atomic bomb, there are those who worry AI will wipe out white-collar jobs.

Others see the upside to AI. JPMorgan Chase CEO Jamie Dimon, for example, has said AI could invent cures for cancer or allow more people in future generations to live to 100 years old.

"It will create jobs. It will eliminate some jobs. It will make everyone more productive," Dimon said in a recent WSJ interview.

It is also transforming how companies train and upskill their employees, PYMNTS wrote last week, providing personalized learning experiences that can cut costs and improve efficiency.

The global AI-in-education market is projected to expand from $3.6 billion in 2023 to around $73.7 billion by 2033, according to a report from Market.US. But in spite of this impressive forecast, online education company Chegg, which has invested in AI tools, recently saw its stock decline, something that underscores the sector's volatility.

"Generative AI can provide a level of personalization in learning that is nearly impossible to achieve without this advanced technology," Ryan Lufkin, global vice president of strategy at the education technology company Instructure, told PYMNTS.

"This means we can quickly assess what an employee knows and teach directly to their knowledge gaps, reducing the amount of time spent learning and improving time-to-productivity."


HHS shares its Plan for Promoting Responsible Use of Artificial Intelligence in Automated and Algorithmic Systems by … – HHS.gov

Today, the U.S. Department of Health and Human Services (HHS) publicly shared its plan for promoting responsible use of artificial intelligence (AI) in automated and algorithmic systems by state, local, tribal, and territorial governments in the administration of public benefits. Recent advances in the availability of powerful AI in automated or algorithmic systems open up significant opportunities to enhance public benefits program administration to better meet the needs of recipients and to improve the efficiency and effectiveness of those programs.

HHS, in alignment with OMB Memorandum M-24-10, is committed to strengthening governance, advancing responsible innovation, and managing risks in the use of AI-enabled automated or algorithmic systems. The plan provides more detail about how the rights-impacting and/or safety-impacting risk framework established in OMB Memorandum M-24-10 applies to public benefits delivery, provides information about existing guidance that applies to AI-enabled systems, and lays out topics that HHS is considering providing future guidance on.


New to the Lou: Ai, No Artificial Intelligence – PawPrint

I can't believe I've never touched on my newfound love and appreciation for a classic Japanese cuisine: SUSHI!

As a kid, I was never one to try new things. "I'll have chicken tenders, mac and cheese, or strawberries, please." It wasn't until I turned 18, around my senior year of high school, that I finally started to branch out and try a few new things here and there.

I started off small. I tried lemonade for the first time at the county fair and, man, my life was changed forever. I remember that pivotal moment in my picky-eater career, when I began getting excited about trying new things.

Sushi is a new dish that I recently gave a try. I had always wanted to try it, but thought for sure that it wouldn't be for me. I'm so glad I pushed myself to try it, because now it's one of my favorite dinners to grab!

Last night, my boy Skywalker Mann and I tried Sushi Ai, located in Clayton, Missouri. But don't worry, that's only one of five locations in the St. Louis area.

The recommendation came from a fellow PawPrint classmate, Liz Santimaw, a senior Business Administration student. Liz said, "I love Sushi Ai," and I trust her judgment, so I knew it was worth a try.

Photo courtesy of Maddie Hill. This photo is from the Sushi Ai location at 471 Lafayette Center Drive in Manchester.

Walker got the all-you-can-eat sushi deal they offer, which was $23.99. They have over 40 rolls to choose from. He went with the Snow White roll, the Volcano roll, and the American Dream roll. On the more mellow side, I went with the classic Crab roll and the Shrimp Tempura roll.

What sets Sushi Ai apart is that all of the sushi is prepared in-house, with the chefs in view of the customers. This creates an experience for diners and makes the restaurant feel authentic.

When you arrive, you are handed a paper menu and a pen, with a list of all the rolls and combinations they offer. Once you decide on the rolls you want to order, you write a checkmark or X next to them. Your server will come and take your paper menu, which they then give to the chefs.

Photo courtesy of Maddie Hill. Pictured from left to right: American Dream Roll, Snow White Roll, Volcano Roll, Shrimp Tempura Roll and Spicy Crab Roll.

Walker says the all-you-can-eat sushi is an absolute steal for the price. He's sure right: once you choose a minimum of three rolls, you've essentially gotten your bang for the buck in the all-you-can-eat. However, Sushi Ai does charge for any uneaten pieces of sushi, up to $1.00 per piece.

If this write-up can encourage you to do anything, it's to branch out and explore new foods, no matter how scary or unappetizing they may seem! You know what they say: you never know until you try.

Here is Sushi Ai's extended menu, and their website to learn more.

See more here:

New to the Lou: Ai, No Artificial Intelligence - PawPrint

Enhancing Developer Experience for Creating Artificial Intelligence Applications – InfoQ.com

For one company, large language models created a breakthrough in artificial intelligence (AI) by shifting the work to crafting prompts and calling APIs, with no need for AI science expertise. To enhance developer experience and craft applications and tools, they defined and established principles around simplicity, immediate accessibility, security and quality, and cost efficiency.

Romain Kuzniak spoke about enhancing developer experience for creating AI applications at FlowCon France 2024.

Scaling their first AI application to meet the needs of millions of users exposed a substantial gap, Kuzniak said. The transition required them to hire data scientists, develop a dedicated technical stack, and navigate numerous areas where they lacked prior experience:

Given the high costs and extended time to market, coupled with our status as a startup, we had to carefully evaluate our priorities. There were numerous other opportunities on the table with potentially higher returns on investment. As a result, we decided to pause this initiative.

The breakthrough in AI came with the emergence of Large Language Models (LLMs) like ChatGPT, which shifted the approach to utilizing AI, Kuzniak mentioned. The key change that LLMs brought was a significant reduction in the cost and complexity of implementation:

With LLMs, the need for data scientists, data cleansing, model training, and a specific technical infrastructure diminishes. Now, we could achieve meaningful engagement by simply crafting a prompt and utilizing an API. No need for AI science expertise.
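As a rough illustration of that "craft a prompt and call an API" pattern, the sketch below sends a single prompt to a hosted chat model. It assumes an OpenAI-style Python client; the model name and the summarization task are hypothetical examples, not details of Kuzniak's actual stack.

```python
# Minimal sketch of the prompt-plus-API approach (illustrative assumptions throughout).
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

def summarize_feedback(feedback: str) -> str:
    """Summarize a piece of student feedback with a single prompt."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any hosted chat model would do here
        messages=[
            {"role": "system", "content": "Summarize the feedback in two sentences."},
            {"role": "user", "content": feedback},
        ],
    )
    return response.choices[0].message.content

print(summarize_feedback("The exercises were great, but the videos felt rushed."))
```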

Kuzniak mentioned that enhancing the developer experience is as crucial as improving user experience. Their goal is to eliminate any obstacles in the implementation process, ensuring a seamless and efficient development flow. They envisioned the ideal developer experience, focusing on simplicity and effectiveness:

For the AI implementation, we've established key principles around simplicity, immediate accessibility, security and quality, and cost efficiency.

Kuzniak mentioned that their organizational structures are evolving in the face of the changing technology landscape. The traditional cross-functional teams comprising product managers, designers, and developers, while still relevant, may not always be the optimal setup for AI projects, as he explained:

We should consider alternative organizational models. The way information is structured and its subsequent impact on the quality of outcomes, for example, has highlighted the need for potentially new team compositions. For instance, envisioning teams that include AI product managers, content designers, and prompt engineers could become more commonplace.

Kuzniak advised applying the same level of dedication and best practices to improving the internal user experience as you would for your external customers. "Shift towards a mindset where your team members consider their own ideal user experience and actively contribute to creating it," he said. This approach not only elevates efficiency and productivity, but also significantly enhances employee satisfaction and retention, he concluded.

InfoQ interviewed Romain Kuzniak about developing AI applications.

InfoQ: How do your AI applications look?

Romain Kuzniak: Our AI applications are diverse, with a stronger focus on internal use, particularly given our nature as an online school generating substantial content. We prioritize making AI tools easily accessible to the whole company, notably integrating them within familiar platforms like Slack. This approach ensures that our staff can leverage AI seamlessly in their daily tasks.

Additionally, we've developed a prompts catalogue. This initiative encourages our employees to leverage existing work, fostering an environment of collective intelligence and continuous improvement.

Externally, we've extended the benefits of AI to our users through the introduction of a student AI companion, for example. This tool is designed to enhance the learning experience by providing personalized support and guidance, helping students navigate their courses more effectively.

InfoQ: What challenges do you currently face with AI applications and how do you deal with them?

Kuzniak: Among the various challenges we face with AI applications, the most critical is resisting the temptation to implement AI for its own sake, especially when it adds little value to the product. Integrating AI features because they're trendy or technically feasible can divert focus from what truly matters: the value these features bring to our customers. We've all encountered products announcing their new AI capabilities, but how many of these features genuinely enhance user experience or provide substantial value?

Our approach to this challenge is rooted in fundamental product management principles. We continuously ask ourselves what value we aim to deliver to our customers and whether AI is the best means to achieve this goal. If AI can enhance our offerings in meaningful ways, we'll embrace it. However, if a different approach better serves our users' needs, we're equally open to that.

See the rest here:

Enhancing Developer Experience for Creating Artificial Intelligence Applications - InfoQ.com

How Artificial Intelligence Is Making 2000-Year-Old Scrolls Readable Again – Smithsonian Magazine

Emily Lankiewicz / Vesuvius Challenge

When Mount Vesuvius erupted in 79 C.E., it covered the ancient cities of Pompeii and Herculaneum under tons of ash. Millennia later, in the mid-18th century, archaeologists began to unearth the cities, including Herculaneum's famed library, but the scrolls they found were too fragile to be unrolled and read; their contents were thought to be lost forever.

Only now, thanks to the advent of artificial intelligence and machine learning, have scholars of the ancient world partnered with computer programmers to unlock the contents of these priceless documents. In this episode of There's More to That, science journalist and Smithsonian contributor Jo Marchant tells us about the yearslong campaign to read these scrolls. And Youssef Nader, one of the three winners of last year's Vesuvius Challenge to make these clumps of carbonized ash readable, tells us how he and his teammates achieved their historic breakthrough.

A transcript is below. To subscribe to There's More to That, and to listen to past episodes on the complex legacy of Sojourner Truth, how Joan Baez opened the door for Taylor Swift, the problem of old forest roads and more, find us on Apple Podcasts, Spotify or wherever you get your podcasts.

Youssef Nader: My name is Youssef Nader, I am a PhD student at the Free University of Berlin, and today I'm speaking to you from Alexandria.

Chris Klimek: Youssef spends most of his time in Berlin, but we caught him while he was visiting family in Alexandria, Egypt, which is a city with very busy traffic. He said he was five stories up, and it still sounded like he was on the street.

Nader: We arrived in Alexandria somewhere around 2 a.m. in the morning, so I got some sleep and I woke up to have the interview, basically.

Klimek: Youssef grew up in Cairo, so from a young age he was surrounded by ancient history.

Nader: Papyrus was invented by ancient Egyptians almost 5,000 years ago, so learning about papyrus making, and how the ancient Egyptians went around documenting their history, is something you learn about very early on, and something that sticks with you. It's very common to have souvenirs from Egypt, like papyrus with some hieroglyphs and some writings; it's a very common souvenir or gift that we bring people from here, and I brought my friends a couple of times. So yeah, it's sort of a cultural heritage.

Klimek: Today, Youssef is a PhD student who works with machine learning and A.I.

Nader: I do work with image data, but I usually work with 2D images, like photos you take of your dog and stuff like that.

Klimek: One day, Youssef heard about something called the Vesuvius Challenge. It involved some unreadable ancient scrolls and the hope that some A.I. expert might be able to help, with a reward of $700,000.

Nader: It had all of the interesting elements: Papyrus, which rings a bell for an Egyptian, of course; playing around with historical data of 2,000 years ago just on my laptop is not something you come by very often; very interesting technical problem; a big monetary prize. It was just all of the right elements that make it worthwhile.

Klimek: It was a big challenge, but Youssef decided he was up to the task.

Klimek (to Nader): Have you ever seen one of the scrolls in person?

Nader: I have. I recently visited and I got to see scrolls up close. And it's crazy. I could not believe that this is the same thing I'm working on on my computer, because it doesn't look like there is hope. When you look at the scroll up close, it really looks like a piece of charcoal, and the sheets look like they merged together, it's just one, and they're very, very small. One of the scrolls was just my finger tall, so it was really crazy to think that this is what we were working on and reading. It's a little bit of science fiction.

Klimek: From Smithsonian magazine and PRX Productions, this is There's More to That, the show where we may not welcome our robot overlords, but we are willing to let them help us read historically significant ancient papyrus scrolls. In this episode, we learn more about the Vesuvius Challenge, what happened and what A.I. means for the future of archaeology. I'm Chris Klimek.

Klimek: What are the Herculaneum scrolls, and why are they important?

Jo Marchant: They're a collection of carbonized papyrus scrolls from around 2,000 years ago, ancient Roman times, that were buried by the eruption of the Vesuvius volcano. The same one that buried Pompeii.

Klimek: Jo Marchant is a Smithsonian contributor who's covered this story for several years now.

Marchant: Often, the scrolls are described as the only intact library we have that survives from the ancient world. Because they were buried by the volcano, you've got these carbonized scrolls that were kept underground for all that time, so they have survived. But the only problem is you can't unwrap them to read them without destroying them. So, they've been this big archaeological mystery since they were discovered in the 18th century.

Klimek: What do they look like now?

Marchant: Some of them have been pulled apart, and are basically crumbled into dust and they're in hundreds of pieces, but there are a few hundred, the worst, most charred cases, if you like, that were left intact as a lost cause. They've been described as "saggy, brown burritos," which is one of the least rude descriptions that I've heard. They're kind of crumpled, crushed, wrinkled. They look like nothing. They were supposedly thought to have been pieces of coal by the workmen who first uncovered them in the 18th century, so they just really look like very sorry objects indeed. You would not think that you were going to get a lot of information out of them.

Klimek: Do we know how they were recognized when they were found as carbonized scrolls? It sounds like they could have easily been mistaken for something else.

Marchant: Yeah, a lot of them were supposedly just thrown away, or burned, even, for heat by the workmen, these 18th-century workmen who had first uncovered them.

Klimek: What these workmen had discovered was an ancient library buried underground since the Vesuvius eruption in 79 A.D.

Marchant: The library itself was situated in this luxury Roman villa on the shore of the Bay of Naples. It possibly belonged to Julius Caesar's father-in-law at one point, this beautiful villa with walkways, columns, statues, works of art, courtyards, this luxury residence. The workmen are digging tunnels, essentially, through the site, uncovering it, find these lumps, initially just think that they're coal, burn them, throw them away. To be honest with you, I don't know exactly how it was first realized that that was not what these things were, that they were actually incredibly precious. But once that was realized, then there was incredible interest in trying to read them. This was a really unique, spectacular find. We just don't have literary written sources from the classical world. Most of the works of literature or philosophy or whatever it is that we have have been copied and therefore selected through the centuries. But to actually have these original pieces from the time is just really, really incredible. So, there were all sorts of efforts to try and open these scrolls, most of which ended up being very, very destructive.

Klimek: What else has hindered efforts to read the scrolls, aside from the fact that they fall apart if you try to physically unroll them?

Marchant: Yeah, so technically, this is an incredibly difficult challenge. There have been attempts to open them, and essentially you end up with hundreds of pieces or strips, because it's incredibly thin, this papyrus, you might have hundreds of rolls. So, imagine it's tearing off in strips, but then you've got different layers that then stick together. So, each of your strips might consist of a different number of layers, and then you've got to try and piece those together as a jigsaw. So, there has been a lot of work going on among papyrologists to try and decipher, translate, interpret those pieces, sticking the bits back together. But then they were kind of put aside as a lost cause. I think a lot of people thought that those were never going to be read, they were just going to sit there in the library archive.

Klimek: As Jo mentioned, the scrolls were incredibly fragile, but that's really just the beginning of why researchers were so stumped. First, how could they separate all the layers of paper?

Marchant: You've got to find a way of looking inside them, working out where the surfaces of the papyrus are, and then reading the ink. They're so crumpled, and you've got all of these layers, some of them are stuck together, rolled very tightly. How do you even image and find the surfaces?

Klimek: Yeah. Then there was the ink itself.

Marchant: A lot of ink from ancient papyri has got iron in it, so if you X-ray it, that ink will glow very brightly. But the problem with this ink is it's just carbon and water. It has exactly the same density in X-ray scans as the papyrus. So, you can do your X-rays, you can do beautiful 3D scanning, whatever you're going to do. But it's like doing an X-ray of a body: You're looking for the bones, but the bones are completely transparent; the ink doesn't show up.

Klimek: Enter Brent Seales, a professor at the University of Kentucky.

Marchant: He's a computer scientist, so he's not a classicist, quite an unusual person to be spearheading this attempt to read these ancient scrolls. But he was originally interested in computer vision and then got interested in how you could use algorithms to flatten out images. One of the first things that Brent Seales worked on was a very old copy of Beowulf in Old English that was kept in the British Library. Part of the problem when you take photographs of very old manuscripts like that is it's all kind of warped, and sort of folded and cockled. The surface isn't flat, so if you just take a photograph of it, you're not going to be able to see all of the writing. So, the idea was to develop software where you could scan the not-flat three-dimensional surface, and then flatten it out, so that you would have a nice, flat surface where you could read all the writing.

So, then moving from there to actually virtually unwrapping something that was rolled up. And a few years ago, the team did that on an old scroll from Ein Gedi, on the shores of the Dead Sea, that was burned by fire in the sixth century A.D. And they took the CT scans of that and were able to then virtually unwrap that surface, and see that, written inside, was actually some text from the Book of Leviticus. So, that was an incredible advance.

Klimek: Then in 2005, a colleague showed Brent Seales the Herculaneum scrolls.

Marchant: And he told me that that just blew his mind, just the scale of that challenge, and the potential for the information that you could find. But he's quite interesting, in that he isn't so interested specifically in some of these ancient Greek and Roman sources that most papyrologists would be interested in; he's actually a devout Christian, and he is really interested in the origins of Christianity. The volcano erupted in 79 A.D., these scrolls were buried, so this was the time when Christianity was just beginning, and the philosophers in ancient Greece and Rome, in that world, would've been very aware of what was happening, probably interested in this new religion that was starting up. But he told me that what he was really dreaming of, really interested in, is finding out more information about that. Can we find information from early Christian sources?

There are the huge technical challenges, but one of the biggest problems he's actually had is getting access to the scrolls to even study them, and to try to develop these techniques, because they're incredibly precious and incredibly fragile. So, for the curators who are in charge of these collections, the last thing they want to do is give them to some computer scientist who wants to carry them off to a particle accelerator somewhere and send beams of X-rays through them. This is something that's taken nearly 20 years to really come together.

Klimek: I love this. Can I borrow this irreplaceable treasure of yours? I'll bring it right back, I just need to run it through my particle accelerator first.

Marchant: Exactly, exactly.

Klimek: It'll be fine.

Marchant: And I've spoken to curators, and they'd say if you breathe on these things, they will fall apart. They are so fragile. So, it really is a kind of perfect storm of difficulties.

Klimek: Remarkably, the scrolls were eventually taken to a particle accelerator in the U.K. for 3D scanning.

Marchant: You're making a 3D reconstruction of that volume, and then you have to go through, really painstakingly, slice by slice, and kind of mark where all the surfaces are. If you think about looking at one of these scrolls in cross section, you'll see a spiral of where the papyrus is all wound together, and you have to mark where all those surfaces are, and then what Brent Seales and his team did was work on software, on algorithms, that could take that data and then unwrap that spiral into flat surfaces. So, you get a kind of flat image of what that surface looks like in the CT scan, which you can then work on and try and look for the ink.
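To make the geometry concrete, here is a toy sketch of the unrolling step, not the Vesuvius Challenge's actual software: given points traced along the papyrus surface in one CT slice, the cumulative distance along that traced curve becomes the horizontal coordinate of the flattened image. The spiral below is a stand-in for a real traced surface.

```python
# Illustrative only: flatten a traced spiral cross-section by cumulative arc length.
# The Archimedean spiral stands in for the papyrus surface marked in one CT slice.
import numpy as np

theta = np.linspace(0, 6 * np.pi, 2000)
radius = 1.0 + 0.05 * theta
points = np.stack([radius * np.cos(theta), radius * np.sin(theta)], axis=1)

# Distance along the traced surface gives each point's x-position in the flat image.
segment_lengths = np.linalg.norm(np.diff(points, axis=0), axis=1)
unrolled_x = np.concatenate([[0.0], np.cumsum(segment_lengths)])

# In a real pipeline, the CT intensity near each surface point would be sampled into
# the flattened image at (unrolled_x, slice_index); repeating over slices yields the
# "virtually unwrapped" sheet that ink-detection models are run on.
print(f"One slice of the spiral unrolls into a strip {unrolled_x[-1]:.1f} units long")
```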

But as I mentioned, the ink in those images is transparent; you can't see it. So, that then was the next challenge. How are you actually going to make that ink visible? They had one tiny fragment which had one letter on it, a sigma, and they were able to carry that to the Diamond Light Source in Oxfordshire, and the idea was that just using that one letter, they were trying to come up with imaging techniques, and that's where, a few years ago, they had the idea of using machine learning, these artificial intelligence techniques, to try to do that.

If you take some of the papyri that have been opened, some of these fragments, and you train your machine learning algorithm, you show it, "This is what ink looks like, and this is what not ink, just the blank papyrus, looks like." You can teach it to be able to tell the difference, so then you can run that same algorithm on your CT scans from inside the wrapped-up scroll. That was the approach, but they realized that this was going to be an incredible amount of labor-intensive work. And I think that's the point at which Nat Friedman, the Silicon Valley entrepreneur, got involved; he had heard about the Herculaneum scrolls and contacted Brent Seales to say, "Right, what's happening with this? Is there anything that I can do to help?" And that was the origin of this Vesuvius Challenge competition.

Klimek: Nat Friedman is the former CEO of GitHub, an online platform where computer programmers collaborate.

Marchant: And this whole project, actually, I find fascinating, because of the different worlds that come together. You've got the computer scientists. You've got these classicists and papyrologists who have their own culture and world. You've got the curators; they're just really wanting to keep everything safe, they're conservators. So, very different motives, very different cultures that these people are coming from. If you think of papyrologists, often it will take them years, decades to do a translation and edit an edition of a particular source. They're so painstaking, they're working character by character, just trying to work everything out. And then you've got the Silicon Valley entrepreneurs coming in, going, "Speed is everything! We are going to solve this now!" And you throw those two worlds together; I find it completely fascinating how, in this case, that's actually worked really well. It's really triggered a lot of progress and creativity.

Klimek: So, how does all of this bring us, in 2023, to the Vesuvius Challenge?

Marchant: Nat Friedman told me that during the pandemic, during lockdown, he's looking for things to do, like we all were, looking for distractions. Starts reading about ancient Rome, getting very interested in that whole world, finds out about the Herculaneum scrolls through just Googling, Wikipedia, all of this. Eventually comes across an online talk by Brent Seales talking about all of the work, and this problem with not being able to see the ink, and how he thinks that machine learning, artificial intelligence, might be the answer to that. And Nat said, from this talk, it sounded like Brent was pretty much there. He was going to solve it pretty soon, so he just thinks, "Oh, I look forward to finding out what happens with that." Then, a couple of years later, it was like, "Oh, they don't seem to have read the scrolls yet."

So, he got in touch with Brent Seales to invite him to a retreat where a lot of tech figures, funders, that sort of whole community get together. Seales initially just ignored the email, just didn't really believe who it was from. So, it took a bit of chasing, but he eventually realized that yes, this was Nat Friedman who was trying to get in touch with him. He went along to this retreat. It's a camp-out in the woods in Northern California, where they all sit around fires and discuss projects, and, I don't know, important decisions in the tech world get made. But nobody was actually interested in funding this project.

So, Nat Friedman, afterward, is thinking, "I don't want this guy to go home with nothing, after I promised him that we'd be able to do something to help his project." Basically, he said, "Why don't we do it as a competition?" He and his longtime funding partner, Daniel Gross, put forward initial funding for the competition, and the idea was that you make all of your data open source to the public, just put it out there, and then you set goals for people who can make different advances toward reading the scrolls. So, things like first person to detect ink, first person to detect a word, first person to read a whole passage. You set all of these different minds onto the challenge at once.

And the actual design of the competition is really interesting and really clever, I think, because rather than just having one prize and everyone working alone, you've got these progress prizes, and every time somebody wins a progress prize, all of their work, all of their data, all of their algorithms get made public. So, the way that Brent put it to me is you level everybody up, then, so everybody has the advantage of that, and then they all start working on the next challenge.

I asked Brent Seales, actually, was that difficult? If you've worked on a project for nearly 20 years, and your dream was that you were going to be the person to read the scrolls, is that a hard decision to make, then, to say, "Actually, it's not going to be me. I'm going to do this prize. I'm going to make everything I've done so far, everything I've worked for, all of our software, all of our data, let's just make it public, put it out there, and then someone else can come and do that last step, and they will be the person to read it." Can you imagine? How hard. And he said yeah, it was really difficult. The whole team had to talk about that together, and make sure that they were all OK with that.

Seales also said something else to me: He said often with archaeology, and I've come across this with other stories I've written, actually, that somebody decides that they're going to be the one to solve a mystery or whatever it is, make a discovery, and it's almost like the ego takes over, it's theirs, and they're going to be the one to have all the glory. And he said this was almost a way to prove to himself that he wasn't that person. That he's doing it so that the scrolls can be read.

They put everything out there, made it public, launched the award toward the beginning of 2023, and it all went from there. I think they had more than a thousand teams, in the end, from all over the world, like China, Ukraine, Australia, the U.S., Egypt, and they were all on this Discord, this chat platform for gamers, discussing the latest advances and questions, because the organizers were just releasing little flat images of the surfaces inside these scrolls, a little piece at a time. And then what the entrants for the Vesuvius Challenge were doing was taking those segments, those flat segments, and using them to train their machine learning models to try and recognize that ink.

Klimek: Were there any unsuccessful avenues that were part of this that were included in your reporting? Any attempts that didnt pan out?

Marchant: I think there were lots of teams trying different things, trying to train their algorithms in different ways. So, one thing that Seales thought they might be able to do was to train the algorithm on the letters from the parts of the scrolls that have been read, but that ended up really not working very well. It seems that you have to train your algorithm on scans of the same scroll that you're trying to read, which is obviously very difficult, because you can't see the ink. How are you going to do that?

One of the first real key breakthroughs came from an ex-physicist called Casey Handmer. He was actually looking at the images that were coming out from inside this scroll visually, and just spending hours and hours poring over them. He was convinced that if a machine learning algorithm could see a difference, a human should be able to as well, since a lot of those algorithms are trained based on the human visual system. So, he was thinking, "If a machine can see it, it must be possible for a human to see it, if we just look carefully enough." So, he's poring over these images and eventually notices this very strange, very subtle difference in texture.

So, normally in the CT scans, you can sort of see the woven strands of the papyrus, and then in some places there was this... It's described as being like cracked mud on a riverbed, those geometric kinds of cracks you get. So, they called it crackle. He was trying to look at this, trying to work out where it was, and then realized in one place, it seemed as if it was forming the shape of a letter. So, he was like, "Oh my goodness, this is the ink." This is not showing up as a different color, it's not glowing bright or anything, but there's just this very, very subtle difference in the texture of the surface where the ink is sitting on the papyrus. And he was awarded the First Ink Prize for doing that. So, then other competitors were able to use that to train their algorithms. Now they've got a foothold, they've got something to start training their algorithms on, the difference between ink and not ink.

Klimek: After that, the race was on. Who would find the first word to read from the Herculaneum scrolls?

Klimek (to Nader): Can you give us a simple definition of what machine learning is?

Nader: Machine learning is about how to teach a statistical model to map your input data to some output result that you want. For the Vesuvius Challenge especially, we wanted to teach the A.I. model what ink looks like.

Klimek: Nader again.

Nader: So, you give the A.I. model some small images, some patches of the image, because the segments are really huge, it's like hundreds of thousands of pixels by hundreds of thousands of pixels. It's crazy resolution. So, you take a small piece, you show it to the A.I. model, and the A.I. model needs to say, "I see ink in this small piece," or not. And to train this, you need some examples to show it to begin with. So, we tell it, "OK, this is what ink looks like. This is what ink doesn't look like." And you show it these examples, and then it's able to learn, OK, how do I differentiate between the two? And then it notices, OK, there's this pattern on top of the papyrus that looks quite like cracks, that maybe it can use to detect the signal.

And of course there were very interesting problems, because to begin with, we can't see the ink ourselves, so we didn't have the data to show the A.I. model and say, "OK, this is what ink looks like." And it took a lot of experiments and a lot of ways to find a first footing of ink from small pieces that fell off the scrolls: the first two letters. How do we go from two letters to 2,000 letters? You train an A.I. model to learn these two letters that you found, and it has a slightly better idea of what letters look like, so it finds another ten letters. You take those 12 letters now, and you train a new one with the 12 letters. The new A.I. is better, so it finds maybe 20-something letters.
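As a rough illustration of the patch-level "ink or no ink" setup Nader describes, here is a toy PyTorch sketch built on a tiny convolutional network and fake data; the architecture, patch size, and labels are assumptions for illustration, not the winning model.

```python
# Toy sketch of a patch-level ink detector (illustrative, not the actual winning model).
import torch
import torch.nn as nn

class InkPatchClassifier(nn.Module):
    """Predict whether a small patch of the flattened CT surface contains ink."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)  # one logit per patch: ink vs. no ink

    def forward(self, patches: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(patches).flatten(1))

# Fake batch: eight single-channel 64x64 patches with hand-made ink/no-ink labels.
patches = torch.randn(8, 1, 64, 64)
labels = torch.randint(0, 2, (8, 1)).float()

model = InkPatchClassifier()
loss = nn.BCEWithLogitsLoss()(model(patches), labels)
loss.backward()
print(f"toy training loss: {loss.item():.3f}")
```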

And the beginning was incremental. I would usually just take the predictions from an A.I. model, like, "OK, these are letters." I would paint over them in Photoshop to make some examples of what ink is, so just like a black-and-white image, and I would give it to the next A.I. model. Of course, my drawing is not very accurate, and it was a question of how do you allow the A.I. model to disagree when you have some mislabeled stuff? How do you guarantee that the A.I. model is not hallucinating, not making up letters? And we had to operate on a very, very small scale, such that a letter is never seen by the A.I. model. It only predicts at the pixel level: ink, no ink, ink, no ink. And then we, as humans, when we look at the big picture, we see, OK, yeah, this is actually Greek, this is what it means.
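The loop Nader describes, train on the few labels you trust, predict, keep only confident predictions as new labels, and retrain, is a form of pseudo-labeling. Below is a self-contained toy sketch of that idea using scikit-learn and random stand-in data; the classifier, features, and confidence thresholds are illustrative assumptions, not the team's actual pipeline.

```python
# Toy pseudo-labeling loop: grow the labeled set only with high-confidence predictions,
# one way to limit label noise and avoid "hallucinated" letters creeping in.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-in data: each row is a flattened patch; labels are ink (1) / no ink (0).
X_labeled = rng.normal(size=(20, 50))
y_labeled = rng.integers(0, 2, size=20)
X_unlabeled = rng.normal(size=(500, 50))

for round_num in range(3):
    model = LogisticRegression(max_iter=1000).fit(X_labeled, y_labeled)
    probs = model.predict_proba(X_unlabeled)[:, 1]

    # Adopt only predictions the model is very sure about as new training labels.
    confident = (probs > 0.95) | (probs < 0.05)
    X_labeled = np.vstack([X_labeled, X_unlabeled[confident]])
    y_labeled = np.concatenate([y_labeled, (probs[confident] > 0.5).astype(int)])
    X_unlabeled = X_unlabeled[~confident]
    print(f"round {round_num}: adopted {int(confident.sum())} new pseudo-labels")
```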

Klimek: This is how you can have confidence in one set of findings before you move on to the next set. You're verifying the machine learning conclusions with human eyes before you feed those discoveries back into the A.I. to try to solve the next set.

Nader: Yeah. So, in the training phase, I was verifying this with my own eyes, and I'm no expert in Greek, I actually don't know any Greek. So, I was just looking at what makes sense as writing, like any kind of written language. You have some ink deposits, and you draw a letter in some shape. It makes sense that the letters are all on a single row; it doesn't make sense that there are scrambled rows; fixed-size columns, stuff like that. I'd go to sleep thinking about the Vesuvius Challenge. I wake up, check some stuff, continue working, eat, sleep, then repeat. I wasn't even getting proper sleep, because I'm going to bed thinking, OK, did I actually try that thing? Maybe I have a different idea, maybe I should do this. And I run something overnight, and I check in the morning if it worked or not. So yeah, we were grateful that the first word we found was not something like "and" or "the," for example. That would've been underwhelming. It had some meaning, it had some kind of zest to it, and I think that was really cool.

Klimek: Youssef was one of two people to find that first word. It was, drum roll, please: "purple."

Marchant: So, that was the first word, purple. Which is lovely, I love that it was just such a rich, evocative word.

Klimek: Marchant.

Marchant: So, immediately that said to the papyrologists, "We think this is a new work we've never seen before." Because purple is quite a rare word. Purple, porphyras, is the name of a dye. It was made from sea snails, so very expensive, difficult to make, and was used to dye the emperor's robes. This was a sign of wealth, luxury, rank. It's just this lovely sort of... Yeah, just an evocative word. So, that was the First Letters Prize, awarded in October to Luke Farritor, who got first place for that prize.

Klimek: Luke Farritor was a 21-year-old computer science student at the University of Nebraska. Youssef won second place. The two reached out to one another after the announcement, eventually deciding to team up. They were joined by a third student named Julian Schilliger. Together, the three set their sights on the next phase of the competition.

Marchant: When the whole challenge was set up in March 2023, they had this big $700,000 Grand Prize for reading the first passages from the scroll. And a deadline was set for that prize, which was the 31st of December 2023, the end of that year. Nat Friedman said it was getting nearer and nearer to the end of the year, and they weren't getting any entries for this Grand Prize. They were getting pretty worried. They were starting to send out messages going, "So, how's everyone getting on? Let us know your progress!"

Klimek: Entrants to the Vesuvius Challenge worked right down to the wire. Youssef and his teammates were no exception.

Klimek (to Nader): What were your last few days like, prior to the deadline?

Nader: They were quite sleepless. I was trying to make sure that I'm not submitting on the last day, which I usually do in every other thing. I knew that a lot of people would be submitting on the very last day or at the very last minute. I was also not sure about... There was a time factor. If you get to the threshold of winning first, you win. I was not sure: Where are we on that? Do we have the best models? Where are we? You don't know about other teams. And so you also want to guarantee that you're first, in case there's a tie. So, there was the time factor and the quality factor, and you're trying to decide, OK, do I submit now? Do I try to make it better over the next week? Is it getting better? It's not getting better. And I made one submission on the 22nd of December, and one on the 30th of December, so, one day before the end of the competition.

I was just planning to go back to Egypt to visit my family after the long haul of the Vesuvius Challenge. It was the day after I arrived in Egypt. They sent us an email saying, "Hey, the evaluation process is still ongoing, we'd like to meet with you guys." Of course, we're in different time zones, and they wanted to make sure we were all in one meeting when they told us the news. So, we didn't know that we were getting the announcement, and we were suspicious. OK, why do you need all three of us in a meeting? We were like, "We can answer the questions over email." Julian was saying, "Yeah, it doesn't make sense."

We went to the meeting, and then they were asking us normal questions, and we were like, OK, yeah, maybe it's still ongoing. And then Nat was like, "How would you guys feel if we told you that you won the Vesuvius Grand Prize?" And it was like, "What?" And I think it took us a couple of days for it to sink in, actually, that we actually won. And we were in disbelief, but we were ecstatic, and it just felt amazing.

Marchant: The three of them working together, they'd actually read, I think it was more than 2,000 characters from this scroll, more than 5 percent of the entire scroll. And these are really big, long, long scrolls. And it was discovered that it was a work of philosophy by an ancient Greek philosopher called Philodemus. And that in itself was not a huge surprise, because of the scrolls that there had been attempts to open and partially read, a lot of those were written in Greek and were philosophy works by Philodemus. He was a follower of Epicurus, who founded the Epicurean school of Greek philosophy. They thought everything in nature was made of atoms that swerve and collide. And there are so many works of Epicurean philosophy that they think that part of the library was probably the working library of this philosopher, Philodemus.

And it seems to be a work on pleasure, and the senses, and on what gives us pleasure, possibly relating to music. It's mentioning the color purple, it's mentioning the taste of capers. There's a character called Xenophantus who is mentioned; there's a known Xenophantus who was a very famous flute player, whose playing was apparently so evocative and stirred the heart so much that it always caused Alexander the Great to immediately reach for his weapons. So, you get a sense of all these lovely sensory sources of pleasure that are being mentioned in this piece. So yeah, papyrologists are really, really excited about that. But then also what this means for what else we could now be reading.

Klimek: I asked Youssef what other archaeological problems he'd like to see machine learning tackle.

Nader: I think there are very interesting projects for machine learning in archaeology, even outside of reading a scroll. I think there have been discussions of using similar techniques to read writings on the wrappings of mummies. I know of one other project in our university that has to do with using 3D reconstruction and imaging for archaeological sites, using drones to scan the sites and figure out structures and stuff. There are some interesting problems that are either really hard to solve or require a lot of manual effort, and A.I. could really help us speed things up.

Klimek: Do you think most people who don't have your specialized background and education, do people understand generally what artificial intelligence is?

Nader: Artificial intelligence has been getting a lot of bad reputation recently, also because of how it has been used. I think sometimes people think it's a lot smarter than it actually is, and some people think it's a lot dumber than it may be. I believe it's a very interesting tool; it depends really on how you use it. A lot of the fear and concern about A.I. comes from not treating it as a tool, but as an entity of its own that wants to do either good or bad. But the good or bad is basically coming from the human operating the tool. I think there's a lot of debate coming from the world-leading experts in A.I. about what the risks actually are, and how to interpret what we are doing. So, it's still kind of an ongoing process, but there is some awareness of, OK, there is this new technology that is shaping the world.

And I'm glad that the Vesuvius Challenge came at this time, because it also shows, yeah, you can do harm with A.I., but you can also do so much good, and bring so much benefit to mankind. So, some people are starting to think, "Yeah, maybe this is not really as bad as we thought." Or, "We could really use this for our own good."

Klimek: Thank you, Youssef, this has been fascinating.

Nader: Yeah, thank you, Chris.

Klimek: To read more of Smithsonian magazine's coverage of the Vesuvius Challenge, check out the links in our show notes. And as always, we'd like to send you off with a dinner party fact. This time, we bring you a brief anecdote about another fragile thing that lives buried, not under ash, but under ice.

Megan Gambino: Hi, I'm Megan Gambino, and I'm a senior web editor at Smithsonian magazine. I recently edited a story about ice worms. I had no idea what these things were until this story, and they're tiny, about inch-long worms that live in glacial ice. They're actually the only macroscopic animals that live in glaciers. But what I found interesting about them is that they're both hardy and fragile at the same time. And what I mean by this is they can live for years without food, and they live at freezing temperatures, and yet they can only survive in this tiny temperature range, hovering right around 32 degrees Fahrenheit. Any colder, they get hypothermia; any warmer, they reach room temperature and their membranes melt. So, I found that they were this interesting critter that was both tough and delicate at the same time.

Klimek: There's More to That is a production of Smithsonian magazine and PRX Productions. From the magazine, our team is me, Debra Rosenberg and Brian Wolly. From PRX, our team is Jessica Miller, Genevieve Sponsler, Adriana Rozas Rivera, Ry Dorsey and Edwin Ochoa. The executive producer of PRX Productions is Jocelyn Gonzales. Our episode artwork is by Emily Lankiewicz. Fact-checking by Stephanie Abramson. Our music is from APM Music.

I'm Chris Klimek. Thanks for listening.


Continued here:

How Artificial Intelligence Is Making 2000-Year-Old Scrolls Readable Again - Smithsonian Magazine

Presenting the First-Ever AI 75: Meet the Most Innovative Leaders in Artificial Intelligence in Dallas-Fort Worth – dallasinnovates.com

One honoree has implemented generative artificial intelligence to help airline customers book flights. Others are using AI to create new cancer drugs, or to advance healthcare for underserved populations. Still others are developing the technology to manage traffic networks, detect and respond to cyberthreats, and accelerate real-world AI learning by up to 1,000 times.

All these breakthroughs are happening in Dallas-Fort Worth, which is uniquely positioned as a burgeoning hub for applied AI and advanced AI research. This is all-important, because the AI revolution is reshaping the global economy and, according to a Deloitte survey of top executives, will be the key to business success over the next five years.

That's why Dallas Innovates, in partnership with the Dallas Regional Chamber (DRC) and Dallas AI, has compiled the following, first-ever AI 75 list of the most innovative people in artificial intelligence in DFW. Consisting of academics, entrepreneurs, researchers, consultants, investors, lawmakers, thought leaders, and corporate leaders of every stripe, our 2024 AI 75 spotlights the visionaries, creators, and influencers making waves in AI in seven categories.

Online nominations for the inaugural list were opened in February, focusing on North Texans making significant contributions to AI, whether through innovative research, catalytic efforts, or transformative solutions. Nominees were reviewed for demonstrated excellence in key criteria, including recent AI innovations, adoption impacts, industry technological advancement, thought leadership, future potential, and contributions to society.

The editors of Dallas Innovates, including Co-Founder and Editor Quincy Preston, led the nomination review and honoree selection process. Aamer Charania and Babar Bhatti of Dallas AI and the DRC's Duane Dankesreiter provided strategic guidance and input on the editors' selections across award categories.

The inaugural class of AI honorees is set to be announced live on Thursday, May 2, at the DRCs Convergence AI event at the Irving Convention Center. The AI 75 is supported in part by the City of Plano, the University of Texas at Dallas, and Amazech.

Because this is the first year for Dallas Innovates' AI 75, we know there must be other AI leaders you need to know about who are deserving of future consideration. We invite and welcome your feedback on the 2024 program, as well as your suggestions for next year's list.

RENOWNED IN REALTY Naveena Allampalli Senior Director AI/Gen AI Solutions and Chief AI Architect, CBRE

Allampalli is a leader in the fields of AI, machine learning, and cloud solutions at Dallas-based CBRE, a commercial real estate services and investment company. A frequent conference speaker on AI and AI applications, she has advised AI startup companies, served as a mentor for data scientists, and was recognized as a woman leader in AI and emerging technologies by Fairygodboss, a career community for women. Allampalli, who previously was director of AI/ML and financial analytics for IT services and consulting company Capgemini, holds a master's degree in computers and mathematics focusing on computer science and artificial intelligence.

RETAILING REVOLUTIONARY Sumit Anand Chief Strategy and Information Officer, At Home

Anand is part of the executive team and is responsible for leading the technology and cybersecurity capabilities, among other things, for At Home, a Dallas-based chain of home décor stores. There, he has partnered with Tata Consulting Services to leverage Tata's patented Machine First Delivery Model to create standardized processes and, eventually, run At Home's infrastructure operation with AIOps. Up next: leveraging AI with AR and VR to help merchants visualize product assortments. Says Tata's Abhinav Goyal: "Sumit is a strategic thinker and sets the vision for the organization." In 2023, Anand was named to the Forbes CIO Next List of 50 top technology leaders who are transforming and accelerating the growth of their businesses.

TEAM TRANSFORMER Jorge Corral South Market-AI Lead, Accenture

Corral leads the large Data and AI Transformation team for Accenture's South Market unit. That unit helps Fortune 500 companies digitize their enterprises in a number of areas, including productivity, supply chains, and growth outcomes. To hasten the effort, Accenture said in 2023 that it would invest $3 billion globally into generative AI technology over three years. Recently, Corral spoke on "Bringing Business Reinvention to Life in the Era of Gen AI" at HITEC Live!, a gathering of the Hispanic Technology Executive Council. He's also among the expert speakers appearing at Convergence AI, the Dallas-Fort Worth region's first-ever conference dedicated to enterprise AI.

INDUSTRY INFLUENCER Robert Hinojosa Sr. Director, AI/ML, Staples

Before becoming senior director, AI/ML at Staples, Hinojosa worked at Ashley Furniture Industries. As the chief AI officer and vice president of IT at Ashley, he led the manufacturer/retailer's AI transformation across the company, overseeing its data science function, its enterprise innovation lab, and the AI upskilling of its workforce. Before Ashley, he was CTO at Irving-based Cottonwood Financial and a software engineering leader at Fort Worth-based Pier 1 Imports. He currently serves as an industry advisor on various academic boards, including at Texas Christian University and The University of Texas at Arlington.

AUTONOMY ACE Chithrai Mani CEO, Digit7

Under the leadership of Mani, who has an extensive background in AI and digital transformation, Digit7 has become a trailblazer in the field of self-checkout and autonomous stores. Leveraging his experience in artificial intelligence and machine learning, he has made Digit7's cutting-edge systems, like DigitKart and DigitMart, pivotal in shaping the future of the global retail and hospitality industries. Before becoming CEO of Richardson-based Digit7, Mani served as the chief technology and innovation officer at InfoVision Inc., where he helped drive innovation and digital transformation for Fortune 500 companies. The much-sought-after tech influencer is a frequent keynote speaker on topics related to AI and ML, and an emerging-tech evangelist.

ENGINEERING ORIGINATOR Shannon Miller EVP and President of Divergent Solutions, Jacobs

Miller, a 26-year veteran of Dallas-based Jacobs, is the point person for a collaboration between the company and Palantir Technologies to harness the power of artificial intelligence in critical infrastructure, advanced facilities, and supply chain management applications. Last year, for example, Miller explained in a YouTube video how Jacobs was harnessing the power of Palantir's comprehensive AI solution in wastewater treatment to optimize decision-making and long-term planning. As president of Divergent Solutions, Miller is responsible for delivering next-generation cloud, cyber, data, and digital solutions for the company's customers and partners globally. She has a bachelor of science degree in chemical engineering and petroleum refining from the Colorado School of Mines.

CASHIER-LESS CREATIVE Shahmeer Mirza Senior Director of Data, AI/ML and R&D, 7-Eleven

A public speaker and inventor with a robust patent portfolio, Mirza is responsible for data engineering, artificial intelligence and machine learning, and innovation at Irving-based 7-Eleven. Earlier at the company, he led an interdisciplinary team that delivered a fully functional, AI-powered solution from prototype to full scale in less than a year. The so-called Checkout-Free tech solution tracks what customers take and automatically charges them, making for a frictionless shopping experience in a cashier-less store. Before joining 7-Eleven, Mirza was a senior R&D engineer with Plano-based PepsiCo, where he piloted projects to demonstrate the long-term impact of AI/Machine Learning in R&D.

TWINS POWER Timo Nentwich Executive Vice President and CFO, Siemens Digital Industries Software

Nentwich has made a significant impact on AI through his role as Siemens EVP and head of finance. His recent projects have centered around development of the Siemens Xcelerator portfolio, a key part of Siemens' transformation into a Software as a Service company. The portfolio is designed to help engineering teams create and leverage digital twins, harnessing the potential of advanced AI-driven predictive modeling. A partnership between Plano-based Siemens and NVIDIA is intended to take the industrial metaverse to the next level, enabling companies to create digital twins that connect software-defined AI systems from edge to cloud. Nentwich, a native of Germany, holds an MBA from Great Britain's Open University Business School.

ENGINEERING EMINENCE Justin J. Nguyen Head of CS Data Engineering and Analytics, Chewy

Nguyen is an accomplished leader with a strong background in AI, analytics, and data engineering. As head of data and analytics at Chewy, he has improved the company's operational efficiencies using AI and designed anti-fraud algorithms. He has demonstrated his thought leadership in the field with articles in multiple publications, peer-reviewed research papers, symposiums, and podcasts, including hosting Chewy's AI in Action podcast. In 2022, he was recognized in CDO Magazine's 40 Under Forty Data Leaders. With undergraduate and graduate degrees from the Georgia Institute of Technology, Nguyen previously was a senior director and head of enterprise data and AI at Irving-based 7-Eleven.

TECH TSAR Joe Park Chief Digital and Technology Officer, Yum! Brands

Park has been a leader in the integration and advancement of an AI-first mentality within Yum! Brands, which owns several quick-service restaurant chains including Plano-based Pizza Hut. He's helped develop and deploy innovative technologies aimed at enhancing kitchen operations, improving the tech infrastructure, and bolstering digital sales growth. For example, Park's team oversaw the rollout of an AI-based platform for optimizing and managing the entire food preparation process, from order through delivery. Yum! has doubled its digital sales since 2019 to about 45% of total sales, thanks in part to his AI initiatives. Park, who joined Yum! in 2020 as its first chief innovation officer, previously was a VP at Walmart.

DIGITAL DOER Joshua Ridley Co-Founder and CEO, Willow

Ridley, a serial entrepreneur, leads Willow, a global technology company whose tagline is "Digital twins for the built world." The company's AI-driven, digital-twin software platform analyzes and manages data to power smart buildings at scale. Launched in Australia in 2017, Willow relocated to Dallas in 2022 and has partners and customers including Johnson Controls, Walmart, Microsoft, and Dallas-Fort Worth International Airport. The company's collaboration with D-FW Airport, which includes creating a digital twin for the maintenance and operation of assets including Terminal D, was called "a game-changer for our industry" by an airport official. Ridley previously founded a pioneering Australian digital design/construction firm and a company that leveraged the internet to deliver building services.

CYBER SOVEREIGN Shri Prakash Singh Head of Data Science & Analytics, McAfee

Singh is a prominent thought leader in AI, particularly in the context of cybersecurity. His position at Dallas-based McAfee has him playing an increasingly significant role in detecting and responding to cyber threats, which are constantly evolving and growing in sophistication. Singh has shared his expertise in AI and data science at a number of public forums, including at last year's Dallas CDAO Executive Summit. At a private, executive boardroom session there, he discussed opportunities for data-driven innovation, among other things. In 2023, AIM Research named Singh one of the country's 100 Most Influential AI Leaders.

BANKING BRAIN Subhashini Tripuraneni Managing Director, JPMorgan Chase

As a managing director, Tripuraneni serves as JPMorgan Chase & Co.'s global head of people analytics and AI/ML. In that role, she leads machine learning initiatives and applies artificial intelligence to enhance the giant financial institution's critical business processes. Previously the head of AI for Dallas-based 7-Eleven, Tripuraneni was recognized as one of the top women aiding AI advancement in Texas in 2020. She has spoken widely about the use of AI in retailing and banking and co-authored Hands-On Artificial Intelligence on Amazon Web Services, a book aimed at data scientists and machine learning engineers.

DATA DOYEN Vincent Yates Partner and Chief Data Scientist, Credera

Yates serves as the chief data scientist and a partner at Credera, an Addison-based company that helps transform enterprises through data, from strategy to implementation. One of Credera's analytics platforms, for example, leverages generative AI to provide marketers with insights and personalized consumer experiences. Previously, he held leadership roles at companies including GE Digital, Zillow Group, Uber, and Microsoft. Yates is a member of the Global AI Council, where he's contributed to developing a framework to assess AI readiness. He has spoken widely about the economic impacts of GenAI, especially in customer operations, marketing, R&D, and software engineering, and has addressed the challenges executives face in aligning AI with their business objectives.

AUTHENTICATION ILLUMINATOR Milind Borkar Founder and CEO, Illuma Labs

Borkar is founder and CEO at Illuma, a fintech serving credit unions and other financial institutions with voice authentication and fraud prevention solutions. The Plano-based software company's AI, machine learning, and voice authentication technologies were derived from its R&D contracts with the U.S. Department of Homeland Security. Illuma says its flagship product, Illuma Shield, utilizes AI and advanced signal processing, among other things, to achieve much faster and more accurate authentication compared to traditional methods. Borkar, who previously worked at Texas Instruments, graduated from the Georgia Institute of Technology with a Ph.D. and master's degree in electrical and computer engineering.

AGI PATHFINDER John Carmack Founder, Keen Technologies

The legendary game developer and VR visionary shifted gears in 2022 by founding Dallas-based Keen, intent on independently pursuing his next grand quest: the achievement of artificial general intelligence. Last fall, Carmack announced a new partnership for his pioneering, out-of-the-mainstream effort with Richard Sutton, chief scientific advisor at the Alberta Machine Intelligence Institute. Now the two are focused on developing an AGI prototype by 2030, including establishing and advancing AGI "signs of life." "It's likely that the important things that we don't know are relatively simple," Carmack has said. In 2023, he was a keynote speaker at the Future of AI Summit hosted by the Capital Factory Texas Fund at the George W. Bush Presidential Library in Dallas.

REAL-WORLD AI REVOLUTIONIST Dave Copps Co-Founder and CEO, Worlds

Serial entrepreneur Copps, who's been building artificial intelligence in North Texas for more than 15 years, is one of the region's most accomplished and respected AI pioneers. With a string of successful startups behind him, including Brainspace, Copps' latest venture is a leader in the field of real-world AI. Dallas-based Worlds, co-founded with President Chris Rohde and CTO Ross Bates, recently launched its latest groundbreaking platform, WorldsNQ, creating the Large World Model (LWM) concept in AI. The technology, ushered to completion by Bates, leverages existing cameras and IoT sensors to improve and measure physical operations through LWMs and radically accelerates AI learning, by 100 to 1,000 times, without needing human annotation. This enables systems to continually learn and adapt from their surroundings autonomously. Copps, a University of North Texas grad who hosts the Worlds of Possibility podcast and speaks about AI at conferences worldwide, received EY's regional Entrepreneur Of The Year award in 2022.

SUPPLY-CHAIN SAGE Skip Howard Founder, Spacee

Howard, the mastermind behind Dallas-based Spacee, is blazing a trail in the retail and hospitality industries with AI solutions for real-world challenges. By leveraging computer vision AI, robotics, and spatial augmented reality, Spacee is transforming how businesses operate and engage with customers. Its Deming shelf-mounted robot tracks inventory in real time, while the HoverTouch platform turns any surface into an interactive touchscreen. Howard's vision extends beyond Spacee, as he helps nurture other AI-oriented tech ventures seeking a community of like-minded companies. A sought-after speaker and key contributor to The University of Texas at Dallas' Center for Retail Innovation and Strategy Excellence (RISE), Howard also bridges the gap between academia and the STEM side of retail. In 2019, his industry expertise earned him recognition as a finalist for EY's regional Entrepreneur Of The Year award.

RETAILING WUNDERKIND Ravi Shankar Kumar Co-Founder and CTO, Theatro

Kumar has pioneered multiple AI and generative AI technologies at Theatro, a Richardson-based software company offering a mobile app platform for hourly retail workers. As CTO and co-founder, he was instrumental in developing an award-winning application for Tractor Supply called Hey Gura, for example, enabling store associates to seamlessly access detailed information about products, processes, and policies. He also has led the development of prototypes and new projects that utilize GenAI to initiate diverse digital workflows, and worked to establish initiatives ensuring that Theatro's AI applications are ethical and unbiased. Kumar has more than 40 patents in analytics, voice technology, and AI, and Theatro has been ranked No. 1 in technology innovation for four straight years by RIS News.

REWILDING RINGLEADER Ben Lamm Co-Founder and CEO, Colossal

Lamm's breakthrough company, Dallas-based Colossal, is putting AI on the map for genetics. The serial entrepreneur co-founded the VC-backed company to focus on genetic engineering, reproductive technology, and AI solutions in support of renowned geneticist George Church's de-extinction efforts. Recently Colossal has been leveraging synthetic biology as well as software and hardware to bring back the woolly mammoth. As a prelude to that so-called rewilding effort, the company has partnered with an elephant orphanage in Africa, deploying AI to study elephant herd dynamics. Lamm has appeared as a thought leader on innovation and technology in publications such as The Wall Street Journal and The New York Times. He previously founded multiple successful tech companies including AI startup Hypergiant.

VENTURE VISIONARY Richard Margolin Associated Partner, Highwire Ventures

A serial entrepreneur and researcher, Margolin is an associated partner at Highwire Ventures, a strategy-led, Dallas-based consulting firm where he builds AI tools for evaluating investment deals. He also co-founded and, until last November, was CEO of Dallas-based RoboKind, an EdTech company that designs and builds facially expressive robots that facilitate learning for STEM education and individuals with autism. Margolin is a Forbes Technology Council member, a 2017 Tech Titan, and a 2019 winner of EY's regional Entrepreneur Of The Year award. More recently, he was a presenter at the Global AI Summit 2022 in Riyadh, Saudi Arabia.

ONCOLOGY UPSTART Panna Sharma President, CEO, and Director, Lantern Pharma

As president, chief executive, and director of Dallas-based Lantern Pharma, Sharma is the driving force behind Lantern's use of AI in oncology drug discovery and development. The company's proprietary AI and machine learning platform, called RADR, leverages billions of data points and more than 200 advanced ML algorithms to personalize and develop targeted cancer therapies. Under Sharma's leadership, Lantern has made strides in using AI to reduce the time and cost associated with oncology drug development. Using AI, he told a reporter, the cost of drug development could be slashed from the typical $2 billion or more to less than $200 million. Before joining Lantern, Sharma was president and CEO of Cancer Genetics, where he helped expand the company's global footprint.

MULTIBILLION-DOLLAR MAN Sanjiv S. Sidhu Co-Founder and Chairman, o9 Solutions

Pioneering technologist and thought leader Sidhu continues to shape the future of AI applications after creating two multibillion-dollar companies in Dallas: i2 Technologies and o9 Solutions (the former with co-founder Ken Sharma, the latter with Chakri Gottemukkala). Supply chain software company i2 came out of Sidhu's foundational work in computerized simulations for manufacturing processes at Texas Instruments' AI lab. o9 (the name stands for "optimization to the highest number," he says) has developed a dynamic business-planning platform that enables AI-driven decision-making. Sidhu has shared his insights on the role of AI in transforming business operations in podcasts, feature interviews, and appearances at prominent industry events.

PILOTLESS PARAGON Patrick Strittmatter Director of Engineering, Shield AI

Strittmatter, who's director of engineering at Shield AI, has years of experience in engineering and product design. Shield is currently building Hivemind, an AI pilot that it says will enable swarms of drones and aircraft to operate autonomously without GPS, communications, or a human pilot. Strittmatter's previous employers include Amazon Lab126, where he led product design teams for Kindle e-readers and IoT products, and BlackBerry, where he served as mechanical platform manager. He holds an MBA from The University of Texas at Dallas Naveen Jindal School of Management and a bachelor of science degree in mechanical engineering from Texas A&M University.

RADAR RISK-TAKER Dmitry Turbiner Founder and CEO, General Radar

Turbiner says the startup he founded and leads as CEO is to the military's large space radars that detect aerial threats what the commercial company SpaceX is to NASA. While the main customers so far for General Radar's advanced, AI-enhanced commercial aerospace radar have been in the defense industry, the groundbreaking technology has applications in other areas, including providing early hazard warnings in autonomous cars. Turbiner previously was an engineer at NASA's Jet Propulsion Laboratory, where 12 of his phased array antennas continue to orbit on a group of six heavy NASA satellites. He has a bachelor of science degree in electrical engineering from MIT and a partial master's of science in electrical engineering from Stanford University.

AUTOMATION ADVOCATE David C. Williams Assistant Vice President, AT&T

As the leader of hyper-automation at Dallas-based AT&T, Williams develops and deploys AI tools and strategies and advocates for responsible AI use and digital inclusivity. His organization has solved multiple business challenges with AI innovations, including projects to fine-tune AT&T's software-defined network platform and to summarize conversations between the company's agents and customers. Williams, a sought-after speaker on tech topics, also has authored two patents, for reprogrammable RFID and for bridging satellite and LTE technology. They illustrate his ability to innovate across different technology domains, an essential quality for creating comprehensive AI solutions leveraging data from diverse sources and systems.

IMAGE INNOVATOR Feng (Albert) Yang Founder & Advisor, Topaz Labs

Yang, a seasoned entrepreneur and expert in video and signal processing, founded Topaz Labs, a provider of deep learning-based image and video enhancement for photographers and video makers. Under his leadership, the Dallas-based company has developed cutting-edge, AI-driven software tools that improve sharpening, noise reduction, and upscaling of images and videos. A former assistant professor at China's Harbin Institute of Technology, Yang remains a Topaz advisor but stepped down as the company's president and CTO in 2021. Says one of his former employees: "He was the core developer of the code base for every single Topaz product. He is a doer and not a talker."

CAPITAL IDEAS Bryan Chambers Co-Founder and President, Capital Factory

Chambers is a leading, high-profile proponent of AI's transformative potential, proactively building and nurturing a supportive ecosystem for the space as Capital Factory's co-founder and president. An accelerator and venture capital fund that's described as "the center of gravity for Texas entrepreneurs," Capital Factory has invested in a number of AI-focused companies, including Dallas-based Worlds and Austin's Big Sister AI. Under Chambers, Capital Factory also has hosted a wide variety of AI events, co-working sessions, and challenges. Among them were The Future of AI Summit in Dallas last September, February's AI Salon at Old Parkland in Dallas, and the $100,000 AI Investment Challenge held during the company's 2019 Defense Innovation Summit in Austin.

BRILLIANT BILLIONAIRE Mark Cuban Founder, Mark Cuban Companies

Cuban, the influential, nationally known billionaire entrepreneur, has long championed the AI ecosystem through multiple initiatives, investments, and educational efforts. He's stressed the importance of AI education for everyone, including business owners and employees, and founded the Mark Cuban Foundation's Intro to AI Bootcamp program for underserved high school students. AI-powered companies Cuban has invested in include Node, a San Francisco-based startup, and British company Synthesia. The minority owner of the NBA's Dallas Mavericks has predicted that AI will have a more significant impact than any technology in the last 30 years, and has warned that those who don't understand or learn about AI risk becoming a dinosaur.

PRAGMATIC PIONEER David Evans Managing Partner, Sentiero Ventures

Evans' Dallas-based VC firm, Sentiero Ventures, invests in seed-stage, AI-enabled SaaS companies that improve customer experiences or enhance financial performance. Evans, a veteran technologist and serial entrepreneur, was exposed early to AI while working on a NASA project in the late 1990s. Widely considered one of the most knowledgeable, best-connected thought leaders and speakers on Dallas-Fort Worth's AI scene, he's said Sentiero is most interested in founders with velocity, revenue, and pragmatic strategies for growth. In the last year, he's led investments in the likes of Velou, a San Francisco-based e-commerce enablement startup, and Montreal-based Shearwater Aerospace. Sentiero's successful exits include Dallas-based MediBookr, a healthcare navigation and patient engagement platform.

HOUSE HEADMAN Rep. Giovanni Capriglione Texas House of Representatives

Capriglione, a Republican House member representing District 98 with an IT background, co-chairs a new state advisory board on artificial intelligence with Sen. Tan Parker, R-Flower Mound. The state's Artificial Intelligence Advisory Council will submit a report to the Legislature by Dec. 1 that makes policy recommendations to protect Texans from harmful impacts of AI and promotes an ethical framework for the use of AI by state agencies. Capriglione says he recognizes AI's potential for streamlining and fiscal efficiencies. But, in a video recorded for the National Conference of State Legislatures, he added, "We have to be careful with how it's being used, that it's being done in a similar way as humans would have done it, and that we measure outcomes."

ALLIANCE ALPHA Victor Fishman Executive Director, Texas Research Alliance

Fishman's in the middle of the action as executive director of the Richardson-based Texas Research Alliance, which works to ensure that North Texas industry, universities, and governments are able to leverage research and innovation resources to grow and lead their markets. Last fall, the alliance was pivotal in the first-ever DFW University AI Collaboration Symposium, where Fishman was a participant. "Our students are running towards AI, so we need to run faster," he told the symposium. "Let's keep this momentum to create more collaboration between our North Texas universities and partners, turning Dallas-Fort Worth into a leader for AI research." The alliance also was instrumental in proposals supporting the Texoma Semiconductor Tech Hub, led by Southern Methodist University.

SENATORIAL CHARGER Sen. Tan Parker Texas State Senate

Parker, a North Texas businessman and Republican state senator representing District 12, was named earlier this year to the state's Artificial Intelligence Advisory Council, where he serves as co-chair with state Rep. Giovanni Capriglione, R-Southlake. The council, created by the Legislature during last year's session, studies and monitors AI systems developed or used by state agencies. The council also is charged with assessing the need for an AI code of ethics in state government and recommending any administrative actions state agencies should take regarding AI. It's supposed to submit a report about its findings to the Legislature by Dec. 1. Observers say the council could recommend a new office for AI policy-making, or advise designating an existing state agency to craft policy for artificial intelligence.

FAIRWAY FUTURIST Kevin J. Scott CTO/CIO, PGA of America

Scott is a thought leader driving AI adoption and comprehension inside the PGA and the broader golf industry, as well as among Dallas-Fort Worth technology executives. While he's set the PGA's internal strategy for efficiency improvements, including enhancing the organization's app user experience with AI, Scott says the Frisco-based group is planning additional AI initiatives that are more sophisticated and comprehensive. At the same time, he's helped guide industry leaders in crafting their own AI roadmaps via workshops and public speaking appearances. A former senior director for innovation and advanced products at ESPN, Scott also has worked with Frisco ISD to facilitate AI meetups and mentoring for its students.

NEWSLETTER NOTABLE Deepak Seth Creator/Founder, DEEPakAI: AI Demystified; Product Director, Trading Architecture Platform Strategy & Support, Charles Schwab

Seth has been recognized as a Top 50 global thought leader and influencer for generative AI, in part on the strength of his DEEPakAI: AI Demystified newsletter. The weekly LinkedIn missive, with more than 1,500 subscribers, has played a key role in making AI more accessible and understandable to a wide audience, as well as opening up discussion about inclusivity, ethics, and the practical applications of AI. In addition to his thought leadership, Seth has been a driving force in the adoption and integration of AI technologies at Charles Schwab, where he initiated C-suite engagement for generative AI adoption, analyzing risks and tools, developing use cases, and achieving a proof of concept that enhanced customer service response by 35%. The product director also pioneered a cross-functional, AI-driven employee engagement initiative targeting a 15% boost in agent engagement metrics, earning recognition with the President's Innovation Award. An adjunct professor at Texas Christian University, Seth has been a featured guest on multiple webcasts.

COLLABORATION CATALYST Dan Sinawat Founder, AI CONNEX

Since arriving in North Texas from Singapore a few years ago, Sinawat has made a name for himself as a visionary leader in the local AI community with a decidedly global perspective. His enthusiasm for the space led him to found AI CONNEX, a Frisco-based community and networking group for AI enthusiasts, experts, and professionals. Under Sinawat's proactive leadership, the group has put together an accelerator program for early-stage startups as well as various events promoting AI and technology, including one in partnership with Toyota Connected that focused on innovation in the auto industry. Sinawat also has conducted podcasts and been active in AI panel discussions and on social media.

PATENTED EXPERT Stephen Wylie Lead AI Engineer, Thryv

Wylie, who leads all AI and ML projects at Grapevine-based Thryv, has more than 130 software patents in the AI, AR/VR, blockchain, and IoT spaces. "Above all," he has said, "I innovate." A Google Developer Expert in AI/ML, Wylie has long been an active AI educator, writing a blog and speaking frequently before user groups and conferences locally (at UT Dallas and TCU), nationally, and internationally. His talks, aimed at engineers, have included topics such as fusing AI with augmented reality and how AI will affect the careers of future engineers and practitioners. Wylie worked earlier as a senior software engineer for rewardStyle and Capital One.

TRANSIT VISIONARY Khaled Abdelghany Professor of Civil and Environmental Engineering, Southern Methodist University

Abdelghany is a faculty member at SMU's Lyle School of Engineering and a pivotal member of the National Academies' Transportation Research Board committee on AI and advanced computing. His groundbreaking work, documented in multiple peer-reviewed journals, focuses mainly on the use of advanced analytics and AI to make infrastructure more efficient, resilient, and equitable. He has developed and adopted AI techniques in such problem domains as real-time traffic network management and urban growth predictions, for example. Over the last year, he secured a three-year, $1.2 million federal grant to develop advanced AI models for traffic-signal operations, elevating SMU's reputation as a hub for cutting-edge AI research and innovation.

PROFESSORIAL PACESETTER Lee Brown Associate Professor of Management and Director of Research, Texas Woman's University

In his roles as a management professor and research director at Texas Woman's University in Denton, Brown is continually learning about and exploring ways to apply generative AI technologies to improve classroom engagement and workplace efficiency. In addition to incorporating AI in MBA courses, enabling students to take advantage of AI for data analysis, decision-making, and strategic planning, he has spoken extensively at TWU and at outside conferences about how AI can revolutionize educational methodologies, promoting innovation and inclusion. Says Brown: "My commitment to leveraging AI in education is driven by a vision of creating an empowered and adaptable workforce, capable of contributing meaningfully to our increasingly interconnected and technologically driven world."

INSIGHT ICON Gautam Das Associate Dean for Research and Distinguished University Chair Professor, The University of Texas at Arlington

Das is an internationally known computer scientist with more than three decades of experience in AI, ML, and Big Data analytics. His research has been supported by grants totaling more than $9 million, and he has published 200-plus papers, many of them featured at top conferences and in premier journals. As a professor and associate dean for research at The University of Texas at Arlington's College of Engineering, he is responsible for promoting research excellence across the college and at UTA. Over the years, Das' research has included all aspects of AI, machine learning, and data mining. He's currently working in areas such as machine learning approaches for approximate query processing, and fairness and explainability in data management systems.

SOLUTION STRATEGIST Douglas DeGroot Director, Center for Applied AI and Machine Learning, The University of Texas at Dallas

DeGroot is co-founder and director of the Center for Applied AI and Machine Learning (CAAML) at UT Dallas in Richardson. The center was founded in 2019 to help Texas companies and organizations use leading-edge artificial intelligence and machine learning to enhance their products, services, and business processes. So far, the industry-facing, applied R&D center has been involved in about 10 projects for companies including Vistra Corp., Rayburn Corp., Infovision, and Nippon Expressway Co. The projects have been diverse, from building an explainable machine learning system to coming up with an optimal model for electricity pricing. In addition, DeGroot has shared insights on solving industry problems using AI and ML at community events such as The IQ Brew. At CAAML, he works closely with Co-Director Gopal Gupta, a professor of computer science known for his expertise in automated reasoning, rule-based systems, and artificial intelligence.

AI STEWARD Yunhe Feng Director, Responsible AI Lab, The University of North Texas

As an assistant professor in UNT's department of computer science and engineering, Feng directs the Denton school's 2-year-old Responsible AI Laboratory, which advances AI research with a focus on Responsible AI, Generative AI, and Applied AI. He also co-directs UNT's master's program in artificial intelligence, the first of its kind in Texas. He has published research on a variety of AI topics and co-authored a paper on the impact of ChatGPT on streaming media. Financial Times and Business Insider have reported on his work, and he's served as a guest lecturer for various courses, including AI for Social Good. Last year, Feng received the IEEE Smart Computing Special Technical Community Early Career Award for his contributions to smart computing and responsible computing.

TCU TRAILBLAZER Beata Jones John V. Roach Honors Faculty Fellow at Neeley School of Business, Texas Christian University

Jones is a professor of practice in business information systems at TCU's Neeley School of Business in Fort Worth. As a regular contributing writer for Forbes, she often explores the transformative potential of AI in higher education and its impact on business. (Another TCU Neeley faculty member, marketing instructor Elijah Clark, also is a regular Forbes contributing writer on AI.) Jones has written for the publication about how generative AI tools are transforming academic research, for example by fact-checking, supporting data visualization, or offering feedback on drafts. Jones, who's been passionate about artificial intelligence since her teenage years, has bachelor's and master's degrees from New York's Baruch College and a Ph.D. from The Graduate Center at City University of New York.

BIOMEDICAL BARD Guanghua Xiao Mary Dees McDermott Hicks Chair, UT Southwestern Medical Center

Xiao has made significant contributions to the field of medical AI, especially in the application of artificial intelligence to pathology and cancer research. He holds the Mary Dees McDermott Hicks Chair in Medical Science at UT Southwestern in Dallas. He's also a professor in UTSW's Peter O'Donnell Jr. School of Public Health, Biomedical Engineering, and the Lyda Hill Department of Bioinformatics. Xiao's research has focused on AI models that enhance cancer understanding and treatment through advanced image analysis and bioinformatics tools. His key contributions have included developing an AI model called Ceograph, for analyzing cells in tissue samples to predict cancer outcomes, and helping develop the ConvPath software tool, which uses AI to identify cancer cells from lung cancer pathology images.

MASTER MOTIVATOR Amy Blankson Co-Founder and Chief Evangelist, Digital Wellness Institute

Blankson, a wellness author and motivational speaker on overcoming the fear of AI through the lens of neuroscience and fearless optimism, co-founded the Digital Wellness Institute, a Dallas-based tech startup. As the institute's chief evangelist, she uses AI to benchmark the digital wellness of the institute's organizations and speaks externally about the evolving future of AI in the workplace. A graduate of Harvard and the Yale School of Management, Blankson has been a contributing member of the Institute of Electrical and Electronics Engineers' Standards for Artificial Intelligence. Earlier this year she presented at SHRM's inaugural AI+HI (Human Intelligence) Project conference, where she discussed AI and the Future of Happiness at Microsoft's campus in Silicon Valley.

GAMING GUIDE Corey Clark CTO and Co-Founder, BALANCED Media Technology

Clark co-founded and serves as CTO at BALANCED Media Technology, whose tech infrastructure fuses AI and machine learning with data to "bring purpose to play" through distributed computing and human computational gaming. The Allen-based company, co-founded with CEO Robert Atkins, has secured nearly $30 million from various funding sources. The motivation behind establishing BALANCED was to leverage the intersection of human intelligence and machine learning through video games to solve complex problems, particularly in the medical field. This innovative approach works to develop, among other things, AI-powered games that aim to combat human trafficking and treat ocular diseases more efficiently. Clark also is an assistant professor of computer science at Southern Methodist University and deputy director for research at the SMU Guildhall. Last year, he contributed to a research paper on explainable artificial intelligence, focusing on making AI decision-making more transparent.

AI INCLUSIVIST Pranav Kumar Digital Project Manager, LERMA/ Agency

View original post here:

Presenting the First-Ever AI 75: Meet the Most Innovative Leaders in Artificial Intelligence in Dallas-Fort Worth - dallasinnovates.com

OPINION: Artificial intelligence, the usefulness and dangers of AI – Coast Report

The entrance to Paramount Pictures in Hollywood.

As my mind wanders to AI, robots, and machines replacing humans, I come to see the problem not as AI itself, but as humans abusing the systems we create. I see the decline of effort in schools, how hesitant people have become about social interaction, and the aversion to connection.

Each day there are new advancements in AI. But if it is learning, and each day it is getting to know more things and getting smarter, then it wasn't smart to begin with. How are we supposed to believe in something that has not yet grown into its full potential?

There is talk that it is a scary thing. Something is happening. But where is this thing? What does it look like? Does it have four eyeballs? Does it hide in the closet or underneath my bed at night? No. All it is is a technique and a machine meant to operate and make our lives easier, for no reason. The true villain behind it is us.

The ones who use it use it incorrectly: people who tend to think that cutting corners, and skipping the lessons we have learned and that parental figures have seen and felt before us, somehow doesn't matter anymore. We strive for new advancements with no idea where they will lead, and somehow, that is a resource.

We have been driving for over 100 years, and yet companies want to make a fully self-driving car. The reasons why do not enter into my atmosphere, since there is no valid reason. We are capable of writing an essay, creating math solutions, driving cars, and performing surgery for a knee replacement. Yet as time passes, humans find ways for humans to do less. With that comes laziness, common sense fades, and street smarts vanish.

Doing things on my own and creating solutions gave me a sense of self-respect: the realization that I have the potential to do the things I want to strive for, which is excellence.

To be your own person with imagination and self-fulfilling creativity is to see happiness and sadness at their best and worst. To understand determination, anguish, grief, and unadulterated bliss.

Yet some choose not to.

It's the opposite of an opportunity to make one's life better. Yet they vanish like a supernatural ghost you see in the distance. Or a political figure's good nature when they start to run for office.

The speech they give is loud and ruthless. Harsh, yet dull, with a banal sense of sophistication. They postpone any type of meaningful discussion.

I choose, consciously, to be different. I challenge and take charge. I avoid talking when I do not know. Possibly taking away that one vestigial piece of truth the opposition speaks.

After all that, there's still some nameless, indistinct apprehension in their unconscious mind that I have so easily picked out. That smile. That wave. That cheers of a plastic cup, and the glaring pessimistic view they have of the world.

It is something that they do without. It's something that I have, and it's something that I have noticed.

Otherwise known as self-respect.

I do have some relevance in this topic. Last semester in my critical thinking class, nonfiction, I and three others were asked to present a topic of ethical crisis. We chose artificial intelligence. My nine-page paper that came shortly after, which was also my final paper of the semester, included 10 pros and 10 cons of AI.

We broke it into several categories, including entertainment and education.

Education

Each time a new class for the semester begins, the class is given, or told to look at, the syllabus on the website to see what is to be done and what is forbidden. Lately, more and more, and now all, of the classes I endure promptly educate students about AI and its use in cheating.

No passing notes in class.

No phones in class.

No use of AI in class.

The evolution of teachers' habitual demands.

Now students can formulate ideas and have a starting point of creativity. Ask AI how to manage test anxiety. Ask for steps on how to prepare me for transferring colleges, and how to find an internship for creative writing.

Yet students are enjoying the ease without any idea of the repercussions for abusing AI.

The lack of creativity it can cause may lead to students not developing properly. I assume it began with isolation and students finding it easier to not engage fully with teachers and peers. Now we are all back to normal and we assume we shall strive for connectivity.

Yet some are not capable without their AI to guide them. They are now relying on it.

Just last week I saw on TV a robot parents can buy to help their child learn social skills and communication.

The choice is not whether or not students learn the homework and know the material given to them, but now it is about whether they strive for excellence or fall behind.

Critical thinking is not just diving deeper into ideas. It is finding a topic, idea, or solution, deconstructing it, and continuing to ask questions until you or the other person breaks down into oblivion, where the answers cannot justify the questions being asked and there are no more answers to give.

Common sayings go something along the lines of going beneath the surface level, or the tip of the iceberg, or some of those bland phrases that mean something else entirely. The people who use those are the ones who need an understanding of critical thinking.

But I cannot describe what the surface level is. Each scenario is different. One may not have to go extremely deep into understanding the topics, ideas, and/or solutions. One cannot pre-plan the surface level. One must learn to evolve while the conversation unfolds.

Over time, the one questioning, the questioner/critical thinker, develops the ability to articulate high-level criticism. The criticism should not be negative, but should help you and the one you are trying to evaluate. But I do believe sometimes negative and harsh realism is imperative. Take a hammer to a rock and smash it, breaking and exposing each particle until you can see and extract the gold inside.

That is the purpose of each moment: questioning to an extreme, harshly or quietly, gently pursuing and constantly spiraling into the clarity you both can subconsciously agree on. You both will know the critical thinking is done because there will be a quiet sense of revelation.

If one stops the dedication to think, critique, and define, then one's creativity is dead.

AI cannot be a pillar of learning without knowing the consequences. One must maintain an understanding of how to properly use it.

In my journey to find answers, I conducted a Q&A interview with Tara Giblin, the Acting Vice President of Instruction at Orange Coast College, about her ideas and thoughts about AI in education.

Q: How do you see the use of AI in today's education system?

A: "As an administrator, I have heard many conversations by faculty about how AI is changing or will change the way they teach. Our OCC Faculty Senate has placed high importance on discussing AI weekly because it is impacting faculty in many different ways. Right now we are just learning about its capabilities and trying to understand the pluses and minuses that come with any new technology. AI certainly has the power to make our workplace more efficient, but we are in the early stages of figuring out how it fits into the classroom."

Q: How do you or how have you used rules or policies to address students' negative attraction to it? (Cheating)

A: "Not working directly in the classroom, I dont have first-hand experience. However, I hear faculty talking about how they are developing policies in their classrooms to help students understand how AI fits into their learning and how to guide students away from using AI as a substitute for learning or producing original material. I have heard suggestions like having students do their writing assignments in class with spontaneous prompts, so they will do original work or as teachers, using AI to generate responses to questions in front of the class then asking the class to critique these answers and analyze how they might tell the difference between AI and original work. This raises awareness of the downfalls of AI generated answers."

I was also intrigued to ask my past ethics professor to share his thoughts and ideas about AI. Professor Phillip Simpkin shares his ideas.

Q: How do you see the use of AI in today's classrooms?

A: "I see it in a large and growing way. It is being applied more and more. It is just going to increase. For good or for bad its going to be everywhere. The computer browsers were already kind of AI, to quickly find things. And that is where it is really nice. There is two uses for me. I tell my students it can help you to become a lazier, worst student or can help you become a better student. It can help you become a lazier student because it can do the work for you and then you are not going to learn. And that's is the most troubling part for me is, on the other hand, it can help you and that is really an important part too. You can put your essay in and help you find your grammar mistakes. But the bad issue with this is when they say, find all my grammar mistakes and fix them for me. But now that's where you dont learn anything from the activity. At first people were wowed by it but you see how mechanical and clunky ChatGPT is. It will be over verbose and overly eloquent when you dont really need it to be, metaphors that have no business for being in it. I am worried about that people do stuff for them that they should be doing on their own."

Q: Do you think that teachers advise students how to use AI?

A: "I sit in a classroom and I say here is a question, what is an answer. And most of the time I get silence now. Not even a soul wants to say anything, and I may get one or two talkative students. At the same time, it's the smartest student in the class, and they can come up with ten different answers. I hope my students listen and copy it down. And they listen to the next person and copy it down. The AI now can be that student or conversation with for a back and forth. And I think that's a good use. So I am stuck. I send my students home all the time and I try to have them generate great ideas and I can force them to do that. A lightbulb moment may happen. But lots of students don't feel that creative for whatever reason and that could get them out of that hole."

Q: Why are students attracted to AI and to finding out answers?

A:"It makes life easier. But there is more to it. There is an actual attraction that the computer can do it for you and it's very tempting to see what it is and how it works. Anything can save you time. Right now we have a crisis of expertise. People dont know who the authorities are or proper authority. Whos expertise to take serious or not. So I feel like the AI for them seems like it will tell them truths. And in many ways it does pretty good. But right now it is strictly just a machine."

Hollywood

It has been around for a while, but it is slowly becoming a threat. I think of Toy Story from 1995 and see how amazing and groundbreaking it was. Then I see Toy Story 4 from 2019 and I am taken aback by the accomplishments.

Some embrace it to see new visual effects and new heights, and some see it as taking away jobs. Yet AI touches many aspects of the Hollywood industry. AI will not just write a script and have it done in a few minutes. It is still learning how to manage emotions and rise-and-fall structure. But we, humans, still need to control the rate at which it grows. The robots will not suddenly take over our lives. But why do we strive so assiduously to create things that something else could do for us?

Writers, actors and directors go on strike to spread awareness of their concerns. They are passionate and full of rules of their own.

But only part of the strike was dedicated to artificial intelligence. People become frantic, emotions run high, and the fear of the abandoned job feels close. But there is no real consideration yet of replacing jobs. AI is still being built with new algorithms. AI is still being considered.

There is some perpetual fear, but it is obfuscated by the truth. The reality behind all that is dull. There's nothing behind it. There's nothing behind it because there never was. The idea that AI will take over anyone's writing job anytime soon is not part of our atmosphere. AI is not detailed enough to show what it could truly be. Yet we humans have the ability to make it grow. Shall we?

I also conducted an Q&A interview with Actor Makai Michael about AI.

Q: How do you see the use of AI in today's film industry?

A: "When it comes to AI in post production, object removal and scene stabilization for sound design, I find that this is more understandable for me. I am not immersed in the world of editing so editors may have a completely different stance than me. I could see this as taking jobs away from editors who are highly skilled which truly is devastating. As an actor, hearing about the industry executives wanting to use AI to exploit background actors' work is awful. I would not ever wish to see that happen to my work."

Q: Because of the growth of AI, do you see yourself being a part of any films that are heavy with CGI?

A: "Although I am pretty against the use of AI in film, ESPECIALLY AI being able to use the likeness of actors and exploiting their work for the benefit of producers and directors, I could possibly see myself in films that use CGI. As an actor I take pride in being a part of projects that build worlds and spaces for others to get lost and seek comfort in, and oftentimes that requires some CGI work. I believe if the CGI is used for world building/ setting building for the most part then it is alright."

Q: How did you feel about and perceive the writers' strike that happened last year?

A: "Since I am still very new to being involved in the industry side of acting I do not have the biggest range of knowledge when it comes to the strike. I perceived it to be a fight for a better income and a fight AGAINST the use of AI to recreate actors' work, time and time again. From what I have gathered it seems like the thoughts are split on whether we went forwards, backwards, or stayed the same in terms of making a change. I thought it was inspiring watching the actors and writers stand in unison against a system that often plays unfairly."

Q: Do you think AI will have a place for character development or script writing?

A: "My stance will always stay firm until I am convinced otherwise, and my stance is no. I do think that people WILL use AI, but I almost wish we never got to this point at all. I think its a cheap, and soulless way to make projects. The best cinema in history came from someone who sat down and had to think of it all themselves or with the help of collaborators, not robots."

Q: Do you think AI will create a more enhanced experience, with new innovations, in a movie theater or home theater?

A: "Though I am against AI scriptwriting, and AI extra doubling, I do think that AI may be able to enhance movie theater or home theater experiences. I think of AR, augmented reality or VR, virtual reality. No matter my stance on the situation, AI truly is the biggest cultural phenomenon at the moment and people are going to want to test its limits and that is understandable. When it starts to kill the heart of human creativity is when it starts to kill my love for art."


NSF Funds Groundbreaking Research Project to ‘Democratize’ AI – Northeastern University

Groundbreaking research by Northeastern University will investigate how generative AI works and provide industry and the scientific community with unprecedented access to the inner workings of large language models.

Backed by a $9 million grant from the National Science Foundation, Northeastern will lead the National Deep Inference Fabric that will unlock the inner workings of large language models in the field of AI.

The project will create a computational infrastructure that will equip the scientific community with deep inferencing tools in order to develop innovative solutions across fields. An infrastructure with this capability does not currently exist.

At a fundamental level, large language models such as OpenAI's ChatGPT or Google's Gemini are considered to be black boxes, which limits both researchers and companies across multiple sectors in leveraging large-scale AI.

Sethuraman Panchanathan, director of the NSF, says the impact of NDIF will be far-reaching.

"Chatbots have transformed society's relationship with AI, but how they operate is yet to be fully understood," Panchanathan says. "With NDIF, U.S. researchers will be able to peer inside the black box of large language models, gaining new insights into how they operate and greater awareness of their potential impacts on society."

Even the sharpest minds in artificial intelligence are still trying to wrap their heads around how these and other neural network-based tools reason and make decisions, explains David Bau, a computer science professor at Northeastern and the lead principal investigator for NDIF.

"We fundamentally don't understand how these systems work, what they learned from the data, what their internal algorithms are," Bau says. "I consider it one of the greatest mysteries facing scientists today: what is the basis for synthetic cognition?"

David Madigan, Northeastern's provost and senior vice president for academic affairs, says the project will help address one of the most pressing socio-technological problems of our time: how does AI work?

"Progress toward solving this problem is clearly necessary before we can unlock the massive potential for AI to do good in a safe and trustworthy way," Madigan says.

In addition to establishing an infrastructure that will open up the inner workings of these AI models, NDIF aims to democratize AI, expanding access to large language models.

Northeastern will be building an open software library of neural network tools that will enable researchers to conduct their experiments without having to bring their own resources, and sets of educational materials to teach them how to use NDIF.

The project will build an AI-enabled workforce by training scientists and students to serve as networks of experts, who will train users across disciplines.

"There will be online and in-person educational workshops that we will be running, and we're going to do this geographically dispersed at many locations, taking advantage of Northeastern's physical presence in a lot of parts of the country," Bau says.

Research emerging from the fabric could have worldwide implications outside of science and academia, Bau explains. It could help demystify the underlying mechanisms of how these systems work to policymakers, creatives and others.

"The goal of understanding how these systems work is to equip humanity with a better understanding of how we could effectively use these systems," Bau says. "What are their capabilities? What are their limitations? What are their biases? What are the potential safety issues we might face by using them?"

Large language models like ChatGPT and Google's Gemini are trained on huge amounts of data using deep learning techniques. Underlying these techniques are neural networks, synthetic processes that loosely mimic the activity of a human brain and enable these chatbots to make decisions.

But when you use these services through a web browser or an app, you are interacting with them in a way that obscures these processes, Bau says.

"They give you the answers, but they don't give you any insights as to what computation has happened in the middle," Bau says. "Those computations are locked up inside the computer, and for efficiency reasons, they're not exposed to the outside world. And so, the large commercial players are creating systems to run AIs in deployment, but they're not suitable for answering the scientific questions of how they actually work."

At NDIF, researchers will be able to take a deeper look at the neural pathways these chatbots make, Bau says, allowing them to see what's going on under the hood while these AI models actively respond to prompts and questions.

Researchers won't have direct access to OpenAI's ChatGPT or Google's Gemini, as the companies haven't opened up their models for outside research. They will instead be able to access open-source AI models from companies such as Mistral AI and Meta.

"What we're trying to do with NDIF is the equivalent of running an AI with its head stuck in an MRI machine, except the difference is the MRI is in full resolution. We can read every single neuron at every single moment," Bau says.
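
To make that concrete, here is a minimal, hypothetical sketch (not NDIF's actual tooling) of what "reading every neuron" can look like in practice: attaching PyTorch forward hooks to an open-source model from the Hugging Face transformers library and recording each layer's hidden activations while the model processes a prompt. The model name ("gpt2") and layer path are illustrative assumptions, not details from the project.

# Illustrative only: record per-layer hidden activations of an open model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in for any open-source model a researcher can download
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

activations = {}

def save_activation(name):
    # Returns a hook that stores the layer's output tensor under `name`.
    def hook(module, inputs, output):
        hidden = output[0] if isinstance(output, tuple) else output
        activations[name] = hidden.detach()
    return hook

# Attach a hook to every transformer block so its output can be "read".
for i, block in enumerate(model.transformer.h):
    block.register_forward_hook(save_activation(f"block_{i}"))

inputs = tokenizer("Deep inference opens the black box", return_tensors="pt")
with torch.no_grad():
    model(**inputs)

for name, tensor in activations.items():
    print(name, tuple(tensor.shape))  # e.g. block_0 (1, seq_len, hidden_size)

A shared deep-inference fabric would, in effect, offer this kind of layer-by-layer access as a large-scale service, rather than something each lab has to assemble and host on its own hardware.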

But how are they doing this?

Such an operation requires significant computational power on the hardware front. As part of the undertaking, Northeastern has teamed up with the University of Illinois Urbana-Champaign, which is building data centers equipped with state-of-the-art graphics processing units (GPUs) at the National Center for Supercomputing Applications. NDIF will leverage the resources of the NCSA DeltaAI project.

NDIF will partner with New America's Public Interest Technology University Network, a consortium of 63 universities and colleges, to ensure that the new NDIF research capabilities advance interdisciplinary research in the public interest.

Northeastern is building the software layer of the project, Bau says.

"The software layer is the thing that enables the scientists to customize these experiments and to share these very large neural networks that are running on this very fancy hardware," he says.

Northeastern professors Jonathan Bell, Carla Brodley, Bryon Wallace and Arjun Guha are co-PIs on the initiative.

Guha explains the barriers that have hindered research into the inner workings of large generative AI models up to now.

"Conducting research to crack open large neural networks poses significant engineering challenges," he says. "First of all, large AI models require specialized hardware to run, which puts the cost out of reach of most labs. Second, scientific experiments that open up models require running the networks in ways that are very different from standard commercial operations. The infrastructure for conducting science on large-scale AI does not exist today."

NDIF will have implications beyond the scientific community in academia. The social sciences and humanities, as well as neuroscience, medicine and patient care can benefit from the project.

"Understanding how large networks work, and especially what information informs their outputs, is critical if we are going to use such systems to inform patient care," Wallace says.

NDIF will also prioritize the ethical use of AI with a focus on social responsibility and transparency. The project will include collaboration with public interest technology organizations.


Small is the new BIG in artificial intelligence – ET BrandEquity

There are similarities between the cold war era and current times. In the former, there was a belief that alliances with stronger nuclear arms would wield larger global influence. Similarly, organizations (and nations) in the current era believe that those controlling the AI narrative will control the global narrative. Moreover, scale was, and is, correlated with superiority; there is a belief that bigger is better.

Global superpowers competed in the cold war over whose nuclear systems were largest (highest-megaton weapons), while in the current era, large technology incumbents and countries are competing over who can build the largest model, with the highest number of parameters. OpenAI's GPT-4 took the global pole position last year, brandishing a model that is rumored to have over 1.5 trillion parameters. The race is not just about prestige; it is rooted in the assumption that larger models understand and generate human language with greater accuracy and nuance.

Democratization of AI

One of the most compelling arguments for smaller language models lies in their efficiency. Unlike their larger counterparts, these models require significantly less computational power, making them accessible to a broader range of users. This democratization of AI technology could lead to a surge in innovation, as small businesses and individual developers gain the tools to implement sophisticated AI solutions without the prohibitive costs associated with large models. Furthermore, the operational speed and lower energy consumption of small models offer a solution to the growing concerns over the environmental impact of computing at scale.

Large language models' popularity can be attributed to their ability to handle a vast array of tasks. Yet this jack-of-all-trades approach is not always necessary or optimal. Small language models can be fine-tuned for specific applications, providing targeted solutions that can outperform the generalist capabilities of larger models. This specialization can lead to more effective and efficient AI applications, from customer service bots tailored to a company's product line to legal assistance tools tuned to a country's legal system.
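
As an illustration of that specialization argument, the sketch below shows one common way a small open model might be adapted to a narrow task: wrapping it with low-rank (LoRA) adapters and fine-tuning it on a tiny domain corpus using the Hugging Face transformers, peft, and datasets libraries. The model name, example data, and hyperparameters are assumptions chosen for illustration, not details from the article.

# Illustrative only: specialize a small open model with LoRA adapters.
from datasets import Dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

model_name = "distilgpt2"  # a deliberately small model
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Wrap the base model with low-rank adapters so only a tiny fraction of
# parameters is trained for the target task.
lora = LoraConfig(r=8, lora_alpha=16, target_modules=["c_attn"], task_type="CAUSAL_LM")
model = get_peft_model(model, lora)

# Toy domain-specific corpus standing in for, say, a company's support transcripts.
texts = ["Q: How do I reset my password? A: Use the account settings page.",
         "Q: Where is my order? A: Check the tracking link in your email."]

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=64)

dataset = Dataset.from_dict({"text": texts}).map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="slm-support", num_train_epochs=1,
                           per_device_train_batch_size=2, report_to=[]),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()

Because only the small adapter matrices are trained, this kind of specialization can run on a single commodity GPU, which is precisely the accessibility argument made above.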

On-device Deployment

The Environmental Imperative

The environmental impact of AI development is an issue that cannot be ignored. The massive energy requirements of training and running large language models pose a significant challenge in the search for sustainable technology development. Small language models offer a path forward that marries the incredible potential of AI with the urgent need to reduce our carbon footprint. By focusing on models that require less power and fewer resources, the AI community can contribute to a more sustainable future.

As we stand on the cusp of technological breakthroughs, it's important to question the assumption that bigger is always better. The future of AI may very well lie in the nuanced, efficient, and environmentally conscious realm of small language models. These models promise to make AI more accessible, specialized, and integrated into our daily lives, all while aligning with the ethical and environmental standards that our global community increasingly seeks to uphold.



Ways to think about AGI – Benedict Evans

In 1946, my grandfather, writing as Murray Leinster, published a science fiction story called "A Logic Named Joe." Everyone has a computer (a "logic") connected to a global network that does everything from banking to newspapers and video calls. One day, one of these logics, Joe, starts giving helpful answers to any request, anywhere on the network: invent an undetectable poison, say, or suggest the best way to rob a bank. Panic ensues - "Check your censorship circuits!" - until they work out what to unplug. (My other grandfather, meanwhile, was using computers to spy on the Germans, and then the Russians.)

For as long as we've thought about computers, we've wondered if they could make the jump from mere machines, shuffling punch-cards and databases, to some kind of artificial intelligence, and wondered what that would mean, and indeed, what we're trying to say with the word "intelligence." There's an old joke that AI is whatever doesn't work yet, because once it works, people say that's not AI - it's just software. Calculators do super-human maths, and databases have super-human memory, but they can't do anything else, and they don't understand what they're doing, any more than a dishwasher understands dishes, or a drill understands holes. A drill is just a machine, and databases are super-human but they're just software. Somehow, people have something different, and so, on some scale, do dogs, chimpanzees and octopuses and many other creatures. AI researchers have come to talk about this as "general intelligence" and hence making it would be "artificial general intelligence" - AGI.

If we really could create something in software that was meaningfully equivalent to human intelligence, it should be obvious that this would be a very big deal. Can we make software that can reason, plan, and understand? At the very least, that would be a huge change in what we could automate, and as my grandfather and a thousand other science fiction writers have pointed out, it might mean a lot more.

Every few decades since 1946, there's been a wave of excitement that something like this might be close, each time followed by disappointment and an "AI Winter", as the technology approach of the day slowed down and we realised that we needed an unknown number of unknown further breakthroughs. In 1970 the AI pioneer Marvin Minsky claimed that "in from three to eight years we will have a machine with the general intelligence of an average human being", but each time we thought we had an approach that would produce that, it turned out that it was just more software (or just didn't work).

As we all know, the Large Language Models (LLMs) that took off 18 months ago have driven another such wave. Serious AI scientists who previously thought AGI was probably decades away now suggest that it might be much closer. At the extreme, the so-called doomers argue there is a real risk of AGI emerging spontaneously from current research, that this could be a threat to humanity, and that urgent government action is needed. Some of this comes from self-interested companies seeking barriers to competition ("this is very dangerous and we are building it as fast as possible, but don't let anyone else do it"), but plenty of it is sincere.

(I should point out, incidentally, that the doomers' existential-risk concern - that an AGI might want to, and be able to, destroy or control humanity, or treat us as pets - is quite independent of more quotidian concerns about, for example, how governments will use AI for face recognition, or about AI bias, or AI deepfakes, and all the other ways that people will abuse AI or just screw up with it, just as they have with every other technology.)

However, for every expert that thinks that AGI might now be close, there's another who doesn't. There are some who think LLMs might scale all the way to AGI, and others who think, again, that we still need an unknown number of unknown further breakthroughs.

More importantly, they would all agree that they don't actually know. This is why I used terms like "might" or "may" - our first stop is an appeal to authority (often considered a logical fallacy, for what that's worth), but the authorities tell us that they don't know, and don't agree.

They don't know, either way, because we don't have a coherent theoretical model of what general intelligence really is, nor why people seem to be better at it than dogs, nor how exactly people or dogs are different to crows or indeed octopuses. Equally, we don't know why LLMs seem to work so well, and we don't know how much they can improve. We know, at a basic and mechanical level, about neurons and tokens, but we don't know why they work. We have many theories for parts of these, but we don't know the system. Absent an appeal to religion, we don't know of any reason why AGI cannot be created (it doesn't appear to violate any law of physics), but we don't know how to create it or what it is, except as a concept.

And so, some experts look at the dramatic progress of LLMs and say "perhaps!" and others say "perhaps, but probably not!", and this is fundamentally an intuitive and instinctive assessment, not a scientific one.

Indeed, AGI itself is a thought experiment, or, one could suggest, a place-holder. Hence, we have to be careful of circular definitions, and of defining something into existence, certainty or inevitability.

If we start by defining AGI as something that is in effect a new life form, equal to people in every way (barring some sense of physical form), even down to concepts like awareness, emotions and rights, and then presume that given access to more compute it would be far more intelligent (and that there even is a lot more spare compute available on earth), and presume that it could immediately break out of any controls, then that sounds dangerous, but really, you've just begged the question.

As Anselm demonstrated, if you define God as something that exists, then you've proved that God exists, but you won't persuade anyone. Indeed, a lot of AGI conversations sound like the attempts by some theologians and philosophers of the past to deduce the nature of god by reasoning from first principles. The internal logic of your argument might be very strong (it took centuries for philosophers to work out why Anselm's proof was invalid) but you cannot create knowledge like that.

Equally, you can survey lots of AI scientists about how uncertain they feel, and produce a statistically accurate average of the result, but that doesn't of itself create certainty, any more than surveying a statistically accurate sample of theologians would produce certainty as to the nature of god, or, perhaps, bundling enough sub-prime mortgages together can produce AAA bonds, another attempt to produce certainty by averaging uncertainty. One of the most basic fallacies in predicting tech is to say "people were wrong about X in the past so they must be wrong about Y now", and the fact that leading AI scientists were wrong before absolutely does not tell us they're wrong now, but it does tell us to hesitate. They can all be wrong at the same time.
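The statistics behind that point are textbook material, sketched here only to make the averaging argument concrete: for a pool of N identically distributed risks, each with variance \(\sigma^2\) and pairwise correlation \(\rho\), the variance of the pooled average is

\[
\operatorname{Var}\!\left(\frac{1}{N}\sum_{i=1}^{N} X_i\right) \;=\; \frac{\sigma^2}{N} \;+\; \left(1 - \frac{1}{N}\right)\rho\,\sigma^2 \;\xrightarrow{\;N\to\infty\;}\; \rho\,\sigma^2 .
\]

If the risks are independent (\(\rho = 0\)), uncertainty really does wash out as the pool grows; if they are correlated (\(\rho > 0\)), as mortgage defaults in a housing crash are and as expert opinions plausibly are, a floor of uncertainty remains no matter how many you bundle.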

Meanwhile, how do you know that's what general intelligence would be like? Isaiah Berlin once suggested that even presuming there is in principle a purpose to the universe, and that it is in principle discoverable, there's no a priori reason why it must be interesting. God might be real, and boring, and not care about us, and we don't know what kind of AGI we would get. It might scale to 100x more intelligent than a person, or it might be much faster but no more intelligent (is intelligence just about speed?). We might produce general intelligence that's hugely useful but no more clever than a dog, which, after all, does have general intelligence, and, like databases or calculators, a super-human ability (scent). We don't know.

Taking this one step further, as I listened to Mark Zuckerberg talking about Llama 3, it struck me that he talks about general intelligence as something that will arrive in stages, with different modalities a little at a time. Maybe people will point at the general intelligence of Llama 6 or ChatGPT 7 and say "That's not AGI, it's just software!" We created the term AGI because AI came to mean just software, and perhaps AGI will be the same, and we'll need to invent another term.

This fundamental uncertainty, even at the level of what we're talking about, is perhaps why all conversations about AGI seem to turn to analogies. If you can compare this to nuclear fission then you know what to expect, and you know what to do. But this isn't fission, or a bioweapon, or a meteorite. This is software, that might or might not turn into AGI, that might or might not have certain characteristics, some of which might be bad, and we don't know. And while a giant meteorite hitting the earth could only be bad, software and automation are tools, and over the last 200 years automation has sometimes been bad for humanity, but mostly it's been a very good thing that we should want much more of.

Hence, I've already used theology as an analogy, but my preferred analogy is the Apollo Program. We had a theory of gravity, and a theory of the engineering of rockets. We knew why rockets didn't explode, and how to model the pressures in the combustion chamber, and what would happen if we made them 25% bigger. We knew why they went up, and how far they needed to go. You could have given the specifications for the Saturn rocket to Isaac Newton and he could have done the maths, at least in principle: this much weight, this much thrust, this much fuel - will it get there? We have no equivalents here. We don't know why LLMs work, how big they can get, or how far they have to go. And yet, we keep making them bigger, and they do seem to be getting close. Will they get there? Maybe, yes!
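That kind of back-of-the-envelope check really does exist for rockets: the standard Tsiolkovsky rocket equation gives the achievable change in velocity from exhaust velocity and mass ratio alone (the numbers below are round illustrative values, not Saturn V figures):

\[
\Delta v = v_e \ln\frac{m_0}{m_f}, \qquad \text{e.g. } v_e \approx 3\ \text{km/s},\ \frac{m_0}{m_f} = 10 \;\Rightarrow\; \Delta v \approx 3 \times \ln 10 \approx 6.9\ \text{km/s}.
\]

The point of the analogy is that no comparable closed-form check exists for deciding how far a bigger LLM will get.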

On this theme, some people suggest that we are in the "empirical" stage of AI or AGI: we are building things and making observations without knowing why they work, and the theory can come later, a little as Galileo came before Newton (there's an old English joke about a Frenchman who says "that's all very well in practice, but does it work in theory?"). Yet while we can, empirically, see the rocket going up, we don't know how far away the moon is. We can't plot people and ChatGPT on a chart and draw a line to say when one will reach the other, even just extrapolating the current rate of growth.

All analogies have flaws, and the flaw in my analogy, of course, is that if the Apollo program went wrong the downside was not, even theoretically, the end of humanity. A little before my grandfather, here's another magazine writer on unknown risks:

I was reading in the paper the other day about those birds who are trying to split the atom, the nub being that they haven't the foggiest as to what will happen if they do. It may be all right. On the other hand, it may not be all right. And pretty silly a chap would feel, no doubt, if, having split the atom, he suddenly found the house going up in smoke and himself torn limb from limb.

Right Ho, Jeeves, P.G. Wodehouse, 1934

What, then, is your preferred attitude to risks that are real but unknown? Which thought experiment do you prefer? We can return to half-forgotten undergraduate philosophy (Pascal's Wager! Anselm's Proof!), but if you can't know, do you worry, or shrug? How do we think about other risks? Meteorites are a poor analogy for AGI because we know they're real, we know they could destroy mankind, and they have no benefits at all (unless they're very very small). And yet, we're not really looking for them.

Presume, though, you decide the doomers are right: what can you do? The technology is in principle public. Open source models are proliferating. For now, LLMs need a lot of expensive chips (Nvidia sold $47.5bn in the last 12 months and can't meet demand), but on a decades view the models will get more efficient and the chips will be everywhere. In the end, you can't ban mathematics. On a scale of decades, it will happen anyway. If you must use analogies to nuclear fission, imagine if we discovered a way that anyone could build a bomb in their garage with household materials - good luck preventing that. (A doomer might respond that this answers the Fermi paradox: at a certain point every civilisation creates AGI, and it turns them into paperclips.)

By default, though, this will follow all the other waves of AI, and become just more software and more automation. Automation has always produced frictional pain, back to the Luddites, and the UK's Post Office scandal reminds us that you don't need AGI for software to ruin people's lives. LLMs will produce more pain and more scandals, but life will go on. At least, that's the answer I prefer myself.

Here is the original post:

Ways to think about AGI - Benedict Evans

Warren Buffett Discusses Apple, Cash, Insurance, Artificial Intelligence (AI), and More at Berkshire Hathaway’s Annual … – The Motley Fool

Berkshire is bolstering its cash reserves and passing on riskier bets.

Tens of thousands of Berkshire Hathaway (BRK.A -0.56%) (BRK.B 0.07%) investors flocked to Omaha this past week for the annual tradition of listening to Warren Buffett muse over the conglomerate's business, financial markets, and over 93 years of wisdom on life. But this year's meeting felt different.

Longtime vice chairman Charlie Munger passed away in late November. His wry sense of humor, witty aphorisms, and entertaining rapport with Buffett were missed dearly. But there were other noticeable differences between this meeting and those of past years -- namely, a sense of caution.

Let's dive into the key takeaways from the meeting and how they could influence what Berkshire does next.

The elephant in the room was Berkshire's decision to trim its stake in Apple (AAPL 5.98%) during the first quarter. Berkshire sold over 116 million shares of Apple in Q1, reducing its position by around 12.9%. It marks the company's largest sale of Apple stock since it began purchasing shares in 2016 -- far larger than the 10 million or so shares Berkshire sold in Q4.

Buffett addressed the sale with the first answer in the Q&A session: "Unless something dramatic happens that really changes capital allocation and strategy, we will have Apple as our largest investment. But I don't mind at all, under current conditions, building the cash position. I think when I look at the alternatives of what's available in equity markets, and I look at the composition of what's going on in the world, we find it quite attractive."

In addition to valuation concerns, market conditions, and wanting to build up the cash position, Buffett also mentioned the federal rate on capital gains, which he said is 21%, compared to 35% not long ago and even as high as 52% in the past. Fear that the tax rate could go up, given fiscal policy and the need to cut the federal deficit, is another reason why Buffett and his team decided to book gains on Apple stock now instead of risking a potentially higher tax rate in the future.
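The arithmetic behind that choice is simple; as a purely hypothetical illustration (the gain figure and the higher rate are invented for the example, only the 21% rate comes from Buffett's remarks):

\[
\text{Tax} = g \times t : \quad g = \$10\text{B} \;\Rightarrow\; \$2.1\text{B at } t = 21\%, \quad \$2.8\text{B at } t = 28\%.
\]

Booking the gain now locks in the lower of the two bills.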

Buffett has long spoken about the faith Berkshire shareholders entrust in him and his team to safeguard and grow their wealth. Berkshire is known for being fairly risk-averse, gravitating toward businesses with stable cash flows like insurance, railroads, utilities, and top brands like Coca-Cola (KO 0.29%), American Express (AXP -0.74%), and Apple. Another asset Berkshire loves is cash.

Berkshire's cash and U.S. Treasury position reached $182.3 billion at the end of the first quarter, up from $163.3 billion at the end of 2023. Buffett said he expects the cash position to exceed $200 billion by the end of the second quarter.

You may think Berkshire is stockpiling cash because of higher interest rates and a better return on risk-free assets. But shortly before the lunch break, Buffett said that Berkshire would still be heavily in cash even if interest rates were 1%, because Berkshire only swings at pitches it likes, and it won't swing at a pitch simply because it hasn't swung in a while. "It's just that things aren't attractive, and there are certain ways that could change, and we will see if they do," said Buffett.

The commentary is a potential sign that Berkshire is getting even more defensive than usual.

Berkshire's underlying business is doing exceptionally well. Berkshire's Q1 operating income skyrocketed 39.1% compared to the same period of 2023 -- driven by larger gains from the insurance businesses and Berkshire Hathaway Energy (which had an abnormally weak Q1 last year). However, Buffett cautioned that it would be unwise to simply multiply insurance income by four for the full year, considering it was a particularly strong quarter and Q3 tends to be the quarter with the highest risk of claims.

A great deal of the Q&A session was spent discussing the future of insurance and utilities based on new regulations; price increases due to climate change and higher risks of natural disasters; and the potential impact of autonomous driving reducing accidents and driving down the cost of insurance.

Ajit Jain, Berkshire's chairman of insurance operations, answered a question on cybersecurity insurance, saying the market is large and profitable and will probably get bigger, but just isn't worth the risk until there are more data points. There was another question on rising insurance rates in Florida, which Berkshire attributed to climate change, increased risk of massive losses, and a difficult regulatory environment that makes it harder to do business in the state.

An advantage is that Berkshire prices a lot of its contracts in one-year intervals, so it can adjust prices if risks begin to ramp and outweigh rewards. Or as Jain put it, "Climate change, much like inflation, done right, can be a friend of the risk bearer."

As for how autonomous driving affects insurance, Buffett said the problem is far from solved, that automakers have been considering insurance for a while, and that insurance can be "a very tempting business when someone hands you money, and you hand them a little piece of paper." In other words, it isn't as easy as it seems. Accident rates have come down, and it would benefit society if autonomous driving allowed them to drop even further, but insurance will still be necessary.

Buffett's response to a question on the potential of artificial intelligence (AI) was similar to his response from the 2023 annual meeting. He compared it to the atomic bomb and called it a genie in a bottle: it has immense power, but we may come to regret ever letting it out.

He described a personal experience of seeing an AI-generated video of himself that was so lifelike that neither his kids nor his wife would have been able to tell whether it was really him or his voice, except for the fact that he would never say the things in the video. "If I was interested in investing in scamming, it's going to be the growth industry of all time," he said.

Ultimately, Buffett stayed true to his longtime practice of keeping within his circle of competence, saying he doesn't know enough about AI to predict its future. "It has enormous potential for good and enormous potential for harm, and I just don't know how that plays out."

Despite the cautious sentiment, Buffett's optimism about the American economy and the stock market's ability to compound wealth over time was abundantly clear.

Oftentimes, folks pay too much attention to Berkshire's cash position as a barometer of its views on the stock market. While Berkshire keeping a large cash position is certainly defensive, it's worth understanding the context of its different business units and the history of a particular position like Apple.

Berkshire probably never set out to have Apple make up 40% of its public equity holdings. Taking some risk off the table, especially given the lower tax rate, makes sense for Berkshire, particularly if it believes it will need more reserve cash to handle changing dynamics in its insurance business.

In terms of life advice, the 93-year-old Buffett said that it's a good idea to think of what you want your obituary to read and start selecting the education paths, social paths, spouse, and friends to get you where you want to go. "The opportunities in this country are basically limitless," said Buffett.

We can all learn a lot from Buffett's steadfast understanding of Berkshire shareholders' needs and the hard work that goes into selecting a few investments and passing on countless opportunities.

In investing, it's important to align your risk tolerance, investment objectives, and holdings to achieve your financial goals and stay even-keeled no matter what the market is doing. In today's fast-paced world riddled with rapid change, staying true to your principles is more vital than ever.

Read more from the original source:

Warren Buffett Discusses Apple, Cash, Insurance, Artificial Intelligence (AI), and More at Berkshire Hathaway's Annual ... - The Motley Fool