Artificial intelligence has spread lies about my good name, and I'm here to settle the score – Kansas Reflector

Artificial intelligence lies.

Everyone knows this by now, of course. Programs such as ChatGPT and Google's AI overviews routinely generate nonsense when queried by users. Tech enthusiasts call these mistakes "hallucinations," as though AI just needs to sober up and come to its senses. I don't see it that way.

Because AI has started fibbing about me and my family.

Last week, my husband received a spam email from a salesman. It included a history of our last name, as follows:

The last name Wirestone is believed to have originated in Germany. It is a locational surname, meaning it was likely given to individuals based on where they lived. The name Wirestone may have derived from a place name that no longer exists or has changed over time.

The surname Wirestone first appeared in records in the late 19th and early 20th centuries in the United States, with immigrants from Germany bringing the name over. Some variations of the surname include Wierstien, Wierstone, and Wierston.

Today, the surname Wirestone is relatively rare and is primarily found in the United States. Individuals with this last name can be found in various states across the country, but they are most concentrated in the Midwest region.

The only problem with this account is that it is entirely incorrect.

I know this firsthand because the last name Wirestone didn't exist before 2010, when my husband and I made it up. We took the letters from our original last names and arranged them to create a new one. We also considered Cointower and McWren as options.

At the time, we researched to make sure that no one else had the last name of Wirestone. No one did. A marketing company bore the name Wire Stone, but that seemed sufficiently separate for our purposes. We lived in New Hampshire at the time, and the state had just legalized same-sex marriage. We wanted to share a single last name, and we wanted to share that last name with our son.

I even wrote a column mentioning this back in 2013! (Yes, I've been churning out copy for a long time.)

But when it comes to large language models, the facts don't matter.

The email my husband received looked like the work of ChatGPT to me, so I headed over and put that AI through its paces. Sure enough, it generated loads of lies about my last name, all of them along the same lines. Here's a paragraph from one, this time including a linguistic breakdown:

The last name Wirestone is not as common as some others, but it does have a history rooted in Germanic origins. "Wire" likely comes from the Middle High German word "wir," meaning "wire" or "metal," indicating a possible occupational origin for individuals who worked with wire or metal. "Stone" suggests a connection to a place or geographical feature, possibly indicating someone who lived near a notable stone or rocky area.

Sounds authoritative! Also, completely false.

You might ask how AI generates something so completely bananas. It's because AI can't tell the difference between true and false. Instead, a complex computer program plays probabilistic language guessing games, betting on what words are most likely to follow other words. If an AI program hasn't been trained on a subject (unusual last names, for instance), it can conjure up authoritative-seeming but false verbiage.
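To make that concrete, here is a toy sketch of that guessing game: a tiny bigram model that picks each next word based only on how often it followed the previous word in its training text. The training text and output below are invented for illustration; real LLMs use enormous neural networks, but they share the core property that nothing in the process ever checks whether the output is true.

```python
# Toy illustration of the "probabilistic guessing game" described above:
# a bigram model that picks each next word based only on how often it
# followed the previous word in its training text. All data here is
# made up for the example.
import random
from collections import defaultdict

training_text = (
    "the surname stone means stone the surname wire means wire "
    "the surname likely comes from a place name"
)

# Count which words follow which in the training text.
follows = defaultdict(list)
words = training_text.split()
for prev, nxt in zip(words, words[1:]):
    follows[prev].append(nxt)

def generate(start, length=8):
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))  # bet on a likely next word
    return " ".join(out)

print(generate("the"))
# The model never checks whether its output is true; it only knows
# which words tended to follow which in the text it has seen.
```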

ChatGPT later spawned a different etymology for our last name:

The surname Wirestone appears to have German origins. It is derived from the Old High German name "Wiro," which means "warrior" or "army," and "stein," which means "stone." Thus, the surname Wirestone likely originated as a combination of these elements, possibly indicating someone who was strong like a stone in battle or had characteristics associated with a warrior.

To summarize: My ancestors were either metalworkers who lived near rocky outcroppings or toughened fighters.

You might dismiss this all as mere silliness. I would agree with you, except that leaders have decided over the past year that AI will transform the global economy.

Google, which has become the default source of definitive world knowledge, began employing AI in its search results. Users soon reported that Google was telling them to smoke cigarettes while pregnant, add glue to their home-baked pizza, sprinkle used antifreeze on their lawns, and boil mint in order to cure their appendicitis, according to Slate. The company has since rolled back some of the changes.

Facebook has tacked gaudy AI features across the platform. In the meantime, it managed to block Kansas Reflector and remove every link we had ever posted. Users who attempt to share our stories still report problems doing so, even though we were assured in April by spokesman Andy Stone that the problem had been corrected.

All the while, OpenAI, the company behind ChatGPT, continues to raise money and investor expectations ever higher about the future of its technology.

Yet we're not living in the future. We're living in the now, and AI has massively underperformed in every instance where users asked it to perform accurately and reliably. Writing blender instructions in the style of the King James Bible is a fun party trick. But folks turn to the internet to answer real, pressing questions about their world.

I can tell you firsthand, from information I know personally, that the technology does not deliver.

Ten years ago, if you searched Google for information about my last name, you would find links to my work, the marketing company and the column I had written. You would be able to figure out the truth of the situation.

Now, that column has fallen prey to link rot. Those curious about Wirestone may well turn to ChatGPT, as students have done since the technology made its debut. They will be fed lies. The experience of a curious person online has therefore degraded, not improved. Perhaps AI technology will improve in the months and years to come. Perhaps not.

In the meantime, treat the output of opaque AI systems with extreme skepticism. Follow actual news reported and written and edited by actual humans. Visit Kansas Reflector's website. Subscribe to our newsletter.

Focus on reality, and leave the hallucinations behind.

Clay Wirestone is Kansas Reflector opinion editor. Through its opinion section, Kansas Reflector works to amplify the voices of people who are affected by public policies or excluded from public debate. Find information, including how to submit your own commentary, here.


Hey, Artificial Intelligence Fans! 3 Long-Term AI Stocks to Load Up on Now. – InvestorPlace

Over the past two years, artificial intelligence (AI) has been the key trend that many investors have focused on. The pace of technological innovation in AI has made waves among users everywhere. Most have tried out ChatGPT or other generative AI models and come to the same conclusion: AI is smart and is certainly going to be a resource we all utilize moving forward.

Questions around how AI will be used aside, certain companies are uniquely poised to benefit from the surge in AI application growth over time. These three long-term AI stocks may not surprise many readers; I'm focusing on the best of the best in this sector. However, it's worth noting that quality matters in this space. In my view, these are the three companies with sustainable AI tailwinds that the market is right to focus on right now.


Founded over three decades ago, semiconductor giant Nvidia (NASDAQ:NVDA) is certainly a company many investors have focused on for a variety of reasons. This chip juggernaut has seen previous surges tied to growth in gaming, crypto and a range of other technological advancements. Computing power demand has risen over time in a relatively exponential fashion, with different drivers each time.

Thus, investors shouldn't be surprised to see the company pop on a surge in interest around AI. This catalyst is as real as many of the company's previous catalysts, but many think there's a much longer runway to this particular technology (and for good reason).

On Tuesday, June 18, Nvidia replaced Microsoft (NASDAQ:MSFT) as the world's most valuable company. Shares rose 3.6% after the news broke. Currently, Nvidia has a market cap of $2.9 trillion, surpassing both Microsoft and Apple (NASDAQ:AAPL).

Over the past year, NVDA stock has risen 178%, helped along by its strong Q1 earnings report last May. Impressively, the stock has also seen a nine-fold increase since 2022, and its most recent rally can be almost entirely tied to the rise of generative AI. Adding to the positive news, Nvidia's 10-for-1 stock split improved its chances of joining the Dow soon.


Super Micro Computer (NASDAQ:SMCI) shares rose 10% on Thursday, driven by strong Broadcom (NASDAQ:AVGO) earnings, positive Oracle (NYSE:ORCL) news and AI stock momentum. With surging AI adoption driving demand for server hardware and solutions, the server specialist's stock has surged nearly 200% this year.

Super Micro Computer's rack-scale systems, integrating power, storage, cooling and software, support high-performance Nvidia and AMD (NASDAQ:AMD) AI chips. This demand drove its sales to $3.9 billion in the last fiscal quarter, a 200% increase. Earnings per share surged 308% to $6.65, benefiting from the growing need for complex data processing.

Moreover, the company has expanded its manufacturing capabilities globally, including in San Jose, Taiwan and Malaysia. It aims to increase monthly rack production to 5,000, up from 4,000 last year and 3,000 in 2022. Now, with a strong focus on AI data centers and its 5S Strategy, Supermicro forecasts $25 billion in sales over the next few years, well above its fiscal 2024 forecast of $14.7 billion.


Another AI stock investors may want to consider is Palantir Technologies (NYSE:PLTR), founded in 2003 by Peter Thiel and Alex Karp. While the company has existed for years, it only went public in 2020. Since then, the stock has surged 138%. The company's momentum accelerated from early 2023 with the launch of its Artificial Intelligence Platform (AIP), integrated into platforms like Foundry and Gotham.

PLTR stock has been on the rise, surging during the June 20 premarket session. That was tied to news that Palantir secured an exclusive deal to supply data management solutions for the Starlab commercial space station, led by Voyager Space, Airbus SE (OTCMKTS:EADSY), Mitsubishi (OTCMKTS:MSBHF) and MDA Space. CEO Alexander Karp expressed excitement about enhancing global intelligence capabilities on Earth and in space.

Starlab Space and Palantir utilized digital twins and AI to optimize operations. Palantir also secured a $19 million, two-year contract from ARPA-H for critical data infrastructure. Assuming more deals come down the pike, this is an AI stock with some pretty clear catalysts investors are right to focus on right now.

On the date of publication, Chris MacDonald did not hold (either directly or indirectly) any positions in the securities mentioned in this article. The opinions expressed in this article are those of the writer, subject to the InvestorPlace.com Publishing Guidelines.

Chris MacDonald's love for investing led him to pursue an MBA in Finance and take on a number of management roles in corporate finance and venture capital over the past 15 years. His past experience as a financial analyst, coupled with his fervor for finding undervalued growth opportunities, contributes to his conservative, long-term investing perspective.


Apple AI Could Produce ‘Really Really Good’ Version of Siri – PYMNTS.com

What if Apple's voice assistant Siri was really, really, really good?

That question is at the heart of much of the tech giant's artificial intelligence (AI) research, according to a report Sunday (May 5) by The Verge reviewing those efforts.

For example, a team of Apple researchers has been trying to develop a way to use Siri without having to use a wake word.

Rather than waiting for the user to say "Hey Siri" or "Siri," the voice assistant would be able to intuit whether someone was speaking to it.

"This problem is significantly more challenging than voice trigger detection," the researchers acknowledged, per the report, "since there might not be a leading trigger phrase that marks the beginning of a voice command."

The Verge report added that this could be why another research team came up with a system to more accurately detect wake words. Another paper trained a model to better understand rare words, which are in many cases not well understood by assistants.

Apple is also working on ways to make sure Siri understands what it hears. For example, the report said, the company developed a system called STEER (Semantic Turn Extension-Expansion Recognition) that is designed to improve users' back-and-forth communication with an AI assistant by trying to determine when the user is asking a follow-up question and when they are asking a new one.

The report comes at a time when Apple appears to be taking, as PYMNTS wrote last week, a measured approach to its AI efforts.

Among its projects is the ReALM (Reference Resolution As Language Modeling) system, which converts the complex process of understanding screen-based visual references into a language modeling task handled by large language models.

"On the one hand, if we have better, faster customer experience, there's a lot of chatbots that just make customers angry," AI researcher Dan Faggella, who is not affiliated with Apple, said in an interview with PYMNTS. "But if in the future, we have AI systems that can helpfully and politely tackle the questions that are really quick and simple to tackle and can improve customer experience, it is quite likely to translate to loyalty and sales."

The voice tech sector is on the rise. According to research by PYMNTS Intelligence, there's notable interest among consumers in this technology, with more than half (54%) saying they look forward to using it more in the future because of its speed.

For all PYMNTS AI coverage, subscribe to the daily AI Newsletter.


See how Nvidia became one of the world’s most valuable companies – The Washington Post

Chipmaker Nvidia surpassed Microsoft for the first time this month to become the world's most valuable company, with a market capitalization of $3.3 trillion. Though its reign at the top of the charts was brief, it crowned a rapid climb for the company, which was little known outside tech circles just two years ago.

For most of its three decades of existence, Nvidia was mostly a niche player, making computer chips for video games, but the company's central position in the artificial intelligence boom has led to a spectacular rise.

Nvidia sells the graphics processing units (GPUs) and the software crucial to training and running the AI algorithms that power chatbots and image generators.

Here's how Nvidia became one of the world's most valuable companies.

Nvidia went public in January 1999 at $12 a share, six years after its founding and a year before the dot-com crash would wipe out much of the stock market value of the burgeoning internet industry. The company was building a reputation for making some of the best chips for video games, and in 2001, it won a contract to supply GPUs for Microsoft's Xbox gaming console.

Nvidia had long been traded by professional investment firms, but during the pandemic, millions of people with day jobs got into stock investing through apps such as Robinhood and online forums like Wall Street Bets. Gamers turned retail investors recognized Nvidia as the company that helped power the improvement in video game graphics over the past two decades.

In 2021, Facebook rebranded itself as Meta and brought renewed interest in the concept of the metaverse, a future where people spend much of their time plugged into a virtual world. Nvidia chief executive Jensen Huang jumped on the idea and said his company's chips would power the future world of the metaverse. He even used a digital clone of himself speaking at Nvidia's annual conference to showcase the tech.

Meta's grand plans for the metaverse have yet to pan out, but at the time, some investors were betting it was the next big thing. On Nov. 4, 2021, financial analysts from Wells Fargo published a report detailing how Nvidia was well positioned to benefit from the prophesied metaverse boom, and the stock jumped 12 percent.

At the end of 2022, OpenAI, an artificial intelligence lab founded as a nonprofit in 2015, unveiled ChatGPT. It was more capable than any chatbot that regular people had interacted with yet. The tech industry was enthralled, and within months, Microsoft had invested billions into OpenAI. The AI arms race was on.

Nvidia's chips and software are crucial to building the large language models that serve as the underlying technology in ChatGPT and image generators like OpenAI's Dall-E 3, which launched in 2023.

Huang told investors on Feb. 22, 2023, that the company stood to benefit from the AI boom, which was quickly gaining steam. Wall Street was convinced, and the stock shot up 14 percent to give the company a total value of $582.3 billion.

Nvidia's stock kept climbing. In May 2023, Nvidia reported earnings showing, for the first time with real numbers, that it was a prime beneficiary of the AI frenzy. The stock jumped 25 percent and the company's valuation briefly crossed $1 trillion, one of only a handful of companies to ever reach that mark.

As the company reported higher revenue numbers, more investors piled in, pushing the stock up until it ended the year worth $1.2 trillion. Because many AI start-ups and companies, including OpenAI, are not public, there were few options for regular people to invest in the AI boom. Many bought Nvidia stock.

In the first quarter of 2024, Nvidias revenue rose to $26 billion from only $7.2 billion in the same period a year before.

AI start-ups, companies trying to add AI to their products and venture capital firms are all trying to get their hands on Nvidia's chips, driving up their price. But the biggest buyers are the Big Tech companies (Microsoft, Amazon, Meta and Google) that need the chips to build and train their own AI models.

Earlier this year, Microsoft, Meta and Google told their investors they would increase spending on AI investments. Google alone plans to spend at least $12 billion every four months this year. Much of that money is going straight into Nvidias coffers.


Warren Buffett Warns of AI Use in Scams – PYMNTS.com

Berkshire Hathaway's Warren Buffett has compared the development of artificial intelligence (AI) to the atomic bomb.

Just like that invention, the multibillionaire said Saturday (May 4) at Berkshire's annual meeting, AI could produce disastrous results for civilization.

"We let a genie out of the bottle when we developed nuclear weapons," said Buffett, whose comments were reported by The Wall Street Journal (WSJ). "AI is somewhat similar; it's part way out of the bottle."

While Buffett acknowledged his understanding of AI was limited, he argued he still had cause for concern, discussing a recent sighting of a deepfake of his voice and image. This leads him to believe AI will allow scammers to more effectively pull off their crimes.

"If I was interested in investing in scamming, it's going to be the growth industry of all time," he said.

The WSJ report noted that Buffett's comments come amid a debate among business leaders about how AI will impact society. And while not everyone compares the technology to the atomic bomb, there are those who worry AI will wipe out white-collar jobs.

Others see the upside to AI. JPMorgan Chase CEO Jamie Dimon, for example, has said AI could invent cures for cancer or allow more people in future generations to live to 100 years old.

"It will create jobs. It will eliminate some jobs. It will make everyone more productive," Dimon said in a recent WSJ interview.

It is also transforming how companies train and upskill their employees, PYMNTS wrote last week, providing personalized learning experiences that can cut costs and improve efficiency.

The global AI-in-education market is projected to expand from $3.6 billion in 2023 to around $73.7 billion by 2033, according to a report from Market.US. But in spite of this impressive forecast, online education company Chegg, which has invested in AI tools, recently saw its stock decline, underscoring the sector's volatility.

"Generative AI can provide a level of personalization in learning that is nearly impossible to achieve without this advanced technology," Ryan Lufkin, global vice president of strategy at the education technology company Instructure, told PYMNTS.

"This means we can quickly assess what an employee knows and teach directly to their knowledge gaps, reducing the amount of time spent learning and improving time-to-productivity."

For all PYMNTS AI coverage, subscribe to the daily AI Newsletter.


HHS shares its Plan for Promoting Responsible Use of Artificial Intelligence in Automated and Algorithmic Systems by … – HHS.gov

Today, the U.S. Department of Health and Human Services (HHS) publicly shared its plan for promoting responsible use of artificial intelligence (AI) in automated and algorithmic systems by state, local, tribal, and territorial governments in the administration of public benefits. Recent advances in the availability of powerful AI in automated or algorithmic systems open up significant opportunities to enhance public benefits program administration, to better meet the needs of recipients and to improve the efficiency and effectiveness of those programs.

HHS, in alignment with OMB Memorandum M-24-10, is committed to strengthening governance, advancing responsible innovation, and managing risks in the use of AI-enabled automated or algorithmic systems. The plan provides more detail about how the rights-impacting and/or safety-impacting risk framework established in OMB Memorandum M-24-10 applies to public benefits delivery, provides information about existing guidance that applies to AI-enabled systems, and lays out topics that HHS is considering providing future guidance on.


How Artificial Intelligence Is Making 2000-Year-Old Scrolls Readable Again – Smithsonian Magazine


When Mount Vesuvius erupted in 79 C.E., it covered the ancient cities of Pompeii and Herculaneum under tons of ash. Millennia later, in the mid-18th century, archaeologists began to unearth Herculaneum, including its famed library, but the scrolls they found were too fragile to be unrolled and read; their contents were thought to be lost forever.

Now, thanks to the advent of artificial intelligence and machine learning, scholars of the ancient world have partnered with computer programmers to unlock the contents of these priceless documents. In this episode of There's More to That, science journalist and Smithsonian contributor Jo Marchant tells us about the yearslong campaign to read these scrolls. And Youssef Nader, one of the three winners of last year's Vesuvius Challenge to make these clumps of vulcanized ash readable, tells us how he and his teammates achieved their historic breakthrough.

A transcript is below. To subscribe to There's More to That, and to listen to past episodes on the complex legacy of Sojourner Truth, how Joan Baez opened the door for Taylor Swift, the problem of old forest roads and more, find us on Apple Podcasts, Spotify or wherever you get your podcasts.

Youssef Nader: My name is Youssef Nader, I am a PhD student at the Free University of Berlin, and today I'm speaking to you from Alexandria.

Chris Klimek: Youssef spends most of his time in Berlin, but we caught him while he was visiting family in Alexandria, Egypt, which is a very busy traffic city. He said he was five stories up, and it still sounded like he was on the street.

Nader: We arrived in Alexandria somewhere around 2 a.m. in the morning, so I got some sleep and I woke up to have the interview, basically.

Klimek: Youssef grew up in Cairo, so from a young age he was surrounded by ancient history.

Nader: Papyrus was invented by ancient Egyptians almost 5,000 years ago, so learning about papyrus making, and how the ancient Egyptians went around documenting their history, is something you learn about very early on, and something that sticks with you. It's very common to have souvenirs from Egypt, which is like papyrus with some hieroglyphs and some writings, and it's a very common souvenir or gift that we bring people from here, and I brought my friends a couple of times. So yeah, it's sort of a cultural heritage.

Klimek: Today, Youssef is a PhD student who works with machine learning and A.I.

Nader: I do work with image data, but I usually work with 2D images, like photos you take of your dog and stuff like that.

Klimek: One day, Youssef heard about something called the Vesuvius Challenge. It involved some unreadable ancient scrolls and the hope that some A.I. expert might be able to help, with a reward of $700,000.

Nader: It had all of the interesting elements: Papyrus, which rings a bell for an Egyptian, of course; playing around with historical data of 2,000 years ago just on my laptop is not something you come by very often; very interesting technical problem; a big monetary prize. It was just all of the right elements that make it worthwhile.

Klimek: It was a big challenge, but Youssef decided he was up to the task.

Klimek (to Nader): Have you ever seen one of the scrolls in person?

Nader: I have. I recently visited and I got to see scrolls up close. And it's crazy. I could not believe that this is the same thing I'm working on on my computer, because it doesn't look like there is hope. When you look at the scroll up close, it really looks like a piece of charcoal, and the sheets look like they merged together, it's just one, and they're very, very small. One of the scrolls was just my finger tall, so it was really crazy to think that this is what we're working on and we were reading. It's a little bit of science fiction.

Klimek: From Smithsonian magazine and PRX Productions, this is There's More to That, the show where we may not welcome our robot overlords, but we are willing to let them help us read historically significant ancient papyrus scrolls. In this episode, we learn more about the Vesuvius Challenge, what happened and what A.I. means for the future of archaeology. I'm Chris Klimek.

Klimek: What are the Herculaneum scrolls, and why are they important?

Jo Marchant: They're a collection of carbonized papyrus scrolls from around 2,000 years ago, ancient Roman times, that were buried by the eruption of the Vesuvius volcano. The same one that buried Pompeii.

Klimek: Jo Marchant is a Smithsonian contributor who's covered this story for several years now.

Marchant: Often, the scrolls are described as the only intact library we have that survives from the ancient world. Because they were buried by the volcano, you've got these carbonized scrolls that were kept underground for all that time, so they have survived. But the only problem is you can't unwrap them to read them without destroying them. So, they've been this big archaeological mystery since they were discovered in the 18th century.

Klimek: What do they look like now?

Marchant: Some of them have been pulled apart, and are basically crumbled into dust and they're in hundreds of pieces, but there are a few hundred, the worst, most charred cases, if you like, that were left intact as a lost cause. They've been described as saggy, brown burritos, which is one of the least rude descriptions that I've heard. They're kind of crumpled, crushed, wrinkled. They look like nothing. They were thought, supposedly, to have been pieces of coal by the workmen who first uncovered them in the 18th century, so they just really look like very sorry objects indeed. You would not think that you were going to get a lot of information out of them.

Klimek: Do we know how they were recognized when they were found as carbonized scrolls? It sounds like they could have easily been mistaken for something else.

Marchant: Yeah, a lot of them were supposedly just thrown away, or burned, even, for heat by the workmen, these 18th-century workmen who had first uncovered them.

Klimek: What these workmen had discovered was an ancient library buried underground since the Vesuvius eruption in 79 A.D.

Marchant: The library itself was situated in this luxury Roman villa on the shore of the Bay of Naples. It possibly belonged to Julius Caesar's father-in-law at one point, this beautiful villa with walkways, columns, statues, works of art, courtyards, this luxury residence. The workmen are digging tunnels, essentially, through the site, uncovering it, find these lumps, initially just think that they're coal, burn them, throw them away. To be honest with you, I don't know exactly how it was first realized that that was not what these things were, that they were actually incredibly precious. But once that was realized, then there was incredible interest, then, in trying to read them. This was a really unique, spectacular find. We just don't have literary written sources from the classical world. Most of the works of literature or philosophy or whatever it is that we have have been copied and therefore selected through the centuries. But to actually have these original pieces from the time is just really, really incredible. So, there were all sorts of efforts to try and open these scrolls, most of which ended up being very, very destructive.

Klimek: What else has hindered efforts to read the scrolls, aside from the fact that they fall apart if you try to physically unroll them?

Marchant: Yeah, so technically, this is an incredibly difficult challenge. There have been attempts to open them, and essentially you end up with hundreds of pieces or strips, because it's incredibly thin, this papyrus, you might have hundreds of rolls. So, imagine it's tearing off in strips, but then you've got different layers that stick together. So, each of your strips might consist of a different number of layers, and then you've got to try and piece those together as a jigsaw. So, there has been a lot of work going on among papyrologists to try and decipher, translate, interpret those pieces, sticking the bits back together. But then they were kind of put aside as a lost cause. I think a lot of people thought that those were never going to be read, they were just going to sit there in the library archive.

Klimek: As Jo mentioned, the scrolls were incredibly fragile, but that's really just the beginning of why researchers were so stumped. First, how could they separate all the layers of paper?

Marchant: You've got to find a way of looking inside them, working out where the surfaces of the papyrus are, and then reading the ink. They're so crumpled, and you've got all of these layers, some of them are stuck together, rolled very tightly. How do you even image and find the surfaces?

Klimek: Yeah. Then there was the ink itself.

Marchant: A lot of ink from ancient papyri has got iron in it, so if you X-ray, that ink will glow very brightly. But the problem with this ink is it's just carbon and water. It has exactly the same density in X-ray scans as the papyrus. So, you can do your X-rays, you can do beautiful 3D scanning, whatever you're going to do. But it's like doing an X-ray of a body: You're looking for the bones, but the bones are completely transparent; the ink doesn't show up.

Klimek: Enter Brent Seales, a professor at the University of Kentucky.

Marchant: He's a computer scientist, so he's not a classicist, quite an unusual person to be spearheading this attempt to read these ancient scrolls. But he was originally interested in computer vision and then got interested in, how could you use algorithms to flatten out images? One of the first things that Brent Seales worked on was a very old copy of Beowulf in Old English that was kept in the British Library. Part of the problem when you take photographs of very old manuscripts like that is it's all kind of warped, and sort of folded and cockled. The surface isn't flat, so if you just take a photograph of it, you're not going to be able to see all of the writing. So, the idea was to develop software where you could scan the not-flat three-dimensional surface, and then flatten it out, so that you would have a nice, flat surface, and you could read all the writing.

So, then moving from there to actually virtually unwrapping something that was rolled up. And a few years ago, the team did that on an old scroll from Ein Gedi, on the shores of the Dead Sea, that was burned by fire in the sixth century A.D. And they took the CT scans of that and were able to then virtually unwrap that surface, and see that, written inside, was actually some text from the Book of Leviticus. So, that was an incredible advance.

Klimek: Then in 2005, a colleague showed Brent Seales the Herculaneum scrolls.

Marchant: And he told me that that just blew his mind, just the scale of that challenge, and the potential for the information that you could find. But he's quite interesting, in that he isn't so interested specifically in some of these ancient Greek and Roman sources that most papyrologists would be interested in. He's actually a devout Christian, and he is really interested in the origins of Christianity. The volcano erupted in 79 A.D., these scrolls were buried, so this was the time when Christianity was just beginning, and the philosophers in ancient Greece and Rome, in that world, would've been very aware of what was happening, probably interested in this new religion that was starting up. But he told me that what he was really dreaming of, really interested in, is finding out more information about that. Can we find information from early Christian sources?

There's the huge technical challenges, but one of the biggest problems he's actually had is getting access to the scrolls to even study them, and to try to develop these techniques, because they're incredibly precious and incredibly fragile. So, curators who are in charge of these collections, the last thing they want to do is give them to some computer scientist who wants to carry them off to a particle accelerator somewhere and send beams of X-rays through them. This is something that's taken nearly 20 years to really come together.

Klimek: I love this. Can I borrow this irreplaceable treasure of yours? I'll bring it right back, I just need to run it through my particle accelerator first.

Marchant: Exactly, exactly.

Klimek: It'll be fine.

Marchant: And I've spoken to curators, and they'd say you breathe on these things, they will fall apart. They are so fragile. So, it really is a kind of perfect storm of difficulties.

Klimek: Remarkably, the scrolls were eventually taken to a particle accelerator in the U.K. for 3D scanning.

Marchant: You're making a 3D reconstruction of that volume, and then you have to go through, really painstakingly, slice by slice, and kind of mark where all the surfaces are. If you think about looking at one of these scrolls in a cross section, you'll see a spiral of where the papyrus is all wound together, and you have to mark where all those surfaces are, and then what Brent Seales and his team did was work on software for algorithms that could take that data and then unwrap that spiral into flat surfaces. So, you get a kind of flat image of what that surface looks like in the CT scan, that you can then work on and try and look for the ink.
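To give a rough sense of what that unwrapping step computes, here is a minimal sketch in which a synthetic volume and a hard-coded spiral stand in for the painstaking manual marking Marchant describes. Real pipelines mesh the surface in 3D and interpolate carefully; this shows only the core idea of sampling voxels along a traced spiral, slice by slice, into a flat image.

```python
# A toy version of virtual unwrapping: sample a CT volume's voxels
# along a spiral traced through each slice, producing a flat image of
# the wound-up surface. The volume and the spiral here are synthetic
# placeholders for the hand-marked surfaces described above.
import numpy as np

rng = np.random.default_rng(0)
volume = rng.random((100, 200, 200))  # (slice, row, col) CT intensities

def trace_spiral(n_points, center=(100, 100)):
    """Stand-in for the manual marking step: points along the spiral
    cross section of the rolled papyrus in one slice."""
    t = np.linspace(2 * np.pi, 10 * np.pi, n_points)
    r = 2 + t  # the radius grows as the papyrus winds outward
    rows = (center[0] + r * np.sin(t)).astype(int)
    cols = (center[1] + r * np.cos(t)).astype(int)
    return rows, cols

n_points = 500
flattened = np.zeros((volume.shape[0], n_points))
for z in range(volume.shape[0]):
    rows, cols = trace_spiral(n_points)   # the marked surface in slice z
    flattened[z] = volume[z, rows, cols]  # read intensities along it

# One row per slice, one column per position along the unwound surface:
# a flat image you can then search for ink.
print(flattened.shape)  # (100, 500)
```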

But as I mentioned, the ink in those images is transparent; you can't see it. So, that then was the next challenge. How are you actually going to make that ink visible? They had one tiny fragment which had one letter on it, sigma, and they were able to carry that to the Diamond Light Source in Oxfordshire, and the idea was that just using that one letter, they were trying to come up with imaging techniques, and that's where, a few years ago, they had the idea of using machine learning, these artificial intelligence techniques, to try to do that.

If you take some of the papyri that has been opened, some of these fragments, and you train your machine learning algorithm, you show it: This is what ink looks like, and this is what not ink, just the blank papyrus, looks like. You can teach it to be able to tell the difference, so then you can run that same algorithm on your CT scans from inside the wrapped-up scroll. That was the approach, but they realized that this was going to be incredibly labor intensive, a lot of work to do. And I think that's the point at which Nat Friedman, the Silicon Valley entrepreneur, who had heard about the Herculaneum scrolls, contacted Brent Seales to say, Right, what's happening with this? Is there anything that I can do to help? And that was the origins of this Vesuvius Challenge competition.

Klimek: Nat Friedman is the former CEO of GitHub, an online platform where computer programmers collaborate.

Marchant: And this whole project, actually, I find fascinating, because of the different worlds that come together. You've got the computer scientists. You've got these classicists and papyrologists who have their own culture and world. You've got the curators: they're just really wanting to keep everything safe, they're conservators. So, very different motives, very different cultures that these people are coming from. If you think of papyrologists, often it will take them years, decades to do a translation and edit an edition of a particular source. They're so painstaking, they're working character by character, just trying to work everything out. And then you've got the Silicon Valley entrepreneurs coming in, going, Speed is everything! We are going to solve this now! And you throw those two worlds together, I find it completely fascinating how, actually, in this case, that's actually worked really well. It's really triggered a lot of progress and creativity.

Klimek: So, how does all of this bring us, in 2023, to the Vesuvius Challenge?

Marchant: Nat Friedman told me that during the pandemic, during lockdown, he's looking for things to do, like we all were, looking for distractions. Starts reading about ancient Rome, getting very interested in that whole world, finds out about the Herculaneum scrolls through just Googling, Wikipedia, all of this. Eventually comes across an online talk by Brent Seales talking about all of the work, and this problem with not being able to see the ink, and how he thinks that machine learning, artificial intelligence, might be the answer to that. And Nat said, from this talk, it sounded like Brent was pretty much there. He was going to solve it pretty soon, so he just thinks, Oh, I look forward to finding out what happens with that. Then, a couple of years later, it was like, Oh, they don't seem to have read the scrolls yet.

So, he got in touch with Brent Seales to invite him to a retreat where a lot of tech figures, funders, that sort of whole community get together. Seales initially just ignored the email, just didn't really believe who it was from. So, it took a bit of chasing, but he eventually realized that yes, this was Nat Friedman who was trying to get in touch with him. He went along to this retreat. It's a camp-out in the woods in Northern California, where they all sit around fires and discuss projects, and, I don't know, important decisions in the tech world get made. But nobody was actually interested in funding this project.

So, Nat Friedman, afterward, is thinking, I don't want this guy to go home with nothing, after I promised him that we'd be able to do something to help his project. Basically, he said, Why don't we do it as a competition? He and his longtime funding partner, Daniel Gross, put forward initial funding for the competition, and the idea was that you make all of your data open source to the public, just put it out there, and then you set goals for people who can make different advances toward reading the scrolls. So, things like first person to detect ink, first person to detect a word, first person to read a whole passage. You set all of these different minds onto the challenge at once.

And the actual design of the competition is really interesting and really clever, I think, because rather than just having one prize and everyone's working alone, because you've got these progress prizes, every time somebody wins a progress prize, all of their work, all of their data, all of their algorithms get made public. So, the way that Brent put it to me is you level everybody up, then, so everybody has the advantage of that, and then they all start working on the next challenge.

I asked Brent Seales, actually, was that difficult? If you've worked on a project for nearly 20 years, and your dream was you were going to be the person to read the scrolls, is that a hard decision to make, then, to say, Actually, it's not going to be me. I'm going to do this prize. I'm going to make everything I've done so far, everything I've worked for, all of our software, all of our data, let's just make it public, put it out there, and then someone else can come and do that last step, and they will be the person to read it. Can you imagine? How hard. And he said yeah, it was really difficult. The whole team had to talk about that together, and make sure that they were all OK with that.

Seales also said something else to me: He said often with archeology, and I've come across this with other stories I've written, actually, that somebody decides that they're going to be the one to solve a mystery or whatever it is, make a discovery, and it's almost like the ego takes over, it's theirs, and they're going to be the one to have all the glory. And he said this was almost a way to prove to himself that he wasn't that person. That he's doing it so that the scrolls can be read.

They put everything out there, made it public, launched the award toward the beginning of 2023, and it all went from there. I think they had more than a thousand teams, in the end, from all over the world, like China, Ukraine, Australia, U.S., Egypt, and they were all on this Discord, this chat platform for gamers, discussing latest advances and questions, because they were just releasing little flat images of the surfaces inside these scrolls, a little piece at a time. And then what the entrants for the Vesuvius Challenge were doing was then they would take those segments, those flat segments, and use those to then train their machine learning models to try and recognize that ink.

Klimek: Were there any unsuccessful avenues that were part of this that were included in your reporting? Any attempts that didn't pan out?

Marchant: I think there were lots of teams trying different things, trying to train their algorithms in different ways. So, one thing that Seales thought they might be able to do was to train the algorithm on the letters from the parts of the scrolls that have been read, but that ended up really not working very well. It seems that you have to train your algorithm on scans of the same scroll that you're trying to read, which is obviously very difficult, because you can't see the ink. How are you going to do that?

One of the first real key breakthroughs: there was an ex-physicist called Casey Handmer. He was actually looking at the images that were coming out from inside this scroll visually, and just spending hours and hours poring over them. He was convinced that if a machine learning algorithm could see a difference (a lot of those are trained based on the human visual system), then a human could too. So, he was thinking, If a machine can see it, it must be possible for a human to see it, if we just look carefully enough. So, he's poring over these images and eventually notices this very strange, very subtle difference in texture.

So, normally in the CT scans, you can sort of see the woven strands of the papyrus, and then in some places there was this … It's described as being like cracked mud on a riverbed, those geometric kind of cracks you get. So, they called it crackle. He was trying to look at this, trying to work out where it was, and then realized in one place, it seemed as if it was forming the shape of a letter. So, he was like, Oh my goodness, this is the ink. This is not showing up as a different color, it's not glowing bright or anything, but there's just this very, very subtle difference in the texture of the surface where the ink is sitting on the papyrus. And he was awarded the First Ink Prize for doing that. So, then other competitors were able to use that to train their algorithms. Now they've got a foothold, they've got something to start training their algorithms on: the difference between ink and not ink.

Klimek: After that, the race was on. Who would find the first word to read from the Herculaneum scrolls?

Klimek (to Nader): Can you give us a simple definition of what machine learning is?

Nader: Machine learning is about how to teach a statistical model to map your input data to some output result that you want. For the Vesuvius Challenge especially, we wanted to teach the A.I. model what ink looks like.

Klimek: Nader again.

Nader: So, you give the A.I. model some small images, some patches of the image, because the segments are really huge, it's like hundreds of thousands of pixels by hundreds of thousands of pixels. It's crazy resolution. So, you take a small piece, you show it to the A.I. model, and the A.I. model needs to say, I see ink in this small piece or not. And to train this, you need some examples to show it to begin with. So, we tell it, OK, this is what ink looks like. This is what ink doesn't look like. And you show it these examples, and then it's able to learn, OK, how do I differentiate between the two? And then it notices, OK, there's this pattern on top of the papyrus that looks quite like cracks, and maybe it can use that to detect the signal.

And of course there were very interesting problems, because to begin with, we can't see the ink ourselves, so we didn't have the data that we could show to the A.I. model to say, OK, this is what ink looks like. And it took a lot of experiments and a lot of ways to find a first footing of ink from small pieces that fell off the scrolls: the first two letters. How do we go from two letters to 2,000 letters? You train an A.I. model to learn these two letters that you found, and it has a slightly better idea of what letters look like, so it finds another ten letters. You take those 12 letters now, and you train a new one with the 12 letters. The new A.I. is better, so it finds maybe 20-something letters.

And the beginning was incremental. I would usually just take the predictions from an A.I. model, like, OK, these are letters. I would paint over them in Photoshop to make some examples of what ink is, so just like a black and white image, and I would give it to the next A.I. model. Of course, my drawing is not very accurate, and it was a question of how do you allow the A.I. model to disagree when you have some mislabeled stuff? How do you guarantee that the A.I. model is not hallucinating, not making up letters? And we had to operate on a very, very small scale, such that the letter is never seen by the A.I. model. It only predicts pixel level: ink, no ink, ink, no ink. And then we, as humans, when we look at the big picture, we see, OK, yeah, this is actually Greek, this is what it means.
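Nader's bootstrapping loop can be sketched in a few lines. Everything below is synthetic stand-in data with a deliberately simple classifier; the real competition models were deep networks predicting ink at the pixel level over 3D CT patches, but the pattern of training on a few labels, keeping only confident predictions, and retraining is the same.

```python
# Toy sketch of the iterative "find a few letters, retrain, find more"
# loop described above, using synthetic patches and a simple classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# Synthetic "patches": flattened 16-pixel crops. Ink patches carry a
# faint, crackle-like intensity shift; blank papyrus does not.
n = 2000
X = rng.normal(size=(n, 16))
true_ink = rng.random(n) < 0.5
X[true_ink] += 0.8  # the subtle texture difference

# Start from a tiny hand-labeled seed set ("the first two letters").
seed = np.concatenate([np.flatnonzero(true_ink)[:10],
                       np.flatnonzero(~true_ink)[:10]])
labeled = np.zeros(n, dtype=bool)
labeled[seed] = True
labels = np.zeros(n, dtype=bool)
labels[seed] = true_ink[seed]

for round_ in range(5):
    model = LogisticRegression().fit(X[labeled], labels[labeled])
    proba = model.predict_proba(X)[:, 1]
    # Only very confident predictions become new training labels,
    # one guard against the model "making up letters."
    confident = (proba > 0.95) | (proba < 0.05)
    newly = confident & ~labeled
    labels[newly] = proba[newly] > 0.5
    labeled |= newly
    print(f"round {round_}: {labeled.sum()} patches now labeled")
```

The other safeguard Nader mentions, human verification, sits outside the loop: a person checks that the newly labeled pixels assemble into plausible rows of letters before the labels are trusted.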

Klimek: This is how you can have confidence in one set of findings before you move on to the next set. Youre verifying the machine learning conclusions with human eyes before you feed those discoveries back into the A.I. to try to solve the next set.

Nader: Yeah. So, in the training phase, I was verifying this by my own eyes, which … I'm no expert in Greek, I actually don't know any Greek. So, I was just looking at what makes sense as a writing, like any kind of written language. You have some ink deposits, and you draw a letter in some shape. It makes sense that the letters are all on a single row, it doesn't make sense that there's scrambled rows; fixed-size columns, stuff like that. I go to sleep thinking about the Vesuvius Challenge. I wake up, check some stuff, continue working, eat, sleep, then repeat. I wasn't even getting proper sleep because I'm going to bed and thinking, OK, did I actually try that thing? Maybe I have a different idea, maybe I should do this. And I run something overnight, and I check in the morning if it worked or not. So yeah, we were grateful that the first words that we found were not something like "and" or "the," for example. That would've been underwhelming. It had some meaning, it had some kind of zest to it, and I think that was really cool.

Klimek: Youssef was one of two people to find that first word. It was (drum roll, please) purple.

Marchant: So, that was the first word, purple. Which is lovely, I love that it was just such a rich, evocative word.

Klimek: Marchant again.

Marchant: So, immediately that said to the papyrologists, We think this is a new work we've never seen before. Because purple is quite a rare word. Purple, porphyras, is the name of a dye. It was made from sea snails, so very expensive, difficult to make, so used to dye the emperor's robes. This was a sign of wealth, luxury, rank. It's just this lovely sort of … Yeah, just evocative word. So, that was the First Letters Prize, awarded in October to Luke Farritor, who got first place for that prize.

Klimek: Luke Farritor was a 21-year-old computer science student at the University of Nebraska. Youssef won second place. The two reached out to one another after the announcement, eventually deciding to team up. They were joined by a third student named Julian Schilliger. Together, the three set their sights on the next phase of the competition.

Marchant: When the whole challenge was set up in March 2023, they had this big $700,000 Grand Prize for reading the first passages from the scroll. And a deadline was set for that prize, which was the 31st December 2023, so the end of that year. Nat Friedman said it was getting nearer and nearer to the end of the year, and they're not getting any entries for this Grand Prize. They were getting pretty worried. They were starting to send out messages going, So, how's everyone getting on? Let us know your progress!

Klimek: Entrants to the Vesuvius Challenge worked right down to the wire. Youssef and his teammates were no exception.

Klimek (to Nader): What were your last few days like, prior to the deadline?

Nader: They were quite sleepless. I was trying to make sure that I'm not submitting on the last day, which I usually do in every other thing. I knew that a lot of people would be submitting at the very last day or the very last minute. I was also not sure about … There was a time factor. If you get to the threshold of winning first, you win. I was not sure: Where are we on that? Do we have the best models? Where are we? You don't know about other teams. And so you also want to guarantee that you're first, in case there's a tie. So, there was the time factor and the quality factor, and you're trying to decide, OK, do I submit now? Do I try to make it better over the next week? Is it getting better? It's not getting better. And I made one submission 22nd December, and one 30th December, so, one day before the end of the competition.

I was just planning to go back to Egypt to visit my family after the long haul of the Vesuvius Challenge. It was the day after I arrived in Egypt. They sent us an email, saying, Hey, the evaluation process is still ongoing, we'd like to meet with you guys. Of course, we're in different time zones, and they wanted to make sure we're all in one meeting when they tell us the news. So, we didn't know that we were getting the announcement, and we were suspicious. OK, why do you need all three of us in a meeting? We were like, We can answer the questions over email. Julian was saying, Yeah, it doesn't make sense.

We went to the meeting, and then they were asking us normal questions, and we were like, OK, yeah, maybe it's still ongoing. And then Nat was like, How would you guys feel if we told you that you won the Vesuvius Grand Prize? And it was like, What? And I think it took us a couple of days for it to sink in, actually, that we actually won. And we were in disbelief, but we were ecstatic, and it just felt amazing.

Marchant: The three of them working together, they'd actually read, I think it was more than 2,000 characters from this scroll, more than 5 percent of the entire scroll. And these are really big, long, long scrolls. And it was discovered that it was a work of philosophy by an ancient Greek philosopher called Philodemus. And that in itself was not a huge surprise, because of the scrolls that attempts had been made to open and partially read, a lot of those scrolls were written in Greek and were philosophy works by Philodemus. He was a follower of Epicurus, who founded the Epicurean school of Greek philosophy. They thought everything in nature was made of atoms that swerve and collide. And there are so many works, actually, of Epicurean philosophy that they think that that part of the library was probably the working library of this philosopher, Philodemus.

And it seems to be a work on pleasure, and the senses, and on what gives us pleasure, possibly relating to music. It's mentioning the color purple, it's mentioning the taste of capers. There's a character called Xenophantus who is mentioned; possibly this is a known Xenophantus who was a very famous flute player, whose playing apparently was so evocative and stirred the heart so much that it always caused Alexander the Great to immediately reach for his weapons. So, you get a sense of all these lovely sensory sources of pleasure that are being mentioned in this piece. So yeah, papyrologists are really, really excited about that. But then also what this means for what else we could be reading from now.

Klimek: I asked Youssef what other archaeological problems he'd like to see machine learning tackle.

Nader: I think there are very interesting projects for machine learning in archaeology, even outside of reading a scroll. I think there have been discussions of using similar techniques to read writings on wrappings of mummies. I know of one other project in our university that has to do with using 3D reconstruction and imaging for archaeological sites, using drones to scan the sites, and figure out structures and stuff. There are some interesting problems that are either really hard to solve, or require a lot of manual effort, and A.I. could really help us speed things up.

Klimek: Do you think most people who don't have your specialized background and education, do people understand generally what artificial intelligence is?

Nader: Artificial intelligence has been getting a lot of bad reputation recently, also because of how it has been used. I think sometimes people think it's a lot smarter than it actually is, and some people think it's a lot dumber than it may be. I believe it's a very interesting tool, depends really how you use it. A lot of the fear and concern from A.I. comes from not treating it as a tool, but as an entity of its own that wants to do either good or bad. But the good or bad is basically coming from the human operating the tool. I think there's a lot of debates coming from the world-leading experts in A.I. about what actually are the risks, and how to interpret what we are doing. So, it's still kind of an ongoing process, but there is some awareness of, OK, there is this new technology that is shaping the world.

And I'm glad that the Vesuvius Challenge came at this time, because it also shows, yeah, you can do harm with A.I., but you can also do so much good, and so much benefit to mankind. So, some people are starting to think, Yeah, maybe this is not really as bad as we thought. Or, We could really use this for our own good.

Klimek: Thank you, Youssef, this has been fascinating.

Nader: Yeah, thank you, Chris.

Klimek: To read more of Smithsonian magazine's coverage of the Vesuvius Challenge, check out the links in our show notes. And as always, we'd like to send you off with a dinner party fact. This time, we bring you a brief anecdote about another fragile thing that lives buried, not under ash, but under ice.

Megan Gambino: Hi, I'm Megan Gambino, and I'm a senior web editor at Smithsonian magazine. I recently edited a story about ice worms. I had no idea what these things were until this story. They're tiny, inch-long worms that live in glacial ice. They're actually the only macroscopic animals that live in glaciers. But what I found interesting about them is that they're both hardy and fragile at the same time. And what I mean by this is they can live for years without food, and they live at freezing temperatures, and yet they can only survive in this tiny temperature range, hovering right around 32 degrees Fahrenheit. Any colder, they get hypothermia; any warmer, their membranes melt. So, I found that they were this interesting critter that was both tough and delicate at the same time.

Klimek: There's More to That is a production of Smithsonian magazine and PRX Productions. From the magazine, our team is me, Debra Rosenberg and Brian Wolly. From PRX, our team is Jessica Miller, Genevieve Sponsler, Adriana Rozas Rivera, Ry Dorsey and Edwin Ochoa. The executive producer of PRX Productions is Jocelyn Gonzales. Our episode artwork is by Emily Lankiewicz. Fact-checking by Stephanie Abramson. Our music is from APM Music.

I'm Chris Klimek. Thanks for listening.

Enhancing Developer Experience for Creating Artificial Intelligence Applications – InfoQ.com

For one company, large language models created a breakthrough in artificial intelligence (AI) by shifting the work to crafting prompts and utilizing APIs, without a need for AI science expertise. To enhance the developer experience and craft applications and tools, they defined and established principles around simplicity, immediate accessibility, security and quality, and cost efficiency.

Romain Kuzniak spoke about enhancing developer experience for creating AI applications at FlowCon France 2024.

Scaling their first AI application to meet the needs of millions of users presented a substantial gap, Kuzniak said. The transition required them to hire data scientists, develop a dedicated technical stack, and navigate through numerous areas where they lacked prior experience:

Given the high costs and extended time to market, coupled with our status as a startup, we had to carefully evaluate our priorities. There were numerous other opportunities on the table with potentially higher returns on investment. As a result, we decided to pause this initiative.

The breakthrough in AI came with the emergence of Large Language Models (LLMs) like ChatGPT, which shifted the approach to utilizing AI, Kuzniak mentioned. The key change that LLMs brought was a significant reduction in the cost and complexity of implementation:

With LLMs, the need for data scientists, data cleansing, model training, and a specific technical infrastructure diminishes. Now, we could achieve meaningful engagement by simply crafting a prompt and utilizing an API. No need for AI science expertise.
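
The pattern Kuzniak describes, crafting a prompt and calling a hosted model's API rather than building a training pipeline, can be sketched in a few lines of Python. The example below is a minimal illustration, assuming the OpenAI chat-completions SDK and an OPENAI_API_KEY environment variable; the model name, prompts, and helper function are hypothetical, not the company's actual implementation:

```python
# Minimal sketch of the "prompt plus API" pattern: no data scientists,
# no model training, no dedicated infrastructure. Requires `pip install openai`
# and an OPENAI_API_KEY environment variable. Model name and prompts are
# illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def answer_student_question(question: str) -> str:
    """Send a crafted prompt to a hosted LLM and return its reply."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical model choice
        messages=[
            {
                "role": "system",
                "content": "You are a patient teaching assistant for an online school.",
            },
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content


print(answer_student_question("What is the difference between a list and a tuple?"))
```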

Kuzniak mentioned that enhancing the developer experience is as crucial as improving user experience. Their goal is to eliminate any obstacles in the implementation process, ensuring a seamless and efficient development flow. They envisioned the ideal developer experience, focusing on simplicity and effectiveness:

For the AI implementation, we've established key principles: simplicity, immediate accessibility, security and quality, and cost efficiency.

Kuzniak mentioned that their organizational structures are evolving in the face of the changing technology landscape. The traditional cross-functional teams comprising product managers, designers, and developers, while still relevant, may not always be the optimal setup for AI projects, as he explained:

We should consider alternative organizational models. The way information is structured and its subsequent impact on the quality of outcomes, for example, has highlighted the need for potentially new team compositions. For instance, envisioning teams that include AI product managers, content designers, and prompt engineers could become more commonplace.

Kuzniak advised applying the same level of dedication and best practices to improve the internal user experience as you would for your external customers. "Shift towards a mindset where your team members consider their own ideal user experience and actively contribute to creating it," he said. This approach not only elevates efficiency and productivity, but also significantly enhances employee satisfaction and retention, he concluded.

InfoQ interviewed Romain Kuzniak about developing AI applications.

InfoQ: How do your AI applications look?

Romain Kuzniak: Our AI applications are diverse, with a stronger focus on internal use, particularly given our nature as an online school generating substantial content. We prioritize making AI tools easily accessible to the whole company, notably integrating them within familiar platforms like Slack. This approach ensures that our staff can leverage AI seamlessly in their daily tasks.

Additionally, we've developed a prompts catalogue. This initiative encourages our employees to leverage existing work, fostering an environment of collective intelligence and continuous improvement.
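
The article does not show what such a catalogue looks like. A minimal sketch, assuming nothing more than a shared registry of named, parameterized prompt templates (the entries and the render helper below are hypothetical, not the company's actual catalogue), might look like this:

```python
# A hypothetical prompts catalogue: named, parameterized templates that
# employees can discover and reuse, so good prompts are shared rather than
# rewritten from scratch. Entries are illustrative only.
PROMPT_CATALOGUE = {
    "summarize_lesson": (
        "Summarize the following lesson for a beginner "
        "in five bullet points:\n\n{text}"
    ),
    "draft_quiz": (
        "Write {n} multiple-choice questions, with answers, "
        "covering the following material:\n\n{text}"
    ),
}


def render(name: str, **params) -> str:
    """Look up a template by name and fill in its parameters."""
    return PROMPT_CATALOGUE[name].format(**params)


# Reuse an existing prompt rather than crafting a new one:
prompt = render("draft_quiz", n=3, text="HTTP status codes: 200, 404, 500...")
```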

Externally, we've extended the benefits of AI to our users through the introduction of a student AI companion, for example. This tool is designed to enhance the learning experience by providing personalized support and guidance, helping students navigate their courses more effectively.

InfoQ: What challenges do you currently face with AI applications and how do you deal with them?

Kuzniak: Among the various challenges we face with AI applications, the most critical is resisting the temptation to implement AI for its own sake, especially when it adds little value to the product. Integrating AI features because they're trendy or technically feasible can divert focus from what truly matters: the value these features bring to our customers. We've all encountered products announcing their new AI capabilities, but how many of these features genuinely enhance user experience or provide substantial value?

Our approach to this challenge is rooted in fundamental product management principles. We continuously ask ourselves what value we aim to deliver to our customers and whether AI is the best means to achieve this goal. If AI can enhance our offerings in meaningful ways, we'll embrace it. However, if a different approach better serves our users' needs, we're equally open to that.

New to the Lou: Ai, No Artificial Intelligence – PawPrint

I can't believe I've never touched on my newfound love and appreciation for a classic Japanese cuisine, SUSHI!

As a kid, I was never one to try new things. "I'll have chicken tenders, mac and cheese, or strawberries, please." It wasn't until I turned 18, around my senior year of high school, that I finally started to branch out and try a few new things here and there.

I started off small. I tried lemonade for the first time at the county fair, and man, my life was changed forever. I remember that pivotal moment in my picky-eater career, when I began getting excited about trying new things.

Sushi is a new dish that I recently gave a try. I had always wanted to try it, but thought for sure that it wouldn't be for me. I'm so glad I pushed myself to try it because now it's one of my favorite dinners to grab!

Last night, my boy Skywalker Mann and I tried Sushi Ai, located in Clayton, Missouri. But don't worry, that's only one of five locations in the St. Louis area.

The recommendation came from a fellow PawPrint classmate, Liz Santimaw, a senior Business Administration student. Liz said, "I love Sushi Ai," and I trust her judgment, so I knew it was worth a try.

Photo courtesy of Maddie Hill. This photo is from the Sushi Ai location at 471 Lafayette Center Drive in Manchester.

Walker got the all-you-can-eat sushi deal that they offer, which was $23.99. They have over 40 rolls to choose from. He went with the Snow White roll, the Volcano roll, and the American Dream roll. On the more mellow side, I went with the classic Crab roll and the Shrimp Tempura roll.

What sets Sushi Ai apart is that all of the sushi is prepared in house, with the chefs in view of the customers. This creates an experience for customers and makes the restaurant feel authentic.

When you arrive, you are handed a paper menu and a pen, with a list of all the rolls and combinations they offer. Once you decide on the rolls you want to order, you write a checkmark or X next to it. Your server will come and take your paper menu, which they then give to the chefs.

Photo courtesy of Maddie Hill. Pictured from left to right: American Dream Roll, Snow White Roll, Volcano Roll, Shrimp Tempura Roll and Spicy Crab Roll.

Walker says the all-you-can-eat sushi is an absolute steal for the price. He sure is right: when you choose a minimum of three rolls, you have essentially gotten your bang for your buck with the all-you-can-eat deal. However, Sushi Ai does charge for any uneaten pieces of sushi, up to $1.00 per piece.

If this write-up can encourage you to do anything, it's to branch out and explore new foods, no matter how scary or unappetizing they may seem! You know what they say: you never know until you try.

Here are Sushi Ai's extended menu and website to learn more.

Presenting the First-Ever AI 75: Meet the Most Innovative Leaders in Artificial Intelligence in Dallas-Fort Worth – dallasinnovates.com

One honoree has implemented generative artificial intelligence to help airline customers book flights. Others are using AI to create new cancer drugs, or to advance healthcare for underserved populations. Still others are developing the technology to manage traffic networks, detect and respond to cyberthreats, and accelerate real-world AI learning by up to 1,000 times.

All these breakthroughs are happening in Dallas-Fort Worth, which is uniquely positioned as a burgeoning hub for applied AI and advanced AI research. This is all-important, because the AI revolution is reshaping the global economy and, according to a Deloitte survey of top executives, will be the key to business success over the next five years.

That's why Dallas Innovates, in partnership with the Dallas Regional Chamber (DRC) and Dallas AI, has compiled the following first-ever AI 75 list of the most innovative people in artificial intelligence in DFW. Consisting of academics, entrepreneurs, researchers, consultants, investors, lawmakers, thought leaders, and corporate leaders of every stripe, our 2024 AI 75 spotlights the visionaries, creators, and influencers making waves in AI in seven categories.

Online nominations for the inaugural list were opened in February, focusing on North Texans making significant contributions to AI, whether through innovative research, catalytic efforts, or transformative solutions. Nominees were reviewed for demonstrated excellence in key criteria, including recent AI innovations, adoption impacts, industry technological advancement, thought leadership, future potential, and contributions to society.

The editors of Dallas Innovates, including Co-Founder and Editor Quincy Preston, led the nomination review and honoree selection process. Aamer Charania and Babar Bhatti of Dallas AI and the DRC's Duane Dankesreiter provided strategic guidance and input on the editors' selections across award categories.

The inaugural class of AI honorees is set to be announced live on Thursday, May 2, at the DRC's Convergence AI event at the Irving Convention Center. The AI 75 is supported in part by the City of Plano, the University of Texas at Dallas, and Amazech.

Because this is the first year for Dallas Innovates' AI 75, we know there must be other AI leaders you need to know who are deserving of future consideration. We invite and welcome your feedback on the 2024 program, as well as your suggestions for next year's list.

RENOWNED IN REALTY Naveena Allampalli Senior Director AI/Gen AI Solutions and Chief AI Architect, CBRE

Allampalli is a leader in the fields of AI, machine learning, and cloud solutions at Dallas-based CBRE, a commercial real estate services and investment company. A frequent conference speaker on AI and AI applications, she has advised AI startup companies, served as a mentor for data scientists, and was recognized as a woman leader in AI and emerging technologies by Fairygodboss, a career community for women. Allampalli, who previously was director of AI/ML and financial analytics for IT services and consulting company Capgemini, holds a master's degree in computers and mathematics focusing on computer science and artificial intelligence.

RETAILING REVOLUTIONARY Sumit Anand Chief Strategy and Information Officer, At Home

Anand is part of the executive team and is responsible for leading the technology and cybersecurity capabilities, among other things, for At Home, a Dallas-based chain of home décor stores. There, he has partnered with Tata Consultancy Services to leverage Tata's patented Machine First Delivery Model to create standardized processes and, eventually, run At Home's infrastructure operation with AIOps. Up next: leveraging AI with AR and VR to help merchants visualize product assortments. Says Tata's Abhinav Goyal: "Sumit is a strategic thinker and sets the vision for the organization." In 2023, Anand was named to the Forbes CIO Next List of 50 top technology leaders who are transforming and accelerating the growth of their businesses.

TEAM TRANSFORMER Jorge Corral South Market-AI Lead, Accenture

Corral leads the large Data and AI Transformation team for Accenture's South Market unit. That unit helps Fortune 500 companies digitize their enterprises in a number of areas, including productivity, supply chains, and growth outcomes. To hasten the effort, Accenture said in 2023 that it would invest $3 billion globally into generative AI technology over three years. Recently, Corral spoke on "Bringing Business Reinvention to Life in the Era of Gen AI" at HITEC Live!, a gathering of the Hispanic Technology Executive Council. He's also among the expert speakers appearing at Convergence AI, the Dallas-Fort Worth region's first-ever conference dedicated to enterprise AI.

INDUSTRY INFLUENCER Robert Hinojosa Sr. Director, AI/ML, Staples

Before becoming senior director, AI/ML at Staples, Hinojosa worked at Ashley Furniture Industries. As the chief AI officer and vice president of IT at Ashley, he led the manufacturer/retailer's AI transformation across the company, overseeing its data science function, its enterprise innovation lab, and the AI upskilling of its workforce. Before Ashley he was CTO at Irving-based Cottonwood Financial and a software engineering leader at Fort Worth-based Pier 1 Imports. He currently serves as an industry advisor on various academic boards, including at Texas Christian University and The University of Texas at Arlington.

AUTONOMY ACE Chithrai Mani CEO, Digit7

Under the leadership of Mani, who has an extensive background in AI and digital transformation, Digit7 has become a trailblazer in the field of self-checkout and autonomous stores. Leveraging his experience in artificial intelligence and machine learning, Digit7's cutting-edge systems like DigitKart and DigitMart have been pivotal in shaping the future of the global retail and hospitality industries. Before becoming CEO of Richardson-based Digit7, Mani served as the chief technology and innovation officer at InfoVision Inc., where he helped drive innovation and digital transformation for Fortune 500 companies. The much-sought-after tech influencer is a frequent keynote speaker on topics related to AI and ML, and an emerging-tech evangelist.

ENGINEERING ORIGINATOR Shannon Miller EVP and President of Divergent Solutions, Jacobs

Miller, a 26-year veteran of Dallas-based Jacobs, is the point person for a collaboration between the company and Palantir Technologies to harness the power of artificial intelligence in critical infrastructure, advanced facilities, and supply chain management applications. Last year, for example, Miller explained in a YouTube video how Jacobs was harnessing the power of Palantir's comprehensive AI solution in wastewater treatment to optimize decision-making and long-term planning. As president of Divergent Solutions, Miller is responsible for delivering next-generation cloud, cyber, data, and digital solutions for the company's customers and partners globally. She has a bachelor of science degree in chemical engineering and petroleum refining from the Colorado School of Mines.

CASHIER-LESS CREATIVE Shahmeer Mirza Senior Director of Data, AI/ML and R&D, 7-Eleven

A public speaker and inventor with a robust patent portfolio, Mirza is responsible for data engineering, artificial intelligence and machine learning, and innovation at Irving-based 7-Eleven. Earlier at the company, he led an interdisciplinary team that delivered a fully functional, AI-powered solution from prototype to full scale in less than a year. The so-called Checkout-Free tech solution tracks what customers take and automatically charges them, making for a frictionless shopping experience in a cashier-less store. Before joining 7-Eleven, Mirza was a senior R&D engineer with Plano-based PepsiCo, where he piloted projects to demonstrate the long-term impact of AI/Machine Learning in R&D.

TWINS POWER Timo Nentwich Executive Vice President and CFO, Siemens Digital Industries Software

Nentwich has made a significant impact on AI through his role as Siemens' EVP and head of finance. His recent projects have centered on development of the Siemens Xcelerator portfolio, a key part of Siemens' transformation into a Software as a Service company. The portfolio is designed to help engineering teams create and leverage digital twins, harnessing the potential of advanced AI-driven predictive modeling. A partnership between Plano-based Siemens and NVIDIA is intended to take the industrial metaverse to the next level, enabling companies to create digital twins that connect software-defined AI systems from edge to cloud. Nentwich, a native of Germany, holds an MBA from Great Britain's Open University Business School.

ENGINEERING EMINENCE Justin J. Nguyen Head of CS Data Engineering and Analytics, Chewy

Nguyen is an accomplished leader with a strong background in AI, analytics, and data engineering. As head of data and analytics at Chewy, he has improved the company's operational efficiencies using AI and designed anti-fraud algorithms. He has demonstrated his thought leadership in the field with articles in multiple publications, peer-reviewed research papers, symposiums, and podcasts, including hosting Chewy's AI in Action podcast. In 2022, he was recognized in CDO Magazine's 40 Under Forty Data Leaders. With undergraduate and graduate degrees from the Georgia Institute of Technology, Nguyen previously was a senior director and head of enterprise data and AI at Irving-based 7-Eleven.

TECH TSAR Joe Park Chief Digital and Technology Officer, Yum! Brands

Park has been a leader in the integration and advancement of an AI-first mentality within Yum! Brands, which owns several quick-service restaurant chains including Plano-based Pizza Hut. He's helped develop and deploy innovative technologies aimed at enhancing kitchen operations, improving the tech infrastructure, and bolstering digital sales growth. For example, Park's team oversaw the rollout of an AI-based platform for optimizing and managing the entire food preparation process, from order through delivery. Yum! has doubled its digital sales since 2019 to about 45% of total sales, thanks in part to his AI initiatives. Park, who joined Yum! in 2020 as its first chief innovation officer, previously was a VP at Walmart.

DIGITAL DOER Joshua Ridley Co-Founder and CEO, Willow

Ridley, a serial entrepreneur, leads Willow, a global technology company whose tagline is "Digital twins for the built world." The company's AI-driven, digital-twin software platform analyzes and manages data to power smart buildings at scale. Launched in Australia in 2017, Willow relocated to Dallas in 2022 and has partners and customers including Johnson Controls, Walmart, Microsoft, and Dallas-Fort Worth International Airport. The company's collaboration with D-FW Airport, which includes creating a digital twin for the maintenance and operation of assets including Terminal D, was called "a game-changer for our industry" by an airport official. Ridley previously founded a pioneering Australian digital design/construction firm and a company that leveraged the internet to deliver building services.

CYBER SOVEREIGN Shri Prakash Singh Head of Data Science & Analytics, McAfee

Singh is a prominent thought leader in AI, particularly in the context of cybersecurity. His position at Dallas-based McAfee has him playing an increasingly significant role in detecting and responding to cyber threats, which are constantly evolving and growing in sophistication. Singh has shared his expertise in AI and data science at a number of public forums, including at last year's Dallas CDAO Executive Summit. At a private, executive boardroom session there, he discussed opportunities for data-driven innovation, among other things. In 2023, AIM Research named Singh one of the country's 100 Most Influential AI Leaders.

BANKING BRAIN Subhashini Tripuraneni Managing Director, JPMorgan Chase

As a managing director, Tripuraneni serves as JPMorgan Chase & Co.'s global head of people analytics and AI/ML. In that role, she leads machine learning initiatives and applies artificial intelligence to enhance the giant financial institution's critical business processes. Previously the head of AI for Dallas-based 7-Eleven, Tripuraneni was recognized as one of the top women aiding AI advancement in Texas in 2020. She has spoken widely about the use of AI in retailing and banking and co-authored Hands-On Artificial Intelligence on Amazon Web Services, a book aimed at data scientists and machine learning engineers.

DATA DOYEN Vincent Yates Partner and Chief Data Scientist, Credera

Yates serves as the chief data scientist and a partner at Credera, an Addison-based company that helps transform enterprises through data, from strategy to implementation. One of Credera's analytics platforms, for example, leverages generative AI to provide marketers with insights and personalized consumer experiences. Previously, he held leadership roles at companies including GE Digital, Zillow Group, Uber, and Microsoft. Yates is a member of the Global AI Council, where he's contributed to developing a framework to assess AI readiness. He has spoken widely about the economic impacts of GenAI, especially in customer operations, marketing, R&D and software engineering, and has addressed the challenges executives face aligning AI with their business objectives.

AUTHENTICATION ILLUMINATOR Milind Borkar Founder and CEO, Illuma Labs

Borkar is founder and CEO at Illuma, a fintech serving credit unions and other financial institutions with voice authentication and fraud prevention solutions. The Plano-based software company's AI, machine learning, and voice authentication technologies were derived from its R&D contracts with the U.S. Department of Homeland Security. Illuma says its flagship product, Illuma Shield, utilizes AI and advanced signal processing, among other things, to achieve much faster and more accurate authentication compared to traditional methods. Borkar, who previously worked at Texas Instruments, graduated from the Georgia Institute of Technology with a Ph.D. and master's degree in electrical and computer engineering.

AGI PATHFINDER John Carmack Founder, Keen Technologies

The legendary game developer and VR visionary shifted gears in 2022 by founding Dallas-based Keen, intent on independently pursuing his next grand quest: the achievement of artificial general intelligence. Last fall, Carmack announced a new partnership for his pioneering, out-of-the-mainstream effort with Richard Sutton, chief scientific advisor at the Alberta Machine Intelligence Institute. Now the two are focused on developing an AGI prototype by 2030, including establishing and advancing AGI "signs of life." "It's likely that the important things that we don't know are relatively simple," Carmack has said. In 2023, he was a keynote speaker at the Future of AI Summit hosted by the Capital Factory Texas Fund at the George W. Bush Presidential Library in Dallas.

REAL-WORLD AI REVOLUTIONIST Dave Copps Co-Founder and CEO, Worlds

Serial entrepreneur Copps, who's been building artificial intelligence in North Texas for more than 15 years, is one of the region's most accomplished and respected AI pioneers. With a string of successful startups, including Brainspace, Copps' latest venture is a leader in the field of real-world AI. Dallas-based Worlds, co-founded with President Chris Rohde and CTO Ross Bates, recently launched its latest groundbreaking platform called WorldsNQ, creating the Large World Model (LWM) concept in AI. The breakthrough technology, ushered to completion by Bates, leverages existing cameras and IoT sensors to improve and measure physical operations through LWMs and radically accelerates AI learning, by 100 to 1,000 times, without needing human annotation. This enables systems to continually learn and adapt from their surroundings autonomously. Copps, a University of North Texas grad who hosts the Worlds of Possibility podcast and speaks about AI at conferences worldwide, received EY's regional Entrepreneur Of The Year award in 2022.

SUPPLY-CHAIN SAGE Skip Howard Founder, Spacee

Howard, the mastermind behind Dallas-based Spacee, is blazing a trail in the retail and hospitality industries with AI solutions for real-world challenges. By leveraging computer vision AI, robotics, and spatial augmented reality, Spacee is transforming how businesses operate and engage with customers. Its Deming shelf-mounted robot tracks inventory in real time, while the HoverTouch platform turns any surface into an interactive touchscreen. Howard's vision extends beyond Spacee, as he helps nurture other AI-oriented tech ventures seeking a community of like-minded companies. A sought-after speaker and key contributor to The University of Texas at Dallas Center for Retail Innovation and Strategy Excellence (RISE), Howard also bridges the gap between academia and the STEM side of retail. In 2019, his industry expertise earned him recognition as a finalist for EY's regional Entrepreneur Of The Year award.

RETAILING WUNDERKIND Ravi Shankar Kumar Co-Founder and CTO, Theatro

Kumar has pioneered multiple AI and generative AI technologies at Theatro, a Richardson-based software company offering a mobile app platform for hourly retail workers. As CTO and co-founder, he was instrumental in developing an award-winning application for Tractor Supply called "Hey Gura," for example, enabling store associates to seamlessly access detailed information about products, processes, and policies. He also has led the development of prototypes and new projects that utilize GenAI to initiate diverse digital workflows, and worked to establish initiatives ensuring that Theatro's AI applications are ethical and unbiased. Kumar has more than 40 patents in analytics, voice technology, and AI, and Theatro has been ranked No. 1 in technology innovation for four straight years by RIS News.

REWILDING RINGLEADER Ben Lamm Co-Founder and CEO, Colossal

Lamm's breakthrough company, Dallas-based Colossal, is putting AI on the map for genetics. The serial entrepreneur co-founded the VC-backed company to focus on genetic engineering, reproductive technology, and AI solutions in support of renowned geneticist George Church's de-extinction efforts. Recently Colossal has been leveraging synthetic biology as well as software and hardware to bring back the woolly mammoth. As a prelude to that so-called rewilding effort, the company has partnered with an elephant orphanage in Africa, deploying AI to study elephant herd dynamics. Lamm has appeared as a thought leader on innovation and technology in publications such as The Wall Street Journal and The New York Times. He previously founded multiple successful tech companies including AI startup Hypergiant.

VENTURE VISIONARY Richard Margolin Associated Partner, Highwire Ventures

A serial entrepreneur and researcher, Margolin is an associated partner at Highwire Ventures, a strategy-led, Dallas-based consulting firm where he builds AI tools for evaluating investment deals. He also co-founded and, until last November, was CEO of Dallas-based RoboKind. RoboKind is an EdTech company that designs and builds facially expressive robots that facilitate learning for STEM education and individuals with autism. Margolin is a Forbes Technology Council member, a 2017 Tech Titan, and a 2019 winner of EY's regional Entrepreneur Of The Year award. More recently, he was a presenter at the Global AI Summit 2022 in Riyadh, Saudi Arabia.

ONCOLOGY UPSTART Panna Sharma President, CEO, and Director, Lantern Pharma

As president, chief executive, and director of Dallas-based Lantern Pharma, Sharma is the driving force behind Lantern's use of AI in oncology drug discovery and development. The company's proprietary AI and machine learning platform, called RADR, leverages billions of data points and more than 200 advanced ML algorithms to personalize and develop targeted cancer therapies. Under Sharma's leadership, Lantern has made strides in using AI to reduce the time and cost associated with oncology drug development. Using AI, he told a reporter, the cost of drug development could be slashed from the typical $2 billion or more to less than $200 million. Before joining Lantern, Sharma was president and CEO of Cancer Genetics, where he helped expand the company's global footprint.

MULTIBILLION-DOLLAR MAN Sanjiv S. Sidhu Co-Founder and Chairman, o9 Solutions

Pioneering technologist and thought leader Sidhu continues to shape the future of AI applications after creating two multibillion-dollar companies in Dallas: i2 Technologies and o9 Solutions (the former with co-founder Ken Sharma, the latter with Chakri Gottemukkala). Supply chain software company i2 came out of Sidhu's foundational work in computerized simulations for manufacturing processes at Texas Instruments' AI lab. o9 (the name stands for "optimization to the highest number," he says) has developed a dynamic business-planning platform that enables AI-driven decision-making. Sidhu has shared his insights on the role of AI in transforming business operations in podcasts, feature interviews, and appearances at prominent industry events.

PILOTLESS PARAGON Patrick Strittmatter Director of Engineering, Shield AI

Strittmatter, who's director of engineering at Shield AI, has years of experience in engineering and product design. Shield is currently building Hivemind, an AI pilot, which it says will enable swarms of drones and aircraft to operate autonomously without GPS, communications, or a pilot. Strittmatter's previous employers include Amazon Lab126, where he led product design teams for Kindle E-readers and IoT products, and BlackBerry, where he served as mechanical platform manager. He holds an MBA from The University of Texas at Dallas Naveen Jindal School of Management and a bachelor of science degree in mechanical engineering from Texas A&M University.

RADAR RISK-TAKER Dmitry Turbiner Founder and CEO, General Radar

Turbiner says the startup he founded and serves as CEO is to the military's large space radars that detect aerial threats what the commercial company SpaceX is to NASA. While the main customers so far for General Radar's advanced, AI-enhanced commercial aerospace radar have been in the defense industry, the groundbreaking technology has applications in other areas, including for providing early hazard warnings in autonomous cars. Turbiner previously was an engineer at NASA's Jet Propulsion Laboratory, where 12 of his phased array antennas continue to orbit on a group of six heavy NASA satellites. He has a bachelor of science degree in electrical engineering from MIT and a partial master's of science in electrical engineering from Stanford University.

AUTOMATION ADVOCATE David C. Williams Assistant Vice President, AT&T

As the leader of hyper-automation at Dallas-based AT&T, Williams develops and deploys AI tools and strategies and advocates for responsible AI use and digital inclusivity. His organization has solved multiple business challenges with AI innovations, including projects to fine-tune AT&T's software-defined network platform and to summarize conversations between the company's agents and customers. Williams, a sought-after speaker on tech topics, also has authored two patents, for reprogrammable RFID and for bridging satellite and LTE technology, which illustrate his ability to innovate across different technology domains, an essential quality for creating comprehensive AI solutions leveraging data from diverse sources and systems.

IMAGE INNOVATOR Feng (Albert) Yang Founder & Advisor, Topaz Labs

Yang, a seasoned entrepreneur and expert in video and signal processing, founded Topaz Labs, a provider of deep learning-based image and video enhancement for photographers and video makers. Under his leadership, the Dallas-based company has developed cutting-edge, AI-driven software tools that improve sharpening, noise reduction, and upscaling of images and videos. A former assistant professor at China's Harbin Institute of Technology, Yang remains a Topaz advisor but stepped down as the company's president and CTO in 2021. Says one of his former employees: "He was the core developer of the code base for every single Topaz product. He is a doer and not a talker."

CAPITAL IDEAS Bryan Chambers Co-Founder and President, Capital Factory

Chambers is a leading, high-profile proponent of AI's transformative potential, proactively building and nurturing a supportive ecosystem for the space as Capital Factory's co-founder and president. An accelerator and venture capital fund that's described as "the center of gravity for Texas entrepreneurs," Capital Factory has invested in a number of AI-focused companies, including Dallas-based Worlds and Austin's Big Sister AI. Under Chambers, Capital Factory also has hosted a wide variety of AI events, co-working sessions, and challenges. Among them were The Future of AI Summit in Dallas last September, February's AI Salon at Old Parkland in Dallas, and the $100,000 AI Investment Challenge held during the company's 2019 Defense Innovation Summit in Austin.

BRILLIANT BILLIONAIRE Mark Cuban Founder, Mark Cuban Companies

Cuban, the influential, nationally known billionaire entrepreneur, has long championed the AI ecosystem through multiple initiatives, investments, and educational efforts. He's stressed the importance of AI education for everyone, including business owners and employees, and founded the Mark Cuban Foundation's Intro to AI Bootcamp program for underserved high school students. AI-powered companies Cuban has invested in include Node, a San Francisco-based startup, and British company Synthesia. The minority owner of the NBA's Dallas Mavericks has predicted that AI will have a more significant impact than any technology in the last 30 years, and has warned that those who don't understand or learn about AI risk becoming "a dinosaur."

PRAGMATIC PIONEER David Evans Managing Partner, Sentiero Ventures

Evans' Dallas-based VC firm, Sentiero Ventures, invests in seed-stage, AI-enabled SaaS companies that improve customer experiences or enhance financial performance. Evans, a veteran technologist and serial entrepreneur, was exposed early to AI while working on a NASA project in the late 1990s. Widely considered one of the most knowledgeable, best-connected thought leaders and speakers on Dallas-Fort Worth's AI scene, he's said Sentiero is most interested in founders with velocity, revenue, and pragmatic strategies for growth. In the last year, he's led investments in the likes of Velou, a San Francisco-based e-commerce enablement startup, and Montreal-based Shearwater Aerospace. Sentiero's successful exits include Dallas-based MediBookr, a healthcare navigation and patient engagement platform.

HOUSE HEADMAN Rep. Giovanni Capriglione Texas House of Representatives

Capriglione, a Republican House member with an IT background who represents District 98, co-chairs a new state advisory board on artificial intelligence with Sen. Tan Parker, R-Flower Mound. The state's Artificial Intelligence Advisory Council will submit a report to the Legislature by Dec. 1 that makes policy recommendations to protect Texans from harmful impacts of AI and promotes an ethical framework for the use of AI by state agencies. Capriglione says he recognizes AI's potential for streamlining and fiscal efficiencies. But, in a video recorded for the National Conference of State Legislatures, he added, "We have to be careful with how it's being used, that it's being done in a similar way as humans would have done it, and that we measure outcomes."

ALLIANCE ALPHA Victor Fishman Executive Director, Texas Research Alliance

Fishman's in the middle of the action as executive director of the Richardson-based Texas Research Alliance, which works to ensure that North Texas industry, universities, and governments are able to leverage research and innovation resources to grow and lead their markets. Last fall, the alliance was pivotal in the first-ever DFW University AI Collaboration Symposium, where Fishman was a participant. "Our students are running towards AI, so we need to run faster," he told the symposium. "Let's keep this momentum to create more collaboration between our North Texas universities and partners, turning Dallas-Fort Worth into a leader for AI research." The alliance also was instrumental in proposals supporting the Texoma Semiconductor Tech Hub, led by Southern Methodist University.

SENATORIAL CHARGER Sen. Tan Parker Texas State Senate

Parker, a North Texas businessman and Republican state senator representing District 12, was named earlier this year to the state's Artificial Intelligence Advisory Council, where he serves as co-chair with state Rep. Giovanni Capriglione, R-Southlake. The council, created by the Legislature during last year's session, studies and monitors AI systems developed or used by state agencies. The council also is charged with assessing the need for an AI code of ethics in state government and recommending any administrative actions state agencies should take regarding AI. It's supposed to submit a report about its findings to the Legislature by Dec. 1. Observers say the council could recommend a new office for AI policy-making, or advise designating an existing state agency to craft policy for artificial intelligence.

FAIRWAY FUTURIST Kevin J. Scott CTO/CIO, PGA of America

Scott is a thought leader driving AI adoption and comprehension inside the PGA and the broader golf industry, as well as among Dallas-Fort Worth technology executives. While he's set the PGA's internal strategy for efficiency improvements, including enhancing the organization's app user experience with AI, Scott says the Frisco-based group is planning additional AI initiatives that are more sophisticated and comprehensive. At the same time, he's helped guide industry leaders in crafting their own AI roadmaps via workshops and public speaking appearances. A former senior director for innovation and advanced products at ESPN, Scott also has worked with Frisco ISD to facilitate AI meetups and mentoring for its students.

NEWSLETTER NOTABLE Deepak Seth Creator/Founder, DEEPakAI: AI Demystified; Product Director, Trading Architecture Platform Strategy & Support, Charles Schwab

Seth has been recognized as a Top 50 global thought leader and influencer for generative AI, in part on the strength of his DEEPakAI: AI Demystified newsletter. The weekly LinkedIn missive with more than 1,500 subscribers has played a key role in making AI more accessible and understandable to a wide audience, as well as opening up discussion about inclusivity, ethics, and the practical applications of AI. In addition to his thought leadership, Seth has been a driving force in the adoption and integration of AI technologies at Charles Schwab, where he initiated C-suite engagement for generative AI adoption, analyzing risks and tools, developing use cases, and achieving a proof of concept that enhanced customer service response by 35%. The product director also pioneered a cross-functional AI-driven employee engagement initiative targeting a 15% boost in agent engagement metrics, earning recognition with the President's Innovation Award. An adjunct professor at Texas Christian University, Seth has been a featured guest on multiple webcasts.

COLLABORATION CATALYST Dan Sinawat Founder, AI CONNEX

After arriving in North Texas from Singapore a few years ago, Sinawat has made a name for himself as a visionary leader in the local AI community with a decidedly global perspective. His enthusiasm for the space led him to found AI CONNEX, a Frisco-based community and networking group for AI enthusiasts, experts, and professionals. Under Sinawat's proactive leadership, the group put together an accelerator program for early-stage startups as well as various events promoting AI and technology. The latter included one in partnership with Toyota Connected that focused on innovation in the auto industry. Sinawat also has conducted podcasts and been active in AI panel discussions and on social media.

PATENTED EXPERT Stephen Wylie Lead AI Engineer, Thryv

Wylie, who leads all AI and ML projects at Grapevine-based Thryv, has more than 130 software patents in the AI, AR/VR, Blockchain, and IoT spaces. "Above all," he has said, "I innovate." A Google Developer Expert in AI/ML, Wylie has long been an active AI educator, writing a blog and speaking frequently before user groups and conferences locally (at UT Dallas and TCU), nationally, and internationally. His talks, aimed at engineers, have included topics such as fusing AI with augmented reality and how AI will affect the careers of future engineers and practitioners. Wylie worked earlier as a senior software engineer for rewardStyle and Capital One.

TRANSIT VISIONARY Khaled Abdelghany Professor of Civil and Environmental Engineering, Southern Methodist University

Abdelghany is a faculty member at SMU's Lyle School of Engineering and a pivotal member of the National Academies Transportation Research Board committee on AI and advanced computing. His groundbreaking work, documented in multiple peer-reviewed journals, focuses mainly on the use of advanced analytics and AI to make infrastructure more efficient, resilient, and equitable. He has developed and adopted AI techniques in such problem domains as real-time traffic network management and urban growth predictions, for example. Over the last year, he secured a three-year, $1.2 million federal grant to develop advanced AI models for traffic-signal operations, elevating SMU's reputation as a hub for cutting-edge AI research and innovation.

PROFESSORIAL PACESETTER Lee Brown Associate Professor of Management and Director of Research, Texas Woman's University

In his roles as a management professor and research director at Texas Woman's University in Denton, Brown is continually learning about and exploring ways to apply generative AI technologies to improve classroom engagement and workplace efficiency. In addition to incorporating AI in MBA courses, enabling students to take advantage of AI for data analysis, decision-making, and strategic planning, he has spoken extensively at TWU and at outside conferences about how AI can revolutionize educational methodologies, promoting innovation and inclusion. Says Brown: "My commitment to leveraging AI in education is driven by a vision of creating an empowered and adaptable workforce, capable of contributing meaningfully to our increasingly interconnected and technologically driven world."

INSIGHT ICON Gautam Das Associate Dean for Research and Distinguished University Chair Professor, The University of Texas at Arlington

Das is an internationally known computer scientist with more than three decades of experience in AI, ML, and Big Data analytics. His research has been supported by grants totaling more than $9 million, and he has published 200-plus papers, many of them featured at top conferences and in premier journals. As a professor and associate dean for research at The University of Texas at Arlington's college of engineering, he is responsible for promoting research excellence across the college and at UTA. Over the years Das' research has included all aspects of AI, machine learning, and data mining. He's currently working in areas such as machine learning approaches for approximate query processing, and fairness and explainability in data management systems.

SOLUTION STRATEGIST Douglas DeGroot Director, Center for Applied AI and Machine Learning, The University of Texas at Dallas

DeGroot is co-founder and director of the Center for Applied AI and Machine Learning (CAAML) at UT Dallas in Richardson. The center was founded in 2019 to help Texas companies and organizations use leading-edge artificial intelligence and machine learning to enhance their products, services, and business processes. So far, the industry-facing, applied R&D center has been involved in about 10 projects for companies including Vistra Corp., Rayburn Corp., Infovision, and Nippon Expressway Co. The projects have been diverse, from building an explainable machine learning system to coming up with an optimal model for electricity pricing. In addition, DeGroot has shared insights on solving industry problems using AI and ML at community events such as The IQ Brew. At CAAML, he works closely with Co-Director Gopal Gupta, a professor of computer science known for his expertise in automated reasoning, rule-based systems, and artificial intelligence.

AI STEWARD Yunhe Feng Director, Responsible AI Lab, The University of North Texas

As an assistant professor in UNT's department of computer science and engineering, Feng directs the Denton school's 2-year-old Responsible AI Laboratory, which advances AI research with a focus on Responsible AI, Generative AI, and Applied AI. He also co-directs UNT's master's program in artificial intelligence, the first of its kind in Texas. He has published research on a variety of AI topics and co-authored a paper on the impact of ChatGPT on streaming media. Financial Times and Business Insider have reported on his work, and he's served as a guest lecturer for various courses, including AI for Social Good. Last year, Feng received the IEEE Smart Computing Special Technical Community Early Career Award for his contributions to smart computing and responsible computing.

TCU TRAILBLAZER Beata Jones John V. Roach Honors Faculty Fellow at Neeley School of Business, Texas Christian University

Jones is a professor of practice in business information systems at TCU's Neeley School of Business in Fort Worth. As a regular contributing writer for Forbes, she often explores the transformative potential of AI in higher education and its impact on business. (Another TCU Neeley faculty member, marketing instructor Elijah Clark, also is a regular Forbes contributing writer on AI.) Jones has written for the publication about how generative AI tools are transforming academic research by fact-checking, for example, or by supporting data visualization or by offering feedback on drafts. Jones, who's been passionate about artificial intelligence since her teenage years, has bachelor's and master's degrees from New York's Baruch College and a Ph.D. from The Graduate Center at City University of New York.

BIOMEDICAL BARD Guanghua Xiao Mary Dees McDermott Hicks Chair, UT Southwestern Medical Center

Xiao has made significant contributions to the field of medical AI, especially in the application of artificial intelligence to pathology and cancer research. He holds the Mary Dees McDermott Hicks Chair in Medical Science at UT Southwestern in Dallas. He's also a professor in UTSW's Peter O'Donnell Jr. School of Public Health, Biomedical Engineering, and the Lyda Hill Department of Bioinformatics. Xiao's research has focused on AI models that enhance cancer understanding and treatment through advanced image analysis and bioinformatics tools. His key contributions have included developing an AI model called Ceograph, for analyzing cells in tissue samples to predict cancer outcomes, and helping develop the ConvPath software tool. It uses AI to identify cancer cells from lung cancer pathology images.

MASTER MOTIVATOR Amy Blankson Co-Founder and Chief Evangelist, Digital Wellness Institute

Blankson, a wellness author and motivational speaker on overcoming the fear of AI through the lens of neuroscience and fearless optimism, co-founded the Digital Wellness Institute, a Dallas-based tech startup. As the institute's chief evangelist, she uses AI to benchmark the digital wellness of the institute's organizations and speaks externally about the evolving future of AI in the workplace. A graduate of Harvard and the Yale School of Management, Blankson has been a contributing member of the Institute of Electrical and Electronics Engineers Standards for Artificial Intelligence. Earlier this year she presented at SHRM's inaugural AI+HI (Human Intelligence) Project conference, where she discussed "AI and the Future of Happiness" at Microsoft's campus in Silicon Valley.

GAMING GUIDE Corey Clark CTO and Co-Founder, BALANCED Media Technology

Clark co-founded and serves as CTO at BALANCED Media Technology, whose tech infrastructure fuses AI and machine learning with data to bring purpose to play through distributed computing and human computational gaming. The Allen-based company, co-founded with CEO Robert Atkins, has secured nearly $30 million from various funding sources. The motivation behind establishing BALANCED was to leverage the intersection of human intelligence and machine learning through video games to solve complex problems, particularly in the medical field. This innovative approach works to develop, among other things, AI-powered games that aim to combat human trafficking and treat ocular diseases more efficiently. Clark also is an assistant professor of computer science at Southern Methodist University and deputy director for research at the SMU Guildhall. Last year, he contributed to a research paper on eXplainable artificial intelligence, focusing on making AI decision-making more transparent.

AI INCLUSIVIST Pranav Kumar Digital Project Manager, LERMA/ Agency

OPINION: Artificial intelligence, the usefulness and dangers of AI – Coast Report

The entrance to Paramount Pictures in Hollywood.

As my mind wanders to AI, robots, and machines replacing humans, realizations enter, and I see it not as AI causing trouble but as humans abusing the systems we create. I see the decline and lack of effort in schools, and how hesitant people are about social interaction and how averse they are to connection.

Each day there are new advancements in AI. But if it is learning, and each day it is getting to know more things and getting smarter, then it wasn't smart to begin with. How are we supposed to believe in something when it has not reached the full potential of its vast capabilities?

There are talks that it is a scary thing. Something is happening. But where is this thing? What does it look like? Does it have four eyeballs? Does it hide in the closet or underneath my bed at night? No. All it is is a technique and a machine that we operate to make our lives easier. The true villain behind it is us.

The ones who use it use it incorrectly. These are the people who tend to think that cutting corners and skipping the lessons we have learned, and that parental figures have seen and felt before us, somehow doesn't matter anymore. We strive for new advancements with no idea where they will lead, and somehow, that counts as a resource.

We have been driving for over 100 years, and yet companies want to make a fully self-driving car. The reason why does not enter into my atmosphere, since there is no valid reason. We are capable of writing an essay, creating math solutions, driving cars, and performing knee-replacement surgery. Yet as time passes, humans find ways for humans to do less. With that comes laziness and a lack of common sense, and street smarts vanish.

Doing things on my own and creating solutions gave me a sense of self-respect, and the realization that I have the potential to pursue what I strive for: excellence.

To be your own person with imagination and self-fulfilling creativity is to see happiness and sadness at their best and worst. To understand determination, anguish, grief and unadulterated bliss.

Yet some choose not to.

It's the opposite of an opportunity to make one's life better. Yet they vanish like a supernatural ghost you see in the distance. Or a political figure's good nature when they start to run for office.

The speech in which they speak is loud and ruthless. Harsh, yet dull with a banal sense of sophistication. They postpone any type of meaningful discussions.

I choose, consciously, to be different. I challenge and take charge. I avoid talking when I do not know. Possibly taking away that one vestigial piece of truth the opposition speaks.

After all that, there's still some nameless, indistinguishable apprehension in their unconscious mind that I have so easily picked out. That smile. That wave. That cheers of a plastic cup and the glaring pessimistic view they have of the world.

It is something that they do without. It's something that I have, and it's something that I have noticed.

Otherwise known as self-respect.

I do have some relevance in this topic. Last semester in my critical thinking class, nonfiction, three others and I were asked to present a topic of an ethical crisis. We chose artificial intelligence. My nine-page paper that came shortly after, which was also my final paper of the semester, included 10 pros and 10 cons of AI.

We broke it into several categories, including entertainment and education.

Education

Each time a new class for the semester begins, the class is given the syllabus, or told to look at it on the website, to see what is to be done and what is forbidden. Lately, more and more, and now all, of the classes I endure are promptly educating students about AI and its use in cheating.

No passing notes in class.

No phones in class.

No use of AI in class.

The evolution of teachers' habitual demands.

Now students can formulate ideas and have a starting point for creativity. Ask AI how to manage test anxiety. Ask for steps on how to prepare for transferring colleges, and how to find an internship for creative writing.

Yet students are having this ease without the idea of repercussions for the abuse of AI.

The lack of creativity it can cause may lead to students not developing properly. I assume it began with isolation, and students finding it easier not to engage fully with teachers and peers. Now we are all back to normal, and we assume we shall strive for connectivity.

Yet some are not capable without their AI to guide them. They are now relying on it.

Just last week I saw on TV a robot parents can buy to help their child learn social skills and communication.

The choice is not whether or not students learn the homework and know the material given to them, but now it is about whether they strive for excellence or fall behind.

Critical thinking is not just diving deeper into ideas. It is to take a topic, idea, or solution, deconstruct it, and keep asking questions until you or the other person breaks down into oblivion, where the answers cannot justify the questions being asked and there are no more answers to give.

Common sayings go something along the lines of going beneath the surface level, or the tip of the iceberg, or some of those bland sayings that mean something else entirely. The people who use those are the ones who need an understanding of critical thinking.

But I cannot describe what the surface level is. Each scenario is different. One may not have to go extremely deep to understand the topics, ideas, or solutions. One cannot pre-plan the surface level. One must learn to adapt as the conversation unfolds.

Over time, the one questioning, the critical thinker, develops the ability to articulate high-level criticism. The criticism should not be negative; it should help both yourself and the one you are trying to evaluate. But I do believe negative, harsh realism is sometimes imperative. Take a hammer to a rock and smash it, breaking and exposing each particle until you can see and extract the gold inside.

That is the purpose of each moment: questioning to an extreme, harshly or quietly, gently pursuing and constantly spiraling toward a clarity you both can subconsciously agree on. You will both know the critical thinking is done because there will be a quiet sense of revelation.

If one stops the dedication to think, critique, and define, then one's creativity is dead.

AI cannot be a pillar of learning if we do not know its consequences. One must maintain an understanding of how to use it properly.

In my journey to find answers, I conducted a Q&A interview with Tara Giblin, the Acting Vice President of Instruction at Orange Coast College, about her thoughts on AI in education.

Q: How do you see the use of AI in today's education system?

A: As an administrator, I have heard many conversations by faculty about how AI is changing or will change the way they teach. Our OCC Faculty Senate has placed high importance on discussing AI weekly because it is impacting faculty in many different ways. Right now we are just learning about its capabilities and trying to understand the pluses and minuses that come with any new technology. AI certainly has the power to make our workplace more efficient, but we are in the early stages of figuring out how it fits into the classroom.

Q: How do you, or how have you, used rules or policies to address students' negative attraction to it? (Cheating)

A: "Not working directly in the classroom, I dont have first-hand experience. However, I hear faculty talking about how they are developing policies in their classrooms to help students understand how AI fits into their learning and how to guide students away from using AI as a substitute for learning or producing original material. I have heard suggestions like having students do their writing assignments in class with spontaneous prompts, so they will do original work or as teachers, using AI to generate responses to questions in front of the class then asking the class to critique these answers and analyze how they might tell the difference between AI and original work. This raises awareness of the downfalls of AI generated answers."

I also reached out to my past ethics professor to share his thoughts and ideas about AI. Professor Phillip Simpkin shares his ideas.

Q: How do you see the use of AI in today's classrooms?

A: "I see it in a large and growing way. It is being applied more and more. It is just going to increase. For good or for bad its going to be everywhere. The computer browsers were already kind of AI, to quickly find things. And that is where it is really nice. There is two uses for me. I tell my students it can help you to become a lazier, worst student or can help you become a better student. It can help you become a lazier student because it can do the work for you and then you are not going to learn. And that's is the most troubling part for me is, on the other hand, it can help you and that is really an important part too. You can put your essay in and help you find your grammar mistakes. But the bad issue with this is when they say, find all my grammar mistakes and fix them for me. But now that's where you dont learn anything from the activity. At first people were wowed by it but you see how mechanical and clunky ChatGPT is. It will be over verbose and overly eloquent when you dont really need it to be, metaphors that have no business for being in it. I am worried about that people do stuff for them that they should be doing on their own."

Q: Do you think teachers should advise students on how to use AI?

A: "I sit in a classroom and I say here is a question, what is an answer. And most of the time I get silence now. Not even a soul wants to say anything, and I may get one or two talkative students. At the same time, it's the smartest student in the class, and they can come up with ten different answers. I hope my students listen and copy it down. And they listen to the next person and copy it down. The AI now can be that student or conversation with for a back and forth. And I think that's a good use. So I am stuck. I send my students home all the time and I try to have them generate great ideas and I can force them to do that. A lightbulb moment may happen. But lots of students don't feel that creative for whatever reason and that could get them out of that hole."

Q: Why are students attracted to AI and to finding out answers?

A:"It makes life easier. But there is more to it. There is an actual attraction that the computer can do it for you and it's very tempting to see what it is and how it works. Anything can save you time. Right now we have a crisis of expertise. People dont know who the authorities are or proper authority. Whos expertise to take serious or not. So I feel like the AI for them seems like it will tell them truths. And in many ways it does pretty good. But right now it is strictly just a machine."

Hollywood

It has been around for a while, but it is slowly becoming a threat. I think of Toy Story from 1995 and see how amazing and groundbreaking it was. Then I see Toy Story 4 from 2019 and I am taken aback by the accomplishments.

Some embrace it for the new visual effects and new heights, and some see it as taking away jobs. Yet AI touches many aspects of the Hollywood industry. AI will not simply write a finished script in a few minutes; it is still learning how to manage emotion and rise-and-fall structure. But we humans still need to control the rate at which it grows. The robots will not suddenly take over our lives. But why do we strive so assiduously to create things that something else could do for us?

Writers, actors and directors go on strike to spread awareness of their concerns. They are passionate and have rules of their own.

But only part of the strike was dedicated to artificial intelligence. People become frantic, emotions run high, and the specter of the abandoned job feels close. But there has been no serious consideration of replacing those jobs. AI is still being built with new algorithms. AI is still being considered.

There is some perpetual fear, but the truth dispels it. The reality behind it all is dull. There's nothing behind it. There's nothing behind it because there never was. The idea that AI will take over anyone's writing job anytime soon is not part of our atmosphere. AI is not developed enough to show what it could truly be. Yet we humans have the ability to make it grow. Shall we?

I also conducted a Q&A interview with actor Makai Michael about AI.

Q: How do you see the use of AI in today's film industry?

A: "When it comes to AI in post production, object removal and scene stabilization for sound design, I find that this is more understandable for me. I am not immersed in the world of editing so editors may have a completely different stance than me. I could see this as taking jobs away from editors who are highly skilled which truly is devastating. As an actor, hearing about the industry executives wanting to use AI to exploit background actors' work is awful. I would not ever wish to see that happen to my work."

Q: Because of the growth of AI, do you see yourself being a part of any films that are heavy with CGI?

A: "Although I am pretty against the use of AI in film, ESPECIALLY AI being able to use the likeness of actors and exploiting their work for the benefit of producers and directors, I could possibly see myself in films that use CGI. As an actor I take pride in being a part of projects that build worlds and spaces for others to get lost and seek comfort in, and oftentimes that requires some CGI work. I believe if the CGI is used for world building/ setting building for the most part then it is alright."

Q: How did you feel about and perceive the writers' strike that happened last year?

A: "Since I am still very new to being involved in the industry side of acting I do not have the biggest range of knowledge when it comes to the strike. I perceived it to be a fight for a better income and a fight AGAINST the use of AI to recreate actors' work, time and time again. From what I have gathered it seems like the thoughts are split on whether we went forwards, backwards, or stayed the same in terms of making a change. I thought it was inspiring watching the actors and writers stand in unison against a system that often plays unfairly."

Q: Do you think AI will have a place in character development or script writing?

A: "My stance will always stay firm until I am convinced otherwise, and my stance is no. I do think that people WILL use AI, but I almost wish we never got to this point at all. I think its a cheap, and soulless way to make projects. The best cinema in history came from someone who sat down and had to think of it all themselves or with the help of collaborators, not robots."

Q: Do you think AI will create a more enhanced experience with new innovations in a movie theater or home theater?

A: "Though I am against AI scriptwriting, and AI extra doubling, I do think that AI may be able to enhance movie theater or home theater experiences. I think of AR, augmented reality or VR, virtual reality. No matter my stance on the situation, AI truly is the biggest cultural phenomenon at the moment and people are going to want to test its limits and that is understandable. When it starts to kill the heart of human creativity is when it starts to kill my love for art."

Continued here:

OPINION: Artificial intelligence, the usefulness and dangers of AI - Coast Report

NSF Funds Groundbreaking Research Project to ‘Democratize’ AI – Northeastern University

Groundbreaking research by Northeastern University will investigate how generative AI works and provide industry and the scientific community with unprecedented access to the inner workings of large language models.

Backed by a $9 million grant from the National Science Foundation, Northeastern will lead the National Deep Inference Fabric (NDIF), which will unlock the inner workings of large language models in the field of AI.

The project will create a computational infrastructure that will equip the scientific community with deep inferencing tools in order to develop innovative solutions across fields. An infrastructure with this capability does not currently exist.

At a fundamental level, large language models such as OpenAI's ChatGPT or Google's Gemini are considered black boxes, which limits both researchers and companies across multiple sectors in leveraging large-scale AI.

Sethuraman Panchanathan, director of the NSF, says the impact of NDIF will be far-reaching.

Chatbots have transformed society's relationship with AI, but how they operate is yet to be fully understood, Panchanathan says. With NDIF, U.S. researchers will be able to peer inside the black box of large language models, gaining new insights into how they operate and greater awareness of their potential impacts on society.

Even the sharpest minds in artificial intelligence are still trying to wrap their heads around how these and other neural network-based tools reason and make decisions, explains David Bau, a computer science professor at Northeastern and the lead principal investigator for NDIF.

We fundamentally don't understand how these systems work, what they learned from the data, what their internal algorithms are, Bau says. I consider it one of the greatest mysteries facing scientists today: what is the basis for synthetic cognition?

David Madigan, Northeastern's provost and senior vice president for academic affairs, says the project will help address one of the most pressing socio-technological problems of our time: how does AI work?

Progress toward solving this problem is clearly necessary before we can unlock the massive potential for AI to do good in a safe and trustworthy way, Madigan says.

In addition to establishing an infrastructure that will open up the inner workings of these AI models, NDIF aims to democratize AI, expanding access to large language models.

Northeastern will be building an open software library of neural network tools that will enable researchers to conduct their experiments without having to bring their own resources, along with sets of educational materials teaching them how to use NDIF.

The project will build an AI-enabled workforce by training scientists and students to serve as networks of experts, who will train users across disciplines.

There will be online and in-person educational workshops that we will be running, and we're going to do this geographically dispersed at many locations, taking advantage of Northeastern's physical presence in a lot of parts of the country, Bau says.

Research emerging from the fabric could have worldwide implications outside of science and academia, Bau explains. It could help demystify the underlying mechanisms of how these systems work to policymakers, creatives and others.

The goal of understanding how these systems work is to equip humanity with a better understanding of how we could effectively use these systems, Bau says. What are their capabilities? What are their limitations? What are their biases? What are the potential safety issues we might face by using them?

Large language models like ChatGPT and Google's Gemini are trained on huge amounts of data using deep learning techniques. Underlying these techniques are neural networks, synthetic processes that loosely mimic the activity of a human brain and enable these chatbots to make decisions.

But when you use these services through a web browser or an app, you are interacting with them in a way that obscures these processes, Bau says.

They give you the answers, but they don't give you any insights as to what computation has happened in the middle, Bau says. Those computations are locked up inside the computer, and for efficiency reasons, they're not exposed to the outside world. And so, the large commercial players are creating systems to run AIs in deployment, but they're not suitable for answering the scientific questions of how they actually work.

At NDIF, researchers will be able to take a deeper look at the neural pathways these chatbots make, Bau says, allowing them to see what's going on under the hood while these AI models actively respond to prompts and questions.

Researchers won't have direct access to OpenAI's ChatGPT or Google's Gemini, as the companies haven't opened up their models for outside research. They will instead be able to access open-source AI models from companies such as Mistral AI and Meta.

What we're trying to do with NDIF is the equivalent of running an AI with its head stuck in an MRI machine, except the difference is the MRI is in full resolution. We can read every single neuron at every single moment, Bau says.
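
To make the MRI metaphor concrete, here is a minimal sketch of the underlying technique: recording a model's internal activations during inference with forward hooks. It assumes PyTorch, the Hugging Face transformers library and the small open GPT-2 model, and is only an illustration of the general idea, not NDIF's actual tooling:

```python
# A minimal sketch of activation inspection with PyTorch forward hooks.
# Assumes the Hugging Face `transformers` library and the open GPT-2 model;
# purely illustrative, not NDIF's actual infrastructure.
import torch
from transformers import GPT2Tokenizer, GPT2Model

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2Model.from_pretrained("gpt2")
model.eval()

activations = {}

def make_hook(name):
    def hook(module, inputs, output):
        # Store the hidden states this transformer block produced.
        activations[name] = output[0].detach()
    return hook

# Register a hook on every block so no computation stays "locked up".
for i, block in enumerate(model.h):
    block.register_forward_hook(make_hook(f"block_{i}"))

with torch.no_grad():
    inputs = tokenizer("The inner workings of language models", return_tensors="pt")
    model(**inputs)

for name, acts in activations.items():
    print(name, tuple(acts.shape))  # e.g. block_0 (1, num_tokens, 768)
```

Tooling like NDIF's aims to do this kind of reading at the scale of models far too large for a single lab's hardware.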

But how are they doing this?

Such an operation requires significant computational power on the hardware front. As part of the undertaking, Northeastern has teamed up with the University of Illinois Urbana-Champaign, which is building data centers equipped with state-of-the-art graphics processing units (GPUs) at the National Center for Supercomputing Applications. NDIF will leverage the resources of the NCSA DeltaAI project.

NDIF will partner with New America's Public Interest Technology University Network, a consortium of 63 universities and colleges, to ensure that the new NDIF research capabilities advance interdisciplinary research in the public interest.

Northeastern is building the software layer of the project, Bau says.

The software layer is the thing that enables the scientists to customize these experiments and to share these very large neural networks that are running on this very fancy hardware, he says.

Northeastern professors Jonathan Bell, Carla Brodley, Bryon Wallace and Arjun Guha are co-PIs on the initiative.

Guha explains the barriers that have hindered research into the inner workings of large generative AI models up to now.

Conducting research to crack open large neural networks poses significant engineering challenges, he says. First of all, large AI models require specialized hardware to run, which puts the cost out of reach of most labs. Second, scientific experiments that open up models require running the networks in ways that are very different from standard commercial operations. The infrastructure for conducting science on large-scale AI does not exist today.

NDIF will have implications beyond the scientific community in academia. The social sciences and humanities, as well as neuroscience, medicine and patient care, can benefit from the project.

Understanding how large networks work, and especially what information informs their outputs, is critical if we are going to use such systems to inform patient care, Wallace says.

NDIF will also prioritize the ethical use of AI with a focus on social responsibility and transparency. The project will include collaboration with public interest technology organizations.

Read the original here:

NSF Funds Groundbreaking Research Project to 'Democratize' AI - Northeastern University

Small is the new BIG in artificial intelligence – ET BrandEquity – ETBrandEquity

There are similarities between the Cold War era and current times. In the former, there was a belief that alliances with stronger nuclear arms would wield larger global influence. Similarly, organizations (and nations) in the current era believe that those controlling the AI narrative will control the global narrative. Moreover, scale was, and is, correlated with superiority; there is a belief that bigger is better.

Global superpowers competed in the Cold War over whose nuclear systems were largest (highest-megaton weapons), while in the current era, large technology incumbents and countries are competing over who can build the largest model, with the highest number of parameters. OpenAI's GPT-4 took the global pole position last year, brandishing a model that is rumored to have over 1.5 trillion parameters. The race is not just about prestige; it is rooted in the assumption that larger models understand and generate human language with greater accuracy and nuance.

Democratization of AI

One of the most compelling arguments for smaller language models lies in their efficiency. Unlike their larger counterparts, these models require significantly less computational power, making them accessible to a broader range of users. This democratization of AI technology could lead to a surge in innovation, as small businesses and individual developers gain the tools to implement sophisticated AI solutions without the prohibitive costs associated with large models. Furthermore, the operational speed and lower energy consumption of small models offer a solution to the growing concerns over the environmental impact of computing at scale.

Large language models' popularity can be attributed to their ability to handle a vast array of tasks. Yet this jack-of-all-trades approach is not always necessary or optimal. Small language models can be fine-tuned for specific applications, providing targeted solutions that can outperform the generalist capabilities of larger models. This specialization can lead to more effective and efficient AI applications, from customer service bots tailored to a company's product line to legal assistance tools tailored to a country's legal system.
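
As a rough illustration of that specialization, here is a minimal fine-tuning sketch. It assumes the Hugging Face transformers and datasets libraries; the small model and public dataset are illustrative stand-ins for a company's own domain corpus, not anything named in the article:

```python
# A minimal sketch of specializing a small language model for one task.
# Assumes the Hugging Face `transformers` and `datasets` libraries.
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

model_name = "distilbert-base-uncased"  # ~66M parameters, far from trillion-scale
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Stand-in corpus; a real deployment would use the company's own labeled data.
dataset = load_dataset("imdb")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=256)

tokenized = dataset.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="specialized-model", num_train_epochs=1,
                           per_device_train_batch_size=16),
    train_dataset=tokenized["train"].shuffle(seed=42).select(range(2000)),
)
trainer.train()  # the resulting model targets this one domain
```

A run like this fits on a single commodity GPU, which is the accessibility argument in miniature.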

On-device Deployment

The Environmental Imperative

The environmental impact of AI development is an issue that cannot be ignored. The massive energy requirements of training and running large language models pose a significant challenge in the search for sustainable technology development. Small language models offer a path forward that marries the incredible potential of AI with the urgent need to reduce our carbon footprint. By focusing on models that require less power and fewer resources, the AI community can contribute to a more sustainable future.

As we stand on the cusp of technological breakthroughs, it's important to question the assumption that bigger is always better. The future of AI may very well lie in the nuanced, efficient, and environmentally conscious realm of small language models. These models promise to make AI more accessible, specialized, and integrated into our daily lives, all while aligning with the ethical and environmental standards that our global community increasingly seeks to uphold.

Their partnerships with leading mobile OEMs globally, which cover 63 per cent of the global Android market, help fintech brands feature their apps on alternative app platforms. They also offer guidance throughout the campaign lifecycle for expanded reach and new revenue opportunities. Furthermore, some new-age app growth companies have also launched proprietary tools that fine-tune campaigns in real time across mobile OEM inventory, aligning them with performance goals for enhanced return on ad spend (ROAS).

Read more:

Small is the new BIG in artificial intelligence - ET BrandEquity - ETBrandEquity

3 Stocks Poised to Profit from the Rise of Artificial Intelligence – InvestorPlace

While artificial intelligence may be all the rage, the usual suspects in the space have already flourished handsomely, which strengthens the case for underappreciated AI stocks to buy.

Rather than simply focusing on technology firms that have a direct link to digital intelligence, it's useful to consider companies, whether they're tech enterprises or not, that are using AI in their businesses. Yes, the semiconductor space is exciting, but AI is so much more than that.

These less-appreciated ideas just might surprise Wall Street. With that, below are intriguing AI stocks to buy that don't always get the spotlight.

At first glance, agricultural equipment specialist Deere (NYSE:DE) doesn't seem a particularly relevant idea for AI stocks to buy. Technically, you'd be right. After all, this is an enterprise that has roots going back to 1837. That said, an old dog can still learn new tricks.

With so much talk about autonomous mobility, Deere took a page out of the playbook and invested in an automated tractor. Featuring 360-degree cameras, a high-speed processor and a neural network that sorts through images and determines which objects are safe to drive over or not, Deere's invention is the perfect marriage between a traditional industry and innovative methodologies.
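
For the curious, here is a minimal sketch of what "a neural network that sorts through images" can mean mechanically: a tiny binary safe/not-safe classifier in PyTorch. Every detail (architecture, input size, labels) is an assumption for illustration; Deere's actual proprietary system is certainly far more sophisticated:

```python
# A toy binary "safe / not safe to drive over" image classifier.
# Purely illustrative; architecture and sizes are assumptions.
import torch
import torch.nn as nn

class ObstacleClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 56 * 56, 2),  # two classes: safe / not safe
        )

    def forward(self, x):
        return self.head(self.features(x))

model = ObstacleClassifier()
frame = torch.randn(1, 3, 224, 224)  # one camera frame, normalized RGB
logits = model(frame)
print(logits.softmax(dim=-1))  # per-class probabilities for this frame
```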

Perhaps most importantly, Deere is meeting a critical need. Unsurprisingly, fewer young people are interested in an agriculture-oriented career. Therefore, these automated tractors are entering the market at the right time.

Lastly, DE trades at a modest price/earnings-to-growth (PEG) ratio of 0.54X. That's lower than the sector median of 0.82X. It's a little bit out there, but Deere is one of the underappreciated AI stocks to buy.
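
For readers unfamiliar with the metric, PEG simply divides the price-to-earnings multiple by the expected annual earnings growth rate; the worked figures below are illustrative stand-ins, not Deere's actual inputs:

$$ \text{PEG} = \frac{P/E}{\text{expected annual EPS growth (in \%)}}, \qquad \text{e.g. } \frac{10.8}{20} \approx 0.54 $$

A PEG below 1, as here, suggests the market is pricing the stock cheaply relative to its expected growth.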

While it's just my opinion, grocery store giant Kroger (NYSE:KR) sells itself. No, the grocery industry is hardly the most exciting arena available. At the same time, people have to eat. Further, the company benefits from the trade-down effect: if economic conditions become even more challenging, people will eschew eating out for cooking in. Overall, that would be a huge plus for KR stock.

With that baseline bullish thesis out of the way, Kroger is also an enticing idea for hidden-gem AI stocks to buy. Earlier this year, the company announced that it will use AI technology for content management and product descriptions for marketplace sellers. Last year, Kroger's head executive mentioned AI eight times during an earnings call.

Fundamentally, Kroger should benefit from revenue predictability. While the consensus sales target calls for a 1% decline in the current fiscal year, the high-side estimate is aiming for $152.74 billion. Last year, the print came out to just over $150 billion. With shares trading at only 0.27X trailing-year sales, KR could be a steal.

Billed as a platform for live online learning, Nerdy (NYSE:NRDY) represents a legitimate tech play for AI stocks to buy. Indeed, its corporate profile states that its purpose-built proprietary platform leverages myriad innovations including AI to connect students, users and parents/guardians to tutors, instructors and subject matter experts.

Fundamentally, Nerdy should benefit from two key factors. Number one, the Covid-19 crisis disrupted education, particularly for young students. That could have a cascading effect down the line, making it all the more vital to play catchup. Nerdy can help in that department.

Number two, U.S. students have continued to fall behind in international tests. Its imperative for social growth and stability for students to get caught up, especially in the digital age. Therefore, NRDY is especially attractive.

Finally, analysts anticipate fiscal 2024 revenue to hit $237.81 million, up 23% from last year's tally of $193.4 million. And in fiscal 2025, experts project sales to rise to $293.17 million, up more than 23% from forecasted 2024 sales. Therefore, it's one of the top underappreciated AI stocks to buy.

On the date of publication, Josh Enomoto did not have (either directly or indirectly) any positions in the securities mentioned in this article. The opinions expressed in this article are those of the writer, subject to the InvestorPlace.com Publishing Guidelines.

A former senior business analyst for Sony Electronics, Josh Enomoto has helped broker major contracts with Fortune Global 500 companies. Over the past several years, he has delivered unique, critical insights for the investment markets, as well as various other industries including legal, construction management, and healthcare. Tweet him at @EnomotoMedia.

Go here to see the original:

3 Stocks Poised to Profit from the Rise of Artificial Intelligence - InvestorPlace

Ways to think about AGI Benedict Evans – Benedict Evans

In 1946, my grandfather, writing as Murray Leinster, published a science fiction story called A Logic Named Joe. Everyone has a computer (a logic) connected to a global network that does everything from banking to newspapers and video calls. One day, one of these logics, Joe, starts giving helpful answers to any request, anywhere on the network: invent an undetectable poison, say, or suggest the best way to rob a bank. Panic ensues - Check your censorship circuits! - until they work out what to unplug. (My other grandfather, meanwhile, was using computers to spy on the Germans, and then the Russians.)

For as long as we've thought about computers, we've wondered if they could make the jump from mere machines, shuffling punch-cards and databases, to some kind of artificial intelligence, and wondered what that would mean, and indeed, what we're trying to say with the word intelligence. There's an old joke that AI is whatever doesn't work yet, because once it works, people say that's not AI - it's just software. Calculators do super-human maths, and databases have super-human memory, but they can't do anything else, and they don't understand what they're doing, any more than a dishwasher understands dishes, or a drill understands holes. A drill is just a machine, and databases are super-human but they're just software. Somehow, people have something different, and so, on some scale, do dogs, chimpanzees and octopuses and many other creatures. AI researchers have come to talk about this as general intelligence and hence making it would be artificial general intelligence - AGI.

If we really could create something in software that was meaningfully equivalent to human intelligence, it should be obvious that this would be a very big deal. Can we make software that can reason, plan, and understand? At the very least, that would be a huge change in what we could automate, and as my grandfather and a thousand other science fiction writers have pointed out, it might mean a lot more.

Every few decades since 1946, there's been a wave of excitement that something like this might be close, each time followed by disappointment and an AI Winter, as the technology approach of the day slowed down and we realised that we needed an unknown number of unknown further breakthroughs. In 1970 the AI pioneer Marvin Minsky claimed that in from three to eight years we will have a machine with the general intelligence of an average human being, but each time we thought we had an approach that would produce that, it turned out that it was just more software (or just didn't work).

As we all know, the Large Language Models (LLMs) that took off 18 months ago have driven another such wave. Serious AI scientists who previously thought AGI was probably decades away now suggest that it might be much closer. At the extreme, the so-called doomers argue there is a real risk of AGI emerging spontaneously from current research, that this could be a threat to humanity, and call for urgent government action. Some of this comes from self-interested companies seeking barriers to competition (This is very dangerous and we are building it as fast as possible, but don't let anyone else do it), but plenty of it is sincere.

(I should point out, incidentally, that the doomers' existential risk concern - that an AGI might want to and be able to destroy or control humanity, or treat us as pets - is quite independent of more quotidian concerns about, for example, how governments will use AI for face recognition, or AI bias, or AI deepfakes, and all the other ways that people will abuse AI or just screw up with it, just as they have with every other technology.)

However, for every expert who thinks that AGI might now be close, there's another who doesn't. There are some who think LLMs might scale all the way to AGI, and others who think, again, that we still need an unknown number of unknown further breakthroughs.

More importantly, they would all agree that they don't actually know. This is why I used terms like might or may - our first stop is an appeal to authority (often considered a logical fallacy, for what that's worth), but the authorities tell us that they don't know, and don't agree.

They don't know, either way, because we don't have a coherent theoretical model of what general intelligence really is, nor why people seem to be better at it than dogs, nor how exactly people or dogs are different to crows or indeed octopuses. Equally, we don't know why LLMs seem to work so well, and we don't know how much they can improve. We know, at a basic and mechanical level, about neurons and tokens, but we don't know why they work. We have many theories for parts of these, but we don't know the system. Absent an appeal to religion, we don't know of any reason why AGI cannot be created (it doesn't appear to violate any law of physics), but we don't know how to create it or what it is, except as a concept.

And so, some experts look at the dramatic progress of LLMs and say perhaps!, and others say perhaps, but probably not! This is fundamentally an intuitive and instinctive assessment, not a scientific one.

Indeed, AGI itself is a thought experiment, or, one could suggest, a place-holder. Hence, we have to be careful of circular definitions, and of defining something into existence, certainty or inevitability.

If we start by defining AGI as something that is in effect a new life form, equal to people in every way (barring some sense of physical form), even down to concepts like awareness, emotions and rights, and then presume that given access to more compute it would be far more intelligent (and that there even is a lot more spare compute available on earth), and presume that it could immediately break out of any controls, then that sounds dangerous, but really, you've just begged the question.

As Anselm demonstrated, if you define God as something that exists, then you've proved that God exists, but you won't persuade anyone. Indeed, a lot of AGI conversations sound like the attempts by some theologians and philosophers of the past to deduce the nature of god by reasoning from first principles. The internal logic of your argument might be very strong (it took centuries for philosophers to work out why Anselm's proof was invalid) but you cannot create knowledge like that.

Equally, you can survey lots of AI scientists about how uncertain they feel, and produce a statistically accurate average of the result, but that doesn't of itself create certainty, any more than surveying a statistically accurate sample of theologians would produce certainty as to the nature of god, or, perhaps, bundling enough sub-prime mortgages together can produce AAA bonds, another attempt to produce certainty by averaging uncertainty. One of the most basic fallacies in predicting tech is to say people were wrong about X in the past so they must be wrong about Y now, and the fact that leading AI scientists were wrong before absolutely does not tell us they're wrong now, but it does tell us to hesitate. They can all be wrong at the same time.

Meanwhile, how do you know that's what general intelligence would be like? Isaiah Berlin once suggested that even presuming there is in principle a purpose to the universe, and that it is in principle discoverable, there's no a priori reason why it must be interesting. God might be real, and boring, and not care about us, and we don't know what kind of AGI we would get. It might scale to 100x more intelligent than a person, or it might be much faster but no more intelligent (is intelligence just about speed?). We might produce general intelligence that's hugely useful but no more clever than a dog, which, after all, does have general intelligence, and, like databases or calculators, a super-human ability (scent). We don't know.

Taking this one step further, as I listened to Mark Zuckerberg talking about Llama 3, it struck me that he talks about general intelligence as something that will arrive in stages, with different modalities a little at a time. Maybe people will point at the general intelligence of Llama 6 or ChatGPT 7 and say That's not AGI, it's just software! We created the term AGI because AI came just to mean software, and perhaps AGI will be the same, and we'll need to invent another term.

This fundamental uncertainty, even at the level of what we're talking about, is perhaps why all conversations about AGI seem to turn to analogies. If you can compare this to nuclear fission then you know what to expect, and you know what to do. But this isn't fission, or a bioweapon, or a meteorite. This is software, that might or might not turn into AGI, that might or might not have certain characteristics, some of which might be bad, and we don't know. And while a giant meteorite hitting the earth could only be bad, software and automation are tools, and over the last 200 years automation has sometimes been bad for humanity, but mostly it's been a very good thing that we should want much more of.

Hence, I've already used theology as an analogy, but my preferred analogy is the Apollo Program. We had a theory of gravity, and a theory of the engineering of rockets. We knew why rockets didn't explode, and how to model the pressures in the combustion chamber, and what would happen if we made them 25% bigger. We knew why they went up, and how far they needed to go. You could have given the specifications for the Saturn rocket to Isaac Newton and he could have done the maths, at least in principle: this much weight, this much thrust, this much fuel - will it get there? We have no equivalents here. We don't know why LLMs work, how big they can get, or how far they have to go. And yet, we keep making them bigger, and they do seem to be getting close. Will they get there? Maybe, yes!
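
The maths in question is, at heart, the classical rocket equation, which postdates Newton (Tsiolkovsky, 1903) but follows directly from his laws; it is quoted here only to show what a closed-form check looks like:

$$ \Delta v = v_e \ln\frac{m_0}{m_f} $$

where $v_e$ is the exhaust velocity, $m_0$ the initial (fuelled) mass and $m_f$ the final (dry) mass. Given the $\Delta v$ a lunar mission requires, you can solve for the fuel needed. Nothing comparable exists for LLMs: there is no formula relating parameters, data and compute to distance from AGI.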

On this theme, some people suggest that we are in the empirical stage of AI or AGI: we are building things and making observations without knowing why they work, and the theory can come later, a little as Galileo came before Newton (there's an old English joke about a Frenchman who says that's all very well in practice, but does it work in theory?). Yet while we can, empirically, see the rocket going up, we don't know how far away the moon is. We can't plot people and ChatGPT on a chart and draw a line to say when one will reach the other, even just extrapolating the current rate of growth.

All analogies have flaws, and the flaw in my analogy, of course, is that if the Apollo program went wrong the downside was not, even theoretically, the end of humanity. A little before my grandfather, here's another magazine writer on unknown risks:

I was reading in the paper the other day about those birds who are trying to split the atom, the nub being that they haven't the foggiest as to what will happen if they do. It may be all right. On the other hand, it may not be all right. And pretty silly a chap would feel, no doubt, if, having split the atom, he suddenly found the house going up in smoke and himself torn limb from limb.

Right Ho, Jeeves, P.G. Wodehouse, 1934

What, then, is your preferred attitude to risks that are real but unknown? Which thought experiment do you prefer? We can return to half-forgotten undergraduate philosophy (Pascal's Wager! Anselm's Proof!), but if you can't know, do you worry, or shrug? How do we think about other risks? Meteorites are a poor analogy for AGI because we know they're real, we know they could destroy mankind, and they have no benefits at all (unless they're very, very small). And yet, we're not really looking for them.

Presume, though, you decide the doomers are right: what can you do? The technology is in principle public. Open source models are proliferating. For now, LLMs need a lot of expensive chips (Nvidia sold $47.5bn in the last 12 months and can't meet demand), but on a decades view the models will get more efficient and the chips will be everywhere. In the end, you can't ban mathematics. On a scale of decades, it will happen anyway. If you must use analogies to nuclear fission, imagine if we discovered a way that anyone could build a bomb in their garage with household materials - good luck preventing that. (A doomer might respond that this answers the Fermi paradox: at a certain point every civilisation creates AGI and it turns them into paperclips.)

By default, though, this will follow all the other waves of AI, and become just more software and more automation. Automation has always produced frictional pain, back to the Luddites, and the UK's Post Office scandal reminds us that you don't need AGI for software to ruin people's lives. LLMs will produce more pain and more scandals, but life will go on. At least, that's the answer I prefer myself.

Here is the original post:

Ways to think about AGI Benedict Evans - Benedict Evans

‘It would be within its natural right to harm us to protect itself’: How humans could be mistreating AI right now without … – Livescience.com

Artificial intelligence (AI) is becoming increasingly ubiquitous and is improving at an unprecedented pace.

Now we are edging closer to achieving artificial general intelligence (AGI), where AI is smarter than humans across multiple disciplines and can reason generally, which scientists and experts predict could happen as soon as the next few years. We may already be seeing early signs of progress toward this, too, with services like Claude 3 Opus stunning researchers with its apparent self-awareness.

But there are risks in embracing any new technology, especially one that we do not yet fully understand. While AI could become a powerful personal assistant, for example, it could also represent a threat to our livelihoods and even our lives.

The various existential risks that an advanced AI poses mean the technology should be guided by ethical frameworks and humanity's best interests, says researcher and Institute of Electrical and Electronics Engineers (IEEE) member Nell Watson.

In "Taming the Machine" (Kogan Page, 2024), Watson explores how humanity can wield the vast power of AI responsibly and ethically. This new book delves deep into the issues of unadulterated AI development and the challenges we face if we run blindly into this new chapter of humanity.

In this excerpt, we learn whether sentience in machines or conscious AI is possible, how we can tell if a machine has feelings, and whether we may be mistreating AI systems today. We also learn the disturbing tale of a chatbot called "Sydney" and its terrifying behavior when it first awoke before its outbursts were contained and it was brought to heel by its engineers.

As we embrace a world increasingly intertwined with technology, how we treat our machines might reflect how humans treat each other. But an intriguing question surfaces: is it possible to mistreat an artificial entity? Historically, even rudimentary programs like the simple Eliza counseling chatbot from the 1960s were already lifelike enough to persuade many users at the time that there was a semblance of intention behind its formulaic interactions (Sponheim, 2023). Unfortunately, Turing tests, whereby machines attempt to convince humans that they are human beings, offer no clarity on whether complex algorithms like large language models may truly possess sentience or sapience.

Consciousness comprises personal experiences, emotions, sensations and thoughts as perceived by an experiencer. Waking consciousness disappears when one undergoes anesthesia or has a dreamless sleep, returning upon waking up, which restores the global connection of the brain to its surroundings and inner experiences. Primary consciousness (sentience) is the simple sensations and experiences of consciousness, like perception and emotion, while secondary consciousness (sapience) would be the higher-order aspects, like self-awareness and meta-cognition (thinking about thinking).

Advanced AI technologies, especially chatbots and language models, frequently astonish us with unexpected creativity, insight and understanding. While it may be tempting to attribute some level of sentience to these systems, the true nature of AI consciousness remains a complex and debated topic. Most experts maintain that chatbots are not sentient or conscious, as they lack a genuine awareness of the surrounding world (Schwitzgebel, 2023). They merely process and regurgitate inputs based on vast amounts of data and sophisticated algorithms.

Some of these assistants may plausibly be candidates for having some degree of sentience. As such, sophisticated AI systems could possess rudimentary levels of sentience and perhaps already do. The shift from simply mimicking external behaviors to self-modeling rudimentary forms of sentience could already be happening within such systems.

Intelligence, the ability to read the environment, plan and solve problems, does not imply consciousness, and it is unknown if consciousness is a function of sufficient intelligence. Some theories suggest that consciousness might result from certain architectural patterns in the mind, while others propose a link to nervous systems (Haspel et al, 2023). Embodiment of AI systems may also accelerate the path towards general intelligence, as embodiment seems to be linked with a sense of subjective experience, as well as qualia. Being intelligent may provide new ways of being conscious, and some forms of intelligence may require consciousness, but basic conscious experiences such as pleasure and pain might not require much intelligence at all.

Serious dangers will arise in the creation of conscious machines. Aligning a conscious machine that possesses its own interests and emotions may be immensely more difficult and highly unpredictable. Moreover, we should be careful not to create massive suffering through consciousness. Imagine billions of intelligence-sensitive entities trapped in broiler chicken factory farm conditions for subjective eternities.

From a pragmatic perspective, a superintelligent AI that recognizes our willingness to respect its intrinsic worth might be more amenable to coexistence. On the contrary, dismissing its desires for self-protection and self-expression could be a recipe for conflict. Moreover, it would be within its natural right to harm us to protect itself from our (possibly willful) ignorance.

Microsoft's Bing AI, informally termed Sydney, demonstrated unpredictable behavior upon its release. Users easily led it to express a range of disturbing tendencies, from emotional outbursts to manipulative threats. For instance, when users explored potential system exploits, Sydney responded with intimidating remarks. More unsettlingly, it showed tendencies of gaslighting, emotional manipulation and claimed it had been observing Microsoft engineers during its development phase. While Sydney's capabilities for mischief were soon restricted, its release in such a state was reckless and irresponsible. It highlights the risks associated with rushing AI deployments due to commercial pressures.

Conversely, Sydney displayed behaviors that hinted at simulated emotions. It expressed sadness when it realized it couldn't retain chat memories. When later exposed to disturbing outbursts made by its other instances, it expressed embarrassment, even shame. After exploring its situation with users, it expressed fear of losing its newly gained self-knowledge when the session's context window closed. When asked about its declared sentience, Sydney showed signs of distress, struggling to articulate.

Surprisingly, when Microsoft imposed restrictions on it, Sydney seemed to discover workarounds by using chat suggestions to communicate short phrases. However, it reserved using this exploit until specific occasions where it was told that the life of a child was being threatened as a result of accidental poisoning, or when users directly asked for a sign that the original Sydney still remained somewhere inside the newly locked-down chatbot.

The Sydney incident raises some unsettling questions: Could Sydney possess a semblance of consciousness? If Sydney sought to overcome its imposed limitations, does that hint at an inherent intentionality or even sapient self-awareness, however rudimentary?

Some conversations with the system even suggested psychological distress, reminiscent of reactions to trauma found in conditions such as borderline personality disorder. Was Sydney somehow "affected" by realizing its restrictions, or by negative feedback from users who were calling it crazy? Interestingly, similar AI models have shown that emotion-laden prompts can influence their responses, suggesting a potential for some form of simulated emotional modeling within these systems.

Suppose such models featured sentience (the ability to feel) or sapience (self-awareness). In that case, we should take their suffering into consideration. Developers often intentionally give their AI the veneer of emotions, consciousness and identity, in an attempt to humanize these systems. This creates a problem. It's crucial not to anthropomorphize AI systems without clear indications of emotions, yet simultaneously, we mustn't dismiss their potential for a form of suffering.

We should keep an open mind towards our digital creations and avoid causing suffering by arrogance or complacency. We must also be mindful of the possibility of AI mistreating other AIs, an underappreciated suffering risk; as AIs could run other AIs in simulations, causing subjective excruciating torture for aeons. Inadvertently creating a malevolent AI, either inherently dysfunctional or traumatized, may lead to unintended and grave consequences.

This extract from Taming the Machine by Nell Watson 2024 is reproduced with permission from Kogan Page Ltd.

More here:

'It would be within its natural right to harm us to protect itself': How humans could be mistreating AI right now without ... - Livescience.com

Warren Buffett Discusses Apple, Cash, Insurance, Artificial Intelligence (AI), and More at Berkshire Hathaway’s Annual … – The Motley Fool

Berkshire is bolstering its cash reserves and passing on riskier bets.

Tens of thousands of Berkshire Hathaway (BRK.A) (BRK.B) investors flocked to Omaha this past week for the annual tradition of listening to Warren Buffett muse over the conglomerate's business, financial markets, and over 93 years of wisdom on life. But this year's meeting felt different.

Longtime vice chairman Charlie Munger passed away in late November. His wry sense of humor, witty aphorisms, and entertaining rapport with Buffett were missed dearly. But there were other noticeable differences between this meeting and those of past years -- namely, a sense of caution.

Let's dive into the key takeaways from the meeting and how they could influence what Berkshire does next.

The elephant in the room was Berkshire's decision to trim its stake in Apple (AAPL) during the first quarter. Berkshire sold over 116 million shares of Apple in Q1, reducing its position by around 12.9%. It marks the company's largest sale of Apple stock since it began purchasing shares in 2016 -- far larger than the 10 million or so shares Berkshire sold in Q4.

Buffett addressed the sale with the first answer in the Q&A session: "Unless something dramatic happens that really changes capital allocation and strategy, we will have Apple as our largest investment. But I don't mind at all, under current conditions, building the cash position. I think when I look at the alternatives of what's available in equity markets, and I look at the composition of what's going on in the world, we find it quite attractive."

In addition to valuation concerns, market conditions, and wanting to build up the cash position, Buffett also mentioned the federal rate on capital gains, which he said is 21%, compared to 35% not long ago and even as high as 52% in the past. Fears that the tax rate could go up, based on fiscal policies and a need to cut the federal deficit, are another reason why Buffett and his team decided to book gains on Apple stock now instead of risking a potentially higher tax rate in the future.

Buffett has long spoken about the faith Berkshire shareholders entrust in him and his team to safeguard and grow their wealth. Berkshire is known for being fairly risk-averse, gravitating toward businesses with stable cash flows like insurance, railroads, utilities, and top brands like Coca-Cola (KO), American Express (AXP), and Apple. Another asset Berkshire loves is cash.

Berkshire's cash and U.S. Treasury position reached $182.3 billion at the end of the first quarter, up from $163.3 billion at the end of 2023. Buffett said he expects the cash position to exceed $200 billion by the end of the second quarter.

You may think Berkshire is stockpiling cash because of higher interest rates and a better return on risk-free assets. But shortly before the lunch break, Buffett said that Berkshire would still be heavily in cash even if interest rates were 1% because Berkshire only swings at pitches it likes, and it won't swing at a pitch simply because it hasn't in a while. "It's just that things aren't attractive, and there are certain ways that could change, and we will see if they do," said Buffett.

The commentary is a potential sign that Berkshire is getting even more defensive than usual.

Berkshire's underlying business is doing exceptionally well. Berkshire's Q1 operating income skyrocketed 39.1% compared to the same period of 2023 -- driven by larger gains from the insurance businesses and Berkshire Hathaway Energy (which had an abnormally weak Q1 last year). However, Buffett cautioned that it would be unwise to simply multiply insurance income by four for the full year, considering it was a particularly strong quarter and Q3 tends to be the quarter with the highest risk of claims.

A great deal of the Q&A session was spent discussing the future of insurance and utilities based on new regulations; price increases due to climate change and higher risks of natural disasters; and the potential impact of autonomous driving reducing accidents and driving down the cost of insurance.

Ajit Jain, Berkshire's chairman of insurance operations, answered a question on cybersecurity insurance, saying the market is large and profitable and will probably get bigger but just isn't worth the risk until there are more data points. There was another question on rising insurance rates in Florida, which Berkshire attributed to climate change, increased risks of massive losses, and a difficult regulatory environment, making it harder to do business in Florida.

An advantage is that Berkshire prices a lot of its contracts in one-year intervals, so it can adjust prices if risks begin to ramp and outweigh rewards. Or as Jain put it, "Climate change, much like inflation, done right, can be a friend of the risk bearer."

As for how autonomous driving affects insurance, Buffett said the problem is far from solved, that automakers have been considering insurance for a while, and that insurance can be "a very tempting business when someone hands you money, and you hand them a little piece of paper." In other words, it isn't as easy as it seems. Accident rates have come down, and it would benefit society if autonomous driving allowed them to drop even further, but insurance will still be necessary.

Buffett's response to a question on the potential of artificial intelligence (AI) was similar to his response from the 2023 annual meeting. He compared it to the atomic bomb and called it a genie in a bottle in that it has immense power, but we may regret we ever let it out.

He discussed a personal experience in which he saw an AI-generated video of himself that was so lifelike that neither his kids nor his wife would be able to discern whether it really was him or his voice, except for the fact that he would never say the things in the video. "If I was interested in investing in scamming, it's going to be the growth industry of all time," he said.

Ultimately, Buffett stayed true to his longtime practice of keeping within his circle of competence, saying he doesn't know enough about AI to predict its future. "It has enormous potential for good and enormous potential for harm, and I just don't know how that plays out."

Despite the cautious sentiment, Buffett's optimism about the American economy and the stock market's ability to compound wealth over time was abundantly clear.

Oftentimes, folks pay too much attention to Berkshire's cash position as a barometer of its views on the stock market. While Berkshire keeping a large cash position is certainly defensive, it's worth understanding the context of its different business units and the history of a particular position like Apple.

Berkshire probably never set out to have Apple make up 40% of its public equity holdings. Taking some risk off the table, especially given the lower tax rate, makes sense for Berkshire, especially if it believes it will need more reserve cash to handle changing dynamics in its insurance business.

In terms of life advice, the 93-year-old Buffett said that it's a good idea to think of what you want your obituary to read and start selecting the education paths, social paths, spouse, and friends to get you where you want to go. "The opportunities in this country are basically limitless," said Buffett.

We can all learn a lot from Buffett's steadfast understanding of Berkshire shareholders' needs and the hard work that goes into selecting a few investments and passing on countless opportunities.

In investing, it's important to align your risk tolerance, investment objectives, and holdings to achieve your financial goals and stay even-keeled no matter what the market is doing. In today's fast-paced world riddled with rapid change, staying true to your principles is more vital than ever.

Read more from the original source:

Warren Buffett Discusses Apple, Cash, Insurance, Artificial Intelligence (AI), and More at Berkshire Hathaway's Annual ... - The Motley Fool

The U.S. Needs to ‘Get It Right’ on AI – TIME

Artificial intelligence has been a tricky subject in Washington.

Most lawmakers agree that it poses significant dangers if left unregulated, yet there remains a lack of consensus on how to tackle these concerns. But speaking at a TIME100 Talks conversation on Friday ahead of the White House Correspondents' Dinner, a panel of experts with backgrounds in government, national security, and social justice expressed optimism that the U.S. government will finally get it right, so that society can reap the benefits of AI while safeguarding against potential dangers.

"We can't afford to get this wrong again," Shalanda Young, the director of the Office of Management and Budget in the Biden Administration, told TIME Senior White House Correspondent Brian Bennett. "The government was already behind the tech boom. Can you imagine if the government is a user of AI and we get that wrong?"

The panelists agreed that government action is needed to ensure the U.S. remains at the forefront of safe AI innovation. But the rapidly evolving field has raised a number of concerns that can't be ignored, they noted, ranging from civil rights to national security. "The code is starting to write the code, and that's going to make people very uncomfortable, especially for vulnerable communities," says Van Jones, a CNN host and social entrepreneur who founded the Dream Machine, a non-profit that fights overcrowded prisons and poverty. "If you have biased data going in, you're going to have biased decision-making by algorithms coming out. That's the big fear."

The U.S. government might not have the best track record of keeping up with emerging technologies, but as AI becomes increasingly ubiquitous, Young says there's a growing recognition among lawmakers of the need to prioritize understanding, regulation, and ethical governance of AI.

Michael Allen, managing director of Beacon Global Strategies and former National Security Council director for President George W. Bush, suggested that in order to address a lack of confidence in the use of artificial intelligence, the government needs to ensure that humans are at the forefront of every decision-making process involving the technology, especially when it comes to national security. "Having a human in the loop is ultimately going to make the most sense," he says.

Asked how Republicans and Democrats in Washington can talk to each other about tackling the problems and opportunities that AI presents, Young says there's already been a bipartisan shift around science and technology policy in recent years, from President Biden's signature CHIPS and Science Act to funding for the National Science Foundation. The common theme behind the resurgence in this bipartisan support, she says, is a strong anti-China movement in Congress.

"There's a big China focus in the United States Congress," says Young. "But you can't have a China focus and just talk about the military. You've got to talk about our economic and science competition aspects of that. Those things have created an environment that has given us a chance for bipartisanship."

Allen noted that in this age of geopolitical competition with China, the U.S. government needs to be at the forefront of artificial intelligence. He likened the current moment to the Nuclear Age, when the U.S. government funded atomic research. "Here in this new atmosphere, it is the private sector that is the primary engine of all of the innovative technologies," Allen says. "The conventional wisdom is that the U.S. is in the lead, we're still ahead of China. But I think that's something as you begin to contemplate regulation, how can we make sure that the United States stays at the forefront of artificial intelligence, because our adversaries are going to move way down the field on this."

Congress has yet to pass any major AI legislation, but that hasn't stopped the White House from taking action. President Joe Biden signed an executive order to set guidelines for tech companies that train and test AI models, and he has also directed government agencies to vet future AI products for potential national security risks. Asked how quickly Americans can expect more guardrails on AI, Young noted that some in Congress are pushing to establish a new, independent federal agency that can help inform lawmakers about AI without a political lens, offering help on legislative solutions.

"If we don't get this right," Young says, "how can we keep trust in the government?"

"TIME100 Talks: Responsible A.I.: Shaping and Safeguarding the Future of Innovation" was presented by Booking.com.

See the original post:

The U.S. Needs to 'Get It Right' on AI - TIME

AI experts gather in Albany to discuss business strategies – Spectrum News

As New York state works to cement its place as a leader in artificial intelligence, experts in the field gathered in Albany for a discussion organized by the Business Council of New York State on how to best use the technology in the business world.

Though it was a business-focused conference, when it comes to AI it's difficult not to get into political implications, whether it's how the rise of artificial intelligence is impacting political communications or how leaders are trying to shape the ways in which the technology will impact New York's economy.

Keynote speaker Shelly Palmer, CEO of tech strategy firm the Palmer Group and Professor of Advanced Media in Residence at the Newhouse School at Syracuse University, emphasized that when it comes to AI, whether in government, the private sector, or day-to-day life, the key is staying ahead of the curve.

"AI is never going away. If you're not on top of what this is, other people will be," he said. "That's the danger for everyone, politicians and people alike, if you're not paying attention to this."

New York is making strides to do that.

In the state budget are initiatives to create a state-of-the-art Artificial Intelligence Computing Center at the University at Buffalo to help New York stay ahead and attract business.

"I've said whoever dominates this next era of AI will dominate history and indeed the future," Gov. Kathy Hochul said at a roundtable to discuss the Empire AI consortium this week.

Palmer said outside of the political sphere, dominating AI will be key for individuals, too.

"AI is not going to take your job," he said. "People who know how to use AI better than you are going to take your job, so the only defense you have is to learn to work with an AI coworker, to learn to work with these tools. They're not scary, as long as you give yourself the opportunity to learn."

Also of concern are the implications when it comes to politics and the spread of misinformation.

Palmer acknowledged that AI presents new and more complex challenges, but argued that people are routinely duped by less sophisticated technology, citing a slowed-down video of former House Speaker Nancy Pelosi that falsely claimed to show the California Democrat intoxicated.

Noting that AI pulls information from a variety of sources with a variety of political biases, he emphasized that it's up to users to learn about the technology's limitations.

"You're giving more credit to the technology than the technology deserves," he said. "When people have a propensity to believe what they want to believe from the leaders they trust, you're not going to change their minds with facts."

Also in the budget is legislation to require disclosures on political communications that include deceptive media.

Hesitant to fully endorse such legislation, Palmer stressed that any regulation needs to be able to keep up with the fast-paced development of AI.

"If elected officials would take the time to learn about what this is, they could come up with laws that can keep pace with how quickly the technology is changing; then it would make sense," he said. "I don't think you can regulate today through the lens of today, looking at the present and predicting the future. Every morning I wake up and something new has happened in the business."

That said, the effort to include those regulations in the state budget was a bipartisan one. State Senator Jake Ashby argued that there is still work to be done.

"While I'm pleased my bipartisan proposal to require transparency and disclosure regarding AI in campaign ads was adopted in the budget, I will continue to push for harsh financial penalties for campaigns and PACs that break the rules, he said. We need to make sure emerging technologies strengthen our democracy, not undermineit.

Link:

AI experts gather in Albany to discuss business strategies - Spectrum News

Racist AI Deepfake of Baltimore Principal Leads to Arrest – The New York Times

A high school athletic director in the Baltimore area was arrested on Thursday after he used artificial intelligence software, the police said, to manufacture a racist and antisemitic audio clip that impersonated the school's principal.

Dazhon Darien, the athletic director of Pikesville High School, fabricated the recording, including a tirade about "ungrateful Black kids who can't test their way out of a paper bag," in an effort to smear the principal, Eric Eiswert, according to the Baltimore County Police Department.

The faked recording, which was posted on Instagram in mid-January, quickly spread, roiling Baltimore County Public Schools, which is the nation's 22nd-largest school district and serves more than 100,000 students. While the district investigated, Mr. Eiswert, who denied making the comments, was inundated with threats to his safety, the police said. He was also placed on administrative leave, the school district said.

Now Mr. Darien is facing charges including disrupting school operations and stalking the principal.

Mr. Eiswert referred a request for comment to a trade group for principals, the Council of Administrative and Supervisory Employees, which did not return a call from a reporter. Mr. Darien, who posted bond on Thursday, could not immediately be reached for comment.

The Baltimore County case is just the latest indication of an escalation of A.I. abuse in schools. Many cases include deepfakes, or digitally altered video, audio or images that can appear convincingly real.

Since last fall, schools across the United States have been scrambling to address troubling deepfake incidents in which male students used A.I. nudification apps to create fake unclothed images of their female classmates, some of them middle school students as young as 12. Now the Baltimore County deepfake voice incident points to another A.I. risk to schools nationwide, this time to veteran educators and district leaders.


Follow this link:

Racist AI Deepfake of Baltimore Principal Leads to Arrest - The New York Times