
Category Archives: Artificial General Intelligence

Operation HOPE and CAU Host ChatGPT Creator to Discuss AI – Black Enterprise

Posted: May 18, 2023 at 12:55 am

Operation HOPE recently partnered with Clark Atlanta University (CAU) to host two events focused on "The Future of Artificial Intelligence" with Sam Altman, OpenAI founder and ChatGPT creator. The conversations were led by Operation HOPE Founder, Chairman, and CEO John Hope Bryant and featured the President of Clark Atlanta University, Dr. George T. French, Jr.

Held on CAU's campus, the first event provided Atlanta's most prominent Black leaders from the public and private sectors an opportunity to engage with Altman and discuss pressing issues around artificial intelligence (AI). The second discussion provided local HBCU and Atlanta-based college students with the same opportunity.

Altman, a billionaire tech pioneer, shared how he believes AI can positively impact lives and create new economic opportunities for communities of color, particularly among students at Historically Black Colleges and Universities (HBCUs). The standing-room-only event included representatives from government, technology, non-profit, education, and the creative industries, among others.

In 2015, Altman co-founded OpenAI, a nonprofit artificial intelligence research and deployment company with the stated mission "to ensure that artificial general intelligence (highly autonomous systems that outperform humans at most economically valuable work) benefits all of humanity." In partnership with Operation HOPE, serial entrepreneur Altman has committed to making AI a force for good by stimulating economic growth, increasing productivity at lower costs, and stimulating job creation.

"The promise of an economic boost via machine learning is understandably seductive, but if we want to ensure AI technology has a positive impact, we must all be engaged early on. With proper policy oversight, I believe it can transform the future of the underserved," said Operation HOPE Chairman, Founder, and CEO John Hope Bryant. "The purpose of this discussion is to discover new ways to leverage AI to win in key areas of economic opportunity such as education, housing, employment, and credit. If it can revolutionize business, it can do the same for our communities."

"Getting this right by figuring out the new society that we want to build and how we want to integrate AI technology is one of the most important questions of our time," Altman said. "I'm excited to have this discussion with a diverse group of people so that we can build something that humanity as a whole wants and needs."

Throughout the event, Altman and Bryant demystified AI and how modern digital technology is revolutionizing the way today's businesses compete and operate. By putting AI and data at the center of their capabilities, companies are redefining how they create, capture, and share value, and are achieving impressive growth as a result. During the Q&A session, they also discussed how government agencies can address AI policies that will lead to more equitable outcomes.

Altman is an American entrepreneur, angel investor, co-founder of Hydrazine Capital, former president of Y Combinator, founder and former CEO of Loopt, and co-founder and CEO of OpenAI. He was also one of TIME Magazine's 100 Most Influential People of 2023.

According to recent research by IBM, more than one in three businesses were using AI technology in 2022. The report also notes that the adoption rate is exponential, with 42% currently considering incorporating AI into their business processes. Other research suggests that although the public sector is lagging, an increasing number of government agencies are considering or starting to use AI to improve operational efficiencies and decision-making. (McKinsey, 2020)

The rest is here:

Operation HOPE and CAU Host ChatGPT Creator to Discuss AI - Black Enterprise

Posted in Artificial General Intelligence | Comments Off on Operation HOPE and CAU Host ChatGPT Creator to Discuss AI – Black Enterprise

Paper Claims AI May Be a Civilization-Destroying "Great Filter" – Futurism

Posted: at 12:55 am

If aliens are out there, why haven't they contacted us yet? It may be, a new paper argues, that they (or, in the future, we) inevitably get wiped out by ultra-strong artificial intelligence, victims of our own drive to create a superior being.

This potential answer to the Fermi paradox (in which physicist Enrico Fermi and subsequent generations pose the question: "where is everybody?") comes from National Intelligence University researcher Mark M. Bailey, who in a new, yet-to-be-peer-reviewed paper posits that advanced AI may be exactly the kind of catastrophic risk that could wipe out entire civilizations.

Bailey cites superhuman AI as a potential "Great Filter," a potential answer to the Fermi paradox in which some terrible and unknown threat, artificial or natural, wipes out intelligent life before it can make contact with others.

"For anyone concerned with global catastrophic risk, one sobering question remains," Bailey writes. "Is the Great Filter in our past, or is it a challenge that we must still overcome?"

We humans, the researcher notes, are "terrible at intuitively estimating long-term risk," and given how many warnings have already been issued about AI and its potential endpoint (an artificial general intelligence, or AGI), it's possible, he argues, that we may be summoning our own demise.

"One way to examine the AI problem is through the lens of the second species argument," the paper continues. "This idea considers the possibility that advanced AI will effectively behave as a second intelligent species with whom we will inevitably share this planet. Considering how things went the last time this happened when modern humans and Neanderthals coexisted the potential outcomes are grim."

Even scarier, Bailey notes, is the prospect of near-god-like artificial superintelligence (ASI), in which an AGI surpasses human intelligence because "any AI that can improve its own code would likely be motivated to do so."

"In this scenario, humans would relinquish their position as the dominant intelligent species on the planet with potential calamitous consequences," the author hypothesizes. "Like the Neanderthals, our control over our future, and even our very existence, may end with the introduction of a more intelligent competitor."

There hasn't yet, of course, been any direct evidence to suggest that extraterrestrial AIs wiped out natural life in any alien civilizations, though in Bailey's view, "the discovery of artificial extraterrestrial intelligence without concurrent evidence of a pre-existing biological intelligence would certainly move the needle."

That, of course, raises the possibility that there are destructive AIs lingering around the universe after eliminating their creators. To that end, Bailey helpfully suggests that "actively signaling our existence in a way detectable to such an extraterrestrial AI may not be in our best interest" because "any competitive extraterrestrial AI may be inclined to seek resources elsewhere, including Earth."

"While it may seem like science fiction, it is probable that an out-of-control... technology like AI would be a likely candidate for the Great Filter whether organic to our planet, or of extraterrestrial origin," Bailey concludes. "We must ask ourselves; how do we prepare for this possibility?"

Reader, it's freaky stuff, but once again, we're glad someone is considering it.

More on an AI apocalypse: Warren Buffett Compares AI to the Atom Bomb

Follow this link:

Paper Claims AI May Be a Civilization-Destroying "Great Filter" - Futurism

Posted in Artificial General Intelligence | Comments Off on Paper Claims AI May Be a Civilization-Destroying "Great Filter" – Futurism

Vintage AI Predictions Show Our Hopes and Fears Aren’t New … – Gizmodo

Posted: at 12:55 am

The essence of the very concept of artificial intelligence is inherently cold. Cool metal encasing a mess of silicon and wires buzzing with processed data. No, the computer cannot feel or even think, but that sense of emotional scarcity often gets attributed to the scientists and researchers who helped create what we know as AI. Knowing that, what emotional substance can you truly glean from the old typewritten extrapolations of an early computer scientist?

Enough; enough is the answer. The starched writing of a 1960s academic is burned with few hints of excitement, like the scattered ash of a cigarette left on a white page, but they are there, hidden in the technical jargon of faded research papers and technical documents. In a letter from pioneering philosopher David Lewis to a friend in 1964, he touched on the metaphysical nature of intelligence. He claimed that human intelligence is an illusion, that we're barely more than computers with large storage capacity. He rejects an existential identity of the human consciousness. To recreate a real human brain, you would need to recreate all its firing neurons, a difficult task since back then scientists understood even less about the brain than they do now.

"If a machine won the Turing game (behaved intelligently) with the wrong internal mechanism, it wouldn't be intelligent," Lewis wrote. "You say, what if humans have the same sort of internal mechanism? We don't know well what kind they have."

At the annual New York International Antiquarian Book Fair, a gathering of antique and rare booksellers from around the world, one small booth run by the UK-based Christian White Rare Books contained a single shelf of old, yellowing documents, photos, and other paraphernalia relating to the early days of research into artificial intelligence. Owner Christian White and his daughter Poppy gave Gizmodo an exclusive glimpse into the fading texts, which offer a rare and intensive look at that early AI history.

Christian White and his daughter Poppy in front of their collection of documents relating to AI. Photo: Kyle Barr / Gizmodo

Though the Turing test is now considered antiquated, these papers, letters, and documents were penned during exciting days: the "anything is possible" post-war fervor that bled from the research papers into the collective consciousness through the burgeoning golden-age science fiction scene. So much of the collection is, White admitted, quite boring to look at, at least on its face. There are technical memorandums, conference flyers. To White, with all his very British mix of whimsy and deference, this was the stuff that was going to change the world, but it's not always done in a beautiful, fantastic format.

Most of the papers relate to the late 1940s, '50s, and '60s, the post-WWII age when the legacy of Alan Turing and his contemporaries created a boom time for advancing the idea of artificial cognition. Computers, these "thinking machines," didn't actually perform any real thought, and our modern generative AI still doesn't actually think. But beyond vague notions of an artificial general intelligence, this early research shows that the ideas of deep learning, as we know it now, were already being theorized before the technology was there to see them through. The concept of neural networks that powers today's AI chatbots, as well as probabilistic computing and machines passing as human: all of that was being considered long before companies like OpenAI came onto the scene.

"From the very beginning, computers are being thought about not just as mathematical, but as a kind of thinking," White said. "This idea of artificial intelligence was there from the outset."

While there are plenty of AI experiments coming from laypeople in open-source networks, today's major AI developments stem from for-profit companies that have shunted their entire businesses toward deploying generative AI en masse. Compared to now, these papers were from a time of scientists doing research for research's sake, and not just in the halls of academia. Places like Bell Telephone Laboratories, the once-famed site of Nobel Prize-winning research into lasers and transistors, were also home to early work in AI. One of White's collections included a probabilistic work study examining how well Turing-type computing machines performed with a random element. Another old brochure from ACF Industries shared multiple papers discussing, in part, techniques for weighing information for intelligence analysis.

A full scope of the papers, books, and photos in Christian White's collection. Photo: New York International Antiquarian Book Fair

The average ChatGPT user probably hasn't heard of any of the names on these documents. Women, in particular, are often left out of the conversation on early AI research, despite how critical they were. One of those ACF papers was co-authored by Joyce Friedman, a professor and computer scientist working on the cutting edge of machine intelligence in the 1950s and '60s. Another photo in the collection features pioneer computer scientist Jean Hall feeding perforated paper through a computing machine at Argonne National Laboratory.

The documents also showcase the concerns these early researchers had about the computers they were building, before they even had the technical ability to build these machines. They were aware of the risks of what would happen when people couldn't tell the difference between content generated by a human or by AI.

"Just the idea of making a computer simulate a human is quite terrifying," said Poppy. The younger White is starting her master's degree in reproductive health at the University of Edinburgh. "I think these people, they were just thinking of something so new and different, it probably seemed kind of surreal."


Go here to see the original:

Vintage AI Predictions Show Our Hopes and Fears Aren't New ... - Gizmodo

Posted in Artificial General Intelligence | Comments Off on Vintage AI Predictions Show Our Hopes and Fears Aren’t New … – Gizmodo

AI's Impact on Journalism – Signals AZ

Posted: at 12:55 am

Article audio is made possible by CAST11 Prescott Podcast Network. A Talking Glass Media production.

Turning things on their heads is something we like to do here at Talking Glass Media in Prescott Valley. We wanted to share with our readers what an AI (artificial intelligence) chatbot like ChatGPT, by the company OpenAI, would say if we asked it "How will AI impact journalism?" Before we share that answer, here's some introductory information on ChatGPT and the company OpenAI.

According to the OpenAI website: "OpenAI is an AI research and deployment company. Our mission is to ensure that artificial general intelligence (AI systems that are generally smarter than humans) benefits all of humanity. We are governed by a nonprofit and our unique capped-profit model drives our commitment to safety. This means that as AI becomes more powerful, we can redistribute profits from our work to maximize the social and economic benefits of AI technology."

AI has already begun to impact journalism in various ways, and this trend is likely to continue and intensify in the future. Here are some areas where AI is expected to have a significant impact:

Automated Reporting: AI can generate news stories from structured data. For example, templates can be created to report on financial earnings or sports results, which AI can then populate with the relevant numbers. This automation can free up human journalists to focus on more complex stories.

Data Analysis: AI tools can analyze vast amounts of data to identify trends, correlations, and outliers that might be relevant to journalists. These tools can do this work far faster and more accurately than humans, enabling journalists to uncover stories they might not otherwise find.

Verification: With the rise of deepfakes and other forms of digital deception, AI tools are being developed to help journalists verify the authenticity of images, videos, and other forms of media. These tools can help journalists navigate the increasingly complex landscape of digital misinformation.

Personalization: AI can help to tailor news content to individual readers based on their interests, reading habits, and other factors. This can help to increase engagement and keep readers coming back for more.

News Gathering: AI can assist in gathering news from various sources, including social media, blogs, and forums. This can greatly enhance a journalist's ability to gather information and stay updated with the latest happenings.

Fact-checking: AI can help automate the fact-checking process, which is becoming increasingly important in the era of fake news. By cross-referencing information with reliable sources, AI can help ensure the accuracy of news reporting.

While these are significant benefits, there are also important ethical considerations and potential downsides to consider. AI could potentially displace some journalists from their jobs, particularly those involved in more routine, data-heavy reporting. There's also the risk that AI-generated news could be manipulated for propaganda or misinformation purposes. And there are questions about how the use of AI for personalization could lead to echo chambers, where people are only exposed to news that reinforces their existing beliefs.

Finally, there's the question of transparency and accountability. If a news story is generated by an AI, who is responsible for its accuracy? These and other questions will need to be addressed as AI continues to play a larger role in journalism. [end ChatGPT answer on how AI will impact journalism]
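The "Automated Reporting" item in that answer is the most concrete of the list, and it predates LLMs. As a minimal sketch (the template, field names, and figures below are hypothetical, not any newsroom's actual system), an earnings story can be generated by filling a fixed template from structured data:

```python
# Minimal sketch of template-based automated reporting: a fixed story
# template is populated with structured earnings data.
TEMPLATE = (
    "{company} reported {quarter} revenue of ${revenue:.1f} million, "
    "{direction} {change:.1f}% from the same quarter last year."
)

def earnings_story(company, quarter, revenue, prior_revenue):
    """Fill the fixed template from structured data; no free-form generation."""
    change = (revenue - prior_revenue) / prior_revenue * 100
    direction = "up" if change >= 0 else "down"
    return TEMPLATE.format(company=company, quarter=quarter, revenue=revenue,
                           direction=direction, change=abs(change))

# Hypothetical figures for illustration:
print(earnings_story("Acme Corp", "Q2", 125.0, 118.3))
# Acme Corp reported Q2 revenue of $125.0 million, up 5.7% from the same quarter last year.
```

Because the output is entirely determined by the input data, this kind of system cannot hallucinate; generative AI trades that guarantee for flexibility.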

So there you have it: what an AI chatbot like ChatGPT by OpenAI thinks of the impact AI will have on journalism. What do you think of this most recent proliferation of AI in our society? This is just the beginning... We should mention that AI fixes and improves itself along the way...

Of course, you can also listen to this entire article on the SignalsAZ podcast, which uses a text-to-speech service powered by machine-learning speech-generation software. Talking Glass Media (owner of SignalsAZ.com) was the first publisher in the region to adopt this technology.

View original post here:

AI's Impact on Journalism - Signals AZ

Posted in Artificial General Intelligence | Comments Off on AI's Impact on Journalism – Signals AZ

What is Augmented Intelligence? Explanation and Examples – Techopedia

Posted: at 12:55 am

Augmented intelligence is the use of artificial intelligence (AI) technologies to enhance human cognitive capabilities, decision-making processes, and productivity. The label "augmented" is meant to focus attention on AI's assistive role, and to emphasize the important role humans play in unlocking the complete potential of AI technologies.

The term is credited to the research firm Gartner and is often used in marketing. It is intended to shift the narrative away from the idea that AI will replace humans and focus attention on how the technology is just one more tool that humans can use to solve problems faster.

It acknowledges that while computers can do some things faster and better than humans, those skills should be used to complement human intelligence, not replace it.

The label "augmented intelligence" is sometimes considered to be a more politically correct way to describe artificial intelligence. In reality, the choice of the adjective "augmented" provides a more accurate picture of today's AI capabilities.

That's because the primary objective of augmented intelligence is to use AI technologies as a tool. By definition, a tool is designed for a particular task, and today's AI systems are designed to perform specific tasks.

Take ChatGPT, for example. The large language model (LLM) can generate impressive conversational responses, but it can't create images. And while the generative AI program Dall-E excels at generating images, it can't provide conversational responses. That's because today's AI is what researchers call narrow AI.

Narrow AI vs. General AI

There are two types of artificial intelligence: narrow AI and general AI.

Narrow AI (also called weak AI) refers to AI systems that are designed to perform specific tasks or functions within a limited domain. These systems excel in their designated areas but lack the versatility, creativity and adaptability of human intelligence.

In contrast, general AI (also referred to as strong AI) represents the hypothetical idea that someday, AI systems will have human-level or beyond-human-level cognitive abilities. General AI is what is often depicted in science-fiction literature and films.

At present, all AI systems, including ChatGPT, fall under the category of narrow AI. The distinguishing characteristic of narrow AI is its specificity; narrow AI systems are built and trained to excel in a designated area, and their capabilities are typically focused on addressing specific problems. General AI, with its ability to generalize and perform a wide range of tasks, remains a goal that researchers and developers are hoping to eventually achieve.

Originally posted here:

What is Augmented Intelligence? Explanation and Examples - Techopedia

Posted in Artificial General Intelligence | Comments Off on What is Augmented Intelligence? Explanation and Examples – Techopedia

‘Godfather’ of AI is now having second thoughts – The B.C. Catholic

Posted: at 12:55 am

Until a few weeks ago, British-born Canadian university professor Geoffrey Hinton was little known outside academic circles. His profile became somewhat more prominent in 2019 when he was a co-winner of the A. M. Turing Award, more commonly known as "the Nobel Prize for computing."

However, it is events of the past month or so that have made Hinton a bit of a household name, after he stepped down from an influential role at Google.

Hinton's life's work, particularly that in computing at the University of Toronto, has been deemed groundbreaking and revolutionary in the field of artificial intelligence (AI). Anyone reading this column will surely have encountered numerous pieces on AI in recent months, be it on TV, through radio, or in print, physical and digital. AI applications such as the large language model ChatGPT have completely altered the digital landscape in ways unimaginable even a year ago.

While at the U of T, Hinton and his graduate students made major advances in deep neural networks, speech recognition, the classification of objects, and deep learning. Some of this work morphed into a technology startup which captured the attention of Google, leading to the acquisition of the business for around $44 million a decade ago.

Eventually, Hinton became a Google vice-president, in charge of running the California company's Toronto AI lab. Leaving that position recently, at the age of 75, led to speculation, particularly in a New York Times interview, that he did so in order to criticize or attack his former employer.

Not so, said Hinton in a tweet. Besides his age being a factor, he suggested he wanted to be free to speak about the dangers of AI, irrespective of Google's involvement in the burgeoning field. Indeed, Hinton noted in his tweet that in his view Google had "acted very responsibly."

Underscoring his view of Google's public AI work may be the company's slow response to the adoption of Microsoft-backed ChatGPT in its various incarnations. Google's initial public AI product, Bard, appeared months after ChatGPT began its meteoric adoption in early December. It did not gain much traction at the outset.

In recent weeks we've seen news stories of large employers such as IBM serving notice that about 7,000 positions would be replaced by AI bots such as specialized versions of ChatGPT. We've also seen stories about individuals turning over significant aspects of their day-to-day lives to such bots. One person gained particular attention for giving all his financial, email, and other records to a specialized AI bot with a view to having it find $10,000 in savings and refunds through automated actions.

Perhaps it is these sorts of things that are giving Hinton pause as he looks back at his life's work. In the NYT interview, he uses expressions such as "It is hard to see how you can prevent the bad actors from using it for bad things" and "Most people will not be able to know what is true anymore," the latter in reaction to AI-created photos, videos, and audio depicting objects or events that didn't occur.

"Right now, they are not more intelligent than us, as far as I can tell. But they soon may be," said Hinton, speaking to the BBC about AI machines. He went on to add: "I've come to the conclusion that the kind of intelligence we are developing (via AI) is very different from the intelligence we have."

Hinton went on to note how biological systems (i.e. people) are different from digital systems. The latter, he notes, have many copies of the same set of weights and the same model of the world, and while these copies can learn separately, they can share new knowledge instantly.
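A toy sketch of the contrast Hinton is drawing (an illustration of the idea only, not his experiments, with made-up weight values): digital copies that share one set of weights can propagate whatever one copy learns to every other copy instantly.

```python
# Two digital "copies" start from the same set of weights (values are
# hypothetical). When one copy learns, its update can be cloned to the
# other instantly, something biological brains cannot do.
copy_a = {"w1": 0.10, "w2": -0.30}
copy_b = dict(copy_a)

def learn(weights, gradient, lr=0.1):
    """One gradient step on one copy (a stand-in for 'learning separately')."""
    return {k: w - lr * gradient[k] for k, w in weights.items()}

copy_a = learn(copy_a, {"w1": 0.5, "w2": -0.2})  # copy A learns something new
copy_b = dict(copy_a)                            # ...and shares it instantly
print(copy_a == copy_b)                          # True: knowledge transferred
```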

In a somewhat enigmatic tweet on March 14, Hinton wrote: "Caterpillars extract nutrients which are then converted into butterflies. People have extracted billions of nuggets of understanding and GPT-4 is humanity's butterfly."

Hinton spent the first week of May correcting various lines from interviews he gave to prominent news outlets. He took particular issue with a CBC online headline: "Canada's AI pioneer Geoffrey Hinton says AI could wipe out humans. In the meantime, there's money to be made." In a tweet he said: "The second sentence was said by a journalist, not me, but you wouldn't know that."

Whether or not the race to a God-like form of artificial intelligence fully materializes, AI is already being placed alongside climate change and nuclear war in a trio of existential threats to human life. Climate change is being broadly tackled by most nations, and nuclear weapons use has been effectively stifled by the notion of mutually assured destruction. Perhaps artificial general intelligence needs a similar global focus for regulation and management.

Follow me on Facebook (facebook.com/PeterVogelCA), or on Twitter (@PeterVogel)


Go here to read the rest:

'Godfather' of AI is now having second thoughts - The B.C. Catholic

Posted in Artificial General Intelligence | Comments Off on ‘Godfather’ of AI is now having second thoughts – The B.C. Catholic

Top Philippine universities – Philstar.com

Posted: at 12:55 am

Late 2022 saw the official launch of ChatGPT, the latest innovation in artificial intelligence (AI). GPT stands for Generative Pre-trained Transformer. This innovation is believed to be a major turning point in computing technology. It is a chatbot powered by a large language model (LLM) made by OpenAI, a start-up company closely linked to Microsoft.

However, the unbelievably rapid success of this chatbot has prompted Google and other tech firms to release their own LLM-powered chatbots. These systems are said to have the capability to automatically generate texts like essays and presentations, and to hold realistic discussions, because they have been trained using millions of words taken from the internet. The latest trend among technology firms is to fine-tune these LLMs.

According to a recent article in The Economist: "It will make access to AI much cheaper, accelerate innovation across the field and make it easier for researchers to analyze the behavior of AI systems, boosting transparency and safety."

A recent 155-page research paper from researchers at Microsoft claims that AI technology now has the ability to understand and reason the way people do. This new system is considered to be a step towards artificial general intelligence, or AGI, which is shorthand for a machine that can do anything the human brain can do. This has now sparked a debate in the world of technology on whether it is possible to build something akin to human intelligence.

Microsoft's research paper is called "Sparks of Artificial General Intelligence." Some believe that the industry is now slowly inching towards developing a new AI system that comes up with human-like answers and ideas that were not programmed into it.

All these recent technological developments have led some scientists to predict that AI will lead to the elimination of many jobs that require simple skills. It seems that in the future world, people will need a higher level of education to cope with all these technological developments.


If education becomes the key to progressing in this world of AI, data science, ChatGPT, and other technologies yet to be invented, the question is: is the Philippines ready for these challenges?

The best way so far is to survey the quality of Philippine higher education, especially the universities. EduRank is an independent metric-based ranking of 14,131 universities from 183 countries. It uses a proprietary database with an index of 44,909,300 scientific publications and 1,237,541,960 citations to rank universities across 246 research topics. In its overall global rankings, EduRank adds non-academic prominence and alumni popularity indicators. Its most recent rankings were updated as of May 18, 2022.

EduRank's overall rankings consist of three parts: 45 percent research performance; 45 percent non-academic prominence; 10 percent alumni score.
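As a worked example of that weighting (the component scores below are made up for illustration; EduRank's internal scores are not published here), the composite is a simple weighted sum:

```python
# EduRank's stated weighting: 45% research performance, 45% non-academic
# prominence, 10% alumni score. Component scores below are hypothetical.
WEIGHTS = {"research": 0.45, "prominence": 0.45, "alumni": 0.10}

def composite(scores):
    """Weighted sum of component scores on a common 0-100 scale."""
    return sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)

print(composite({"research": 80, "prominence": 60, "alumni": 70}))
# 0.45*80 + 0.45*60 + 0.10*70 = 70.0
```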

With a population of 110 million, the Philippines did not have a single university in the top 1,000 universities in the world. Among the top 500 universities in Asia, there were only two Philippine-based universities: University of the Philippines-Diliman and De La Salle University-Manila.

The Philippines must ensure that its basic education is at least competitive on an Asia-wide scale. However, if the country is to become globally competitive in the new technology-driven world, it must address the current deplorable state of university-level education.

Here is EduRank's ranking of the top 20 universities in the Philippines and their respective ranks in Asia and the world.

1. University of the Philippines-Diliman (#378 in Asia, #1,380 in the world)

2. De La Salle University-Manila (#462 in Asia, #1,643 in the world)

3. Ateneo de Manila University (#545 in Asia, #1,878 in the world)

4. University of Santo Tomas-Manila (#781 in Asia, #2,575 in the world)

5. University of the Philippines-Manila (#814 in Asia, #2,686 in the world)

6. University of the Philippines-Los Baños (#878 in Asia, #2,819 in the world)

7. University of San Carlos-Cebu (#1,280 in Asia, #3,838 in the world)

8. Mindanao State University-Marawi (#1,528 in Asia, #4,407 in the world)

9. Asian Institute of Management-Makati (#1,536 in Asia, #4,424 in the world)

10. Mapua University-Manila (#1,571 in Asia, #4,512 in the world)

11. University of the East-Philippines-Manila (#1,778 in Asia, #4,979 in the world)

12. Silliman University-Dumaguete City (#1,798 in Asia, #5,019 in the world)

13. University of Asia and the Pacific- Pasig (#1,953 in Asia, #5,350 in the world)

14. Visayas State University-Baybay (#1,979 in Asia, #5,426 in the world)

15. Far Eastern University-Philippines-Manila (#2,077 in Asia, #5,660 in the world)

16. De La Salle-College of Saint Benilde-Manila (#2,110 in Asia, #5,736 in the world)

17. University of the Philippines in the Visayas-Iloilo (#2,132 in Asia, #5,788 in the world)

18. Central Luzon State University-Science City of Muñoz (#2,149 in Asia, #5,820 in the world)

19. Adamson University-Manila (#2,184 in Asia, #5,898 in the world)

20. Polytechnic University of the Philippines-Manila (#2,219 in Asia, #5,977 in the world)

* * *

Write Things Writefest 2023 is ongoing. The June schedule is being drawn up. Email writethingsph@gmail.com or call 0945 2273216 for details.

Email: elfrencruz@gmail.com

See the rest here:

Top Philippine universities - Philstar.com

Posted in Artificial General Intelligence | Comments Off on Top Philippine universities – Philstar.com

The Politics of Artificial Intelligence (AI) – National and New Jersey … – InsiderNJ

Posted: at 12:55 am

On May 27, former Secretary of State Henry Kissinger will attain the age of 100. Over the last few months, I have been involved in authoring an historical essay, "Kissinger at 100: His Complex Historical Legacy."

The essay is scheduled to be published around the time of Kissinger's birthday by the Jandoli Institute, the public policy center for the Jandoli School of Communication at St. Bonaventure University. The institute's executive director is Rich Lee, a former State House reporter who also served as Deputy Communication Director for former Governor Jim McGreevey. I will also be developing a podcast regarding my essay.

For me, this project is truly a career capstone, utilizing all my analytic skills developed over a lifetime. This includes, inter alia, my studies as a political science honors scholar as a Northwestern University undergraduate, my service as a Navy officer, my years as a corporate and private practice attorney, my career as a public official, including my leadership of two major federal and state agencies, my accomplishments as a college professor, and my most recent post-retirement career as an opinion journalist.

Whether one is an admirer or critic of Dr. Henry Kissinger, there is no question that he has been a transformative figure, with a greater impact on American history than any 20th century American other than our presidents. Researching his life and career is truly a Sisyphean endeavor.

Kissinger has authored thirteen books and a plethora of articles, and has made numerous media appearances. In jocular fashion, I have told friends and family members that researching Henry Kissinger is like studying the Torah: you never finish it!

So about a month ago, I thought that I had finished all my Kissinger research, until I had the good fortune to meet with a friend of mine who also, unbeknownst to me, was a friend of Henry Kissinger. When I informed him of my Kissinger project, he proceeded to display for me on his iPhone numerous photos of him and the legendary Dr. K!

Then, he asked me what my research sources were. I proudly told him the list of my readings, videotape viewings, and interviews. He responded by saying, "Very good, but you have a critical omission. You did not read the book The Age of AI (artificial intelligence) and Our Human Future."

The book was co-authored by Henry Kissinger, Eric Schmidt, former CEO of Google, and Daniel Huttenlocher, the Inaugural Dean of the MIT Schwarzman College of Computing. For ease of reference, and with all due respect to his co-authors, I will refer to this work as the Kissinger AI book.

I told my friend that I was aware of the book, but I had chosen not to include it in my essay because of my focus on Kissinger as a foreign policy maker and diplomat. My friend, however, admonished me: "You do not understand. For Henry, his involvement with AI is a legacy item."

So I immediately ordered the book. My friend was correct. The Kissinger AI book should be a must-read for high governmental officials, New Jersey and federal. Every New Jersey cabinet member and authority executive director should have this book on his or her desk.

Within the last month, AI has become a growing arena of national focus, sparked in large part by the resignation of Dr. Geoffrey Hinton from his job at Google. Dr. Hinton is known as the Godfather of AI. He resigned so he can freely speak out about the risks of AI. A part of him, he said, now regrets his life's work.

In New Jersey, late last year, a bill was introduced in the Assembly, A4909, which would mandate that employers could use only hiring software that has been subjected to a bias audit, which looks for any patterns of discrimination. It would require annual reviews of whether programs comply with state law.

The bill was generated because of increasing concern that a growing number of AI systems had either a gender, racial, or disability bias. As an example, Reuters reported in 2018 that Amazon had stopped using an AI recruiting tool because it penalized applicants with resumes that referred to women's activities or degrees from two all-women's colleges.

In February, NorthJersey.com journalist Daniel Munoz authored a comprehensive column dealing with AI and its potential dangers and biases in the hiring process. Included in the column was an interview with Assemblywoman Sadaf Jaffer (D-Mercer), a prime sponsor of this legislation.

It should be noted that the Kissinger AI book strongly recommends the auditing of AI systems by humans, rather than self-auditing by the machines themselves. Human auditing can both increase the effectiveness of the AI and mitigate its dangers.

And today, on Twitter, Assembly Majority Leader Lou Greenwald (D-Camden) stated as follows: "The power that Artificial Intelligence possesses makes it a potentially dangerous tool for people looking to spread misinformation. This is why I will be introducing legislation that looks to limit the harmful uses it has on election campaigns."

The beneficial effects of AI are real, as are the dangers. The politics of AI is the subject of increasing focus at both the national and New Jersey level.

The Kissinger AI book is highly relevant to all AI issues, both federal and state. The three-fold focus of the book makes it an indispensable basic guide to AI politics.

First, it gives a concise, contextual definition of AI. Second, it describes in depth the potential benefits and dangers of AI. Third, it proposes some initial solutions to deal with the emerging negative impacts of AI.

In terms of contextual definition, the Kissinger AI book describes two empirical tests of what constitutes AI.

The first is the Alan Turing test, stating that if a software process enabled a machine to operate so proficiently that observers could not distinguish its behavior from a human's, the machine should be labeled intelligent.

Second is the John McCarthy test, defining AI as "machines that can perform tasks that are characteristic of human intelligence."

The Kissinger AI book also describes the impact of AI on the reasoning process, so integral to decision making. The three components of reason are information, knowledge, and wisdom. When information becomes contextualized, it leads to knowledge. When knowledge leads to conviction, it becomes wisdom. Yet AI is without the reflection and self-awareness qualities that are essential to wisdom.

This lack of wisdom, combined with three essential features of AI, magnifies its enormous danger in certain situations: 1) its use for both warlike and peaceful purposes; 2) its massive destructive force; and 3) its capacity to be deployed and spread easily, quickly, and widely.

The most alarming feature of AI is on the horizon: the arrival of artificial general intelligence (AGI). This means AI capable of completing any intellectual task humans are capable of, in contrast to today's narrow AI, which is developed to complete a specific task.

It is the growing capacity of unsupervised self-learning by AI systems that is facilitating the potential arrival of AGI. With AGI comes autonomy, and autonomy in weapons systems increases the potential for accidental war.

The potential of AI leading to accidental war, along with the two above-mentioned dangers publicized in New Jersey, AI-generated job discrimination and political disinformation, will be the negative aspects of AI that receive the most focus in the forthcoming debate.

Yet AI is not without its extremely beneficial uses, most notably in the development of new prescription drugs. So the obvious task of government, federal and state, is to filter out the dangers and facilitate the beneficial uses.

As a first step, the Kissinger AI book recommends that new national governmental authorities be created with two objectives: 1) America must remain intellectually and strategically competitive in AI; and 2) studies should be undertaken to assess the cultural implications of AI.

In New Jersey, the best way to governmentally meet this challenge would be to create a new cabinet-level Department of Science, Innovation, and Technology.

We currently have in New Jersey the Commission on Science, Innovation and Technology, which with limited funding does a most commendable job in fulfilling its mission, namely: "Responsibility for strengthening the innovation economy within the State, encouraging collaboration and connectivity between industry and academia, and the translation of innovations into successful high growth businesses."

A Department of Science, Innovation, and Technology would have three additional powers: 1) regulatory powers regarding auditing, self-learning, and AGI; 2) the ability to commission more in-depth studies regarding AI's cultural impact; and 3) the ability to coordinate scientific policy throughout the executive branch. Obviously, an increased level of funding would be necessary to execute these three functions.

I also have a recommendation for the first New Jersey Commissioner of Science, Innovation, and Technology: State Senator Andrew Zwicker (D-Middlesex). His brilliance and competence as a scientist, as demonstrated by his service at the Princeton Plasma Physics Laboratory, and his proven integrity and ethics in state government make him an ideal candidate for this role.

And to Henry Kissinger, my fellow Jew, I say to you: Mazal Tov on your 100th birthday! And like Moses in the Torah, may you live at least 120 years!

Alan J. Steinberg served as regional administrator of Region 2 EPA during the administration of former President George W. Bush and as executive director of the New Jersey Meadowlands Commission.


See original here:

The Politics of Artificial Intelligence (AI) - National and New Jersey ... - InsiderNJ

Posted in Artificial General Intelligence | Comments Off on The Politics of Artificial Intelligence (AI) – National and New Jersey … – InsiderNJ

People warned AI is becoming like a God and a ‘catastrophe’ is … – UNILAD

Posted: at 12:55 am

An artificial intelligence investor has warned that humanity may need to hit the brakes on AI development, claiming it's becoming 'God-like' and that it could cause 'catastrophe' for us in the not-so-distant future.

Ian Hogarth - who has invested in over 50 AI companies - made an ominous statement on how the constant pursuit of increasingly-smart machines could spell disaster in an essay for the Financial Times.

The AI investor and author claims that researchers are foggy on what's to come and have no real plan for a technology with that level of knowledge.

"They are running towards a finish line without an understanding of what lies on the other side," he warned.

Hogarth shared what he'd recently been told by a machine-learning researcher: that 'from now onwards' we are on the verge of artificial general intelligence (AGI) coming to the fore.

AGI has been defined as an autonomous system that can learn to accomplish any intellectual task that human beings can perform, and surpass human capabilities.

Hogarth, co-founder of Plural Platform, said that not everyone agrees AGI is imminent; rather, 'estimates range from a decade to half a century or more' for it to arrive.

However, he noted the tension between companies that are frantically trying to advance AI's capabilities and machine learning experts who fear the end point.

The AI investor also explained that he feared for his four-year-old son and what these massive advances in AI technology might mean for him.

He said: "I gradually shifted from shock to anger.

"It felt deeply wrong that consequential decisions potentially affecting every life on Earth could be made by a small group of private companies without democratic oversight."

When considering whether the people in the AGI race were planning to 'slow down' to 'let the rest of the world have a say', Hogarth admitted that it's morphed into a 'them' versus 'us' situation.

Having been a prolific investor in AI startups, he also confessed to feeling 'part of this community'.

Hogarth's descriptions of the potential power of AGI were terrifying as he declared: "A three-letter acronym doesn't capture the enormity of what AGI would represent, so I will refer to it as what it is: God-like AI."

Hogarth described it as 'a superintelligent computer that learns and develops autonomously, that understands its environment without the need for supervision and that can transform the world around it'.

But even with this knowledge, and despite the fact that it's still on the horizon, he warned that we have no idea of the challenges we'll face, and that the 'nature of the technology means it is exceptionally difficult to predict exactly when we will get there'.

"God-like AI could be a force beyond our control or understanding, and one that could usher in the obsolescence or destruction of the human race," the investor said.

Despite a career spent investing in and supporting the advancement of AI, Hogarth explained that what made him pause for thought was the fact that 'the contest between a few companies to create God-like AI has rapidly accelerated'.

He continued: "They do not yet know how to pursue their aim safely and have no oversight."

Hogarth still plans to invest in startups that pursue AI responsibly, but explained that the race shows no signs of slowing down.

"Unfortunately, I think the race will continue," he said.

"It will likely take a major misuse event - a catastrophe - to wake up the public and governments."

Read the rest here:

People warned AI is becoming like a God and a 'catastrophe' is ... - UNILAD

Posted in Artificial General Intelligence | Comments Off on People warned AI is becoming like a God and a ‘catastrophe’ is … – UNILAD

"Zero tolerance" for hallucinations – Dr. Vishal Sikka on how Vianai builds AI applications, and the mixed emotions of the AI hype cycle -…

Posted: at 12:55 am

(Dr. Vishal Sikka, CEO and Founder, Vianai)

Dr. Vishal Sikka is no stranger to AI. He might be the founder and CEO of a trending AI startup (Vianai), but he's been at the AI game a long time. I often met with Sikka when he was CTO of SAP; even back then he would extol the potential of enterprise AI - outside of any frenzied hype cycle.

So, during our recent video call, I asked Sikka: surely he must have conflicting emotions? Something Sikka has worked on for so long is now a hype-drenched tech bandwagon, full of opportunists and techno-gimmickry. "Mixed emotions," admits Sikka. He remembers the roots of all this:

It's different emotions every day. It's like a barrage. I started in AI with natural language and NLU - Natural Language Understanding - was the phrase we used back then. I was 17. I started working on some basic techniques of natural language understanding. I had an idea that I wrote to a professor at MIT about; I was still in India at that time. That was Marvin Minsky, who was one of the fathers of AI. I got a chance to hang out with him when I was still an undergraduate student. He wrote my recommendation letter for my admission to Stanford for my PhD.

Fast forward to today: AI suddenly has massive technical (and cultural) momentum. Sikka found himself signing on to an important, controversial - and, I would argue, widely misunderstood - letter warning of the dangers of AI, and urging that infamous pause. No, not a pause in AI innovation, but a pause in Large Language Model (LLM) expansion. As Sikka told me:

I'm one of the signers of that letter. Stuart Russell was one of my academic siblings. We have the same PhD advisor; he was one of the main authors. He asked me to sign it. We were not advocating stopping AI research. This is critically important. We were talking about putting a pause on models bigger than GPT4 for six months, so that people who regulate have an opportunity to catch up to what is going on. The risks of the AI today are, to me, very worrisome.

Cue Sikka's mixed emotions. The risks of AI are significant, but so is the potential for life-changing applications. Via his work with Vianai, Sikka lives that every day:

On the positive side, it's an incredibly powerful technology; there is so much that can be done with it. It can be really transformational in how we work, and what we work on. Just this morning, somebody released a thing called RedPajama. It's a large language model that was released in open source, with an open data set and things like that. So the openness that people have to experiment, to tinker and to try things, the speed at which this is happening - there are incredibly exciting applications you can build with it.

But the enterprise is a different matter - with a much more discerning risk profile. Enterprise leaders are understandably wary of the risks of this particular AI hype cycle, just as they were with the Metaverse and blockchain before that. But they also want to study the use cases. They want a much better handle on what's possible today, and what's been overheated by marketing departments. LLM adoption is absolutely forcing the question.

So, what's possible today? Let's start with hila, Vianai's newly-announced generative AI solution, billed as "Your new financial research assistant. hila can help you quickly find answers in earnings transcripts, and we're adding new data all the time." Sikka:

hila is a tool for investment. It has three components, and you should try it yourself. Just go to hila.ai and sign up. [hila has] zero tolerance for hallucination. There are plenty of tools like this in light of ChatGPT. But what we worked really hard on is to make sure that these things don't hallucinate.

There are a few scenarios within hila. There is an ability to ask questions to earnings calls, and to 10-Ks and 10-Qs, documents like that, for public companies. It also has a data set that you can query. This is a SQL-type querying of structured data using natural language.

Jake Klein, CEO of Dealtale (a Vianai company), was helping Sikka pursue this type of functionality 18 years ago while they were both at SAP. "But today, we can finally do this kind of thing," says Sikka. And the second scenario?

We also have a [feature] where you can upload your own document, and ask questions to it. But in all cases, the safety, the correctness, the zero tolerance for nonsense is the hallmark. As for Dealtale, our team did a lot of work in a particular area of AI called causality, which is around cause and effect relationships.

Causation versus correlation has given data scientists headaches for a long time:

Basically, there is causation, but there is also correlation. Generally, when we build a model from data, it is difficult to make a distinction between these. Sometimes people confuse correlation with causality and causation.

Our team did a lot of pioneering work in that area, where you could say, 'Hey, I put this offer in front of John, based on the behavior of people in his cohort, is he likely to click on this or not?' Coming up with a causal understanding of that, and what are the phenomenon that caused that behavior is what we were working on. So we acquired Dealtale as a way to collect the data on which we could build these causal models, and causal graphs.
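A toy numerical illustration of the distinction Sikka describes (a generic example, not Dealtale's models): two variables driven by a common cause can be strongly correlated even though neither causes the other.

```python
import random
from statistics import correlation  # Python 3.10+

random.seed(42)

# A common cause (temperature) drives two otherwise unrelated effects.
temperature = [random.uniform(10, 35) for _ in range(500)]
ice_cream_sales = [3.0 * t + random.gauss(0, 5) for t in temperature]
swim_accidents = [0.5 * t + random.gauss(0, 2) for t in temperature]

# The two effects correlate strongly (roughly 0.85 here)...
print(round(correlation(ice_cream_sales, swim_accidents), 2))

# ...but neither causes the other: holding temperature fixed, the
# association vanishes. A causal model must encode that structure;
# a correlation alone cannot tell cause from common cause.
```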

Then OpenAI released a little thing called ChatGPT, and inspired the Vianai team. Thus the release of another app on the Vianai platform, Dealtale IQ. Sikka:

This product, Dealtale IQ, that we launched a couple months ago, basically gives marketing analysts the ability to ask any question they can think of - any question, not only the four or five scenarios that we had around causality.

As always with AI, the caliber and comprehensiveness of your data makes the difference:

On top of this data, [we pull in data] from systems like Salesforce, Marketo, Microsoft Dynamics, Google ads, Facebook ads - there are 19 or 20 systems like that. We built this single view of the customer from their customer engagements.

It's generally for smaller companies - companies that are 100% digital in nature. HubSpot is one of these systems that we read from. And now marketing analysts have the ability to ask any question they can think of. Customers just absolutely love it.

We couldn't get into this without getting into coding. Sikka has managed some pretty big development organizations over the years. Like several other noted enterprise technologists I've spoken with, Sikka sees generative AI as particularly well-suited - and perhaps disruptive - for programming:

One interesting consequence of this large language model technology is that it is particularly effective at coding. If you have the right brackets in place, the right safeguards, guardrails, etc., you can actually do a very good job of generating SQL, generating JSON, and even just generating code.

Where we are headed is: we can make the entire application dynamic; we can make the entire application virtual, and replace that with human language. You will see us make announcements on that front in the next weeks and months.
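What those "safeguards and guardrails" might look like in miniature: the sketch below is an assumption on my part, not Vianai's code, but it shows the basic idea of validating model output before anything downstream consumes it, rejecting JSON that doesn't parse and SQL that isn't a single read-only SELECT.

```python
import json
import re

def check_json(llm_output: str):
    """Guardrail 1: generated JSON must actually parse, or it is rejected."""
    try:
        return json.loads(llm_output)
    except json.JSONDecodeError:
        return None  # reject rather than pass garbage downstream

# Guardrail 2: generated SQL must be a single read-only SELECT statement.
FORBIDDEN = re.compile(r"\b(insert|update|delete|drop|alter|grant)\b", re.I)

def check_sql(llm_output: str) -> bool:
    sql = llm_output.strip().rstrip(";")
    return (sql.lower().startswith("select")
            and ";" not in sql
            and not FORBIDDEN.search(sql))

print(check_sql("SELECT revenue FROM earnings WHERE quarter = 'Q2'"))  # True
print(check_sql("DROP TABLE earnings"))                                # False
```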

I have an axe to grind over the technical limitations of LLMs, which I believe are being overlooked. I got Sikka's take on that; I'll share that in my next installment. But there's no denying the level of adoption these tools have achieved.

Some assumed that the overriding concern in the aforementioned "AI pause" letter was about the pending risk of Artificial General Intelligence (we are nowhere near that), and the sensationalized risk of mass unemployment - if/when AI becomes sophisticated at human-level problem solving and cognitive functions. I don't believe we're anywhere near that either, due to the technical limitations of generative AI. So what was the purpose of the letter, then?

It really depends on the person - there is no one correct view on this. Many who signed put aside big differences - both with each other, and on the future of AI. But some of those who signed, such as my former college classmate Gary Marcus, an AI expert in his own right, did so because they believe even with generative AI's current limitations, it's still an incredibly powerful tool - wide open to misuse, unintended consequences and/or exploitation by bad actors. As Sikka told me:

You probably saw the news of the 40,000 chemicals that were composed using these things. Each of those 40,000 is at the same level of lethality as VX, one of the most lethal chemical compounds known to humanity... And that's just one example.

The regulatory environment is too far behind to hedge this properly. Sikka, with his firsthand role in how AI has evolved, is exactly the type of voice this conversation needs.

But there is also the fascinating upside, as people experiment with ChatGPT across a range of use cases, including their own productivity. Sikka has already integrated GPT into his personal workflows:

One great use of this is to prompt you, and get you out of a rut... I had to write something last night, and I was just too tired. I went to ChatGPT and I said, 'Hey, write me a draft of a letter.' Now when I finished writing that letter, not one sentence in that letter was from ChatGPT. But it got me started. And it gave me ideas. And it gave me a frame and it got me going in the end.

Given Sikka's passion for education, I know he is concerned about the impact of this technology on junior roles, now that AI can fulfill a bigger chunk of the routine tasks: whether it's junior programmers, junior contract writers, or junior resource analysts, Sikka sees big changes coming.

You and I have talked about this before: the burden to educate. It is now even stronger than it has ever been.

Aspiring professionals need a viable path forward. I see that as a major shift in both educational requirements - and our approach to professional mentorship. But as Sikka points out, attitudes have to change also:

If those junior analysts don't want to learn what these things can do; if they don't want to use these in their world, then they will be disrupted. If they do use these things, they become far more productive, and they become much more effective as employees.

Plenty to think about - and new apps to try. Sikka also has advice for enterprises looking hard at AI apps and opportunities. I'll get into that in part two next week.

Read more from the original source:

"Zero tolerance" for hallucinations - Dr. Vishal Sikka on how Vianai builds AI applications, and the mixed emotions of the AI hype cycle -...

Posted in Artificial General Intelligence | Comments Off on "Zero tolerance" for hallucinations – Dr. Vishal Sikka on how Vianai builds AI applications, and the mixed emotions of the AI hype cycle -…
