The Prometheus League
Breaking News and Updates
Daily Archives: June 6, 2024
AI Ambassadors: 3 Stocks Bridging the Gap Between Humanity and Machine – InvestorPlace
Posted: June 6, 2024 at 8:48 am
These companies are building the future of AI and the world we'll inhabit in coming years
Many people are predicting that computers will be smarter than their human creators as early as 2030. This milestone, known as artificial general intelligence (AGI), would be the point at which humans are no longer the most intelligent entities on the planet. Analysts and scientists say that within a decade, machines could be making decisions independent of human control. This is a brave new world with the potential for both positive and negative outcomes.
Regardless of how things shake out, there's no question that artificial intelligence (AI) will shape our collective future; it is fast emerging as the dominant technology of our time. Certain companies are leading the charge, pushing the frontiers of AI and working to infuse it into all kinds of machines and technologies. Here are three AI ambassadors: stocks bridging the gap between humanity and machine.
Adobe (NASDAQ:ADBE), the software company behind popular creative products such as Photoshop and Illustrator, is adding AI wherever it can. In recent months, Adobe has launched an AI assistant for both its Reader and Acrobat products. Management has called AI "the new digital camera." The company leads in finding practical ways for AI to help humans enhance their creativity.
AI has proven to be a bit of a double-edged sword for Adobe. ADBE stock has been hurt by news that privately held OpenAI has developed Sora, an AI platform that can generate videos from written descriptions, similar to some Adobe products. The stock has also been dinged by news that Adobe canceled its planned $20 billion acquisition of design software start-up Figma and had to pay a $1 billion termination fee.
Despite near-term headwinds, Adobe stock should be fine in the long run. Investors might want to take advantage of the fact that ADBE stock is down 23% year to date.
Few companies are doing as much to enable the AI revolution as Taiwan Semiconductor Manufacturing Co. (NYSE:TSM). The microchip foundry currently makes about three-quarters of all the chips and semiconductors used in the world today. Most AI applications and models run on chips produced by TSMC, as the company is known. It is a highly specialized business that is red hot right now.
TSMC's services are in such demand that the U.S. government has given the company $6.6 billion to fund the construction of microchip fabrication plants in Arizona. TSMC is spending $65 billion to build three cutting-edge fabrication plants in Phoenix. The plants are expected to be operational in 2027 and will supply microchips to customers such as Nvidia (NASDAQ:NVDA) and Apple (NASDAQ:AAPL).
TSM stock has risen 50% so far in 2024.
As its main electric vehicle manufacturing business struggles, Tesla (NASDAQ:TSLA) is pivoting to focus on AI. CEO Elon Musk has pledged to spend $10 billion this year alone on AI and has shifted the company's focus to both AI and robotics, with apparent plans to combine the two in the future. The company's projects in this vein include a humanoid robot called Optimus and a supercomputer called Dojo.
Additionally, Musk has launched xAI, a separate company that aims to use AI to advance our collective understanding of the universe. Musk has said that xAI aims to compete with OpenAI and its various chatbots. Given the continued decline in Tesla's sales and market share, a pivot to AI, supercomputers and humanoid robots might be the company's future.
Musk's enthusiasm for electric vehicles appears to be waning along with the public's interest; he seems much more focused on AI. TSLA stock has declined 28% on the year.
On the date of publication, Joel Baglole held a long position in NVDA. The opinions expressed in this article are those of the writer, subject to the InvestorPlace.com Publishing Guidelines.
Joel Baglole has been a business journalist for 20 years. He spent five years as a staff reporter at The Wall Street Journal, and has also written for The Washington Post and Toronto Star newspapers, as well as financial websites such as The Motley Fool and Investopedia.
AGI in Less Than 5 Years, Says Former OpenAI Employee – 99Bitcoins
Dig into the latest AI crypto news as we explore the future of artificial general intelligence (AGI) and its potential to surpass human abilities, according to Leopold Aschenbrenner's essay.
AGI in less than 5 years. How do you intend to spend your last few years alive? (kidding).
The internet's on fire after former OpenAI safety researcher Leopold Aschenbrenner unleashed "Situational Awareness," a no-holds-barred essay series on the future of artificial general intelligence.
It is 165 pages long and fresh as of June 4. It examines where AI stands now and where it's headed.
In some ways, this is: LINE GOES UP OOOOOOOOOOH ITS HAPPENING ITS HAPPENING.
In some ways, it is reminiscent of an old Simpsons joke.
But Aschenbrenner envisions AGI systems becoming smarter than you or me by the decade's end, ushering in an era of true superintelligence. Alongside this rapid advancement, he warns of national security implications of a magnitude not seen in decades.
"AGI by 2027 is strikingly plausible," Aschenbrenner claims, suggesting that AGI machines will outperform college graduates by 2025 or 2026. To put this in perspective: suppose GPT-4 training took three months. In 2027, a leading AI lab will be able to train a GPT-4-level model in a minute.
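The scale of that claim is easy to quantify with back-of-the-envelope arithmetic (the three-month figure is the essay's hypothetical, not a confirmed training time):

```python
# Back-of-the-envelope arithmetic for the essay's hypothetical:
# a model trained in 3 months today, trained in 1 minute by 2027.
MINUTES_PER_DAY = 24 * 60    # 1,440
DAYS_PER_MONTH = 30          # rough average month length

training_minutes = 3 * DAYS_PER_MONTH * MINUTES_PER_DAY
speedup = training_minutes / 1  # target duration: one minute

print(f"Implied effective speedup: ~{speedup:,.0f}x")  # ~129,600x
```

That works out to roughly five orders of magnitude of effective training efficiency gained in about three years.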
Aschenbrenner urges the AI community to adopt what he terms "AGI realism," a viewpoint grounded in three core principles related to national security and AI development in the U.S.
He argues that the industrys smartest minds, like Ilya Sutskever, who famously failed to unseat CEO Sam Altman in 2023, are converging on this perspective, acknowledging the imminent reality of AGI.
Aschenbrenner's latest insights follow his controversial exit from OpenAI amid accusations of leaking information.
On Tuesday, more than a dozen staffers from AI heavyweights OpenAI, Anthropic and Google's DeepMind raised red flags about AGI.
Their open letter cautions that without extra protections, AI might become an existential threat.
"We believe in the potential of AI technology to deliver unprecedented benefits to humanity," the letter states. "We also understand the serious risks posed by these technologies. These risks range from the further entrenchment of existing inequalities, to manipulation and misinformation, to the loss of control of autonomous AI systems potentially resulting in human extinction."
The letter takes aim at AI giants for dodging oversight in favor of fat profits. DeepMind's Neel Nanda was the only internal researcher to break ranks and endorse the letter.
AI is quickly becoming a battleground, but the message of the letter is simple: Don't punish employees for speaking out on AI dangers.
On the one hand, it can be scary to think that human creativity and the boundaries of thought are being closed in by politically correct code monkeys tinkering with matrix multiplication.
On the other, the power of artificial intelligence is currently incomprehensible because it is unlike anything we have understood before.
It could be a revolution, much as when early man first struck a spark or spun a stone wheel: one moment it didn't exist, and the next it changed the face of humanity. We'll see.
Disclaimer: Crypto is a high-risk asset class. This article is provided for informational purposes and does not constitute investment advice. You could lose all of your capital.
Opinion | Will A.I. Be a Creator or a Destroyer of Worlds? – The New York Times
The advent of A.I. (artificial intelligence) is spurring curiosity and fear. Will A.I. be a creator or a destroyer of worlds?
In "Can We Have Pro-Worker A.I.? Choosing a Path of Machines in Service of Minds," three economists at M.I.T., Daron Acemoglu, David Autor and Simon Johnson, looked at this epochal innovation last year:
The private sector in the United States is currently pursuing a path for generative A.I. that emphasizes automation and the displacement of labor, along with intrusive workplace surveillance. As a result, disruptions could lead to a potential downward cascade in wage levels, as well as inefficient productivity gains.
Before the advent of artificial intelligence, automation was largely limited to blue-collar and office jobs using digital technologies while more complex and better-paying jobs were left untouched because they require flexibility, judgment and common sense.
Now, Acemoglu, Autor and Johnson wrote, A.I. presents a direct threat to those high-skill jobs: "A major focus of A.I. research is to attain human parity in a vast range of cognitive tasks and, more generally, to achieve artificial general intelligence that fully mimics and then surpasses capabilities of the human mind."
The three economists make the case that
There is no guarantee that the transformative capabilities of generative A.I. will be used for the betterment of work or workers. The bias of the tax code, of the private sector generally, and of the technology sector specifically, leans toward automation over augmentation.
But there are also potentially powerful A.I.-based tools that can be used to create new tasks, boosting expertise and productivity across a range of skills. To redirect A.I. development onto the human-complementary path requires changes in the direction of technological innovation, as well as in corporate norms and behavior. This needs to be backed up by the right priorities at the federal level and a broader public understanding of the stakes and the available choices. We know this is a tall order.
"Tall" is an understatement.
In an email elaborating on the A.I. paper, Acemoglu contended that artificial intelligence has the potential to improve employment prospects rather than undermine them:
It is quite possible to leverage generative A.I. as an informational tool that enables various different types of workers to get better at their jobs and perform more complex tasks. If we are able to do this, this would help create good, meaningful jobs, with wage growth potential, and may even reduce inequality. Think of a generative A.I. tool that helps electricians get much better at diagnosing complex problems and troubleshoot them effectively.
This, however, is not where we are heading, Acemoglu continued:
The preoccupation of the tech industry is still automation and more automation, and the monetization of data via digital ads. To turn generative A.I. pro-worker, we need a major course correction, and this is not something that's going to happen by itself.
Acemoglu pointed out that unlike the regional trade shock that decimated manufacturing employment after China entered the World Trade Organization in 2001, "the kinds of tasks impacted by A.I. are much more broadly distributed in the population and also across regions." In other words, A.I. threatens employment at virtually all levels of the economy, including well-paid jobs requiring complex cognitive capabilities.
Four technology specialists, Tyna Eloundou and Pamela Mishkin, both on the staff of OpenAI, along with Sam Manning, a research fellow at the Centre for the Governance of A.I., and Daniel Rock at the University of Pennsylvania, provided a detailed case study of the employment effects of artificial intelligence in their 2023 paper, "GPTs Are GPTs: An Early Look at the Labor Market Impact Potential of Large Language Models."
Can AI ever be smarter than humans? | Context – Context
What's the context?
"Artificial general intelligence" (AGI): the benefits, the risks to security and jobs, and the question of whether it is even possible.
LONDON - When researcher Jan Leike quit his job at OpenAI last month, he warned the tech firm's "safety culture and processes (had) taken a backseat" while it trained its next artificial intelligence model.
He voiced particular concern about the company's goal to develop "artificial general intelligence", a supercharged form of machine learning that it says would be "smarter than humans".
Some industry experts say AGI may be achievable within 20 years, but others say it will take many decades, if it arrives at all.
But what is AGI, how should it be regulated and what effect will it have on people and jobs?
OpenAI defines AGI as a system "generally smarter than humans". Scientists disagree on what this exactly means.
"Narrow" AI includes ChatGPT, which can perform a specific, singular task. This works by pattern matching, akin to putting together a puzzle without understanding what the pieces represent, and without the ability to count or complete logic puzzles.
"The running joke, when I used to work at DeepMind (Google's artificial intelligence research laboratory), was AGI is whatever we don't have yet," Andrew Strait, associate director of the Ada Lovelace Institute, told Context.
IBM has suggested that artificial intelligence would need at least seven critical skills to reach AGI, including visual and auditory perception, making decisions with incomplete information, and creating new ideas and concepts.
Narrow AI is already used in many industries, but it has been responsible for many issues, such as lawyers citing "hallucinated" (made up) legal precedents and recruiters using biased services to check potential employees.
AGI still lacks definition, so experts find it difficult to describe the risks that it might pose.
It is possible that AGI will be better at filtering out bias and incorrect information, but it is also possible new problems will arise.
One "very serious risk", Strait said, was an over-reliance on the new systems, "particularly as they start to mediate more sensitive human-to-human relationships".
AI systems also need huge amounts of data to train on and this could result in a massive expansion of surveillance infrastructure. Then there are security risks.
"If you collect (data), it's more likely to get leaked," Strait said.
There are also concerns over whether AI will replace human jobs.
Carl Frey, a professor of AI and work at the Oxford Internet Institute, said an AI apocalypse was unlikely and that "humans in the loop" would still be needed.
But there may be downward pressure on wages and middle-income jobs, especially with developments in advanced robotics.
"I don't see a lot of focus on using AI to develop new products and industries in the ways that it's often being portrayed. All applications boil down to some form of automation," Frey told Context.
As AI develops, governments must ensure there is competition in the market, as there are significant barriers to entry for new companies, Frey said.
There also needs to be a different approach to what the economy rewards, he added. It is currently in the interest of companies to focus on automation and cut labour costs, rather than create jobs.
"One of my concerns is that the more we emphasise the downsides, the more we emphasise the risks with AI, the more likely we are to get regulation, which means that we restrict entry and that we solidify the market position of incumbents," he said.
Last month, the U.S. Department of Homeland Security announced a board made up of the CEOs of OpenAI, Microsoft, Google, and Nvidia to advise the government on AI in critical infrastructure.
"If your goal is to minimise the risks of AI, you don't want open source. You want a few incumbents that you can easily control, but you're going to end up with a tech monopoly," Frey said.
AGI does not have a precise timeline. Jensen Huang, the chief executive of Nvidia, predicts that today's models could advance to the point of AGI within five years.
Huang's definition of AGI would be a program that can improve on human logic quizzes and exams by 8%.
OpenAI has indicated that a breakthrough in AI is coming soon with Q* (pronounced Q-Star), a secretive project reported in November last year.
Microsoft researchers have said that GPT-4, one of OpenAI's generative AI models, has "sparks of AGI". However, it does not "(come) close to being able to do anything that a human can do", nor does it have "inner motivation and goals" - another key aspect in some definitions of AGI.
But Microsoft President Brad Smith has rejected claims of a breakthrough.
"There's absolutely no probability that you're going to see this so-called AGI, where computers are more powerful than people, in the next 12 months. It's going to take years, if not many decades, but I still think the time to focus on safety is now," he said in November.
Frey suggested there would need to be significant innovation to get to AGI, due to both limitations in hardware and the amount of training data available.
"There are real question marks around whether we can develop AI on the current path. I don't think we can just scale up existing models (with) more compute, more data, and get to AGI."
The 3 phases of AI evolution that could play out this century – Big Think
Excerpted from Our Next Reality, © 2024 Nicholas Brealey Publishing. Reprinted with permission. This article may not be reproduced for any other use without permission.
It's clear there's a lot of fear and misinformation about the risks and role of AI and the metaverse in our society going forward. It may be helpful to take a three-phase view of how to approach the problem. In the next 1-10 years, we should look at AI as tools to support our lives and our work, making us more efficient and productive. In this period, the proto-metaverse will be the spatial computing platform where we go to learn, work, and play in more immersive ways.
In the following 11-50 years, as more and more people are liberated from the obligation of employment, we should look at AI as our patron, which supports us as we explore our interests in arts, culture, and science, or whatever field we want to pursue. Most will also turn to the metaverse as a creative playground for expression, leisure, and experimentation. In the third phase, after 50+ years (if not sooner), I would expect the world's many separate AGI (artificial general intelligence) systems to have converged into a single ASI (artificial superintelligence) with the wisdom to unite the world's approximately 200 nations and help us manage a peaceful planet, with all its citizens provided for and given the choice of how they want to contribute to society.
At this point, the AI system will have outpaced our biological intelligence and limitations, and we should find ways to deploy it outside our solar system and spread intelligent life into all corners of the galaxy and beyond. At this third stage, we should view AI as our children, for these AI beings will all have a small part of us in them, just as we possess in our genes a small part of all the beings that preceded us in the tree of life.
They will henceforth be guided by all the memes humans have created and compiled throughout our history, from our morals and ethics to our philosophy and arts. The metaverse platform will then become an interface for us to explore and experience the far reaches of the Universe together with our children, although our physical bodies may still be on Earth. Hopefully, these children will view us as their honorable ancestors and treat us the way Eastern cultures treat their elderly: with respect and care. As with all children, they will learn their values and morals by observing us. It's best we start setting a better example for them by treating each other as we would like AIs to treat us in the future.
Of course, the time frames above are only estimates, so things could happen faster or slower than described, but the phases will likely occur in that order, if we are able to sustainably align future AGI/ASI systems. If for some reason we are not able to align AGI/ASI, or if these systems are misused by bad actors to catastrophic ends, then the future could be quite dark.
I must, however, reiterate that my biggest concerns have always been around the risk of misuse of all flavors of AI by bad-actor humans (vs. an evil AGI), and we need to do all in our power to prevent those scenarios. On the other hand, I've increasingly become more confident that any superintelligent being we create will more likely be innately ethical and caring, rather than aggressive and evil.
Carl Jung said, "The more you understand psychology, the less you tend to blame others for their actions." I think we can all attest that there is truth in this statement simply by observing our own mindset when interacting with young children. Remember the last time you played a board game with kids; did you do everything possible to crush them and win? Of course not. When we don't fear something, we gain added patience and understanding. Well, the ASI we are birthing won't just understand psychology fully, but all arts, sciences, history, ethics, and philosophy. With that level of wisdom, it should be more enlightened than any possible human and attain a level of understanding we can't even imagine.
A 2022 paper from a group of respected researchers in the space also found linkages between compassion and intelligence. In July 2023, Elon Musk officially entered the AI race with a new company called xAI, and the objective function of its foundational model is simply stated as "understand the Universe." So it seems he shares my view that giving AI innate curiosity and a thirst for knowledge can help bring forth some level of increased alignment. Thus, you can understand why I reserve my fear mainly for our fellow man. Still, it certainly couldn't hurt if we all started to set a better example for our budding prodigy and continued to investigate more direct means of achieving sustainable alignment.
There are many today who are calling for the end of civilization or even the end of humans on Earth due to recent technological progress. If we take the right calculated actions in the coming decade, it could very well be the beginning of a new age of prosperity for mankind and all life everywhere. We are near the end of something. We are near the end of the hundred-thousand-year ignorance and aimless toil phase of the Anthropocene epoch and will soon turn the page to start a new age of enlightenment far beyond our dreams.
When we do find a solution for AI alignment and peacefully transition our world to the next phase of progress, the societal benefits will be truly transformational. It could lead to an exponential increase in human understanding and capabilities. It will bring near-infinite productivity and limitless clean energy to the world. The inequality, health, and climate issues that plague the world today could disappear within a relatively short period. And we can start to think about plans at sci-fi timescales, to boldly go where no one has gone before.
Roundup: AI and the Resurrection of Usability – Substack
May's product announcements from Google and OpenAI emphasized a number of embedded audio and visual capabilities, but none were more important than the faster processing times and conversational voice interfaces they enable. With these products, AI takes a dramatic step toward almost natural interaction between users and systems, and that's critical to improving usability and, along with it, effectiveness, adoption and business results.
This isn't so much about AI as it is about UI. Users are sure to see natural language, and more importantly spoken natural language, as a simpler, more flexible and more useful approach to advanced technology. As AI Business wrote, these interfaces shift a product's focus toward the consumer rather than the loftier goal of developing artificial general intelligence, or an AI that thinks like a human.
This is important because usability directly affects business results. The great majority of enterprise software errors (92%) are related to users, design or processes, according to research by Knoa Software. When users have less to learn about how a UI works, they complete their work more quickly and efficiently while requiring less training and support.
And today's UIs do require a certain amount of knowledge to make them do anything: this button prints a document, that button saves work to disk. But the rules governing even simple tasks vary. To close a window, the Mac's operating system requires clicking a red dot in the window's top-left corner; Windows wants you to click an X in the top right. To capture a screenshot, Mac users press Command-Shift-3, while Windows users press Windows key-Shift-S.
By introducing more sophisticated voice interfaces, AI developers are moving toward an environment where each user can develop their own approach. To resume a paused video, one user might say "OK, keep going," while another says "resume play." Some might say "set a timer for 45 minutes" while others say "wake me at 2:45."
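The flexibility described above amounts to mapping many phrasings onto a single intent. A minimal, illustrative sketch follows; the phrases and intent names are invented for this example, not taken from any shipping product:

```python
import re

# Hypothetical intent table: several phrasings map to one intent each.
INTENTS = [
    (re.compile(r"\b(keep going|resume( play)?)\b", re.I), "RESUME_VIDEO"),
    (re.compile(r"\b(set a timer for|wake me at)\b", re.I), "SET_ALARM"),
]

def match_intent(utterance: str) -> str:
    """Return the first intent whose pattern appears in the utterance."""
    for pattern, intent in INTENTS:
        if pattern.search(utterance):
            return intent
    return "UNKNOWN"

print(match_intent("OK. Keep going"))   # RESUME_VIDEO
print(match_intent("wake me at 2:45"))  # SET_ALARM
```

Real voice assistants replace the hand-written patterns with statistical language models, which is what lets each user settle into their own phrasing rather than memorizing fixed commands.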
When it comes to increasing efficiency, every little bit helps.
Last edition, we asked about brand recognition in AI. Sorry, Google, but OpenAI got the most attention, named by 75% of our readers. The rest said they followed somebody else.
AI: Perception and Reality with Author and Analyst Geoff Webb [Podcast]
Author, industry leader and analyst Geoff Webb discusses AI and how vendors market it, along with reskilling and some of the unexpected challenges that go along with it. [WorkforceAI]
AI and Assessments, With PDRI's Elaine Pulakos [Podcast]
I speak with Elaine Pulakos, the CEO of PDRI by Pearson. She spends a lot of time thinking about AI and its impact on assessments. There are a lot of questions in the space, covering everything from AI's role in best practices to how it can help develop assessments themselves. We cover that and more. [WorkforceAI]
Why SMBs are Good Prospects for the Right AI Solutions
Large employers get most of the attention, but small businesses present a solid opportunity for solutions providers developing AI products. For one thing, SMBs are heavy users of technology: nearly all of America's small companies have put at least one technology platform to use. Plus, the market has room for growth. Although they recognize the value of AI, most SMBs have yet to jump in. [WorkforceAI]
HR Departments Believe in AI, But Most Have Yet to Adopt
More than two-thirds of HR professionals have yet to adopt AI, and only a third believe they understand how they could incorporate the technology into their work, according to research by Brightmine. Nearly a quarter of HR departments haven't been involved in discussions with executives about the technology and its use. The company says demystification is needed. [WorkforceAI]
Generative AI Leads in Business Adoption
Generative AI is the most frequently deployed AI solution in business, Gartner said, but concern about measuring and demonstrating its value is a major barrier to adoption. Embedding generative AI into existing applications is the most-used approach to leveraging the technology, with some 34% of respondents calling it their primary way of using AI. [WorkforceAI]
If you have news to share, send a press release or email to our editors at news@workforceai.news.
Posted in Artificial General Intelligence
Comments Off on Roundup: AI and the Resurrection of Usability – Substack
The Dark Side of AI: Financial Gains Lead to Oversight Evasion, Say Insiders – CMSWire
Posted: at 8:48 am
The Gist
Leading artificial intelligence companies avoid effective oversight because of financial incentives and operate without sufficient accountability to governments or industry standards, former and current employees said in a letter published today.
In other words, they get away with a lot, and that's not great news for a technology whose risks include human extinction.
"We are hopeful that these risks can be adequately mitigated with sufficient guidance from the scientific community, policymakers, and the public," the group wrote in the letter titled, "A Right to Warn about Advanced Artificial Intelligence." "However, AI companies have strong financial incentives to avoid effective oversight, and we do not believe bespoke structures of corporate governance are sufficient to change this."
The letter was signed by seven former OpenAI employees, four current OpenAI employees, one former Google DeepMind employee and one current Google DeepMind employee. It was also endorsed by AI powerhouses Yoshua Bengio, Geoffrey Hinton and Stuart Russell.
While the group believes in the potential of AI technology to deliver unprecedented benefits to humanity, it warns that the technology also carries serious risks.
"AI companies possess substantial non-public information about the capabilities and limitations of their systems, the adequacy of their protective measures, and the risk levels of different kinds of harm," the group wrote. "However, they currently have only weak obligations to share some of this information with governments, and none with civil society. We do not think they can all be relied upon to share it voluntarily."
The list of employees who shared their names (others were listed anonymously) includes: Jacob Hilton, formerly OpenAI; Daniel Kokotajlo, formerly OpenAI; Ramana Kumar, formerly Google DeepMind; Neel Nanda, currently Google DeepMind, formerly Anthropic; William Saunders, formerly OpenAI; Carroll Wainwright, formerly OpenAI; and Daniel Ziegler, formerly OpenAI.
This isn't the first time Hilton has spoken publicly about his former company, and he was vocal on X today as well.
Kokotajlo, who worked on OpenAI's governance team, quit last month and was vocal about it in a public forum as well. He said he "Quit OpenAI due to losing confidence that it would behave responsibly around the time of AGI (artificial general intelligence)." Saunders, also on the governance team, departed along with Kokotajlo.
Wainwright's time at OpenAI dates back at least to the debut of ChatGPT. Ziegler, according to his LinkedIn profile, was with OpenAI from 2018 to 2021.
Related Article: Musk, Wozniak and Thousands of Others: 'Pause Giant AI Experiments'
Leading AI companies won't give up critical information about the development of AI technologies on their own, according to the group. For now, it falls to current and former employees, rather than governments, to hold them accountable to the public.
"Yet," the group wrote, "broad confidentiality agreements block us from voicing our concerns, except to the very companies that may be failing to address these issues. Ordinary whistleblower protections are insufficient because they focus on illegal activity, whereas many of the risks we are concerned about are not yet regulated."
These employees fear various forms of retaliation, given the history of such cases across the industry.
Related Article: OpenAI Names Sam Altman CEO 5 Days After It Fired Him
The letter closes by calling on leading AI companies to commit to principles about what they should and should not do, chief among them allowing current and former employees to raise risk-related concerns without fear of retaliation.
OpenAI had no public response to the group's letter. Its most recent post on X pointed to the company's work on deceptive uses of AI.
"OpenAI is committed to enforcing policies that prevent abuse and to improving transparency around AI-generated content," the company wrote May 30. "That is especially true with respect to detecting and disrupting covert influence operations (IO), which attempt to manipulate public opinion or influence political outcomes without revealing the true identity or intentions of the actors behind them."
Analyzing the Future of AI – Legal Service India
Posted: at 8:48 am
The future of artificial intelligence (AI) promises transformative potential, impacting various domains such as healthcare, transportation, finance, and entertainment. As AI technology evolves rapidly, key trends are emerging and shaping its trajectory.
One significant trend is the rise of artificial general intelligence (AGI), where AI systems develop cognitive abilities similar to humans, enabling them to tackle complex tasks across different domains. AGI has the potential to revolutionize industries and societies, empowering machines with human-like problem-solving capabilities.
Artificial General Intelligence (AGI) represents the pinnacle of artificial intelligence, aspiring to create systems that can match or even surpass human capabilities in any intellectual endeavour. Unlike narrow AI, which is specifically designed to excel in a limited set of tasks, AGI aims for true universality, enabling it to tackle a vast range of problems and adapt to diverse situations, much like a human mind.
This ambitious goal requires AGI systems to possess a broad spectrum of cognitive abilities, including the capacity to learn from experience, reason logically, solve complex problems, and even engage in creative thought processes. Achieving this level of intelligence would mark a significant leap forward in AI, transcending the limitations of current systems and opening up a world of possibilities for how we interact with technology.
Another key trend is the integration of AI into everyday devices and systems, leading to the expansion of the Internet of Things (IoT) and smart environments. AI-powered devices can gather and analyze vast data, enabling autonomous decision-making and enhancing efficiency, convenience, and quality of life. This trend is reshaping our interactions with the physical world, creating a more connected and intelligent environment.
With the increasing prevalence and influence of AI in society, ethical considerations and responsible governance are paramount. Issues like algorithmic bias, data privacy, transparency, accountability, and the impact on employment necessitate collaboration among policymakers, industry leaders, researchers, and civil society. Frameworks and guidelines must be established to promote ethical AI use, mitigate risks, and prevent unintended consequences.
The future of AI holds significant technological advancements, including deep learning, reinforcement learning, natural language understanding, and robotics. These advances will enable AI systems to perform complex tasks with enhanced accuracy and efficiency, unlocking new avenues for innovation and discovery. From personalized medicine to predictive maintenance, AI-driven solutions offer the potential to solve pressing challenges and contribute to a more sustainable and equitable future.
Predictive maintenance uses data analysis, smart sensors, and machine learning to monitor the condition of equipment. By predicting when a machine is likely to fail, it allows repairs to be scheduled before a breakdown occurs, preventing unexpected shutdowns and saving time and money.
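At its simplest, the idea above amounts to watching a sensor signal and flagging the machine when recent readings drift well outside their normal baseline. The sketch below illustrates that logic in Python; the threshold, window size, and vibration readings are invented for illustration, not drawn from any real system.

```python
def needs_maintenance(readings, window=5, limit=2.0):
    """Flag a machine for inspection when the average of the last
    `window` sensor readings exceeds `limit` times the average of
    the earlier (baseline) readings."""
    if len(readings) <= window:
        return False  # not enough history to compare against
    baseline = sum(readings[:-window]) / (len(readings) - window)
    recent = sum(readings[-window:]) / window
    return recent > limit * baseline

# Example: vibration levels climb steadily ahead of a failure.
vibration = [1.0, 1.1, 0.9, 1.0, 1.1, 1.0, 2.4, 2.6, 2.8, 3.0, 3.1]
print(needs_maintenance(vibration))  # → True
```

Production systems replace this fixed threshold with learned models trained on historical failure data, but the principle is the same: detect the deviation early enough to act before the breakdown.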
However, ethical and societal concerns must remain at the forefront as AI evolves. These advancements require careful consideration to ensure responsible and fair use of AI while mitigating potential risks and unintended consequences. Collaboration and dialogue among various stakeholders are essential to establish ethical guidelines and frameworks that guide the development and deployment of AI, maximizing its benefits for society while safeguarding its integrity and potential for good.
While AI presents numerous opportunities for positive change, it also brings significant challenges. These include concerns about job displacement due to automation, the potential for misuse of AI for malicious purposes, and the need to ensure ethical development and deployment that respects human rights, diversity, and inclusion. Additionally, the concentration of AI power in the hands of a few large tech companies and the potential for AI-driven surveillance and control raise concerns about privacy and civil liberties.
The future of AI holds immense potential for societal transformation and human well-being. However, realizing this potential requires careful consideration of ethical, social, and governance implications. Continued investment in research, education, and cross-disciplinary collaboration is crucial. By harnessing the power of AI responsibly and ethically, we can unlock a future of innovation, creativity, and progress that benefits all of humanity.
Written By: Rana Saman, 4th Year Law Student At Al-Ameen College Of Law