Even the Pope has something to say about artificial intelligence – Cointelegraph

Over the past year, there's been no shortage of scientists, tech CEOs, billionaires and lawmakers sounding the alarm over artificial intelligence (AI), and now even the Pope wants to talk about it too.

In a hefty 3,412-word letter dated Dec. 8, Pope Francis, the head of the Catholic Church, warned of the potential dangers of AI to humanity and outlined what needs to be done to control it. The letter came as the Roman Catholic Church prepares to celebrate World Day of Peace on Jan. 1, 2024.

Pope Francis wants to see an international treaty to regulate AI and ensure it is developed and used ethically; otherwise, he warned, we risk falling into the spiral of a "technological dictatorship."

"The threat of AI arises when developers have a desire for profit or thirst for power that overpowers one's wish to exist freely and peacefully," the Pope explained.

"Technologies that fail to do this aggravate inequalities and conflicts and, therefore, can never count as true progress," he added.

Meanwhile, the emergence of AI-generated fake news is a serious problem, the Pope added, as it could lead to growing mistrust in the media.

The Pope was recently a victim of generative AI himself, when a fake image of him wearing a luxury white puffer jacket went viral in March.

Pope Francis, however, also acknowledged the benefits of AI in enabling "more efficient manufacturing, easier transport and more ready markets," as well as "a revolution in processes of accumulating, organizing and confirming data."

But he's also concerned that AI will benefit those controlling it while leaving a large portion of the population without the employment they need to make a living.

Pope Francis has long warned about the misuse of emerging technologies, stating that both theoretical and practical moral principles need to be embedded into them. He is, however, often seen as more tech-savvy and forward-looking than his predecessors.

Pope Francis' recent remarks come after a year of outcry from all corners of the world over the potential dangers of AI.

Tech leaders, such as Tesla CEO Elon Musk and Apple co-founder Steve Wozniak, have expressed concern about how rapidly AI is advancing. It prompted them and more than 2,600 tech leaders and researchers to sign a petition in March 2023 calling for a pause on AI development, sharing concerns that AI more advanced than GPT-4 could pose "profound risks to society and humanity."

United States President Joe Biden has also expressed concerns. His administration released an executive order on the "safe, secure, and trustworthy development and use of artificial intelligence" in late October to address the risks posed by AI.


Even Hollywood filmmakers and celebrities are adding their thoughts to the issue.

In July, Canadian filmmaker James Cameron reportedly said he had been warning of the dangers of AI since The Terminator, which he directed nearly 40 years ago.

"I warned you guys in 1984, and you didn't listen," Cameron told CTV News.

"I think the weaponization of AI is the biggest danger [...] I think that we will get into the equivalent of a nuclear arms race with AI, and if we don't build it, the other guys are for sure going to build it, and so then it'll escalate," he added.


See the article here:

Even the Pope has something to say about artificial intelligence - Cointelegraph

Exploring the limitations of artificial intelligence for businesses – Gene Marks – Atlanta Small Business Network

While artificial intelligence is taking the world by storm, many have expressed anxieties over the technology's rapid acceptance by the business community. Even though the benefits of platforms such as ChatGPT are clear, there remain many unknowns that make AI adoption difficult, especially for entrepreneurs and owners of smaller companies.

On this episode of The Small Business Show, host Jim Fitzpatrick is joined by Gene Marks, author, tech columnist for Forbes and CEO of The Marks Group. Marks recently covered artificial intelligence in an article examining the pros and cons of using generative text platforms for accounting purposes. Now, he shares his insights into the potential challenges of embracing AI and why he still believes the technology is a powerful tool for business owners.

Key Takeaways

1. Although it is easy to feel pessimistic about the proliferation of artificial intelligence, Marks notes that it is nothing more than a developer's tool: a technology that is evolving into an expert's assistant.

2. Unfortunately, artificial intelligence still has limitations, math being one of them. Generative text platforms excel at providing subjective information, but when providing objective data, AI tools struggle to maintain accuracy, although the technology is likely to improve.

3. Rather than using artificial intelligence as a replacement for human editing, Marks recommends that entrepreneurs use generative text platforms as a means of handling grunt work and lowering the cost of hiring writers.

4. Google is also leveraging artificial intelligence to assist entrepreneurs. Google My Business, for example, can direct customers to localized brands based on data collected from small businesses.

5. Marks notes that the question of when it is a good time to start a business depends on the business being opened. Not all enterprises will be successful at all times, which is why entrepreneurs must select their industry carefully and make their decisions based on local factors rather than national ones.


Read more here:

Exploring the limitations of artificial intelligence for businesses – Gene Marks - Atlanta Small Business Network

Sam Altman on OpenAI and Artificial General Intelligence – TIME

If 2023 was the year artificial intelligence became a household topic of conversation, it's in many ways because of Sam Altman, CEO of the artificial intelligence research organization OpenAI. Altman, who was named TIME's 2023 CEO of the Year, spoke candidly about his November ousting and reinstatement at OpenAI, how AI threatens to contribute to disinformation, and the rapidly advancing technology's future potential in a wide-ranging conversation with TIME Editor-in-Chief Sam Jacobs as part of TIME's A Year in TIME event on Tuesday.

Altman shared that his sudden mid-November removal from OpenAI proved a learning experience, both for him and the company at large. "We always said that some moment like this would come," said Altman. "I didn't think it was going to come so soon, but I think we are stronger for having gone through it."


Altman insists that the experience ultimately made the company stronger and proved that OpenAI's success is a team effort. "It's been extremely painful for me personally, but I just think it's been great for OpenAI. We've never been more unified," he said. "As we get closer to artificial general intelligence, as the stakes increase here, the ability for the OpenAI team to operate in uncertainty and stressful times should be of interest to the world."

"I think everybody involved in this, as we get closer and closer to superintelligence, gets more stressed and more anxious," he explained of how his firing came about. The lesson he came away with: "We have to make changes. We always said that we didn't want AGI to be controlled by a small set of people; we want it to be democratized. And we clearly got that wrong. So I think if we don't improve our governance structure, if we don't improve the way we interact with the world, people shouldn't [trust OpenAI]. But we're very motivated to improve that."

The technology has limitless potential, Altman says ("I think AGI will be the most powerful technology humanity has yet invented"), particularly in democratizing access to information globally. "If you think about the cost of intelligence and the quality of intelligence, the cost falling, the quality increasing by a lot, and what people can do with that," he said, "it's a very different world. It's the world that sci-fi has promised us for a long time, and for the first time, I think we could start to see what that's gonna look like."

Still, like any previous powerful technology, "that will lead to incredible new things," he says, "but there are going to be real downsides."


Altman admits that there are challenges that demand close attention. One particular concern to be wary of, with the 2024 elections on the horizon, is how AI stands to influence democracies. Whereas election interference circulating on social media might look straightforward today ("troll farms make one great meme, and that spreads out"), Altman says that AI-fueled disinformation stands to become far more personalized and persuasive: "A thing that I'm more concerned about is what happens if an AI reads everything you've ever written online and then, right at the exact moment, sends you one message customized for you that really changes the way you think about the world."

Despite the risks, Altman believes that if AI is deployed safely and placed responsibly in the hands of people, which he says is OpenAI's mission, the technology has the potential to create a path where "the world gets much more abundant and much better every year."

"I think 2023 was the year we started to see that, and in 2024, we'll see way more of it, and by the time the end of this decade rolls around, I think the world is going to be in an unbelievably better place," he said. Though he also noted: "No one knows what happens next. I think the way technology goes, predictions are often wrong."


View original post here:

Sam Altman on OpenAI and Artificial General Intelligence - TIME

2023: The year we played with artificial intelligence and weren’t sure what to do about it – Post Register


Read the rest here:

2023: The year we played with artificial intelligence and weren't sure what to do about it - Post Register

Diversity In Artificial Intelligence Could Help Make It More Equitable – Black Enterprise

by Daniel Johnson

December 16, 2023

Of all computer science doctorates, only 1.6% were awarded to Black doctoral candidates.

In 2019, The Guardian cited a study conducted by NYU which emphasized the critical need for diversity in the field of artificial intelligence. "The urgency behind this issue is increasing as AI becomes increasingly integrated into society," Dana Metaxa, a PhD candidate and researcher at Stanford University focused on issues of internet and democracy, told the outlet. "Essentially, the lack of diversity in AI is concentrating an increasingly large amount of power and capital in the hands of a select subset of people."

As we head into 2024, not much on that front has changed. In November, Wired talked to several prominent women in the artificial intelligence community about why they would not want a seat on the board of OpenAI following the Sam Altman coup. Timnit Gebru, who made waves when Google dismissed her after she issued a warning regarding the company's plans for AI, said that there was a better chance of her returning to Google than joining Altman's board.

"It's repulsive to me," Gebru said. "I honestly think there's more of a chance that I would go back to Google (I mean, they won't have me and I won't have them) than me going to OpenAI."

It is in this subsection of artificial intelligence, the field of AI ethics, where women in tech have found a measure of success, but their work in the field often puts them at odds with the white men who control the boards and companies in Silicon Valley. Meredith Whittaker, the president of Signal, an encrypted messaging app, says the problem is really about giving people from diverse backgrounds power to effect change, as opposed to tokenizing their seats at the table.

"We're not going to solve the issue (that AI is in the hands of concentrated capital at present) by simply hiring more diverse people to fulfill the incentives of concentrated capital," Whittaker told Wired. "I worry about a discourse that focuses on diversity and then sets folks up in rooms with [expletive] Larry Summers without much power."

Black people in particular have felt the brunt of the way artificial intelligence is used by the police, for example.

As BLACK ENTERPRISE previously reported, the city of Detroit was sued by a Black woman who was arrested while eight months pregnant because officers used a facial recognition program to tie her to a crime. And this is just one of many similar incidents.

In a November article for Esquire, Mitchell S. Jackson surmises that this is inescapable as the field of criminal justice insists on pushing to use artificial intelligence, even though the datasets those programs will use are filled with negative biases that will inevitably work against Black people.

Jackson writes, "AI in policing is being implemented into that already flawed system. It's more dangerous to Black and brown people because the persistent lack of diversity in the STEM fields (from which AI comes) is apt to generate more built-in biases against people of color, the same people who are overpoliced and underprotected."

He continued, "AI in policing is hella dangerous to my people because it operates on data (crime reports, arrest records, license plates, images) that is itself steeped in biases."

According to a 2023 report from the Code.org Advocacy Coalition, only 78% of Black high school students had access to foundational computer science courses, compared to 89% of Asian high school students and 82% of white high school students. A 2022 survey from the Computing Research Association found that two-thirds of all computer science doctorates went to non-permanent U.S. residents for whom no ethnic background is available, but almost 19% of those degrees went to white doctoral candidates and 10.1% were awarded to Asian doctoral candidates. Only 1.6% were awarded to Black doctoral candidates, which illustrates why the diversity numbers in technology companies are as abysmal as they are.

Calvin Lawrence, the author of Hidden In White Sight, a book examining how artificial intelligence contributes to systemic racism, spoke to CNN about how the biases in AI are also a product of a lack of access. Lawrence explained that in order to get more Black people into the field, you have to at least present it as a path they can take.

"You certainly don't have a lot of Black folks or data scientists participating in the process of deploying and designing AI solutions," Lawrence said. "The only way you can get them to have seats at the table, you have to educate them."


Visit link:

Diversity In Artificial Intelligence Could Help Make It More Equitable - Black Enterprise

Should We Be Worried About AI(Artificial Intelligence)? – Medium

Name one time you went to ChatGPT and asked, "What should I cook today?" or, on your long-awaited weekend, "Where should I go today?" You might have a memory of saying that to ChatGPT. But the thing is, how much can we rely on this long-awaited new technology?

As of 2023, ChatGPT had attracted 1.7 million users. Now that is a lot of people. Now let's look at the example of Bing: as of March 2023, Bing AI had 1 million users. So we see how dependent people are on their AI engines. But are these 1.7 million or 1 million people using this free power safely and honestly? 43% of college students have used ChatGPT or similar AI tools, and 26% of K-12 teachers have caught a student cheating with ChatGPT. So it is widespread these days to see students using ChatGPT on online standardized tests. But there are many things about AI we need to worry about.

Have you watched the movie Terminator? If yes, you know exactly what I mean when I say "the AI revolution." If not, then you might have no idea what I am talking about. In the movie, robots somehow gain consciousness and wage a war with humans; their goal is to kill all the humans and achieve total superiority. Now people are scared of how much AI has grown. From robots that could only walk to robots that can be your friend, it is pretty disturbing to see how much this tech has advanced. But you do not have to worry about the robo-apocalypse. This will never happen.

Have you wondered about a life where, after you wake up, a robo-housemaid is cleaning up your room for you? Then you walk downstairs and see a robo-cook preparing breakfast for you. Believe it or not, this is already happening. Soon everything will be fully automated and we will not have to do any work. But have you realized the cons of this? If we do not do anything about it, it could tip over the economy and result in war and riots. Already, big companies are firing employees to replace them with robots. Are you worried that these big companies will someday fire you? You do not have to worry, because these companies have it in hand.

Are you a little scared and disturbed by this new tech after reading this article? That is okay, because everyone has a reason to be scared of it. The way I think we should use this technology is to make the world a better place worth living in. And the way we achieve that is by working hard and not giving up.

Sources

Nerdynav https://nerdynav.com/chatgpt-cheating-statistics/#:~:text=43%25%20of%20college%20students%20have,per%20a%20Daily%20Mail%20survey.

New York Times https://www.nytimes.com/2023/06/10/business/ai-jobs-work.html


More here:

Should We Be Worried About AI(Artificial Intelligence)? - Medium

WilmU Gains Edge in Artificial Intelligence and Machine Learning – DELCO.Today

Wilmington University is on the cutting edge in computer science, part of an elite group of institutions in the Amazon Artificial Intelligence (AI) / Machine Learning Educator program.

"These transformational technologies are very fast moving, and there's a huge demand for people with machine-learning and computer science skills," said Jodee Vallone, assistant chair of Computer Science in the College of Technology. "We are excited to be part of a community of learners in which we have early access to education we might not have otherwise."

Through the AWS (Amazon Web Services) Machine Learning University, WilmU faculty will receive free training and advice, with the option of a curriculum with ready-to-use education tools. Students can receive AWS certification in machine learning, a significant boost in the job market.

Computer science is a growing field, expected to grow 13 percent through 2026, according to the U.S. Bureau of Labor Statistics. The College of Technology is actively recruiting adjuncts to meet the demand. More than 40 educators, including adjuncts, teach more than 1,000 computer science students at WilmU; 30 percent of those students are women, compared to the national average of more than 15 percent.

"Diversity is a key need in computer science and technology, and one of the things Amazon is looking for," Vallone said. "It's exciting to be on the cutting edge of this tech revolution."

WilmU is already an education partner with Amazon, providing hourly distribution center employees with access to over 150 certificate, associate, and bachelor's degree programs.

Learn more at Wilmington University.

Read the original:

WilmU Gains Edge in Artificial Intelligence and Machine Learning - DELCO.Today

Opinion: Here’s how we can safeguard privacy amid the rise of artificial intelligence – The Globe and Mail


A robot called Pepper positioned near an entrance to a Microsoft Store in Boston on March 21, 2019. (Steven Senne/The Associated Press)

Ann Cavoukian is executive director of the Global Privacy and Security by Design Centre and the former three-term information and privacy commissioner of Ontario.

A few decades ago, artificial intelligence wasn't nearly as pervasive, and neither were the risks that come with it. Fast forward to today, and both the potential and the pitfalls of this incredible technology are glaringly obvious. It is no wonder the world has become consumed with finding a solution that can mitigate the risks of using data while allowing its benefits to be realized in a sustainable way as technology evolves.

That is why the Privacy by Design principles, which I first began developing in the nineties as the way forward, are essential. Newly codified by the International Organization for Standardization, this approach is now the international standard for data privacy management and protection.

The Privacy by Design principles, or ISO 31700-1, have the power to guide us toward a future where innovation does not slow down and privacy isn't an afterthought. Rather, they ensure that privacy is ingrained in the DNA of technology and built into every layer right from the beginning, at the design stage.

While local laws may differ from country to country, principles are borderless. Adhering to this internationally recognized standard will be the only way our global community can set itself up for a future that leverages data to its fullest potential in a transparent and responsible way.

The current age has sometimes been accurately referred to as the fourth industrial revolution, where technology, connectivity, analytics and automation inform everything we do in business and at home. But there's one caveat: such a transformation cannot be successful if it comes at the expense of our data privacy.

This digital revolution certainly offers the promise of convenience, and, more importantly, the opportunity to use technology to do social good. But with the infinite volume of data we share with companies (sometimes even unknowingly), there are understandable concerns around how it will be managed and respected.

Canadians can be proud that the first program in the world certified under ISO 31700-1 is TELUS's Data for Good program. It serves as a global example for business, industry and government of how to ensure that data is respected at every stage of innovation.

The groundbreaking program gives researchers access to high-quality, strongly de-identified and aggregated datasets to address societal issues, such as developing efficient transportation systems in response to natural disasters, or supporting evidence-based environmental sustainability initiatives.

The program was built with Privacy by Design principles embedded into every layer to make sure that it allows researchers to access useful data. But it does not put data, and specifically privacy, at risk; far from it. With these principles in place, it is helping to build trust in technology and create a better world.

In an era where citizens recognize and care about their data more than ever before, it is critical that we get this right. There are organizations that recognize the importance of fostering trust in the digital world and lead the way forward by collaborating at an international level to develop the cross-functional co-operation the world needs. Doing so requires a commitment to education, transparency, accountability, responsible innovation, participatory design and dedication to ensuring that respect for data is always the first priority.

We must champion these principles at an international scale to protect our rights and set technology up for sustainable success, especially now, as AI's use grows exponentially and legislation is being developed in jurisdictions across the globe. Without these principles, we simply won't realize the full potential of our innovation.

Privacy by Design is not just a good idea; it is essential to mitigating the potential risks of AI and protecting our digital future.

Read the rest here:

Opinion: Here's how we can safeguard privacy amid the rise of artificial intelligence - The Globe and Mail

The New Addition to Caudalie’s Resveratrol-Lift Collection – Who What Wear

One of the main reasons so many of us tend to steer clear of retinol is that while the popular ingredient may be a holy grail when it comes to reversing signs of aging, not all skin responds well to such a powerful vitamin. Caudalie's newest serum was made with everyone in mind. It's formulated with resveratrol as a high-performing retinol alternative. "The patented combination of resveratrol, micro hyaluronic acid, and new vegan collagen 1 works synergistically to deliver the best results," Kwitman tells us. Resveratrol and retinol may work in a similar way, but according to an in vitro test of the ingredient combination, one is much more versatile than the other.

But what is resveratrol? According to Kwitman, it's harnessed from the grapevine stalk and works to increase activity in the anti-aging cells. Made from 98% natural-origin ingredients, the Resveratrol-Lift Instant Firming Serum also happens to be vegan, recyclable, and gentle, which makes it an effective alternative to retinol. "The serum is clinically proven to be three times more effective than retinol to firm and lift," she says. "This is critical because by the time you reach 40 years old, you have lost nearly 40% of your collagen and 50% of your hyaluronic acid." In other words, the older you get, the more changes you'll notice in the skin. Using Caudalie's patented resveratrol formula can help pump the brakes.

Original post:
The New Addition to Caudalie's Resveratrol-Lift Collection - Who What Wear

Best Resveratrol Supplements 2024 – Top Product Brands on the Market – Deccan Herald


The manufacturers of Glucotrust ensure that they integrate the best natural ingredients to craft this unique product for tackling diabetes. The product contains no parabens or preservatives that could worsen one's health in the long run. All the benefits derived from the product come from natural sources. Let us take a glance at them:

Gymnema Sylvestre: Gymnema Sylvestre is the main component in numerous sugar-controlling pills. The rich antioxidant content of this natural ingredient helps protect against inflammation and injury. Additionally, its rich content of vitamins, minerals, and amino acids increases the ingredient's potency manifold.

Biotin (Vitamin B7): Biotin plays a crucial role in the absorption of glucose into the bloodstream and its conversion into energy. It also stimulates stem cells to produce beta cells, boosting insulin production and averting complications.

Chromium: Chromium helps counter the complications and deficiencies related to diabetes. In addition, it promotes better sugar absorption and its use for the body's energy requirements. It also increases insulin production and enhances insulin sensitivity, which can help regulate sugar levels.

Manganese: Numerous medical experts promote the consumption of manganese for a healthier nervous system and improved cognitive functioning. It increases the production of leptin in the body, which deters cravings for fast food or excessive carbohydrates. This helps prevent excess sugar from building up in the bloodstream and promotes better sleep for energy recovery.

Licorice Root: Like its predecessors, Glucotrust includes licorice root, a staple of Ayurvedic medicine. Not many know that licorice harbors glycyrrhizin, which inhibits glucose absorption into the bloodstream.

Cinnamon: Often called the king of spices, cinnamon promotes better digestion and healthy sugar levels in the body. It stimulates increased insulin production and better glucose metabolism, promoting the overall well-being of a diabetic individual. Its antioxidant properties help tackle issues like inflammation, disease, and irregular sleep schedules.

Zinc: Zinc has a plethora of benefits. Industry experts feel there is hardly an aspect of human health that zinc does not help improve. Around three hundred enzymes in the human body require an adequate supply of zinc to support metabolism, reproduction, growth, and immune function. Besides reducing sugar levels, it also tackles triglycerides, helping keep cholesterol levels in check.

Juniper Berries: Do you know of an evergreen shrub that offers multiple health benefits besides being a natural sugar inhibitor? Juniper berries are a great anti-diabetic ingredient that can reduce sugar cravings manifold. Combined with chromium, they aid fat loss and assist in the diabetic journey.

Glucotrust promotes numerous advantages of its consumption, making it one of the most sought-after products for diabetic patients. Knowing the benefits can convince any individual to give this product a try.

Weight loss

People suffering from obesity, or carrying more body weight than recommended, are especially prone to ailments like diabetes. Glucotrust aids in reducing sugar consumption and promotes increased metabolism to help shed excess weight.

Controlling Appetite

A major reason people gain extra weight is a lack of dietary restraint. People crave rich food, heavy in spices and carbohydrates, to treat their taste buds. However, such food is detrimental to health and opens the door to numerous ailments. Glucotrust curbs food cravings and lowers the appetite to a considerable degree.

Diabetic Assistance

The main benefit of Glucotrust is keeping diabetes under control. It facilitates better sugar absorption in the human body, converting it into the energy required to fulfill the day's responsibilities. The extensive set of ingredients in Glucotrust substantiates its effectiveness at sugar control.

Boosted Immune System

One of the vital benefits of Glucotrust is a stronger immune system. Ever since the onslaught of COVID-19, people have understood the importance of building immunity to survive in a world vulnerable to viruses. Glucotrust offers an amalgamation of immunity support and weight loss to keep diabetes under control. Thus, the benefits are multiple.

Promotes Deep Sleep

Increased sugar levels can disrupt hormonal secretion in the human body. Hormonal imbalance often raises cortisol levels, putting people under extreme stress. Such a situation deprives them of the sleep they need. Glucotrust mitigates cortisol levels and enhances sleep quality.

Is Glucoredi safe?

Yes, Glucoredi is safe for human consumption and remains devoid of side effects.

What is the main benefit of Glucoredi?

Glucoredi helps control type-2 diabetes.

How to consume Glucoredi?

According to the manufacturers, consuming Glucoredi twice a day will give optimal results.

Make your life better with the best resveratrol supplement of your choice! The options mentioned above are the most trusted ones and have passed the necessary tests for approval.

Advertising and Marketing by:

This content was marketed by Brandingbyexperts.com on behalf of their client.

For queries reach out support@brandingbyexperts.com

Here is the original post:
Best Resveratrol Supplements 2024 - Top Product Brands on the Market - Deccan Herald

Google Admits Gemini AI Demo Was at Least Partially Faked

Google misrepresented the way its Gemini Pro can recognize a series of images and admitted to speeding up the footage.

Google has a lot to prove with its AI efforts — but it can't seem to stop tripping over its own feet.

Earlier this week, the tech giant announced Gemini, its most capable AI model to date, to much fanfare. In one of a series of videos, Google showed off the mid-tier version of the model, dubbed Gemini Pro, by demonstrating how it could recognize a series of illustrations of a duck, describing the changes a drawing went through at a conversational pace.

But there's one big problem, as Bloomberg columnist Parmy Olson points out: Google appears to have faked the whole thing.

In its own description of the video, Google admitted that "for the purposes of this demo, latency has been reduced, and Gemini outputs have been shortened for brevity." The video footage itself is also appended with the phrase "sequences shortened throughout."

In other words, Google misrepresented the speed at which Gemini Pro can recognize a series of images, indicating that we still don't know what the model is actually capable of.

In the video, Gemini wowed observers by using its multimodal thinking chops to recognize illustrations seemingly at the drop of a hat. The video, as Olson suggests, also offered us "glimmers of the reasoning abilities that Google’s DeepMind AI lab have cultivated over the years."

That's indeed impressive, considering any form of reasoning has quickly become the next holy grail in the AI industry, causing intense interest in models like OpenAI's rumored Q*.

In reality, not only was the demo significantly sped up to make it seem more impressive, but Gemini Pro is likely still stuck with the same old capabilities that we've already seen many times before.

"I think these capabilities are not as novel as people think," Wharton professor Ethan Mollick tweeted, showing how ChatGPT was effortlessly able to identify the simple drawings of a duck in a series of screenshots.

Did Google actively try to deceive the public by speeding up the footage? In a statement to Bloomberg Opinion, a Google spokesperson said it was made by "using still image frames from the footage, and prompting via text."

In other words, Gemini was likely given plenty of time to analyze the images. And its output may have then been overlaid over video footage, giving the impression that it was much more capable than it really was.

"The video illustrates what the multimodal user experiences built with Gemini could look like," Oriol Vinyals, vice president of research and deep learning lead at Google’s DeepMind, wrote in a post on X.

Emphasis on "could." Perhaps Google should've opted to show the actual capabilities of its Gemini AI instead.

It's not even the first time Google has royally screwed up the launch of an AI model. Earlier this year, when the company announced its ChatGPT competitor, a demo infamously showed Bard making a blatantly false statement, claiming that NASA's James Webb Space Telescope took the first image of an exoplanet.

As such, Google's latest gaffe certainly doesn't bode well. The company came out swinging this week, claiming that an even more capable version of its latest model called Gemini Ultra was able to outsmart OpenAI's GPT-4 in a test of intelligence.

But from what we've seen so far, we're definitely going to wait and test it out for ourselves before we take the company's word.

More on Gemini: Google Shows Off "Gemini" AI, Says It Beats GPT-4

The post Google Admits Gemini AI Demo Was at Least Partially Faked appeared first on Futurism.

Continue reading here:
Google Admits Gemini AI Demo Was at Least Partially Faked

Ex-OpenAI Board Member Refuses to Say Why She Fired Sam Altman

The now-former OpenAI board member who was instrumental in the firing of Sam Altman has spoken — but she's still staying mum where it matters.

Mum's The Word

The now-former OpenAI board member who was instrumental in the initial firing of CEO Sam Altman has spoken — but she's still staying mum on why she pushed him out in the first place.

In an interview with the Wall Street Journal, 31-year-old Georgetown machine learning researcher and erstwhile OpenAI board member Helen Toner was fairly open with her responses about the logistics of the failed coup at the company, but terse when it came to the reasoning behind it.

"Our goal in firing Sam was to strengthen OpenAI and make it more able to achieve its mission," the Australian-born researcher said as her only explanation of the headline-grabbing chain of events.

As the New York Times reported in the midst of the Thanksgiving hubbub, Toner and Altman butted heads the month prior because she published a paper critical of the firm's safety protocols (or lack thereof) and laudatory of those undertaken by Anthropic, which was created by former OpenAI employees who left over similar concerns.

Altman reportedly confronted Toner during their meeting because he believed, per emails viewed by the NYT, that "any amount of criticism from a board member carries a lot of weight."

After the tense exchange, the CEO brought his concerns about Toner's criticisms up with other board members, which ended up reinforcing those board members' own doubts about his leadership, the WSJ reports. Soon after, Altman himself was on the chopping block over vague allegations of dishonesty — although we still don't know what exactly he was supposedly being dishonest about.

Intimidating

As the company weathered its tumult amid a nearly full-scale revolt from staffers who said they'd leave and follow Altman to Microsoft if he wasn't reinstated, Toner and OpenAI cofounder and chief scientist Ilya Sutskever ended up resigning, the report explains, which paved the way for the CEO's return.

In her interview with the WSJ, however, the Georgetown researcher suggested that her resignation was forced by a company attorney.

"[The attorney] was trying to claim that it would be illegal for us not to resign immediately," Toner said, "because if the company fell apart we would be in breach of our fiduciary duties."

With the exit of the Aussie academic and Rand Corporation scientist Tasha McCauley, another of those who voted for Altman's ouster, from the board, there are now no women on OpenAI's governing body — but in this interview at least, Toner was all class.

"I think looking forward," she said, "is the best path from here."

More on OpenAI: Sam Altman Got So Many Texts After Being Fired It Broke His Phone

The post Ex-OpenAI Board Member Refuses to Say Why She Fired Sam Altman appeared first on Futurism.

See the article here:
Ex-OpenAI Board Member Refuses to Say Why She Fired Sam Altman

Nicki Minaj Fans Are Using AI to Create “Gag City”

Fans anxiously awaiting the release of Nicki Minaj's latest album have occupied themselves with AI to create their own

Gag City

Fans are anxiously awaiting the drop of Onika "Nicki Minaj" Maraj-Petty's "Pink Friday 2" — and in the meantime, they've occupied themselves with artificial intelligence image generators to create visions of a Minajian utopia known as "Gag City."

The entire "Gag City" gambit began with zealous (and perhaps overzealous) fans tweeting at the Queens-born diva to tell her how excited — or "gagged," to use the drag scene slang that spread among Maraj-Petty's LGBTQ and queer-friendly fanbase — they are for her first album in more than five years.

Replete with dispensaries, burger joints, and a high-rise shopping mall, Gag City has everything a Barb (as fans call themselves) could ask for.

Gag City, the fan-created AI kingdom for Nicki Minaj, trends on X/Twitter ahead of ‘Pink Friday 2.’ pic.twitter.com/jm3iGS9fBO

— Pop Crave (@PopCrave) December 6, 2023

Barbz Hug

As memetic lore would have you believe, these tributes to Maraj-Petty were primarily created with Microsoft's Bing AI image generator. The meme went so deep that people began claiming that her fanbase generating Gag City imagery caused Bing to crash, which allegedly led to the image generator blocking Nicki Minaj-related prompts.

gag city residents have demolished bing head office after their continued sabotage of nicki minaj’s name in their image creator pic.twitter.com/OOpL2Jzj7h

— Xeno? (@AClDBLEEDER) December 6, 2023

When Futurism took to Bing's image creator AI to see what all the fuss was about, we too discovered that you couldn't search for anything related to Minaj. However, the same was true when we entered other celebrities' names, suggesting that Bing, like Google, may intentionally block the names of famous people in an effort to prevent deepfakes.

Brand Opportunities

As creative as these viral Gag City images have been, it was only a matter of time before engagement-hungry brands tried to get in on the fun and effectively ruin it.

From Spotify changing its location to the imaginary Barb metropolis and introducing "Gag City" as a new "sound town" to KFC's social media manager telling users to "DM" the account, the meme has provided a hot pink branding free-for-all.

The Bing account itself even posted a pretty excellent-looking AI-generated Gag City image.

Next stop: Friday ? https://t.co/J1pRCZcbTd pic.twitter.com/ujG7BsJWUC

— Bing (@bing) December 6, 2023

Sleazy brand bandwagoning aside, the Gag City meme and its many interpretations provide an interesting peek into what the future of generative AI may hold in a world dominated by warring fandoms and overhyped automation.

More on AI imagination: People Cannot Stop Dunking on that Uncanny “AI Singer-Songwriter”

The post Nicki Minaj Fans Are Using AI to Create “Gag City” appeared first on Futurism.

Link:
Nicki Minaj Fans Are Using AI to Create “Gag City”

Silicon Valley Guys Casually Calculating Probability Their AI Will Destroy Humankind

P(doom) has become the go-to shorthand among AI researchers and tech CEOs for describing the likelihood of AI destroying humanity.

Doom and Gloom

If you find yourself talking to a tech bro about AI, be warned that they might ask you about your "p(doom)" — the hot new statistic that's become part of the everyday lingo among Silicon Valley researchers in recent months, The New York Times reports.

P(doom), or the probability of doom, is a quasi-empirical way of expressing how likely you think AI will destroy humanity — y'know, the kind of cheerful stuff you might talk about over a cup of coffee.

It lets other AI guys know where you stand on the tech without getting too far into the weeds on what exactly constitutes an existential risk. Someone with a p(doom) of 50 percent might be labeled a "doomer," like short-lived interim CEO of OpenAI Emmett Shear, while another with 5 percent might be your typical optimist. Wherever people stand, it now serves, at the very least, as a useful bit of small talk.

"It comes up in almost every dinner conversation," Aaron Levie, CEO of the cloud platform Box, told the NYT.
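The article's rough taxonomy can be captured in a toy sketch. This is purely illustrative: the only reference points the article gives are 50 percent ("doomer") and 5 percent ("optimist"), so the cutoffs between them are our own assumption, not an established scale.

```python
def pdoom_label(p: float) -> str:
    """Toy classifier for p(doom) small talk.

    Based on the article's two reference points: ~50% reads as
    "doomer", ~5% as "optimist". Everything in between is a guess.
    """
    if not 0.0 <= p <= 1.0:
        # p(doom) is a probability, so it must live in [0, 1]
        raise ValueError("p(doom) must be between 0 and 1")
    if p >= 0.5:
        return "doomer"
    if p <= 0.05:
        return "optimist"
    return "somewhere in between"

print(pdoom_label(0.5))   # "doomer", like Emmett Shear's reported stance
print(pdoom_label(0.05))  # "optimist"
```

Of course, as Yudkowsky's non-numerical answer of "yes" shows, the value is conversational shorthand rather than a rigorous statistic.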

Scaredy Cats

It should come as no surprise that jargon like p(doom) exists. Fears over the technology, both apocalyptic and mundane, have blown up with the explosive rise of generative AI and large language models like OpenAI's ChatGPT. In many cases, the leaders of the tech, like OpenAI CEO Sam Altman, have been more than willing to play into those fears.

Where the term originated isn't a matter of record. The NYT speculates that it more than likely came from the philosophy forum LessWrong over a decade ago, first used by a programmer named Tim Tyler as a way to "refer to the probability of doom without being too specific about the time scale or the definition of 'doom,'" he told the paper.

The forum's founder, Eliezer Yudkowsky, is himself a noted AI doomsayer who has called for the bombing of data centers to stave off armageddon. His p(doom) is "yes," he told NYT, transcending mere mathematical estimates.

Best Guess

Few opinions could outweigh those of AI's towering trifecta of so-called godfathers, whose contrite cautions on the tech have cast a shadow over the industry that is decidedly ominous. One of them, Geoffrey Hinton, left Google last year, stating that he regretted his life's work while soberly warning of AI's risk of eliminating humanity.

Of course, some in the industry remain unabashed optimists. Levie, for instance, told the NYT that his p(doom) is "about as low as it could be." What he fears is not an AI apocalypse, but that premature regulation could stifle the technology.

On the other hand, it could also be said that the focus on pulp sci-fi AI apocalypses in the future threatens to efface AI's existing but-not-as-exciting problems in the present. Boring issues like mass copyright infringement will have a hard time competing against visions of Terminators taking over the planet.

At any rate, p(doom)'s proliferation indicates that there's at least a current of existential self-consciousness among those developing the technology — though whether that affects your personal p(doom) is, naturally, left up to you.

More on AI: Top Execs at Sports Illustrated's Publisher Fired After AI Debacle

The post Silicon Valley Guys Casually Calculating Probability Their AI Will Destroy Humankind appeared first on Futurism.

The rest is here:
Silicon Valley Guys Casually Calculating Probability Their AI Will Destroy Humankind

Busted! Drive-Thru Run by "AI" Actually Operated by Humans in the Philippines

The AI, which takes orders from drive-thru customers at Checkers and Carl's Jr, relies on humans for most of its customer interactions.

Mechanical Turk

An AI drive-thru system used at the fast-food chains Checkers and Carl's Jr isn't the perfectly autonomous tech it's been made out to be. The reality, Bloomberg reports, is that the AI heavily relies on a backbone of outsourced laborers who regularly have to intervene so that it takes customers' orders correctly.

Presto Automation, the company that provides the drive-thru systems, admitted in recent filings with the US Securities and Exchange Commission that it employs "off-site agents" in countries like the Philippines who help its "Presto Voice" chatbots in over 70 percent of customer interactions.

That's a lot of intervening for something that claims to provide "automation," and it is yet another example of tech companies exaggerating the capabilities of their AI systems while obscuring the technology's true human cost.

"There’s so much hype around AI that everyone is misunderstanding what this tool is," Shelly Palmer, who runs a tech consulting firm, told Bloomberg. "Everybody thinks that AI is some kind of magic."

Change of Tune

According to Bloomberg, the SEC informed Presto in July that it was being investigated for claims "regarding certain aspects of its AI technology."

Beyond that, no other details have been made public about the investigation. What we do know, though, is that the probe has coincided with some revealing changes in Presto's marketing.

In August, Presto's website claimed that its AI could take over 95 percent of drive-thru orders "without any human intervention" — clearly not true, given what we know now. In a show of transparency, that was changed in November to claim 95 percent "without any restaurant or staff intervention," which is technically true, yes, but still seems dishonest.

That shift is part of Presto's overall pivot to its new "humans in the loop" marketing shtick, which casts its behind-the-scenes laborers as lightening the workload for the actual restaurant workers. The whole AI thing, it would seem, is just the packaging it comes in, and the mouthpiece that frustrated customers have to deal with.

"Our human agents enter, review, validate and correct orders," Presto CEO Xavier Casanova told investors during a recent earnings call, as quoted by Bloomberg. "Human agents will always play a role in ensuring order accuracy."

Know Its Limits

The huge hype around AI can obfuscate both its capabilities and the amount of labor behind it. Many tech firms probably don't want you to know that they rely on millions of poorly paid workers in the developing world so that their AI systems can properly function.

Even OpenAI's ChatGPT relies on an army of "grunts" who help the chatbot learn. But tell that to the starry-eyed investors who have collectively sunk over $90 billion into the industry this year without necessarily understanding what they're getting into.

"It highlights the importance of investors really understanding what an AI company can and cannot do," Brian Dobson, an analyst at Chardan Capital Markets, told Bloomberg.

More on AI: Nicki Minaj Fans Are Using AI to Create "Gag City"

The post Busted! Drive-Thru Run by "AI" Actually Operated by Humans in the Philippines appeared first on Futurism.

Read the original post:
Busted! Drive-Thru Run by "AI" Actually Operated by Humans in the Philippines

Elon Musk Says CEO of Disney Should Be Fired, Seemingly for Hurting His Feelings

X owner Elon Musk lashed out at Disney CEO Bob Iger on Thursday, tweeting that

Another day, another person of note being singled out by conspiracy theorist and X owner Elon Musk.

The mercurial CEO's latest target is Disney CEO Bob Iger, whose empire recently pulled out of advertising on Musk's much-maligned social media network.

Along with plenty of other big names in the advertising space, Disney decided to call it quits after Musk infamously threw his weight behind an appalling and deeply antisemitic conspiracy theory.

Instead of engaging in some clearly much-needed introspection, Musk lashed out at Iger this week, posting that "he should be fired immediately."

"Walt Disney is turning in his grave over what Bob has done to his company," he added.

To get a coherent answer as to why Musk made the demand takes some unpacking, so bear with us.

Musk implied that Disney was to blame for not pulling its ads from Meta, following a lawsuit alleging the much larger social media company had failed to keep child sexual abuse material (CSAM) off of its platform.

"Bob Eiger thinks it’s cool to advertise next to child exploitation material," Musk wrote, misspelling Iger's name, in response to a tweet that argued child exploitation material on Meta was "sponsored" by Disney. "Real stand up guy."

To be clear, Meta has an extremely well-documented problem with keeping disgusting CSAM off of its platforms. Just last week, the Wall Street Journal found that there have been instances of Instagram and Facebook actually promoting pedophile accounts, making what sounds like an already dangerous situation even worse.

At the end of the day, nobody's a real winner here. Iger's own track record is less-than-stellar, especially when it comes to Disney's handling of Florida's "Don't Say Gay" bill.

Yet in many ways, Musk is the pot calling the kettle black. Why? Because X-formerly-Twitter has its own considerable issue with CSAM. Especially following Musk's chaotic takeover last year, the New York Times found back in February that Musk is falling far short of making "removing child exploitation" his "priority number one," as he declared last year.

Since then, child abuse content has run rampant on the platform. Worse yet, in July the platform came under fire for reinstating an account that posted child sex abuse material.

Meanwhile, instead of taking responsibility for all of the hateful things he's said, Musk has attempted to rally his base on X, arguing that advertisers were conspiring against him and his "flaming dumpster" of a social media company.

During last month's New York Times DealBook Summit, the embattled CEO accused advertisers of colluding to "blackmail" him "with advertising" — a harebrained idea that highlights his escalating desperation.

At the time, after literally telling advertisers to go "fuck" themselves, Musk took the opportunity to take a potshot at Iger as well.

"Hey Bob, if you're in the audience, that's how I feel," he added for emphasis. "Don't advertise."

More on the beef: Twitter Is in Extremely Deep Trouble

The post Elon Musk Says CEO of Disney Should Be Fired, Seemingly for Hurting His Feelings appeared first on Futurism.

More:
Elon Musk Says CEO of Disney Should Be Fired, Seemingly for Hurting His Feelings

Meta’s New Image-Generating AI Is Trained on Your Instagram and Facebook Posts

Earlier this week, Meta announced a new AI image generator dubbed

Cashing In

Earlier this week, Meta announced a new AI image generator dubbed "Imagine with Meta AI."

And while it may seem like an otherwise conventional tool meant to compete with the likes of OpenAI's DALL-E 3, Stable Diffusion, and Midjourney, Meta's underlying "Emu" image-synthesis model has a dirty little secret.

What's that? Well, as Ars Technica points out, the social media company trained it using a whopping 1.1 billion Instagram and Facebook photos, per the company's official documentation — the latest example of Meta squeezing every last drop out of its user base and its ever-valuable data.

In many ways, it's a data privacy nightmare waiting to unfold. While Meta claims to only have used photos that were set to "public," it's likely only a matter of time until somebody finds a way to abuse the system. After all, Meta's track record is abysmal when it comes to ensuring its users' privacy, to say the least.

So Creative

Meta is selling its latest tool, which was made available exclusively in the US this week, as a "fun and creative" way to generate "content in chats."

"This standalone experience for creative hobbyists lets you create images with technology from Emu, our image foundation model," the company's announcement reads. "While our messaging experience is designed for more playful, back-and-forth interactions, you can now create free images on the web, too."

Meta's Emu model uses a process called "quality-tuning" to compare the "aesthetic alignment" of comparable images, setting it apart from the competition, as Ars notes.

Other than that, the tool is exactly what you'd expect. With a simple prompt, it can spit out four photorealistic images of skateboarding teddy bears or an elephant walking out of a fridge, which can then be shared on Instagram or Facebook — where, perhaps, they'll be scraped by the next AI.

Earlier this year, Meta's president for global affairs Nick Clegg told Reuters that the company has been crawling through Facebook and Instagram posts to train its Meta AI virtual assistant as well as its Emu image model.

At the time, Clegg claimed that Meta was excluding private messages and posts, avoiding public datasets with a "heavy preponderance of personal information."

Unlike its competitors, whose scraping immediately triggered massive outcry and lawsuits over possible copyright infringement, the social media company can crawl its own datasets, which come courtesy of its users and its expansive terms of service.

But relying on Instagram selfies and Facebook family albums comes with its own inherent risks, which may well come back to haunt the Mark Zuckerberg-led social giant.

More on Meta: Facebook Has a Gigantic Pedophilia Problem

The post Meta's New Image-Generating AI Is Trained on Your Instagram and Facebook Posts appeared first on Futurism.

More:
Meta's New Image-Generating AI Is Trained on Your Instagram and Facebook Posts

This Cartoonish New Robot Dog Somehow Looks Even Scarier

Chinese robotics company called Weilan recently showed off a creepy, cartoonish-looking robot dog called

Dog Days

We've come across plenty of robot dogs over the years that can dance, speak using ChatGPT, or even assist doctors in hospitals.

But they all have one thing in common: they look like lifeless machines on four stilts.

In an apparent effort to put the "dog" back into "robodog," a Chinese robotics company called Weilan recently showed off an entirely new class of robotic quadruped called "BabyAlpha" — essentially half cartoon dog and half robot.

The company may have overshot its goal a little bit, though, ending up with a terrifying machine that looks like it belongs in a "M3GAN"-esque horror flick.

Robot's Best Friend

The small robot canine has a spotted head, a cute little nose, and two floppy-looking ears.

According to the company's website, which we crudely translated using Google, the robot is "especially designed for family companionship scenarios."

"BabyAlpha likes to be by your side," the website reads, adding that the little robot has "endless technological superpowers" thanks to AI. Not creepy at all!

Weilan is also pitching its pet as a way to teach children English or Chinese, or to keep track of younger family members through a video call tool.

But we can't shake the feeling that BabyAlpha is exactly the kind of thing that kickstarts a series of unfortunate events in a schlocky horror movie.

If you do trust your children to be around a BabyAlpha, the companion will cost the equivalent of around $1,700 when it goes on sale.

More on robot dogs: Oh Great, They Put ChatGPT Into a Boston Dynamics Robot Dog

The post This Cartoonish New Robot Dog Somehow Looks Even Scarier appeared first on Futurism.

Read more from the original source:
This Cartoonish New Robot Dog Somehow Looks Even Scarier

NASA Says It’s Trying to Bring the Hubble Back Online

NASA is working on bringing the Hubble Telescope back online, but the orbital observatory is getting very old.

Major Tom?

NASA is working on bringing the Hubble Space Telescope back online, but given its recent setbacks, the agency's insistence that it's "in good health" may be wishful thinking.

In an update, NASA said that it's still working to bring the aging telescope back to life after a series of issues that led it to automatically enter safe mode (read: shut down) three times over the course of a few weeks, with the final one lasting until now.

Starting on November 19, the agency began having problems with the gyroscopes, or "gyros" — not to be confused with the delicious Greek meat — which help orient the telescope in whatever direction it needs to point. Between that date and November 29, the gyro issues led to automatic power-downs three times. That last safe mode, it seems, has remained in effect until now.

Aging Instruments

Installed back in 2009 during the fifth and final Space Shuttle servicing mission, in which NASA astronauts replaced and repaired Hubble instruments in person, the three remaining gyros of the telescope's original six have clearly seen better days. Indeed, in its update to its previous statement about the science operations shutoff, the agency seems to be admitting as much.

"Based on the performance observed during the tests, the team has decided to operate the gyros in a higher-precision mode during science observations," the statement reads. "Hubble’s instruments and the observatory itself remain stable and in good health."

These latest Hubble setbacks have resurrected talks of a private servicing mission for the 33-year-old telescope that was supposed to be decommissioned nearly two decades ago.

At the end of 2022, NASA and SpaceX announced that they were jointly looking into whether it would be feasible to send up a private mission "at no cost to the government" to fix various issues on the telescope. That study has apparently been completed, but nobody knows what the findings were just yet.

In the meantime, NASA will hopefully be able to bring Hubble back online itself because, let's face it, we're not ready to say goodbye.

More on NASA: Space Station Turns 25, Just in Time to Die

The post NASA Says It's Trying to Bring the Hubble Back Online appeared first on Futurism.


Government Program to Recycle Plastic Bags Canceled After "Abysmal Failure"

An online directory that directed people to locations where they could drop off plastic bags and film to be recycled has been shut down.

Trash Tier

The Earth is drowning in a sea of used plastic bags and other single-use plastic products, such as blister packaging and utensils, all of which are polluting our soil, our waterways, and even our bodies in the form of microplastics.

In an effort to fight this ever-growing sea of refuse, the US government kickstarted a nationwide online directory that directed people to locations where they could drop off plastic bags and film to be recycled. Unfortunately, according to The Guardian, the program has been shuttered for good after ABC News found in May that a good amount of the discarded plastic wasn't getting recycled after all.

"Plastic film recycling had been an abysmal failure for decades and it’s important that plastic companies stop lying to the public," said Beyond Plastics president Judith Enck to The Guardian. "Finally, the truth is coming out."

The national online directory, endorsed by the US Environmental Protection Agency and local governments, listed about 18,000 recycling dropoff locations, according to The Guardian, including stores like Target and Walmart.

The program claimed that the plastic would get recycled once it was dropped off, but ABC News placed tracking tags on plastic trash and found that many of the tags ended up in landfills, incinerators, or sorting facilities not associated with recycling.

The Plastics

This issue with the directory is not an isolated incident. The country's recycling system is broken. A report last year from Greenpeace revealed that of the 51 million tons of plastic coming out of American homes, only 2.4 million tons get recycled — a staggeringly low proportion.

Plastic is a big problem because it is made from fossil fuels, which are the biggest driver of global warming. Materials such as paper and metal are recycled at a much higher rate, according to Greenpeace.

While many countries and organizations have focused on decarbonizing transportation and other sectors of our modern world, the use of plastic is trending upward, with plastic production projected to nearly triple from 460 million tons in 2019 to 1,231 million tons by 2060.

That mountain of refuse, in other words, represents not just an incredible amount of pollution — but also frustratingly wasted effort in the fight against climate change.

More on recycling: Scientists Say Recycling Has Backfired Spectacularly

The post Government Program to Recycle Plastic Bags Canceled After "Abysmal Failure" appeared first on Futurism.
