The AI Doctor Orders More Tests – Bloomberg

Few U.S. industries are growing as fast as health care, but the big public-cloud companies (Amazon.com, Microsoft, Google) have struggled to crack the $3.2 trillion market. Even as hospitals and insurers collect mountains of health data on individual Americans, most of their spending on extra data storage is for old-school systems on their own premises, according to researcher IDC.

The public-cloud kingpins are trying to lure health-care providers with artificially intelligent cloud services that can act like doctors. The companies are testing, and in some cases marketing, AI software that automates mundane tasks including data entry; consulting work like patient management and referrals; and even the diagnostic elements of highly skilled fields such as pathology.

Amazon Web Services, the dominant cloud provider, is processing and storing genomics data for biotech companies and clinical labs. No. 2 Microsoft's cloud unit plans to store DNA records, and its Healthcare Next system provides automated data entry and certain cancer treatment recommendations to doctors based on visible symptoms. Google seems to be betting most heavily on health-care analysis as a way to differentiate its third-place cloud offerings. Gregory Moore, vice president for health care, says he's readying Google Cloud for a world of "diagnostics as a service." In this world, AI could always be on hand to give doctors better information, or replace them altogether.

The cloud division is refining its genomics data analysis and working to make Google Glass, the augmented-reality headgear that consumers didn't want, a product more useful to doctors. German cancer specialist Alacris Theranostics GmbH leans on Google infrastructure to pair patients with drug therapies, something Google hopes more companies will do. "Health-care systems are ready," says Moore, an engineer and former radiologist. "People are seeing the potential of being able to manage data at scale."

In November, Google researchers showed off an AI system that scanned images of eyes to spot signs of diabetic retinopathy, which causes vision loss among people with high sugar levels. Another group of the company's researchers said in March that they had used similar software to scan lymph nodes. They said they'd identified breast cancer from a set of 400 images with 89 percent accuracy, a better record than most pathologists. Last year the University of Colorado at Denver moved its health research lab's data to Google's cloud to support studies on genetics, maternal health, and the effect of legalized marijuana on the number and severity of injuries to young men. Michael Ames, the university's project director, says he expects eventually to halve the cost of processing some 6 million patient records.

However impressive Google's AI analysis gets, the health-care industry isn't exactly a gaggle of early adopters, says James Wang, an analyst at ARK Investment Management LLC. "They can have the lowest error rate and the greatest algorithm, but getting it into a hospital is a whole other problem," he says. Most electronic medical records are likely to remain locked inside health companies for the foreseeable future, says Robert Mittendorff, a biotech investor at Norwest Venture Partners. Indeed, Google's first major effort in the industry, an online health records service, folded in 2011 because the company couldn't convince potential customers their data would be safe.

Moore says things have changed since then and that he's working with Stanford and the Broad Institute, plus about a dozen companies in the health-care industry and defense contractor Northrop Grumman Corp. For now, his primary focus is wrangling more health-care companies onto Google's cloud, because the more data he can get on Google's servers, the faster its AI systems will learn. "There literally have to be thousands of algorithms to even come close to replicating what a radiologist can do on a given day," he says. "It's not going to be all solved tomorrow."

The bottom line: Big cloud companies, especially Google, are experimenting with AI diagnostics and other systems to attract medical clients.

Nvidia Trounces Google And Huawei In AI Benchmarks, Startups Nowhere To Be Found – Forbes

Nvidia uses the new Ampere A100 GPU and the Selene Supercomputer to break MLPerf performance records

Artificial intelligence (AI) training results for the new MLPerf 0.7 benchmark suite were released today (7/29/20), and once again Nvidia takes the performance crown. Eight companies submitted numbers for systems based on both AMD and Intel CPUs, using a variety of AI accelerators from Google, Huawei, and Nvidia. For each MLPerf benchmark, peak performance from the leading platform increased by 2.5x or more, and the new version adds tests for additional emerging AI workloads.

As a brief background, MLPerf is an organization established to develop benchmarks for effectively and consistently testing systems running a wide range of AI workloads, including training and inference processing. The organization has gained wide industry support from semiconductor and IP companies, tools vendors, systems vendors, and the research and academic communities. MLPerf first launched in 2018; since then, updates and new training results have been announced about once a year, even though the goal is once a quarter.

The benefit of the MLPerf benchmark is not only seeing the advancements by each vendor, but the overall advancement of the industry, especially as new workloads are added. Training version 0.7 adds workloads for Natural Language Processing (NLP) using Bidirectional Encoder Representations from Transformers (BERT), recommendation systems using the Deep Learning Recommendation Model (DLRM), and reinforcement learning using Minigo. Note that using Minigo for reinforcement learning may also serve as a baseline for AI gaming applications. Benchmark results are reported as commercially available (on-premises or in the cloud), preview (products coming to market in the next six months), or research and development (systems still in early development). The most important near-term results are those that are commercially available or in preview. There is an open division, but it had no material impact on the overall results.

The companies and institutions that submitted results included Alibaba, Dell EMC, Fujitsu, Google, Inspur, Intel, Nvidia, and the Shenzhen Institute of Advanced Technology. The largest number of submissions came from Nvidia, which is not surprising given that the company recently built its own supercomputer (ranked #7 on the TOP500 supercomputer list and #2 on the Green500 list), based on its latest Ampere A100 GPUs. This system, called Selene, gives the company considerable flexibility to test different workloads and system configurations. In the MLPerf results, the number of GPU accelerators ranges from two to 2,048 in the commercially available category, and up to 4,096 in the research and development category.

All of the systems were based on AMD and Intel CPUs paired with one of the following accelerators: the Google TPU v3, the Google TPU v4, the Huawei Ascend 910, the Nvidia Tesla V100 (in various configurations), or the Nvidia Ampere A100. Noticeably absent were chip startups like Cerebras, Esperanto, Groq, Graphcore, Habana (an Intel company), and SambaNova. This is especially surprising because all of these companies are listed as contributors or supporters of MLPerf, and a long list of other AI chip startups is also unrepresented. Intel submitted performance numbers, but only in the preview category for its upcoming Xeon Platinum processors, not for its recently acquired Habana AI accelerators. With only Intel submitting processor-only numbers, there is nothing to compare them to, and the performance is well below that of the systems using accelerators. It is also worth noting that Google and Nvidia were the only companies to submit numbers for all the benchmark categories, though Google only submitted complete numbers for the TPU v4, which is in the preview category.

Each benchmark is ranked by execution time. Because of the high number of system configurations, the best way to compare results is to normalize the execution time to each AI accelerator by dividing the execution time by the number of accelerators. This is not perfect, because per-accelerator performance typically varies with the number of accelerators and some workloads appear to have performance optimized around certain system configurations, but the results are relatively consistent even when comparing systems with similar accelerator counts. The clear winner was Nvidia: Nvidia-based systems dominated all eight benchmarks for commercially available solutions. Considering all categories, including preview, the Google TPU v4 had the fastest per-accelerator execution time for recommendation.
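
To make the normalization concrete, here is a minimal sketch; the systems and timings below are invented placeholders, not actual MLPerf 0.7 submissions.

```python
# Minimal sketch of the per-accelerator normalization described above.
# The entries are illustrative placeholders, not real MLPerf 0.7 results.
results = [
    {"system": "A (8 accelerators)",  "accelerators": 8,  "exec_minutes": 28.0},
    {"system": "B (16 accelerators)", "accelerators": 16, "exec_minutes": 15.5},
    {"system": "C (64 accelerators)", "accelerators": 64, "exec_minutes": 4.1},
]

for r in results:
    # Normalize as the article describes: execution time divided by the
    # number of accelerators (lower is better).
    r["normalized"] = r["exec_minutes"] / r["accelerators"]

for r in sorted(results, key=lambda r: r["normalized"]):
    print(f"{r['system']}: {r['normalized']:.3f} min per accelerator")
```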

The platforms with the top performance results for each MLPerf 0.7 benchmark

Overall, performance on the benchmark categories carried over from version 0.6 (image classification, object detection, and translation) increased by 2.5x to 3.3x. Interestingly, Nvidia's previous-generation GPU, the Tesla V100, scored best in three categories: non-recurrent translation, recommendation, and reinforcement learning, the latter two being new MLPerf categories. This is not completely surprising, because Ampere's significant architectural changes are aimed partly at improving inference processing, and it will be interesting to see how Ampere A100 systems score in the next generation of inference benchmarks, which should be released later this year. Another development to note is the emergence of AMD Epyc processors in the top performance results, thanks to their presence in the new Nvidia DGX A100 systems and DGX SuperPods built around Nvidia's new Ampere A100 accelerators.

Summary of the top MLPerf benchmark results and the performance improvements from version 0.6 to version 0.7

Nvidia continues to lead the pack, not just because of its lead in GPUs, but also because of its leadership in complete systems, software, libraries, trained models, and other tools for AI developers. Yet every other company offering AI chips and solutions draws comparisons to Nvidia without supporting benchmark numbers. MLPerf is not perfect: results should be published more than once a year, and they should include an efficiency ranking (performance per watt) for the system configurations, two points the organization is working to address. Still, MLPerf was developed as an industry collaboration and represents the best method of evaluating AI platforms. It is time for everyone else to submit MLPerf numbers to support their claims.

Advancing AI by Understanding How AI Systems and Humans Interact – Windows IT Pro

Artificial intelligence as a technology is rapidly growing, but much is still being learned about how AI and autonomous systems make decisions based on the information they collect and process.

To explain those relationships, so that humans and autonomous systems can understand each other and collaborate more deeply, researchers at PARC, the Palo Alto Research Center, have been awarded a multimillion-dollar federal government contract to create an "interactive sense-making system" that could answer many related questions.

The research for the proposed system, called COGLE (Common Ground Learning and Explanation), is being funded by the Defense Advanced Research Projects Agency (DARPA), using an autonomous Unmanned Aircraft System (UAS) test bed, but it would later be applicable to a variety of autonomous systems.

The idea is that since autonomous systems are becoming more widely used, it would behoove the humans using them to understand how the systems behave based on the information they are provided, Mark Stefik, a PARC research fellow who runs the lab's human-machine collaboration research group, told ITPro.

"Machine learning is becoming increasing important," said Stefik. "As a consequence, if we are building systems that are autonomous, we'd like to know what decisions they will make. There is no established technique to do that today with systems that learn for themselves."

In the field of human psychology, there is an established history about how people form assumptions about things based on their experiences, but since machines aren't human, their behaviors can vary, sometimes with results that can be harmful to humans, said Stefik.

In one moment, an autonomous machine can do something smart or helpful, but then the next moment it can do something that is "completely wonky, which makes things unpredictable," he said. For example, a GPS system seeking the shortest distance between two points could erroneously and catastrophically send a user driving over a cliff or the wrong way onto a one-way street. Being able to delve into those autonomous "thinking" processes to understand them is the key to this research, said Stefik.

The COGLE research will help researchers pursue answers to these issues, he said. "We're insisting that the program be explainable," for the autonomous systems to say why they are doing what they are doing. "Machine learning so far has not really been designed to explain what it is doing."

The researchers involved with the project will essentially act as educators and teachers for the machine learning processes, to improve their operations and make them more usable and even more human-like, said Stefik. "It's a sort of partnership where humans and machines can learn from each other."

That can be accomplished in three ways, he added, including reinforcement at the bottom level, using reasoning patterns like the ones humans use at the cognitive or middle level, and through explanation at the top sense-making level. The research aims to enable people to test, understand, and gain trust in AI systems as they continue to be integrated into our lives in more ways.

The research project is being conducted under DARPA's Explainable Artificial Intelligence (XAI) program, which seeks to create a suite of machine learning techniques that produce explainable models and enable human users to understand, appropriately trust, and effectively manage the emerging generation of artificially intelligent partners.

PARC, which is a Xerox company, is conducting the COGLE work with researchers at Carnegie Mellon University, West Point, the University of Michigan, the University of Edinburgh and the Florida Institute for Human & Machine Cognition. The key idea behind COGLE is to establish common ground between concepts and abstractions used by humans and the capabilities learned by a machine. These learned representations would then be exposed to humans using COGLE's rich sense-making interface, enabling people to understand and predict the behavior of an autonomous system.

A new AI claims it can help remove racism on the web. So I put it to work – ZDNet

Can AI flag truly problematic content?

I tend to believe technology can't solve every problem.

Why, it's not even managed to solve the vast problems caused by technology.

Yet when I received an email headlined: "AI to remove racism," how could I not open it? After all, AI has already removed so many things. Human employment, for example.

The email came on behalf of a company called UserWay. It claims to have a widget that is "the world's most advanced AI-based auto-remediation technology."

Within its paid offering, UserWay now has what it calls an AI-Powered Content Moderator. This, it hopes, will allow companies to ensure their websites identify problematic language -- reflecting racial bias, gender bias, age bias, and disability slurs, for example -- so that they can decide whether to change it or remove it.

As far as UserWay is concerned, this is "the first general, cross-website content moderation tool designed specifically for the greater public."

UserWay says it performed a test on 500,000 websites and found that 52% had examples of racial bias, 24% had examples of gender bias, and 12% featured age bias.

To give you an example of the sorts of words and phrases flagged, these include "whitelist," "blacklist," "black sheep," "chairman," and "mankind," as well as language of the more overtly offensive sort.
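
UserWay has not said how its moderator works internally, but the simplest conceivable version of this kind of flagging is a keyword scan, sketched below. The term lists are just the examples named above, and a real moderator would need the context awareness this approach lacks.

```python
# Rough sketch of keyword-based flagging, the simplest form such a content
# moderator could take. The term lists are only the examples cited above.
FLAGGED_TERMS = {
    "racial bias": ["whitelist", "blacklist", "black sheep"],
    "gender bias": ["chairman", "mankind"],
}

def flag_page(text: str) -> dict:
    """Return each bias category with the flagged terms found in the text."""
    lowered = text.lower()
    hits = {}
    for category, terms in FLAGGED_TERMS.items():
        found = [term for term in terms if term in lowered]
        if found:
            hits[category] = found
    return hits

print(flag_page("The chairman added the IP range to the whitelist."))
# {'racial bias': ['whitelist'], 'gender bias': ['chairman']}
```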

Naturally, I asked UserWay to undertake another test. I gave it the names of some well-known news and business websites and wondered which of these might be great offenders. Or not. At least according to this AI.

I fear that, given our fractious times, your own political antennae may already be sending signals of acceptance or rejection. Please bear with me, as one or two of these results might surprise.

UserWay says it examined a representative sample of 10,000 pages from each of these sites -- ranging from Fox News to The Huffington Post, from The Daily Caller to The New York Times -- and then offered me its artificially intelligent conclusions.

The AI declared that, overall, the Washington Examiner had the most problematic content, followed by The Daily Caller. This wasn't so much because of pages with racial bias, but because of pages with gender bias and racial slurs.

But before you cheer for your side or begin to throw objects, please let me tell you which site -- according to UserWay -- had the most pages including racial bias. It was, in this sample, ESPN.com. Followed by CNBC.com.

And what if I told you that this AI believes ACLU.org has more problematic pages than FoxNews.com?

While you're digesting that, I'll add this: The AI also declared that FoxNews.com has fewer pages with gender bias than do CNN.com and The Washington Post's website.

I have no interest in besmirching any of these sites. At least publicly.

These results may make one or two people wonder, however, whether racism, sexism, and gender bias aren't the exclusive preserve of one political bent or another. It may also make some wonder about the very essence of AI as a content moderator.

A considerable element of such AI is the selection of criteria by which it makes its decisions. That's why the companies that operate the sites have to decide themselves which words and phrases are acceptable and which aren't.

If there's one thing that's sure about AI, it's that human nuance is not its strength. Sometimes it'll identify words and phrases without exactly understanding the context. And, who knows, certain terms that are currently acceptable may not be so positively received in even a few months' time.

When I asked UserWay how it chooses the words and phrases to be flagged, it told me it "curates the terms internally based on our own research."

Which did tend a little toward Facebook-speak.

Talking of which, I asked UserWay to look at Facebook.com too. Oddly, it couldn't produce any results.

UserWay's Founder and CEO Allon Mason told me: "It seems that Facebook is proactively preventing scanners and bots from scanning its site."

I'm taken aback.

Spotify Offers Personalized Artificial Intelligence Experience With The Weeknd – HYPEBEAST

With the help of new artificial intelligence technology, Spotify is providing fans with a highly personalized way to experience The Weeknd's critically acclaimed After Hours album. The microsite experience features a life-like version of The Weeknd, who will have a one-on-one chat with fans.

Upon entering the site, The Weeknd's alter ego will appear on the screen. He'll start out by addressing each fan by name and, based on listening data, share how they've streamed his music over the years. The AI experience then turns into an intimate listening session of After Hours, one that is just between the individual and The Weeknd. Spotify's new "Alone With Me" experience comes after the streaming platform gave fans an exclusive remote listening party and Q&A to celebrate the release of The Weeknd's new album back in March. The "Alone With Me" session gives the title of the album a whole new meaning.

Join The Weeknd in the "Alone With Me" experience on Spotify's website now.

In other music-related news, check out former U.S. President Barack Obama's annual summer playlist.

Futuristic Impacts of AI Over Businesses and Society – Analytics Insight

In the past decade, artificial intelligence (AI) has moved from academic journals into mainstream society. The technology has achieved numerous milestones in the digital transformation of society, including businesses, education, and healthcare. Today, people can do tasks that were not even possible ten years ago.

The proportion of organizations using AI in some form rose from 10 percent in 2016 to 37 percent in 2019, and that figure is extremely likely to rise further in the coming year, according to Gartner's 2019 CIO Agenda survey.

While the breakthroughs in surpassing human ability at human pursuits, such as chess, make headlines, AI has been a standard part of the industrial repertoire since at least the 1980s, when production-rule or expert systems became a standard technology for checking circuit boards and detecting credit card fraud. Similarly, machine-learning (ML) strategies like genetic algorithms have long been used for intractable computational problems, such as scheduling, and neural networks are used not only to model and understand human learning but also for basic industrial control and monitoring.
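
As a toy illustration of that production-rule style, here is a sketch of a hand-written fraud check; the rules and thresholds are invented for the example.

```python
# Toy illustration of the production-rule ("expert system") approach:
# hand-written if-then rules, here for a credit card fraud check.
# All rules and thresholds are invented for illustration.
RULES = [
    ("amount over limit",    lambda t: t["amount"] > 5000),
    ("foreign + late night", lambda t: t["foreign"] and 0 <= t["hour"] < 5),
    ("rapid repeat charge",  lambda t: t["seconds_since_last"] < 30),
]

def assess(transaction: dict) -> list:
    """Return the name of every rule the transaction fires."""
    return [name for name, rule in RULES if rule(transaction)]

txn = {"amount": 6200, "foreign": True, "hour": 3, "seconds_since_last": 400}
print(assess(txn))  # ['amount over limit', 'foreign + late night']
```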

Moreover, AI is at the core of some of the most successful companies in history in terms of market capitalization: Apple, Alphabet, Microsoft, and Amazon. Along with information and communication technology (ICT) more generally, the technology has revolutionized the ease with which people from all over the world can access knowledge, credit, and other benefits of contemporary global society. Such access has helped drive a massive reduction in global inequality and extreme poverty, for example by letting farmers know fair prices and the best crops to plant, and by giving them access to accurate weather predictions.

Following the trends, we can say that there will be big winners and losers as collaborative technologies, robots and artificial intelligence transform the nature of work. Moreover, data expertise will become exponentially more important. Across various organizations, the role of a senior manager in a deeply data-driven world is about to shift, thanks to the AI revolution. It is estimated that information hoarders will slow the pace of their organizations and forsake the power of artificial intelligence while competitors exploit it.

In the future, judgments about consumers and potential consumers will be made instantaneously and many organizations will put cybersecurity on par with other intelligence and defense priorities. Besides, open-source information and artificial intelligence collection will provide opportunities for global technological parity and soon predictive analytics and artificial intelligence could play an even more fundamental role in content creation.

With the growth of AI-enabled technologies, societies will face challenges in ensuring the technology benefits humanity rather than destroying it or intruding on the human rights of privacy and free access to information. Also, the surging capabilities of robots and artificial intelligence will see a range of current jobs supplanted; some predict that professional roles such as those of doctors, lawyers, and accountants could be replaced by artificial intelligence by the year 2025.

Moreover, low-skill workers will be reallocated to tasks that are not susceptible to computerization. Many of these risks will arise out of human activity in certain technological developments: synthetic biology, nanotechnology, and artificial intelligence.

These young immigrant brothers are teaching A.I. to high-schoolers for free: We want to give kids ‘a lucky break’ – CNBC

Since 20-something brothers Haroon and Hamza Choudery immigrated to Brooklyn, New York, from rural Pakistan in 1998, their lives have been changed by technology in both amazing and devastating ways.

Technology provides a nice living for the brothers: Haroon, 26, has a well-paying AI job at a healthcare company, and Hamza, 24, works at WeWork.

But their uncle has seen the other side.

The Chouderys' uncle used his life savings to finance a New York City taxi medallion in 2013 (which, at the time, cost as much as $1.3 million). But thanks to technology, the ride-share boom left the medallion worth just 20% of its original value, Haroon says.

"As you can imagine, starting from scratch after over two decades of working as a taxi driver had a devastating effect on the trajectory of his life."

This whiplash, technology launching their careers while devastating their elder, also had an effect on Haroon and Hamza. Inspired in part by the experience, the brothers co-founded a nonprofit called AI for Anyone.

The idea behind the AI literacy organization is to use "our privilege to help those that are less privileged avoid getting into situations where their livelihoods are destroyed, whether it be through automation replacing their jobs or whether it be through automation being designed to accommodate the needs of more affluent and well-off people and not really taking the underrepresented populations into account when they're making their decisions," Haroon says.

Both in the classroom and online, AI for Anyone teaches students the basics of artificial intelligence, increases awareness of AI's role in society and shows how the technology can be used.

"We had support that really gave us a lot of lucky breaks," Haroon says, referring to the opportunities they were afforded after coming to the US. "We want to ... help give [kids] a lucky break in the form of some knowledge that may help them make a pivot in their lives," he says.

It wasn't just about lucky breaks for Haroon and Hamza. There was a lot of hard work too. But it is true that the brothers have lived some version of the American Dream.

After coming to the US when Haroon was 6 and Hamza was 4, their family lived with nine relatives in a two-bedroom apartment in Brooklyn, and later on a poultry farm on the Eastern Shore of Maryland. Their father worked any number of jobs, from baker to taxi driver to tow truck driver.

Haroon and Hamza Choudery with their father, Shabbir Choudery, and their sister, Rahat Choudery.

Photo courtesy A.I. for Anyone

Haroon (left) and Hamza Choudery in Pakistan.

Photo courtesy A.I. for Anyone

In 2011, Haroon won a Gates Millennium scholarship, which gave him a full ride (including tuition, housing, food and transportation) to both Penn State for undergrad and to the University of California, Berkeley, where he got his master's in information and data science. After college, Haroon did data science work for Mark Cuban Companies and was a technology consultant at Deloitte Consulting. He is now a data scientist at Komodo Health.

Hamza graduated magna cum laude from the University of Maryland. He previously worked at Facebook, and now works in business operations at WeWork.

Today, living in New York City, the brothers could easily spend a couple of dollars on a cup of coffee, the same amount their family had to live on for a month in Pakistan. Living in both realities, Hamza says, "has contextualized the poverty and it has also contextualized the success."

And it has been the brothers' "call to action" to launch their education initiative.

In 2017, Haroon, Hamza and their friend Mac McMahon started AI for Anyone with $5,000 of their own money.

The idea was to educate those who might be at risk of having their livelihood affected by artificial intelligence and arm them for the future.

The organization's team goes to schools to present workshops that teach kids who might not learn about AI otherwise, because as important as it is to the future of work, it is not part of a regular high school curriculum.

So far, AI for Anyone has taught approximately 50 workshops, reaching over 55,000 people, according to the Chouderys. It also has a monthly newsletter, All About AI, with over 33,000 subscribers, as well as a new podcast, AI For You. (One episode features an interview with Hod Lipson, a renowned professor in the AI space.)

The non-profit is now funded by corporate sponsorships from Hypergiant and Komodo Health, so the workshops are free to students and teachers.

Haroon Choudery, a co-founder of A.I. for Anyone, teaching a workshop.

Photo courtesy A.I. for Anyone

Even the pandemic has not stopped AI for Anyone, as the team has taken their seminars virtual.

The first virtual workshop in April was a partnership with the Mark Cuban Foundation, the billionaire tech entrepreneur's philanthropy, via a connection Haroon made through the work he did at Mark Cuban Companies.

"When COVID-19 hit, Haroon and I reconnected and realized we were both thinking about ways to teach AI in a bite-sized way to kids stuck at home," saysRyan Kline, an associate at Mark Cuban Companies. "AI for Anyone is doing great work in fundamental AI education, reaching wide audiences of young students."

They collaborated to digitize the AI for Anyone workshop. Then if students want to learn more, they can be funneled into the Mark Cuban Foundation's Intro to AI Bootcamps, a collaboration between the Mark Cuban Foundation, Microsoft and Walmart, which was announced in 2019.

Cuban posted about the workshop on LinkedIn.

"We see AI for Anyone as providing a spark for hundreds of students to advance their AI learning, and hope that many AI for Anyone graduates will apply to participate in the Mark Cuban Foundation Bootcamps as we expand nationwide," Kline says.

AI for Anyone is still growing, but the purpose is clear for the founders.

"A.I. for Anyone works [as] one of the most appropriate and most fitting ways for us to use our privilege to give back to those that are less privileged than us," Haroon says.

How artificial intelligence outsmarted the superbugs – The Guardian

One of the seminal texts for anyone interested in technology and society is Melvin Kranzberg's Six Laws of Technology, the first of which says that technology is neither good nor bad; nor is it neutral. By this, Kranzberg meant that technology's interaction with society is such that technical developments frequently have environmental, social and human consequences that go far beyond the immediate purposes of the technical devices and practices themselves, and that the same technology can have quite different results when introduced into different contexts or under different circumstances.

The saloon-bar version of this is that technology is both good and bad; it all depends on how it's used, a tactic that tech evangelists regularly deploy as a way of stopping the conversation. So a better way of using Kranzberg's law is to ask a simple Latin question: cui bono? Who benefits from any proposed or hyped technology? And, by implication, who loses?

With any general-purpose technology, which is what the internet has become, the answer is going to be complicated: various groups, societies, sectors, maybe even continents win and lose, so in the end the question comes down to: who benefits most? For the internet as a whole, it's too early to say. But when we focus on a particular digital technology, then things become a bit clearer.

A case in point is the technology known as machine learning, a manifestation of artificial intelligence that is the tech obsession de nos jours. It's really a combination of algorithms that are trained on big data, i.e. huge datasets. In principle, anyone with the computational skills to use freely available software tools such as TensorFlow could do machine learning. But in practice they can't, because they don't have access to the massive data needed to train their algorithms.
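
TensorFlow is the tool the piece names, so a minimal example of what "training on data" looks like with it may help; the data here is synthetic noise, which underlines the point that the tooling is free while valuable training data is not.

```python
# Minimal TensorFlow example of the train-on-data loop alluded to above.
# The data is synthetic noise; the tooling is freely available, but data
# of any real value generally is not.
import numpy as np
import tensorflow as tf

x = np.random.rand(1000, 8).astype("float32")   # 1,000 samples, 8 features
y = (x.sum(axis=1) > 4.0).astype("float32")     # toy binary label

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(8,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(x, y, epochs=5, batch_size=32, verbose=0)
print(model.evaluate(x, y, verbose=0))          # [loss, accuracy]
```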

This means the outfits where most of the leading machine-learning research is being done are a small number of tech giants, especially Google, Facebook and Amazon, which have accumulated colossal silos of behavioural data over the last two decades. Since they have come to dominate the technology, the Kranzberg question (who benefits?) is easy to answer: they do. Machine learning now drives everything in those businesses: personalisation of services, recommendations, precisely targeted advertising, behavioural prediction... For them, AI (by which they mostly mean machine learning) is everywhere. And it is making them the most profitable enterprises in the history of capitalism.

As a consequence, a powerful technology with great potential for good is at the moment deployed mainly for privatised gain. In the process, it has been characterised by unregulated premature deployment, algorithmic bias, reinforcing inequality, undermining democratic processes and boosting covert surveillance to toxic levels. That it doesn't have to be like this was vividly demonstrated last week with a report in the leading biological journal Cell of an extraordinary project, which harnessed machine learning in the public (as compared to the private) interest. The researchers used the technology to tackle the problem of bacterial resistance to conventional antibiotics, a problem that is rising dramatically worldwide, with predictions that, without a solution, resistant infections could kill 10 million people a year by 2050.

The team of MIT and Harvard researchers built a neural network (an algorithm inspired by the brain's architecture) and trained it to spot molecules that inhibit the growth of the Escherichia coli bacterium, using a dataset of 2,335 molecules for which the antibacterial activity was known, including a library of 300 existing approved antibiotics and 800 natural products from plant, animal and microbial sources. They then asked the network to predict which molecules would be effective against E coli but looked different from conventional antibiotics. This produced a hundred candidates for physical testing and led to one (which they named halicin, after the HAL 9000 computer from 2001: A Space Odyssey) that was active against a wide spectrum of pathogens, notably including two that are totally resistant to current antibiotics and are therefore a looming nightmare for hospitals worldwide.
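
The study itself used a deep neural network over molecular structures; as a much simpler stand-in that shows the same screen-then-rank workflow, here is a sketch with placeholder feature vectors and an off-the-shelf classifier.

```python
# Greatly simplified stand-in for the screen-then-rank workflow described
# above. The actual study trained a neural network on molecular structures;
# this sketch substitutes random "fingerprint" vectors and a basic classifier
# purely to show the shape of the pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
train_x = rng.random((2335, 128))       # 2,335 known molecules (placeholders)
train_y = rng.integers(0, 2, 2335)      # 1 = inhibits E. coli growth

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(train_x, train_y)

library = rng.random((6000, 128))       # unscreened candidate molecules
scores = model.predict_proba(library)[:, 1]

# Rank the library and send the top 100 candidates to physical testing.
top_candidates = np.argsort(scores)[::-1][:100]
print(top_candidates[:10])
```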

There are a number of other examples of machine learning for public good rather than private gain. One thinks, for example, of the collaboration between Google DeepMind and Moorfields Eye Hospital. But this new example is the most spectacular to date because it goes beyond augmenting human screening capabilities to aiding the process of discovery. So while the main beneficiaries of machine learning for, say, a toxic technology like facial recognition are mostly authoritarian political regimes and a range of untrustworthy or unsavoury private companies, the beneficiaries of the technology as an aid to scientific discovery could be humanity as a species. The technology, in other words, is both good and bad. Kranzberg's first law rules OK.

Every cloud: Zeynep Tufekci has written a perceptive essay for the Atlantic about how the coronavirus revealed authoritarianism's fatal flaw.

EU ideas explained: Politico writers Laura Kayali, Melissa Heikkilä and Janosch Delcker have delivered a shrewd analysis of the underlying strategy behind recent policy documents from the EU dealing with the digital future.

On the nature of loss: Jill Lepore has written a knockout piece for the New Yorker under the heading "The lingering of loss", on friendship, grief and remembrance. One of the best things I've read in years.

Google to ramp up AI efforts to ID extremism on YouTube – TechCrunch

Last week Facebook solicited help with what it dubbed "hard questions", including how it should tackle the spread of terrorism propaganda on its platform.

Yesterday Google followed suit with its own public pronouncement, via an op-ed in the FT newspaper, explaining how it's ramping up measures to tackle extremist content.

Both companies have been coming under increasing political pressure, in Europe especially, to do more to quash extremist content, with politicians, including those in the UK and Germany, pointing the finger of blame at platforms such as YouTube for hosting hate speech and extremist content.

Europe has suffered a spate of terror attacks in recent years, with four in the UK alone since March. And governments in the UK and France are currently considering whether to introduce a new liability for tech platforms that fail to promptly remove terrorist content, arguing that terrorists are being radicalized with the help of such content.

Earlier this month the UK's prime minister also called for international agreements between allied, democratic governments to regulate cyberspace to prevent the spread of extremism and terrorist planning.

While in Germany a proposal that includes big fines for social media firms that fail to take down hate speech has already gained government backing.

Besides the threat of fines being cast into law, there's an additional commercial incentive for Google, after YouTube faced an advertiser backlash earlier this year related to ads being displayed alongside extremist content, with several companies pulling their ads from the platform.

Google subsequently updated the platform's guidelines to stop ads being served against controversial content, including videos containing hateful content and incendiary and demeaning content, so their makers could no longer monetize the content via Google's ad network. Although the company still needs to be able to identify such content for this measure to be successful.

Rather than requesting ideas for combating the spread of extremist content, as Facebook did last week, Google is simply stating what its plan of action is, detailing four additional steps it says it's going to take, and conceding that more action is needed to limit the spread of violent extremism.

"While we and others have worked for years to identify and remove content that violates our policies, the uncomfortable truth is that we, as an industry, must acknowledge that more needs to be done. Now," writes Kent Walker, Google's general counsel, in a blog post.

The four additional steps Walker lists are:

Despite increasing political pressure over extremism and the attendant bad PR (not to mention the threat of big fines), Google is evidently hoping to retain its torch-bearing stance as a supporter of free speech by continuing to host controversial hate speech on its platform, just in a way that means it can't be directly accused of providing violent individuals with a revenue stream. (Assuming it's able to correctly identify all the problem content, of course.)

Whether this compromise will please either side of the "remove hate speech" vs. "retain free speech" debate remains to be seen. The risk is it will please neither demographic.

The success of the approach will also stand or fall on how quickly and accurately Google is able to identify content deemed a problem and policing user-generated content at such scale is a very hard problem.

It's not clear exactly how many thousands of content reviewers Google employs at this point; we've asked and will update this post with any response.

Facebook recently added an additional 3,000 to its headcount, bringing its total number of reviewers to 7,500. CEO Mark Zuckerberg also wants to apply AI to the content identification issue, but has previously said it's unlikely to be able to do this successfully for many years.

Touching on what Google has been doing already to tackle extremist content, i.e. prior to these additional measures, Walker writes: "We have thousands of people around the world who review and counter abuse of our platforms. Our engineers have developed technology to prevent re-uploads of known terrorist content using image-matching technology. We have invested in systems that use content-based signals to help identify new videos for removal. And we have developed partnerships with expert groups, counter-extremism agencies, and the other technology companies to help inform and strengthen our efforts."

Cleverbot.com – a clever bot – speak to an AI with some …

About Cleverbot

The site Cleverbot.com started in 2006, but the AI was 'born' in 1988, when Rollo Carpenter saw how to make his machine learn. It has been learning ever since!

Things you say to Cleverbot today may influence what it says to others in future. The program chooses how to respond to you fuzzily, and contextually, the whole of your conversation being compared to the millions that have taken place before.
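
Cleverbot's internals are unpublished, but the description above, choosing a reply by comparing the current conversation to millions of past ones, is broadly a retrieval approach. A minimal sketch, with invented exchanges and TF-IDF similarity standing in for whatever matching the real system does:

```python
# Sketch of the retrieval idea described above: pick a reply by finding the
# most similar past exchange. The logged exchanges are invented, and TF-IDF
# similarity is only one simple way to compare text "fuzzily."
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

past_prompts = ["hello there", "how are you today", "do you like music"]
past_replies = ["hi! how are you?", "fine, thanks for asking", "I love music"]

vectorizer = TfidfVectorizer().fit(past_prompts)

def respond(user_input: str) -> str:
    sims = cosine_similarity(
        vectorizer.transform([user_input]),
        vectorizer.transform(past_prompts),
    )[0]
    return past_replies[sims.argmax()]   # reply that followed the closest match

print(respond("hey, how are you doing"))  # -> "fine, thanks for asking"
```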

Many people say there is no bot - that it is connecting people together, live. The AI can seem human because it says things real people do say, but it is always software, imitating people.

You'll have seen scissors on Cleverbot. Using them you can share snippets of chats with friends on social networks. Now you can share snips at Cleverbot.com too!

When you sign in to Cleverbot on this blue bar, you can:

Tweak how the AI responds - 3 different ways!
Keep a history of multiple conversations
Switch between conversations
Return to a conversation on any machine
Publish snippets - snips! - for the world to see
Find and follow friends
Be followed yourself!
Rate snips, and see the funniest of them
Reply to snips posted by others
Vote on replies, from awful to great!
Choose not to show the scissors

MIT’s AI streaming software aims to stop those video stutters – TechCrunch

MIT's Computer Science and Artificial Intelligence Lab (CSAIL) wants to ensure your streaming video experience stays smooth. A research team led by MIT professor Mohammad Alizadeh has developed an artificial intelligence (dubbed Pensieve) that can select the best algorithms for ensuring video streams both without interruption and at the best possible playback quality.

The method improves upon existing tech, including the adaptive bitrate (ABR) method used by YouTube that throttles back quality to keep videos playing, albeit with pixelation and other artifacts. The AI can select different algorithms depending on what kind of network conditions a device is experiencing, cutting down on the downsides associated with any one method.

During experimentation, the CSAIL research team found that video streamed with between 10 and 30 percent less rebuffering, and with 10 to 25 percent better quality. Those gains would add up to a significantly improved experience for most video viewers, especially over a long period.

The difference between CSAIL's Pensieve approach and traditional methods is mainly its use of a neural network instead of a strictly algorithmic approach. The neural net learns how to optimize through a reward system that incentivizes smoother video playback, rather than following defined rules about which algorithmic techniques to use when buffering video.

Researchers say the system is also potentially tweakable on the user end, depending on what they want to prioritize in playback: You could, for instance, set Pensieve to optimize for playback quality, or conversely, for playback speed, or even for conservation of data.
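
The reward that trains such a system trades playback quality off against stalls and abrupt quality switches. Here is a sketch of that shape, with the weights as tunable assumptions, which is also one way the user-end priorities just described could be expressed:

```python
# Sketch of the kind of reward that trains a learning-based bitrate
# controller: reward quality, penalize stalls and abrupt quality switches.
# The weights are tunable assumptions, not Pensieve's published settings.
def reward(bitrate_kbps: float, rebuffer_sec: float, prev_bitrate_kbps: float,
           stall_weight: float = 4.3, smooth_weight: float = 1.0) -> float:
    quality = bitrate_kbps / 1000.0                 # value of this chunk
    stall = stall_weight * rebuffer_sec             # pain of freezing
    churn = smooth_weight * abs(bitrate_kbps - prev_bitrate_kbps) / 1000.0
    return quality - stall - churn

# A smooth high-quality chunk scores well; a stall is heavily punished.
print(reward(2850, 0.0, 2850))   #  2.85
print(reward(2850, 1.5, 1200))   # -5.25
```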

The team is presenting Pensieve and releasing the project code as open source at SIGCOMM next week in LA, and they expect that when trained on a larger data set, it could provide even greater improvements in performance and quality. They're also now going to test applying it to VR video, since the high bitrates required for a quality experience there are well suited to the kinds of improvements Pensieve can offer.

Verizon and IBM Will Partner on 5G and AI – The Motley Fool

Verizon (NYSE:VZ) and IBM (NYSE:IBM) announced on Wednesday an extensive new partnership that would focus on a host of forward-looking technology, including 5G, edge computing, and artificial intelligence (AI).

The companies plan to use Verizon's high-speed, low-latency wireless 5G network, multi-access edge computing (MEC) capabilities, and Internet of Things (IoT) devices and sensors, and combine them with IBM's expertise in AI, hybrid multicloud, edge computing, asset management, and connected operations.

Image source: Getty Images.

By joining forces and leveraging each business's unique expertise, Verizon and IBM will initially offer mobile asset tracking and management solutions designed to help enterprises "improve operations, optimize production quality, and help clients enhance worker safety."

IBM and Verizon will also work to develop combined solutions for 5G and edge computing such as near-real-time cognitive automation for industrial applications. The combined solutions could help clients "detect, locate, diagnose and respond to system anomalies, monitor asset health and help predict failures in near real-time."

Many computation-intensive tasks happen at a data center, which can be thousands of miles from where the information is generated. Edge computing takes the processing of data from the cloud and moves it closer to the source, or the "edge."

The companies plan to use Verizon's lightning-fast 5G network to increase the number of IoT devices that can be used in a particular geographic area. This will give organizations the ability to interact with those devices in near real time by bringing the necessary computing power within close proximity of the devices.
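
As an illustration of that pattern, here is a sketch of edge-side processing in which sensor readings are handled next to the device and only anomalies are forwarded upstream; the thresholds and uplink stub are hypothetical.

```python
# Illustrative sketch of the edge computing pattern described above: process
# readings near the source and forward only what matters, instead of shipping
# every reading to a distant data center. Thresholds and the uplink stub
# are hypothetical.
SENSOR_LIMIT_C = 85.0

def send_to_cloud(event: dict) -> None:
    print("forwarding anomaly:", event)  # stand-in for an uplink call

def handle_reading(device_id: str, temp_c: float) -> None:
    # Runs on the edge node, milliseconds from the device, so a local
    # response does not wait on a round trip to a distant data center.
    if temp_c > SENSOR_LIMIT_C:
        send_to_cloud({"device": device_id, "temp_c": temp_c})

for device_id, temp_c in [("press-01", 72.4), ("press-02", 91.8)]:
    handle_reading(device_id, temp_c)
```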

Verizon and IBM hope to develop "innovative new applications" that could include remote control robotics, near-real-time cognitive video analysis, and industrial plant automation.

Frankenstein fears hang over AI – Financial Times

The technology industry is facing up to the world-shaking ramifications of artificial intelligence. There is now a recognition that AI will disrupt how societies operate, from education and employment to how data will be collected about people.

Machine learning, a form of advanced pattern recognition that enables machines to make judgments by analysing large volumes of data, could greatly supplement human thought. But such soaring capabilities have stirred almost Frankenstein-like fears about whether developers can control their creations.

Failures of autonomous systems, like the death last year of a US motorist in a partially self-driving car from Tesla Motors, have led to a focus on safety, says Stuart Russell, a professor of computer science and AI expert at the University of California, Berkeley. "That kind of event can set back the industry a long way, so there is a very straightforward economic self-interest here," he says.

Alongside immigration and globalisation, fears of AI-driven automation are fuelling public anxiety about inequality and job security. The election of Donald Trump as US president and the UK's vote to leave the EU were partly driven by such concerns. While some politicians claim protectionist policies will help workers, many industry experts say most job losses are caused by technological change, largely automation.

Global elites, those with high income and educational levels who live in capital cities, are considerably more enthusiastic about innovation than the general population, the FT/Qualcomm Essential Future survey found. This gap, unless addressed, will continue to cause political friction.

Vivek Wadhwa, a US-based entrepreneur and academic who writes about ethics and technology, thinks the new wave of automation has geopolitical implications: "Tech companies must accept responsibility for what they're creating and work with users and policymakers to mitigate the risks and negative impacts. They must have their people spend as much time thinking about what could go wrong as they do hyping products."

The industry is bracing itself for a backlash. Advances in AI and robotics have brought automation to areas of white-collar work, such as legal paperwork and analysing financial data. Some 45 per cent of US employees' work time is spent on tasks that could be automated with existing technologies, a study by McKinsey says.

Industry and academic initiatives have been set up to ensure AI works to help people. These include the Partnership on AI to Benefit People and Society, established by companies including IBM, and a $27m effort involving Harvard and the Massachusetts Institute of Technology. Groups like OpenAI, backed by Elon Musk and Google, have made progress, says Prof Russell: "We've seen papers...that address the technical problem of safety."

There are echoes of past efforts to deal with the complications of a new technology. Satya Nadella, chief executive of Microsoft, compares it to 15 years ago, when Bill Gates rallied his company's developers to combat computer malware. His "trustworthy computing" initiative was a watershed moment. In an interview with the FT, Mr Nadella said he hoped to do something similar to ensure AI works to benefit humans.

AI presents some thorny problems, however. Machine learning systems derive insights from large amounts of data. Eric Horvitz, a Microsoft executive, told a US Senate hearing late last year that these data sets may themselves be skewed. "Many of our data sets have been collected...with assumptions we may not deeply understand, and we don't want our machine-learned applications...to be amplifying cultural biases," he said.

Last year, an investigation by news organisation ProPublica found that an algorithm used by the US justice system to determine whether criminal defendants were likely to reoffend, had a racial bias. Black defendants with a low risk of reoffending were more likely than white ones to be labelled as high risk.
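
The disparity ProPublica reported can be framed as a gap in false-positive rates between groups, which is straightforward to check once outcomes are known. A minimal sketch over invented records:

```python
# Minimal sketch of the disparity check behind findings like ProPublica's:
# compare false-positive rates (labelled high risk, did not reoffend)
# across groups. The records are invented for illustration.
records = [
    # (group, predicted_high_risk, reoffended)
    ("A", True,  False), ("A", True,  False), ("A", False, False), ("A", True, True),
    ("B", False, False), ("B", True,  False), ("B", False, False), ("B", True, True),
]

def false_positive_rate(group: str) -> float:
    non_reoffenders = [r for r in records if r[0] == group and not r[2]]
    flagged = [r for r in non_reoffenders if r[1]]
    return len(flagged) / len(non_reoffenders)

for group in ("A", "B"):
    print(group, f"{false_positive_rate(group):.2f}")  # A 0.67, B 0.33
```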

Greater transparency is one way forward, for example making it clear what information AI systems have used. But the thought processes of deep learning systems are not easy to audit. Mr Horvitz says such systems are hard for humans to understand: "We need to understand how to justify [their] decisions and how the thinking is done."

As AI comes to influence more government and business decisions, the ramifications will be widespread. "How do we make sure the machines we train don't perpetuate and amplify the same human biases that plague society?" asks Joi Ito, director of MIT's Media Lab.

Executives like Mr Nadella believe a mixture of government oversight (including, by implication, the regulation of algorithms) and industry action will be the answer. He plans to create an ethics board at Microsoft to deal with any difficult questions thrown up by AI.

He says: "I want...an ethics board that says, 'If we are going to use AI in the context of anything that is doing prediction, that can actually have societal impact...that it doesn't come with some bias that's built in.'"

Making sure AI systems benefit humans without unintended consequences is difficult. "Human society is incapable of defining what it wants," says Prof Russell, so programming machines to maximise the happiness of the greatest number of people is problematic.

This is AI's so-called control problem: the risk that smart machines will single-mindedly pursue arbitrary goals even when they are undesirable. "The machine has to allow for uncertainty about what it is the human really wants," says Prof Russell.

Ethics committees will not resolve concerns about AI taking jobs, however. Fears of a backlash were apparent at this year's World Economic Forum in Davos, as executives agonised over how to present AI. The common response was to say machines will make many jobs more fulfilling, though other jobs could be replaced.

The profits from productivity gains for tech companies and their customers could be huge. How those should be distributed will become part of the AI debate. "Whenever someone cuts cost, that means, hopefully, a surplus is being created," says Mr Nadella. "You can always tax surplus; you can always make sure that surplus gets distributed differently."

Additional reporting by Adam Jezard

ai – Wiktionary

The page collects entries for "ai" across many languages. In English, the noun ai (plural ais or ai), first attested 1685–95, comes from Brazilian Portuguese aí, from Old Tupi, and names the three-toed sloth; a separate informal sense is a contraction of "aight."

In Albanian, ai is the masculine third-person singular pronoun (accusative atë, dative and ablative atij), from Proto-Albanian *a-ei, a compound of the proclitic particle a and ei; compare Latin is, German er, Lithuanian jis, and Sanskrit ayám.

In Romanian, ai is a second-person singular form of avea ("to have"), used among other things as a modal auxiliary with infinitives to form conditional tenses; as a noun, ai ("garlic") derives from Latin allium.

In Portuguese, ai! is an interjection of pain: "Ai! Pisei no prego!" ("Ouch! I stepped on the nail!"); the noun ai ("sloth") is borrowed from Old Tupi ai.

Many shorter entries follow the same pattern in other languages: reflexes of Proto-Malayo-Polynesian *wahi ("water") in several Austronesian and Oceanic languages; possessive and personal pronouns in Chuukese and Hiri Motu; a first-person pronoun from Proto-Yeniseian; and respellings of the English words "eye," "I," "aye," and "high" in English-based creoles such as Tok Pisin.

Microsoft Pix Camera imitates Prisma with its AI-powered filters – Engadget

These artsy filters may sound a lot like what the standalone app Prisma does, but Microsoft's implementation was developed by Microsoft's Asia research lab in collaboration with Skype. According to a company blog post, Pix Styles use texture, pattern, and tones learned by deep neural networks from famous works of art, instead of altering the photo uniformly like other similar apps. Microsoft researcher Josh Weisberg told Engadget that the app uses two different techniques, run in tandem to save time, to produce these effects. "Our approach lends itself to styles based on source images (that are used to train the network) that are not paintings, such as the fire effect," he said in an email.
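
Microsoft has not published Pix's networks, but the description of texture, pattern, and tones learned from artworks matches the widely used neural style transfer family, in which style is captured as correlations (Gram matrices) between a network's convolutional feature maps. A minimal PyTorch sketch of that core computation:

```python
# Minimal sketch of the core computation in neural style transfer, the
# published technique family that the Pix Styles description resembles.
# Style is captured as a Gram matrix of conv-layer feature correlations;
# a loss pulls the output image's correlations toward the artwork's.
# This is illustrative only, not Microsoft's actual implementation.
import torch
import torch.nn.functional as F

def gram_matrix(features: torch.Tensor) -> torch.Tensor:
    # features: (channels, height, width) from one conv layer
    c, h, w = features.shape
    flat = features.view(c, h * w)
    return (flat @ flat.T) / (c * h * w)  # channel-to-channel correlations

def style_loss(output_feats: torch.Tensor, art_feats: torch.Tensor) -> torch.Tensor:
    return F.mse_loss(gram_matrix(output_feats), gram_matrix(art_feats))

# In a full pipeline these features come from a pretrained CNN (e.g. VGG).
out = torch.rand(64, 32, 32, requires_grad=True)
art = torch.rand(64, 32, 32)
loss = style_loss(out, art)
loss.backward()  # gradients flow back toward the stylized image
print(loss.item())
```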

The initial 11 Styles filters are named Glass, Petals, Bacau, Charcoal, Heart, Fire, Honolulu, Zing, Pop, Glitter and Ripples -- more will be added in the coming weeks. Pix Paintings creates a timeline of your picture as if it were being painted in real time, giving you a short video of its creation. The Paintings feature is accessed with a button that shows up when you apply a new Style, and you can share or save the resulting short video (or GIF) it makes, too.

"These are meant to be fun features," said Microsoft's Josh Weisberg in a blog post. "In the past, a lot of our efforts were focused on using AI and deep learning to capture better moments and better image quality. This is more about fun. I want to do something cool and artistic with my photos."

All this AI magic works right on your iPhone or iPad and won't access the cloud, saving your data plan and decreasing your wait time. You can still use Pix's other features with the new styles, adding frames and cropping your still photos. Microsoft Pix Camera is available now in the App Store and as a free update for existing owners.

Read more from the original source:

Microsoft Pix Camera imitates Prisma with its AI-powered filters - Engadget

Experts Baffled by Why NASA’s “Red Crew” Wear Blue Shirts

Red Crew, Blue Crew

Had it not been for the heroics of three members of NASA's specialized "Red Crew," the agency's absolutely massive — and incredibly expensive — Space Launch System (SLS) likely wouldn't have made it off the ground this week.

During the countdown, the painfully delayed Mega Moon Rocket sprang a hydrogen leak. The Red Crew ventured into the dangerous, half-loaded launch zone to fix it live. Incredible work indeed, although in spite of their heroics, keen-eyed observers did notice something strange about the so-called Red Crew: they, uh, don't wear red?

"How is it we spent $20B+ on this rocket," tweeted Chris Combs, a professor at the University of Texas San Antonio, "but we couldn't manage to get some RED SHIRTS for the Red Team."

Alas, the rumor is true. Red shirts seemed to be out of the budget this year — perhaps due to the ungodly amount of money spent on the rocket that these guys could have died while fixing — with the Red Crew-mates donning dark blue shirts instead. Per the NYT, they also drove white cars, which feels like an additional miss.

A leftover from last night that’s still bothering me:

how is it we spent $20B+ on this rocket but we couldn’t manage to get some RED SHIRTS for the Red Team pic.twitter.com/FO10Y6mg3H

— Chris Combs (@DrChrisCombs) November 16, 2022

Packing Nuts

For their part, the Red Crew didn't seem to care all that much, at least not in the moment. They were very much focused on needing to "torque" the "packing nuts," as they reportedly said during a post-launch interview on NASA TV. In other words, they were busy with your casual rocket science. And adrenaline, because, uh, risk of death.

"All I can say is we were very excited," Red Crew member Trent Annis told NASA TV, according to the NYT. "I was ready to get up there and go."

"We were very focused on what was happening up there," he added. "It's creaking, it's making venting noises, it's pretty scary."

In any case, shoutout to the Red Crew. The Artemis I liftoff is historic, and wouldn't have happened if they hadn't risked it all. They deserve a bonus, and at the very least? Some fresh new shirts.

READ MORE: When NASA'S moon rocket sprang a fuel leak, the launch team called in the 'red crew.' [The New York Times]

More on the Artemis I launch: Giant Nasa Rocket Blasts off Toward the Moon

The post Experts Baffled by Why NASA’s “Red Crew” Wear Blue Shirts appeared first on Futurism.

Read the original here:

Experts Baffled by Why NASA’s “Red Crew” Wear Blue Shirts

Former Facebook Exec Says Zuckerberg Has Surrounded Himself With Sycophants

Conviction is easy if you're surrounded by a bunch of yes men — and Mark Zuckerberg just might be. With $15 billion down the line, that may not bode well.

In just about a year, Facebook-turned-Meta CEO Mark Zuckerberg's metaverse vision has cost his company upwards of $15 billion, cratering its value and — at least in part — triggering mass layoffs. That's a high price tag, especially when the Facebook creator has shockingly little to show for it, in both actual technology and public interest.

Indeed, it seems that every time Zuckerberg excitedly explains what his currently-legless metaverse will one day hold, he's met with crickets — and a fair share of ridicule — in the town square. Most everyone ends up looking around and asking the same question: who could this possibly be for, other than Zucko himself?

That question, however, doesn't really seem to matter to the swashzuckling CEO, who's either convinced that the public wants and needs his metaverse just as much as he does, or is simply committed to the belief that one day people will finally get it. After all, he's bet his company on this thing, and he needs the public to engage if it's to stay financially viable long-term.

And sure, points for conviction. But conviction is easy if you're surrounded by a bunch of yes men — which, according to Vanity Fair, the founder unfortunately is. And with $15 billion down the line, that may not bode well for the Silicon Valley giant.

"The problem now is that Mark has surrounded himself with sycophants, and for some reason he's fallen for their vision of the future, which no one else is interested in," one former Facebook exec told Vanity Fair. "In a previous era, someone would have been able to reason with Mark about the company's direction, but that is no longer the case."

Given that previous reports have revealed that some Meta employees have taken to marking metaverse documents with the label "MMA" — "Make Mark Happy" — the revelation that he's limited his close circle to people who only agree with him isn't all that shocking. He wants the metaverse, he wants it bad, and he's put a mind-boggling amount of social and financial capital into his AR-driven dream.

And while the majority of his many thousands of employees might disagree with him — Vanity Fair reports that current and former metamates have written things like "the metaverse will be our slow death" and "Mark Zuckerberg will single-handedly kill a company with the metaverse" on the Silicon Valley-loved Blind app — it's not exactly easy, or maybe even possible, to admit that you've made a dire miscalculation this financially far down the road.

And if you just keep a close circle of people who just agree with you, you may not really have to confront that potential for failure. At least not for a while.

The truth is that Zuckerberg successfully created a thing that has impacted nearly every single person on this Earth. Few people can say that. And while it can be argued that the thing he built has, at its best, created some real avenues for connection, that same creation also seems to have led to his own isolation, in life and at work.

How ironic it is that he's marketed his metaverse on that same promise of connection, only to become more disconnected than ever.

READ MORE: "Mark Has Surrounded Himself with Sycophants": Zuckerberg's Big Bet on the Metaverse Is Backfiring [Vanity Fair]

More on Meta's value: Stock Analyst Cries on TV Because He Recommended Facebook Stock

The post Former Facebook Exec Says Zuckerberg Has Surrounded Himself With Sycophants appeared first on Futurism.

Go here to read the rest:

Former Facebook Exec Says Zuckerberg Has Surrounded Himself With Sycophants

Celebrities’ Bored Apes Are Hilariously Worthless Now

The value of Bored Ape Yacht Club NFTs has absolutely plummeted, leaving celebrities with six-figure losses in a perhaps predictable conclusion.

Floored Apes

The value of Bored Ape Yacht Club NFTs has absolutely plummeted, leaving celebrities with six-figure losses in a perhaps predictable conclusion to a bewildering trend.

Earlier this year, for instance, pop star Justin Bieber bought an Ape for a whopping $1.3 million. Now that the NFT economy has essentially collapsed in on itself, as Decrypt points out, it's worth a measly $69,000.

Demand Media

NFTs, which represent exclusive ownership rights to digital assets — but usually, underwhelmingly, just JPGs and GIFs — have cratered in value, dragged down by the ongoing crypto crisis and vanishing buyer appetite.

Sales volume of the blockchain knickknacks has also bottomed out. NFT sales declined for six straight months this year, according to CryptoSlam.

According to NFT Price Floor, the value of the cheapest available Bored Ape dipped to just 48 ETH, well below $60,000, this week. So far in November, the floor price has fallen 33 percent.
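For readers keeping score in dollars, the conversion is straightforward multiplication. The quick sketch below assumes an ETH spot price of roughly $1,200, around where it traded in mid-November 2022; that rate, and treating the 33 percent decline as a dollar-denominated drop, are our assumptions, not figures from the article.

```python
# Back-of-the-envelope math on the Bored Ape floor price; the ETH spot
# price (~$1,200 in mid-November 2022) is an assumption, not an article figure.
floor_eth = 48
eth_usd = 1_200

floor_usd = floor_eth * eth_usd
print(f"Floor price: {floor_eth} ETH ~= ${floor_usd:,}")        # ~= $57,600, below $60,000

# If the floor fell 33 percent over the month, the start-of-November floor was about:
start_of_month_usd = floor_usd / (1 - 0.33)
print(f"Implied floor on Nov 1: ~${start_of_month_usd:,.0f}")   # ~= $85,970
```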

Meanwhile, the crypto crash is only accelerating the trend, with the collapse of major cryptocurrency exchange FTX leaving its own mark on NFT markets.

Still Kicking

Despite the looming pessimism, plenty of Bored Apes are still being sold. In fact, according to Decrypt, around $6.5 million worth of Apes were moved on Tuesday alone, an increase of 135 percent day over day.

Is the end of the NFT nigh? Bored Apes are clearly worth a tiny fraction of what they once were, indicating a massive drop-off in interest.

Yet many far smaller NFT marketplaces are still able to generate plenty of hype, and millions of dollars in sales.

In other words, NFTs aren't likely to die out any time soon, but they are adapting to drastically changing market conditions — and leaving celebrities with deep losses in their questionable investments.

READ MORE: Justin Bieber Paid $1.3 Million for a Bored Ape NFT. It’s Now Worth $69K [Decrypt]

More on NFTs: The Latest Idea to Make People Actually Buy NFTs: Throw in a House

The post Celebrities' Bored Apes Are Hilariously Worthless Now appeared first on Futurism.

Link:

Celebrities' Bored Apes Are Hilariously Worthless Now

Panicked Elon Musk Reportedly Begging Engineers Not to Leave

According to former Uber engineer Gergely Orosz, Elon Musk's Twitter operations are still in free fall.

Earlier this week, the billionaire CEO sent an email to staff telling them that they "need to be extremely hardcore" and work long hours at the office, or quit and get three months' severance, as The Washington Post reports.

Employees had until 5 pm on Thursday to click "yes" and be part of Twitter moving forward or take the money and part ways. The problem for Musk? According to former Uber engineer Gergely Orosz, who has had a close ear to Twitter's recent inner turmoil, "far fewer than expected [developers] hit 'yes.'"

So many employees called Musk's bluff, Orosz says, that Musk is now "having meetings with top engineers to convince them to stay," an embarrassing reversal of the public-facing bravado he showed earlier in the week.

Twitter has already been rocked by mass layoffs that cut the workforce roughly in half. Many employees weren't even notified; they simply found their access to email and work computers revoked.

That process was bungled too, with some employees asked to return to the company almost immediately after Musk's crew realized it had sacked people it needed.

According to Orosz's estimations, Twitter's engineering workforce may have been cut by a whopping 90 percent in just three weeks.

Musk has been banging the war drums in an active attempt to weed out both those who aren't willing to abide by his strict rules and those who are willing to stand up to him.

But developers aren't exactly embracing that kind of tyranny.

"Sounds like playing hardball does not work," Orosz said. "Of course it doesn't."

"From my larger group of 50 people, 10 are staying, 40 are taking the severance," one source reportedly told Orosz. "Elon set up meetings with a few who plan to quit."

In short, developers are running for the hills — and besides, they're likely to find far better working conditions pretty much anywhere else.

"I am not sure Elon realizes that, unlike rocket scientists, who have relatively few options to work at, [developers] with the experience of building Twitter only have better options than the conditions he outlines," Orosz argued.

Then there's the fact that Musk has publicly lashed out at engineers, mocking them and implying that they were leading him on.

Those who spoke out against him were summarily fired.

That kind of hostility in leadership — Musk has shown an astonishing lack of respect — clearly isn't sitting well with many developers, who have taken up his offer to collect three months of severance and leave.

"I meant it when I called Elon's latest ultimatum the first truly positive thing about this Twitter saga," Orosz wrote. "Because finally, everyone who had enough of the BS and is not on a visa could finally quit."

More on Twitter: Sad Elon Musk Says He's Overwhelmed In Strange Interview After the Power Went Out

The post Panicked Elon Musk Reportedly Begging Engineers Not to Leave appeared first on Futurism.

See the article here:

Panicked Elon Musk Reportedly Begging Engineers Not to Leave

Startup Says It’s Building a Giant CO2 Battery in the United States

Italian startup Energy Dome has designed an ingenious battery that uses CO2 to store energy, and it only needs non-exotic materials like steel and water.

Italian Import

Carbon dioxide has a bad rep for its role in driving climate change, but in an unexpected twist, it could also play a key role in storing renewable energy.

The world's first CO2 battery, built by Italian startup Energy Dome, promises to store renewable energy on an industrial scale, which could help green power rival fossil fuels in terms of cost and practicality.

After successfully testing the battery at a small-scale plant in Sardinia, the company is now bringing its technology to the United States.

"The US market is a primary market for Energy Dome and we are working to become a market leader in the US," an Energy Dome spokesperson told Electrek. "The huge demand of [long duration energy storage] and incentive mechanisms like the Inflation Reduction Act will be key drivers for the industry in the short term."

Storage Solution

As renewables like wind and solar grow, one of the biggest infrastructural obstacles is the storage of the power they produce. Since wind and solar sources aren't always going to be available, engineers need a way to save excess power for days when it's less sunny and windy out, or when there's simply more demand.

One obvious solution is to use conventional battery technology, like lithium batteries, to store the energy. The problem is that building giant batteries from rare-earth minerals — which can be prone to degradation over time — is expensive, not to mention wasteful.

Energy Dome's CO2 batteries, on the other hand, use mostly "readily available materials" like steel, water, and of course CO2.

In Charge

As its name suggests, the battery works by taking CO2, stored in a giant dome, and compressing it into a liquid by using the excess energy generated from a renewable source. That process generates heat, which is stored alongside the now liquefied CO2, "charging" the battery.

To discharge power, the stored heat is used to vaporize the liquid CO2 back into a gas, powering a turbine that feeds back into the power grid. Crucially, the whole process is self-contained, so no CO2 leaks back into the atmosphere.
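Energy Dome hasn't published detailed process figures here, but the cycle the article describes (compression work in, heat banked, expansion work out) reduces to simple energy bookkeeping. The toy model below is a minimal sketch under assumed numbers: the charge and discharge efficiencies are illustrative guesses, not company specifications, chosen to land near the roughly 75 percent round-trip efficiency commonly targeted by long-duration storage.

```python
# Toy energy bookkeeping for a closed CO2 battery cycle. Illustrative only:
# the efficiencies below are assumptions, not Energy Dome specifications.

CHARGE_EFF = 0.85     # assumed: fraction of input power banked as liquid CO2 plus heat
DISCHARGE_EFF = 0.88  # assumed: fraction of banked energy the turbine returns to the grid

def round_trip(energy_in_mwh: float) -> float:
    """Energy delivered back to the grid after one charge/discharge cycle."""
    banked = energy_in_mwh * CHARGE_EFF    # compress CO2 to liquid, store the heat
    returned = banked * DISCHARGE_EFF      # re-vaporize the CO2, spin the turbine
    return returned

surplus = 200.0  # MWh of excess wind/solar, matching the article's 200 MWh scale
out = round_trip(surplus)
print(f"{surplus:.0f} MWh in -> {out:.1f} MWh out ({out / surplus:.0%} round trip)")
# 200 MWh in -> 149.6 MWh out (75% round trip)
```

The closed loop is the key design point: because the same CO2 cycles between gas and liquid indefinitely, none of it is emitted.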

The battery could be a game-changer for renewables. As of now, Energy Dome plans to build batteries that can store up to 200 MWh of energy. But we'll have to see how the technology performs as it scales up.

More on batteries: Scientists Propose Turning Skyscrapers Into Massive Gravity Batteries

The post Startup Says It's Building a Giant CO2 Battery in the United States appeared first on Futurism.

See the original post here:

Startup Says It's Building a Giant CO2 Battery in the United States