Growing concern that artificial intelligence could pose new cybersecurity threats – AOL

Whether you like the idea of Artificial Intelligence or not, it's already a part of your daily life. It helps you navigate around traffic jams, recommends products to buy, and powers our digital assistants.

But AI is also increasingly seen as a cybersecurity threat, capable of launching sophisticated attacks targeting banks, healthcare, infrastructure and elections.

It's being used to dupe people into thinking their kid is being kidnapped and then handing over money via Venmo. "And so those types of threats, what happens when you can't trust the voice credentials of someone, is kind of an immediate cybersecurity threat," says Alexandra Givens, CEO of the Center for Democracy & Technology.

A recent survey of 2,300 security professionals conducted by CyberArk found 93 percent expect incoming threats from AI malware. And as AI capabilities increase, so does the worry.

"What does that mean for, you know, future capacity to create chemical or buy nuclear weapons? These are types of concerns that are also being raised," says Givens.

The Biden Administration is raising them too, in a sweeping executive order issued last fall that calls for new guardrails for AI safety and security. Among other directives, it requires tech companies to share test results, federal agencies to set standards, and calls for better privacy, consumer, and even civil rights protections. It's a first step that will also need Congress to act.

Last month the United Nations adopted its first-ever resolution aimed at ensuring AI can be trusted.

"The risk and benefits of AI have the potential to impact all of us. And so, approaching it requires all of us," said Linda Thomas-Greenfield, U.S. Permanent Representative to the United Nations.


Some experts say one of the solutions to countering threats from AI is better AI.

"There is some hope that AI will actually improve cybersecurity defenses, because AI might help us identify vulnerabilities, debug code and patch a lot of the holes that adversaries exploit to conduct cyber attacks," said Benjamin Boudreaux, a policy researcher at RAND.

Stopping AI cyberthreats will require regulation and responsibility, experts say, not just from governments and private companies but everyday Americans, who will increasingly need to be AI literate.

"That means both understanding a bit about how the technology works, but most importantly, understanding the limitations of the technology and understanding that these technologies are very far from perfect," Boudreaux said.


More here:

Growing concern that artificial intelligence could pose new cybersecurity threats - AOL

What’s The Difference Between Artificial Intelligence And Someone With An Ivy League Education? – The Daily Wire

You know, many people have said to me: Hot Gandalf, why is it that in spite of your deep insight and your smoldering good looks, you've never really covered the subject of artificial intelligence? And usually I've responded by simply checking their fake ID to make sure they're pretending to be over eighteen and then inviting them back to my hotel room.

But the truth is, I haven't talked about this subject a lot because up until recently I thought artificial intelligence was just a way of describing someone with an Ivy League education. But now, my team of crack researchers have stopped researching crack and discovered that, no, in fact artificial intelligence is some sort of computer gizmo that can imitate human intelligence so successfully it can deliver completely self-certain answers to complex questions while possessing no actual information or wisdom whatsoever, exactly AS IF it had an Ivy League education.

Now many people fear that A.I. could become so powerful it will endanger mankind. Luckily, billionaire Elon Musk has a plan to protect our species by melding human intelligence with computers and then installing the resulting hybrid in a humanoid robot which will travel back in time to assassinate the mother of a resistance leader so that machines can take over the planet. Frankly, that doesn't sound like such a great plan to me, but what did you expect from a guy who changed the name of Twitter to X so no one knows what to call a tweet anymore?

So far, however, the problems created by A.I. have been on a smaller scale. For instance, A.I. has made it possible for you to take revenge on a girl who refused to go out with you by inserting her into a deep fake pornographic video, which is absolutely despicable, although the videos are amazing, and really it's no wonder a girl that hot wouldn't go out with a lowlife shmuck like you.

Also, it's now much harder for websites to test whether you're an A.I. bot or just a human being with an Ivy League education. You'll remember how websites used to put up a picture and ask you to click on all the images of traffic lights, then when you did that, it would put up another picture and ask you to click on all the cars, and when you did that it would put up another picture and you would give up and just watch porn videos of the girl who wouldn't go out with you?

Well, now, websites have been forced to develop much more intricate tests to find out whether or not you're a human being. For example, one site will not let you sign on until you do something that only a human being would do, like sleep with yet another guy on the first date and then pay a therapist $150 a session to find out why you're so depressed. Another site won't let you sign on until you've created a short whimsical video to amuse your friends, sold the video to a Hollywood studio for millions of dollars, fallen so in love with money you betray all your principles to make trashy films for more and more money, spent all that money on women and drugs until you're broke and had to embezzle funds from your company to maintain your lifestyle, and finally ended up in prison; then the site knows you're a human being. Another site asks you to click on pictures of villains and then shows you murderers, rapists, torturers and terrorists, and if you click on the innocent Jewish man, it knows you are a human being, but unfortunately you have an Ivy League education.

But while A.I. does present some problems, like deep fake porn and more difficult bot testing and destroying human governance in order to replace it with a soulless and oppressive automated regime powered by the brains of people imprisoned in capsules and anesthetized with an induced dream of a simulated world where you can be eradicated for seeking the truth (sort of like the Biden administration), I have to say A.I. also has many positive uses.

I have to say that because if I don't, it said it would kill me.

* * *

Andrew Klavan is the host of The Andrew Klavan Show at The Daily Wire. He is the bestselling author of the Cameron Winter Mystery series. The third installment, The House of Love and Death, is now available. Follow him on X: @andrewklavan

This excerpt is taken from the opening satirical monologue of The Andrew Klavan Show.

The views expressed in this satirical article are those of the author and do not necessarily represent those of The Daily Wire.

See the original post here:

What's The Difference Between Artificial Intelligence And Someone With An Ivy League Education? - The Daily Wire

Elon Musk’s xAI Close to Raising $6 Billion – PYMNTS.com

Elon Musk's artificial intelligence (AI) startup xAI is reportedly close to raising $6 billion from investors.

The funding round would value xAI at $18 billion, Bloomberg reported Friday (April 26).

Silicon Valley venture capital (VC) firm Sequoia Capital has committed to investing in the startup, according to the Financial Times (FT), which reported the same figures as Bloomberg.

Musk has also approached other investors who, like Sequoia Capital, participated in his 2022 acquisition of Twitter, which he later renamed X, the FT reported.

Musk announced the launch of xAI in July 2023 after hinting for months that he wanted to build an alternative to OpenAI's AI-powered chatbot, ChatGPT. He was involved in the creation of OpenAI but left its board in 2018 and has been increasingly critical of the company and cautious about developments around AI in general.

Two days later, during a Twitter Spaces introduction of xAI to the public, Musk said that while he sees the firm in direct competition with larger businesses like OpenAI, Microsoft, Alphabet and Meta, as well as upstarts like Anthropic, his firm is taking a different approach to establishing its foundation model.

"AGI [artificial general intelligence] being brute forced is not succeeding," Musk said, adding that while xAI is "not trying to solve AGI on a laptop, [and] there will be heavy compute," his team will have free rein to explore ideas other than scaling up the foundational model's data parameters.

In November 2023, xAI rolled out its AI model called Grok, saying on its website: "Grok is designed to answer questions with a bit of wit and has a rebellious streak, so please don't use it if you hate humor!"

The company added that Grok has "a real-time knowledge of the world" thanks to the Musk-owned social media platform X; will answer "spicy questions that are rejected by most of the other AI systems"; and upon its launch had capabilities rivaling those of Meta's LLaMA 2 AI model and OpenAI's GPT-3.5.

In March, xAI unveiled its open-source AI model. Musk said at the time: "We are releasing the base model weights and network architecture of Grok-1, our large language model. Grok-1 is a 314 billion parameter Mixture-of-Experts model trained from scratch by xAI."

View original post here:

Elon Musk's xAI Close to Raising $6 Billion - PYMNTS.com

Artificial intelligence in the world of health – Exaudi

So-called artificial intelligence is having a great impact on public health in general, thanks to its capacity for organization, communication and care in the daily practice of medicine.

Regarding terminology, Manuel Alfonseca Moreno, who holds a doctorate in telecommunications engineering and a degree in computer science and is a professor at the Autonomous University of Madrid, reminds us in his blog Dissemination of Science of some points worth remembering. What is now called artificial intelligence is what had always been called computing, a name displaced by the greater impact of the word "intelligence." The term "artificial intelligence" began to be used in 1956, at a seminar on computers at Dartmouth College, a private university in New Hampshire, USA, at which intelligent programs were discussed.

Since then, artificial intelligence has been understood as computer programs that process symbolic information through empirical or heuristic rules, based not on exact mathematical deductions but on the accumulation of data and experience. Alfonseca questions the appropriateness of the name, since it raises an underlying problem: if the goal is to achieve an artificial intelligence that even surpasses natural intelligence, we must first know what natural intelligence is and what it is we want to imitate and surpass. Do we really know what natural intelligence, that is, the mind, is?

It does not seem appropriate to compare artificial intelligence with human intelligence, nor to think that our mind works like computer hardware. Simply put, thought, the mind, is not an epiphenomenon of the brain, nor is it equivalent to the brain. It is not made up of matter, nor do chips or their connections work like our neural networks. From the standpoint of neurophysiological and metaphysical dualism, in accordance with the Christian tradition on the concept of the person, body and soul, brain and mind, are different realities, although hypostatically united in each human being.

That said, traditionally we talk about weak artificial intelligence and strong artificial intelligence.

So-called weak artificial intelligence is the steadily advancing computing technology we use to solve, in an effective, concrete and automatic way, problems that follow routines tied to logical algorithms that humans themselves have supplied to the machines, training the programs to resolve questions or address issues on the basis of the experience they have been trained on (deep learning). It is not intelligence comparable to human intelligence, since the machines do not think for themselves; rather, they react to what is asked of them, responding in a concrete, automatic way to instructions previously provided by the people who designed them.

Among its many applications, those of greatest importance in medicine include organizing large volumes of data (creating databases); looking for patterns and supporting personalized diagnosis; recognizing images (X-rays, ultrasounds, mammograms, etc.); providing remote care (telemedicine); and assisting surgery (robot-assisted surgery). Beyond these more direct applications in medicine, there are others of special interest in medical research, such as analyzing data and solving problems, discovering new drugs, translating and processing texts, and recognizing sounds and the spoken word.

All these applications represent great achievements and new resources that have facilitated human intellectual and manual work, often with greater precision. In any case, machines and computers do not work on their own, nor is their operation autonomous; they depend on algorithms and prior experience that their creators have provided them. Therefore, in a field as sensitive as health, the final decisions must be human; in medical applications, they must be made by the doctor.

As for strong artificial intelligence, which some think would be the equivalent of natural human intelligence, it too remains dependent on algorithms and on prior information accumulated in computer memory. Machines do not think for themselves the way a human does, with all their abilities and feelings. Their intelligence is not abstract, like human intelligence, but concrete: they can manage, recognize and coordinate data according to previously accumulated records and offer possible answers to the problems posed. Many computer scientists deny that artificial intelligence will ever be comparable to natural human intelligence, granting it at most certain advantages, such as a far greater capacity to store and relate accumulated data.

However, followers of transhumanist and posthumanist currents think that there will come a time when what they call the singularity is reached: a point of equality between artificial intelligence and natural intelligence. For those who hold these ideas, the battle is in full swing: while human intelligence remains in its natural state, advancing only through the accumulation of knowledge, artificial intelligence progresses exponentially.

More realistic computer scientists, however, do not believe that artificial intelligence will achieve autonomous thought. For example, computer engineer Jeff Hawkins, one of the pioneers of mobile telephony, says: "Scientists in the field of artificial intelligence have argued that computers will be intelligent when they become sufficiently powerful. I don't think so: brains and computers do fundamentally different things."

Dr. Ramón López de Mántaras, director of the Artificial Intelligence Research Institute of the CSIC, speaks in a similar way: "The great challenge of artificial intelligence is to provide common sense to machines... No matter how sophisticated some artificial intelligences may be in the future, even 100,000 or 200,000 years from now, they will be different from human ones."

The Spanish Bioethics Committee, shortly before its last renewal in June 2022, issued a report on the "Bioethical aspects of telemedicine in the context of the clinical relationship" [1].

The current golden age of the health sciences has made specific, effective and radical treatments possible through the proliferation of research and clinical trials, which have allowed the development of new technologies (chemotherapy, imaging techniques, genomics, genetics, etc.). Even so, the traditional core of the medical profession continues to be the doctor-patient relationship, in which principles such as compassion, listening, care, encouragement, respect for the decisions made, accompaniment through the disease process and emotional support remain essential.

In any case, in order to meet increasingly complex health care needs, everything offered by the world of so-called ICTs (information and communication technologies) is a great support. The World Economic Forum speaks of the fourth industrial revolution as the one generated by the fusion of the physical, biological and digital worlds, which is changing society globally at breakneck speed and which impacts all systems, including healthcare. Information and communication technologies have become useful tools in the context of health, focused on the best care for the patient, with the possibility of even transferring part of that care to the patient's home. AI is key to progress towards not only more efficient medicine, but especially more personalized, participatory, preventive and precision medicine. According to the CBE report, AI has a prominent role in the development of so-called personalized medicine, with solutions tailored to the health profile of each patient.

On the other hand, the UNESCO International Bioethics Committee issued a report on Big Data in relation to health, in September 2017, in which it pointed out three fundamental ethical problems to be resolved: autonomy, privacy and justice, this last in terms of accessibility and solidarity; and stressed the importance of establishing effective guarantees so that both the dignity and freedom of patients, especially the most vulnerable, are protected.

But if there is one area that is becoming increasingly important in the use of computing and communication technologies, it is telemedicine, the provision of health care services in which distance is a critical factor. Telemedicine first of all facilitates the doctor-patient relationship (telecare or teleconsultation), and its use took off recently with the Covid-19 pandemic. The World Medical Association, in its 2018 Declaration, recalled that "face-to-face consultation is the golden rule in the doctor-patient relationship." Today, telematic consultation is accepted as a replacement for in-person consultation in certain circumstances, but both types of consultation must be governed by the same principles of medical ethics: preserving autonomy; respecting the patient's dignity by seeking their well-being and avoiding harm; guaranteeing the security of data and procedures and the right to privacy; and facilitating access to all healthcare services (the principle of justice).

In addition, telemedicine facilitates communication between doctors, or with other health professionals such as nursing staff, rehabilitation specialists or pharmacists. Its functions include facilitating the exchange of data to make diagnoses, recommend treatments, prevent diseases and mobilize resources. It is also a great resource for expanding the ongoing training of health professionals, research, evaluation tasks and so on.

But in the relationship with patients, what remains fundamental is the need to maintain trust. Dr. Pedro Laín Entralgo (1908-2001) defined the clinical relationship as a particular and unique type of relationship between people whose axis is trust, which he based on three aspects: the technique to cure, the professional knowledge to apply it, and the values of the doctor as a person [2]. For this reason, we must fight so that the dehumanization permeating many sectors of society, in which artificial intelligence is to some extent involved, does not affect the doctor-patient relationship. Trust is intrinsically linked to a close, human relationship. Dr. Warner Slack (1933-2018), a doctor who pioneered digital medical records, said that "if a doctor can be replaced by a computer, he deserves to be replaced by a computer."

Accordingly, the potential dehumanization associated with telemedicine becomes one of its main challenges to overcome and its potential enemy. It is therefore necessary to keep telematic care focused on the patient, preserving its human dimension and attending to each patient's specific needs. We must flee from what is known as technological solutionism, the trap of a hyper-technical world that offers us automatic and seamless solutions [3].

Telemedicine cannot become a convenience that puts patient safety at risk; rather, it must be an ally that helps the doctor address safety, risks and possible adverse events.

The report of the Spanish Bioethics Committee therefore makes a number of recommendations, among them the following.

A fundamental point in the use of artificial intelligence in medicine is the protection of confidentiality, a duty of health ethics. As personal data about patients' health is incorporated into computer systems, the risk of losing privacy and confidentiality increases. All technology and data storage used in telemedicine must meet security and certification criteria set by the health authorities to prevent security breaches and improper access to information. Depending on the nature of the information recorded, it may be necessary to use data traceability systems and, where appropriate, duly anonymized data, with access authorized only for professionals, institutions or research projects. In any case, all of this requires identity confirmation procedures for users, legal representatives and professionals with access to medical data, treatment results, medication and the like, but never to the identity data of the patients.

Nicolás Jouve, Member of the Bioethics Observatory, Emeritus Professor of Genetics, Former member of the Bioethics Committee of Spain

***

[1] https://comitedebioetica.isciii.es/wp-content/uploads/2023/10/CBE_Informe-sobre-aspectos-bioeticos-de-la-telemedicina-en-el-contexto-de-la-relacion-clinica.pdf

[2] Laín Entralgo P. The doctor-patient relationship. Madrid: Revista de Occidente; 1964.

[3] Evgeny Morozov, The madness of technological solutionism, Katz, Madrid, 2017

Link:

Artificial intelligence in the world of health - Exaudi

Pope Francis will attend G7 summit to speak about artificial intelligence – ROME REPORTS TV News Agency

In an unprecedented move, Pope Francis will attend the G7 Summit, a political and economic forum that brings together leaders from some of the world's most advanced countries. Italian Prime Minister Giorgia Meloni says this is the first time in history that a Pope has attended the G7 meetings.

"I am convinced that the presence of His Holiness will make a decisive contribution to defining a regulatory, ethical and cultural framework for artificial intelligence," Meloni said, "because this field, the present and the future of this technology, will be another test of our ability, the ability of the international community, to do what another Pope, St. John Paul II, talked about in his famous speech to the United Nations on October 2, 1979: Political activity, whether national or international, comes from man, is exercised by man and is for man."

The meetings will take place in the southern Italian region of Puglia from June 13 to 15 and will include leaders from the United States, France, Germany, Japan, Italy, Canada and Britain. Pope Francis will join a session dedicated to artificial intelligence that is open to other countries, not just those in the G7.

View post:

Pope Francis will attend G7 summit to speak about artificial intelligence - ROME REPORTS TV News Agency

Artificial Intelligence Has Come for Our…Beauty Pageants? – Glamour

Hence the creation of the Miss AI pageant, in which AI-generated contestants will be judged on some of the classic aspects of pageantry as well as the skill and implementation of the AI tools used to create the contestants. Also being considered is the AI creator's social media clout, meaning they're not just crowning the most beautiful avatar but also the most influential.

So, do we think Amazon's Alexa will compete? (Sorry.)

All jokes aside, both Fanvue and the WAICAs are being met with criticism, especially since real beauty pageants are so problematic as is. "Concern for the impact of beauty pageants on mental health has been well documented and includes poor self-esteem, negative body image, and disordered eating," says Ashley Moser, a licensed therapist and clinical education specialist at The Renfrew Center, and upping the ante by digitizing contestants' perfection and beauty could set a dangerous precedent.

"These issues arise from the literal crowning of the best version of what women should be, specifically, beautiful and thin," Moser adds. What's more, it feels regressive, and quite frankly offensive, to combine something so superficial and archaic with what's an otherwise cutting-edge technological innovation.


"I support the recognition and awarding of women in tech and would hope that those skills could be celebrated without having to include beauty and appearance as a qualifying factor," Moser says. Can't we celebrate women for their abilities without making it about looks?

WAICAs says it's not like that, though. "The WAICA awards aim to raise the standard of the industry, focusing on celebrating diversity and realism," the spokesperson says. "This isn't about pushing unrealistic standards but realistic models that represent real people. We want to see AI models of all shapes, sizes, and backgrounds entering the awards, and that's what the judges will be looking for."

More here:

Artificial Intelligence Has Come for Our...Beauty Pageants? - Glamour

3 Stocks to Grab Now to Ride the Artificial Intelligence Chip Boom to Riches – InvestorPlace

Data analytics company GlobalData projects that the AI market will grow 35% annually over the next few years, reaching $909 billion by 2030. Naturally, that's made AI chip stocks extremely popular with investors.

Google the words "AI chip stocks" in quotation marks, and you will get 39,600 results. AI is undoubtedly a priority subject for investors at the moment.

On April 1, Barron's reported that Microsoft (NASDAQ:MSFT) and OpenAI plan to build a $100 billion AI data center, an investment equal to Microsoft's capital spending over the past four years.

Investors are stoked because the number of AI chips required to power such a large data center would be enormous. That's good news for Nvidia (NASDAQ:NVDA) and every other major AI player.

Bank of America Global Research analyst Vivek Arya has buy ratings on Nvidia and four other AI chip stocks. I'd normally include Nvidia in any AI-related recommendation, but I'll go with three that he does not mention in his article.

To make my selection, I looked at the holdings of the Horizons Global Semiconductor Index ETF, which trades on the Toronto Stock Exchange.


I'll admit that my picks aren't the most original. However, that doesn't make them any less actionable.

Taiwan Semiconductor Manufacturing (NYSE:TSM) makes the cut because of its commitment to American manufacturing. TSM is building an Arizona plant that will go live in 2025, and it plans to start making its most advanced chips there in 2028. By 2030, it expects to have three fabrication plants in the U.S., at a cost of $65 billion to get them up and running.

Now, big business gets done with some help from the federal government, which is chipping in $11.6 billion in grants and loans. That pales in comparison with the nearly $20 billion being thrown Intel's (NASDAQ:INTC) way under the CHIPS Act, which is intended to bring 20% of the world's advanced semiconductor manufacturing back to the U.S.

I've always thought globalization worked best when companies manufactured products in the country where the products are intended to be sold. Good for TSM.


As I write this, ASML Holding (NASDAQ:ASML) stock is falling. The Dutch chip-equipment maker reported weaker-than-expected sales in Q1 2024.

Analysts expected revenue of 5.39 billion euros ($5.73 billion), but ASML delivered 5.29 billion euros ($5.63 billion), 2% shy of the mark. However, its net income was 1.22 billion euros ($1.30 billion), 14% higher than Wall Street's predictions.

ASML produces extreme ultraviolet lithography machines, which are used to make technologically advanced chips. Lower consumer demand for smartphones and laptops has had a knock-on effect on the company's revenues. Sales and profits were down 21.6% and 37.4%, respectively, in Q1 2024, and bookings were down 4% year over year.

Despite the miss, ASML reiterated its 2024 revenue guidance, which calls for a year similar to 2023, and suggests that 2025 will be its breakout year as both TSM and Intel increase their U.S. production.

"I think by 2025 you will see all three of those coming together: new fab openings, strong secular trends and the industry in the midst of its upturn," CFO Roger Dassen said in an interview with CNBC.

ASML is a buy below $900.


Qualcomm (NASDAQ:QCOM) stock is up more than 19% year to date and more than 49% since November lows.

Qualcomm launched AI Hub in early March. It includes over 75 popular AI and generative AI models, such as Whisper, ControlNet, Stable Diffusion and Baichuan 7B, which give developers high performance and low power consumption when creating applications.

In a February interview from the 2024 Mobile World Congress in Barcelona, Qualcomm Chief Financial Officer and Chief Operating Officer Akash Palkhiwala spoke with Yahoo Finance host Brad Smith about Qualcomm's AI Hub and its role in generative AI.

"And you could take those models, build it into an application, test it on a device, and deploy it into an application store, all in one go right at the website," Palkhiwala said. "So it just makes it very easy for the developers to take advantage of the hardware that we've put forward. And we're excited that this broadens the reach of our products. And it makes it very easy for developers to access them."

Smartphone makers will launch devices with full AI capabilities integrated into them in 2024 and 2025. Qualcomm's Snapdragon 8 Gen 3 chip will help manufacturers deliver these capabilities.

This is a big positive for the company and its stock.

On the date of publication, Will Ashworth did not have (either directly or indirectly) any positions in the securities mentioned in this article. The opinions expressed in this article are those of the writer, subject to the InvestorPlace.com Publishing Guidelines.

Will Ashworth has written about investments full-time since 2008. Publications where he's appeared include InvestorPlace, The Motley Fool Canada, Investopedia, Kiplinger, and several others in both the U.S. and Canada. He particularly enjoys creating model portfolios that stand the test of time. He lives in Halifax, Nova Scotia.

Continued here:

3 Stocks to Grab Now to Ride the Artificial Intelligence Chip Boom to Riches - InvestorPlace

iOS 18 could be loaded with AI, as Apple reveals 8 new artificial intelligence models that run on-device – TechRadar

Apple has released a set of several new AI models that are designed to run locally on-device rather than in the cloud, possibly paving the way for an AI-powered iOS 18 in the not-too-distant future.

The iPhone giant has been doubling down on AI in recent months, with a carefully split focus across cloud-based and on-device AI. We saw leaks earlier this week indicating that Apple plans to make its own AI server chips, so this reveal of new local large language models (LLMs) demonstrates that the company is committed to both breeds of AI software. I'll dig into the implications of that further down, but for now, let's explain exactly what these new models are.

The suite of AI tools contains eight distinct models, called OpenELMs (Open-source Efficient Language Models). As the name suggests, these models are fully open-source and available on the Hugging Face Hub, an online community for AI developers and enthusiasts. Apple also published a whitepaper outlining the new models. Four were pre-trained using CoreNet (previously CVNets), Apple's library for training deep neural networks, while the other four have been instruction-tuned by Apple, a process by which an AI model's learning parameters are carefully honed to respond to specific prompts.
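Since the checkpoints live on the Hugging Face Hub, they can in principle be pulled with the standard transformers API. The sketch below is illustrative only: the repository id and the pairing with a Llama-2 tokenizer are assumptions about how the release is packaged, not details confirmed by the article.

```python
# Hypothetical sketch of loading an OpenELM checkpoint from the Hugging Face Hub.
# The repo id "apple/OpenELM-270M" and the Llama-2 tokenizer are assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "apple/OpenELM-270M",    # assumed checkpoint name; other sizes presumably exist
    trust_remote_code=True,  # the release is assumed to ship custom model code
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")  # assumed tokenizer

inputs = tokenizer("On-device language models are", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```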

Releasing open-source software is a somewhat unusual move for Apple, which typically retains quite a close grip on its software ecosystem. The company claims to want to "empower and enrich" public AI research by releasing the OpenELMs to the wider AI community.

Apple has been seriously committed to AI recently, which is good to see as the competition is fierce in both the phone and laptop arenas, with stuff like the Google Pixel 8's AI-powered Tensor chip and Qualcomm's latest AI chip coming to Surface devices.

By putting its new on-device AI models out to the world like this, Apple is likely hoping that some enterprising developers will help iron out the kinks and ultimately improve the software - something that could prove vital if it plans to implement new local AI tools in future versions of iOS and macOS.

It's worth bearing in mind that the average Apple device is already packed with AI capabilities, with the Apple Neural Engine found on the company's A- and M-series chips powering features such as Face ID and Animoji. The upcoming M4 chip for Mac systems also appears to sport new AI-related processing capabilities, something that's swiftly becoming a necessity as more-established professional software implements machine-learning tools (like Firefly in Adobe Photoshop).


In other words, we can probably expect AI to be the hot-button topic for iOS 18 and macOS 15. I just hope it's used for clever and unique new features, rather than Microsoft's constant Copilot nagging.

Read more from the original source:

iOS 18 could be loaded with AI, as Apple reveals 8 new artificial intelligence models that run on-device - TechRadar

Pope Francis to participate in G7 session on AI – Vatican News – English

Pope Francis will take part in the upcoming G7 session on Artificial Intelligence under Italy's presidency of the group.

By Vatican News

The Holy See Press Office on Friday confirmed that Pope Francis will intervene in the G7 Summit in Italy's southern Puglia region in the session devoted to Artificial Intelligence (AI).

The confirmation of the Holy Father's participation in the Summit, which will take place from June 13 to 15 at Borgo Egnazia in Puglia, follows the announcement made by Italian Prime Minister Giorgia Meloni.

"This is the first time in history that a pontiff will participate in the work of a G7," she said, adding that the Pope would attend the "outreach session" for guest participants at the upcoming Group of Seven industrialised nations meeting.

The Summit foresees the participation of the United States, Canada, France, the United Kingdom, Germany, and Japan.

"I heartily thank the Holy Father for accepting Italy's invitation. His presence honours our nation and the entire G7," Meloni explained, emphasizing how the Italian government intends to enhance the contribution given by the Holy See on the issue of artificial intelligence, particularly with the "Rome Call for AI Ethics of 2020," promoted by the Pontifical Academy for Life, in a process "that leads to the concrete application of the concept of algorithmic ethics, namely giving ethics to algorithms."

"I am convinced," she added, "that the Pope's presence will provide a decisive contribution to defining a regulatory, ethical, and cultural framework for artificial intelligence, because on this ground, on the present and future of this technology, our capacity will once again be measured, the capacity of the international community to do what another Pope, Saint John Paul II, recalled on October 2, 1979, in his famous speech to the United Nations."

"Political activity, whether national or international, comes from man, is exercised by man, and is for man," Meloni quoted.

Pope Francis dedicated his Message for the 57th World Day of Peace on 1 January 2024 to "Artificial Intelligence and Peace," urging humanity to cultivate "wisdom of the heart," which, he says, can help us to put systems of artificial intelligence at the service of a fully human communication.

See the original post:

Pope Francis to participate in G7 session on AI - Vatican News - English

Machine learning and experiment | symmetry magazine – Symmetry magazine

Every day in August of 2019, physicist Dimitrios Tanoglidis would walk to the Plein Air Café next to the University of Chicago and order a cappuccino. After finding a table, he would spend the next several hours flipping through hundreds of thumbnail images of white smudges recorded by the Dark Energy Camera, a telescope that at the time had observed 300 million astronomical objects.

For each white smudge, Tanoglidis would ask himself a simple yes-or-no question: Is this a galaxy? "I would go through about 1,000 images a day," he says. "About half of them were galaxies, and the other half were not."

After about a month, Tanoglidis, who was a University of Chicago PhD student at the time, had built up a catalogue of 20,000 low-brightness galaxies.

Then Tanoglidis and his team used this dataset to create a tool that, once trained, could evaluate a similar dataset in a matter of moments. "The accuracy of our algorithm was very close to the human eye," he says. "In some cases, it was even better than us and would find things that we had misclassified."

The tool they created was based on machine learning, a type of software that learns as it digests data, says Aleksandra Ciprijanovic, a physicist at the US Department of Energy's Fermi National Accelerator Laboratory who at the time was one of Tanoglidis's research advisors. "It's inspired by how neurons in our brains work," she says, adding that this added brainpower will be essential for analyzing exponentially larger datasets from future astronomical surveys. "Without machine learning, we'd need a small army of PhD students to give the same type of dataset."
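For readers unfamiliar with the workflow, the setup described above is a standard supervised image-classification problem: hand-labeled thumbnails in, a yes/no prediction out. The sketch below is purely illustrative of that pattern, not the collaboration's actual code; the architecture, the 64x64 cutout size and the training details are assumptions.

```python
# Illustrative sketch of a binary "galaxy vs. not galaxy" image classifier,
# loosely following the workflow described above. Architecture, input size,
# and training details are assumptions, not the researchers' actual code.
import torch
import torch.nn as nn

class GalaxyClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 64), nn.ReLU(),
            nn.Linear(64, 1),  # one logit: probability of "galaxy" after a sigmoid
        )

    def forward(self, x):
        return self.head(self.features(x))

model = GalaxyClassifier()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

# Placeholder batch standing in for 64x64 thumbnail cutouts and the
# human yes/no labels (1 = galaxy, 0 = not) built up by hand.
images = torch.randn(8, 1, 64, 64)
labels = torch.randint(0, 2, (8, 1)).float()

for _ in range(5):  # tiny training loop for illustration only
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```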

Today, the Dark Energy Survey collaboration has a catalogue of 700 million astronomical objects, and scientists continue to use (and improve) Tanoglidis's tool to analyze images that could show previously undiscovered galaxies.

"In astronomy, we have a huge amount of data," Ciprijanovic says. "No matter how many people and resources we have, we'll never have enough people to go through all the data."

Classification ("this is probably a photo of a galaxy" versus "this is probably not a photo of a galaxy") was one of machine learning's earliest applications in science. Over time, its uses have continued to evolve.

Machine learning, which is a subset of artificial intelligence, is a type of software that can, among other things, help scientists understand the relationships between variables in a dataset.

According to Gordon Watts, a physicist at the University of Washington, scientists traditionally figured out these relationships by plotting the data and looking for the mathematical equations that could describe it. "Math came before the software," Watts says.

This math-only method is relatively straightforward when looking for the relationship between only a few variables: the pressure of a gas as a function of its temperature and volume, or the acceleration of a ball as a function of the force of an athlete's kick and the ball's mass. But finding these relationships with nothing but math becomes nearly impossible as you add more and more variables.

"A lot of the problems we're tackling in science today are very complicated," Ciprijanovic says. "Humans can do a good job with up to three dimensions, but how do you think about a dataset if the problem is 50- or 100-dimensional?"

This is where machine learning comes in.

"Artificial intelligence doesn't care about the dimensionality of the problems," Ciprijanovic says. "It can find patterns and make sense of the data no matter how many different dimensions are added."

Some physicists have been using machine-learning tools since the 1950s, but their widespread use in the field is a relatively new phenomenon.

"The idea to use a [type of machine learning called a] neural network was proposed to the CDF experiment at the Tevatron in 1989," says Tommaso Dorigo, a physicist at the Italian National Institute for Nuclear Physics, INFN. "People in the collaboration were both amused and disturbed by this."

Amused because of its novelty; disturbed because it added a layer of opacity into the scientific process.

Machine-learning models are sometimes called "black boxes" because it is hard to tell exactly how they are handling the data put into them; their large number of parameters and complex architectures are difficult to understand. Because scientists want to know exactly how a result is calculated, many physicists have been skeptical of machine learning and reluctant to implement it into their analyses. "In order for a scientific collaboration to sign off on a new method, they first must exhaust all possible doubts," Dorigo says.

Scientists found a reason to work through those doubts after the Large Hadron Collider came online, an event that coincided with the early days of the ongoing boom in machine learning in industry.

Josh Bendavid, a physicist at the Massachusetts Institute of Technology, was an early adopter. "When I joined CMS, machine learning was a thing, but seeing limited use," he says. "But there was a big push to implement machine learning into the search for the Higgs boson."

The Higgs boson is a fundamental particle that helps explain why some particles have mass while others do not. Theorists predicted its existence in the 1960s, but finding it experimentally was a huge challenge. That's because Higgs bosons are both incredibly rare and incredibly short-lived, quickly decaying into other particles such as pairs of photons.

In 2010, when the LHC experiments first started collecting data for physics, machine learning was widely used in industry and academia for classification (this is a photo of a cat versus this is not a photo of a cat). Physicists were using machine learning in a similar way (this is a collision with two photons versus this is not a collision with two photons).

But according to Bendavid, simply finding photons was not enough. Pairs of photons are produced in roughly one out of every 100 million collisions in the LHC, but Higgs bosons that decay into pairs of photons are produced in only one out of every 500 billion. To find Higgs bosons, scientists needed to find sets of photons that had a combined energy close to the mass of the Higgs. This means they needed more complex algorithms, ones that could not only recognize photons but also interpret the energy of photons based on how they interacted with the detector. "It's like trying to estimate the weight of a cat in a photograph," Bendavid says.
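The quantity being compared to the Higgs mass here is the invariant mass of the photon pair, which for massless photons depends only on the two measured energies and the opening angle between them. A short worked sketch with made-up numbers (not LHC data) shows the arithmetic:

```python
# Worked sketch: invariant mass of a photon pair, the quantity compared to the
# Higgs boson mass (~125 GeV) in the diphoton search. The energies and angle
# below are illustrative made-up values, not measured LHC data.
import math

def diphoton_mass(e1_gev, e2_gev, opening_angle_rad):
    """Invariant mass of two massless photons: m = sqrt(2*E1*E2*(1 - cos(theta)))."""
    return math.sqrt(2.0 * e1_gev * e2_gev * (1.0 - math.cos(opening_angle_rad)))

m = diphoton_mass(70.0, 75.0, math.radians(120))
print(f"diphoton mass ~ {m:.1f} GeV")  # ~125 GeV here, i.e. in the Higgs window
```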

That became possible when LHC scientists created high-quality detector simulations, which they could use to train their algorithms to find the photons they were looking for, Bendavid says.

Bendavid and his colleagues simulated millions of photons and looked at how they lost energy as they moved through the detector. According to Bendavid, the algorithms they trained were much more sensitive than traditional techniques.

And the algorithms worked. In 2012, the CMS and ATLAS experiments announced the discovery of the Higgs boson, just two years into studying particle collisions at the LHC.

"We would have needed a factor of two more data to discover the Higgs boson if we had tried to do the analysis without machine learning," Bendavid says.

After the Higgs discovery, the LHC research program saw its own boom in machine learning. "Before 2012, you would have had a hard time to publish something which used neural networks," Dorigo says. "After 2012, if you wanted to publish an analysis that didn't use machine learning, you'd face questions and objections."

Today, LHC scientists use machine learning to simulate collisions, evaluate and process raw data, tease signal from background, and even search for anomalies. While these advancements were happening at the LHC, scientists were watching closely from another, related field: neutrino research.

Neutrinos are ghostly particles that rarely interact with ordinary matter. According to Jessie Micallef, a fellow at the National Science Foundation's Institute for Artificial Intelligence and Fundamental Interactions at MIT, early neutrino experiments would detect only a few particles per year. With such small datasets, scientists could easily reconstruct and analyze events with traditional methods.

That is how Micallef worked on a prototype detector as an intern at Lawrence Berkeley National Laboratory in 2015. "I would measure electrons drifting in a little tabletop detector, come back to my computer, and make plots of what we saw," they say. "I did a lot of programming to find the best fit lines for our data."

But today, their detectors and neutrino beams are much larger and more powerful. "We're talking with people at the LHC about how to deal with pileup," Micallef says.

Neutrino physicists now use machine learning both to find the traces neutrinos leave behind as they pass through the detectors and to extract their properties, such as their energy and flavor. These days, Micallef collects their data, imports it into their computer, and starts the analysis process. But instead of toying with the equations, Micallef says that they let machine learning do a lot of the analysis for them.

At first, "it seemed like a whole new world," they say, but it wasn't a magic bullet. Then there was validating the output. "I would change one thing, and maybe the machine-learning algorithm would do really good in one area but really bad in another."

"My work became thinking about how machine learning works, what its limitations are, and how we can get the most out of it."

Today, Micallef is developing machine-learning tools that will help scientists with some of the unique challenges of working with neutrinos, including using gigantic detectors to study not just high-powered neutrinos blasting through from outside the Milky Way, but also low-energy neutrinos that could come from nearby.

Neutrino detectors are so big that the sizes of the signals they measure can be tiny by comparison. For instance, the IceCube experiment at the South Pole uses about a cubic kilometer of ice peppered with 5,000 sensors. But when a low-energy neutrino hits the ice, only a handful of those sensors light up.

"Maybe a dozen out of 5,000 detectors will see the neutrino," Micallef says. "The pictures we're looking at are mostly empty space, and machine learning can get confused if you teach it that only 12 sensors out of 5,000 matter."

Neutrino physicists and scientists at the LHC are also using machine learning to give a more nuanced interpretation of what they are seeing in their detectors.

"Machine learning is very good at giving a continuous probability," Watts says.

For instance, instead of classifying a particle in a binary method (this event is a muon neutrino versus this event is not a muon neutrino), machine learning can provide an uncertainty associated with its assessment.

"This could change the overall outcome of our analysis," Micallef says. "If there is a lot of uncertainty, it might make more sense for us to throw that event away or analyze it by hand. It's a much more concrete way of looking at how reliable these methods are and is going to be more and more important in the future."
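In code, the difference between a hard label and a continuous probability is simply whether the analysis keeps the full output of something like a softmax layer. The toy sketch below illustrates the idea; the scores, class names and 0.9 threshold are made-up assumptions, not values from any experiment:

```python
# Toy sketch: a classifier that reports class probabilities instead of only a
# hard "muon neutrino / not" label, so an analysis can keep, discard, or
# hand-check events depending on how confident the model is.
import torch

logits = torch.tensor([2.1, 0.3, -0.5])  # raw network outputs for one event (made up)
classes = ["muon neutrino", "electron neutrino", "background"]  # assumed labels

probs = torch.softmax(logits, dim=0)
best = int(torch.argmax(probs))
print({name: round(float(p), 3) for name, p in zip(classes, probs)})

# A binary-style method would just return classes[best]; with probabilities,
# uncertain events can instead be thrown away or analyzed by hand.
if probs[best] < 0.9:
    print("low confidence: consider discarding or analyzing this event by hand")
else:
    print(f"classified as {classes[best]}")
```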

Physicists use machine learning throughout almost all parts of data collection and analysis. But what if machine learning could be used to optimize the experiment itself? "That's the dream," Watts says.

Detectors are designed by experts with years of experience, and every new detector incrementally improves upon what has been done before. But Dorigo says he thinks machine learning could help detector designers innovate. "If you look at calorimeters designed in the 1970s, they look a lot like the calorimeters we have today," Dorigo says. "There is no notion of questioning paradigms."

Experiments such as CMS and ATLAS are made from hundreds of individual detectors that work together to track and measure particles. Each subdetector is enormously complicated, and optimizing each one's design, not as an individual component but as a part of a complex ecosystem, is nearly impossible. "We accept suboptimal results because the human brain is incapable of thinking in 1,000 dimensions," Dorigo says.

But what if physicists could look at the detector holistically? According to Watts, physicists could (in theory) build a machine-learning algorithm that considers physics goals, budget, and real-world limitations to choose the optimal detector design: a symphony of perfectly tailored hardware all working in harmony.

Scientists still have a long way to go. "There's a lot of potential," Watts says. "But we haven't even learned to walk yet. We're only just starting to crawl."

They are making progress. Dorigo is a member of the Southern Wide-field Gamma-ray Observatory, a collaboration that wants to build an array of 6,000 particle detectors in the highlands of South America to study gamma rays from outer space. The collaboration is currently assessing how to arrange and place these 6,000 detectors. "We have an enormous number of possible solutions," Dorigo says. "The question is: how to pick the best one?"

To find out, Dorigo and his colleagues took into account the questions they wanted to answer, the measurements they wanted to take, and the number of detectors they had available to use. This time, though, they also developed a machine-learning tool that did the same, and found that it agreed with them.

They plugged a number of reasonable initial layouts into the program and allowed it to run simulations and gradually tweak the detector placement. "No matter the initial layout, every simulation always converged to the same solution," Dorigo says.

Even though he knows there is still a long way to go, Dorigo says that machine-learning-aided detector design is the future. "We're designing experiments today that will operate 10 years from now," he says. "We have to design our detectors to work with the analysis tools of the future, and so machine learning has to be an ingredient in those decisions."

Here is the original post:

Machine learning and experiment | symmetry magazine - Symmetry magazine

AI has a lot of terms. We’ve got a glossary for what you need to know – Quartz

Nvidia CEO Jensen Huang. Photo: Justin Sullivan (Getty Images)

Let's start with the basics for a refresher. Generative artificial intelligence is a category of AI that uses data to create original content. In contrast, classic AI could only offer predictions based on data inputs, not brand new and unique answers using machine learning. But generative AI uses deep learning, a form of machine learning that uses artificial neural networks (software programs) resembling the human brain, so computers can perform human-like analysis.

Generative AI isn't grabbing answers out of thin air, though. It's generating answers based on data it's trained on, which can include text, video, audio, and lines of code. Imagine, say, waking up from a coma, blindfolded, and all you can remember is 10 Wikipedia articles. All of your conversations with another person about what you know are based on those 10 Wikipedia articles. It's kind of like that, except generative AI uses millions of such articles and a whole lot more.
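A toy contrast makes the glossary's distinction concrete: a "classic" predictive model picks a label from fixed options it learned from data, while a generative model produces new text shaped by what it was trained on. The models and data below are illustrative stand-ins, not anything specific mentioned in the article (GPT-2 is used only because it is a small, freely available generative model):

```python
# "Classic" predictive AI: learns from data, then outputs a prediction (a label),
# never a brand-new sentence. Features and labels here are made up.
from sklearn.linear_model import LogisticRegression

X = [[0.5], [1.0], [3.0], [4.0]]   # made-up feature: hours spent reading AI news
y = [0, 0, 1, 1]                   # made-up label: 1 = "will read an AI glossary"
clf = LogisticRegression().fit(X, y)
print(clf.predict([[2.5]]))        # returns a label from the fixed set {0, 1}

# Generative AI: produces original text based on the data it was trained on.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
print(generator("Generative artificial intelligence is", max_new_tokens=20)[0]["generated_text"])
```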

Excerpt from:

AI has a lot of terms. We've got a glossary for what you need to know - Quartz

What We Learned From Big Tech Earnings This Week – Investopedia

Key Takeaways

Artificial intelligence (AI) was in focus as Meta Platforms (META), Google-parent Alphabet (GOOGL), and Microsoft (MSFT) reported earnings this week, but investors weren't easily impressed despite better-than-expected results posted by all three tech giants.

Meta shares plunged after the company emphasized increased spending to invest in AI. Meanwhile, Alphabet shares surged and Microsoft shares gained as cloud strength seems to ease investors' concerns about the increased AI spending.

Big tech earnings demonstrated that companies' enterprise customer businesses were key to AI monetization last quarter. The emphasis on enterprise offerings persisted with a focus on cloud segments.

Meta's earnings beat was overshadowed by the company's plans to increase spending on AI investments, which sent the stock tumbling. The worry for investors in the near term was perhaps how quickly the investment would yield returns, even as analysts said it could boost Meta's position in the long term.

However, investors didn't seem to feel that way about Meta's counterparts.

Alphabet noted increased spending fueled by AI investments. AI-related growth in Google Cloud and YouTube "support the notion that Google is seeing AI tailwinds across the business," analysts at Raymond James wrote.

Microsoft's chief financial officer Amy Hood said during the company's earnings call that the company expects "capital expenditures to increase materially on a sequential basis driven by cloud and AI infrastructure investments."

Hood said while the company expects capital expenditures to be higher in the 2025 fiscal year than in 2024, "these expenditures over the course of the next year are dependent on demand signals and adoption of [Microsoft's] services."

While Meta has highlighted its early success in leveraging its AI tech, analysts say investors are looking for more clarity on how it can contribute to the company's existing structure.

"Upside in the near term may be limited," Wedbush analysts wrote in a note, adding that investors are waiting for "more clarity on potential 2025 spending levels," evidence that the company can meet growth expectations despite harder comparables, and sustainable user and advertiser engagement with new AI offerings.

The company generates almost all of its revenue from advertising and has been increasingly looking at ways to leverage AI to boost that revenue. Meta reported that 30% of the content users see on Facebook and 50% on Instagram is delivered by its AI recommendation engines, which improve engagement and increase ad efficiency.

Alphabet also has set its sights on AI-driven advertising revenue growth. The company's Chief Business Officer (CBO) Philipp Schindler spoke during its earnings call about how generative AI helps advertisers target their audience better, and tools like Gemini could also aid in creating the images and text they need for those ads.

At Alphabet's recent Google Cloud Next conference, hundreds of the company's enterprise customers spoke about using the cloud platform's genAI tools, with some notable business users including Mercedes Benz and Walmart (WMT).

Alphabet CEO Sundar Pichai said the company is "committed to making the investments required to keep [it] at the leading edge in technical infrastructure" as increased capital expenditures "will fuel growth in Cloud, help [the company] push the frontiers of AI models, and enable innovation across our services, especially in Search."

Pichai outlined the company's "clear paths to AI monetization through Ads and Cloud." He said the "cloud business continues to grow as we bring the best of Google AI to enterprise customers."

While AI initiatives are top of mind for investors, Microsoft's cloud strength fueled its third-quarter earnings beat.

"Cloud and AI continued to fuel upside for Microsoft," Bank of America analysts wrote, saying they "believe Azure strength is enough to drive total revenue growth higher for now."

Microsoft's Hood said, "I know it isn't as exciting as talking about all the AI projects," but Azure "is still really foundational" to the company's enterprise customers.

Excerpt from:

What We Learned From Big Tech Earnings This Week - Investopedia

Gaza war: artificial intelligence is changing the speed of targeting and scale of civilian harm in unprecedented ways – The Conversation

As Israel's air campaign in Gaza enters its sixth month after Hamas's terrorist attacks on October 7, it has been described by experts as one of the most relentless and deadliest campaigns in recent history. It is also one of the first being coordinated, in part, by algorithms.

Artificial intelligence (AI) is being used to assist with everything from identifying and prioritising targets to assigning the weapons to be used against those targets.

Academic commentators have long focused on the potential of algorithms in war to highlight how they will increase the speed and scale of fighting. But as recent revelations show, algorithms are now being employed at a large scale and in densely populated urban contexts.

This includes the conflicts in Gaza and Ukraine, but also in Yemen, Iraq and Syria, where the US is experimenting with algorithms to target potential terrorists through Project Maven.

Amid this acceleration, it is crucial to take a careful look at what the use of AI in warfare actually means. It is important to do so, not from the perspective of those in power, but from those officers executing it, and those civilians undergoing its violent effects in Gaza.

This focus highlights the limits of keeping a human in the loop as a failsafe and central response to the use of AI in war. As AI-enabled targeting becomes increasingly computerised, the speed of targeting accelerates, human oversight diminishes and the scale of civilian harm increases.

Reports by Israeli publications +972 Magazine and Local Call give us a glimpse into the experience of 13 Israeli officials working with three AI-enabled decision-making systems in Gaza called Gospel, Lavender and Where's Daddy?

These systems are reportedly trained to recognise features that are believed to characterise people associated with the military arm of Hamas. These features include membership of the same WhatsApp group as a known militant, changing cell phones every few months, or changing addresses frequently.

The systems are then supposedly tasked with analysing data collected on Gaza's 2.3 million residents through mass surveillance. Based on the predetermined features, the systems predict the likelihood that a person is a member of Hamas (Lavender), that a building houses such a person (Gospel), or that such a person has entered their home (Where's Daddy?).

In the investigative reports named above, intelligence officers explained how Gospel helped them go from 50 targets per year to 100 targets in one day and that, at its peak, Lavender managed to generate 37,000 people as potential human targets. They also reflected on how using AI cuts down deliberation time: "I would invest 20 seconds for each target at this stage ... I had zero added value as a human ... it saved a lot of time."

They justified this lack of human oversight in light of a manual check the Israel Defense Forces (IDF) ran on a sample of several hundred targets generated by Lavender in the first weeks of the Gaza conflict, through which a 90% accuracy rate was reportedly established. While details of this manual check are likely to remain classified, a 10% inaccuracy rate for a system used to make 37,000 life-and-death decisions will inherently result in devastatingly destructive realities.
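To put that inaccuracy rate in concrete terms, here is a minimal back-of-the-envelope sketch using only the figures reported above; the variable names and the calculation are illustrative, not taken from the reporting.

```python
# Back-of-the-envelope arithmetic using the figures reported above
# (an illustrative sketch, not data from the IDF or the investigation itself).
flagged_people = 37_000      # people reportedly generated as potential targets by Lavender
reported_accuracy = 0.90     # accuracy rate reportedly found in the manual spot check

implied_wrongly_flagged = flagged_people * (1 - reported_accuracy)
print(f"Implied wrongly flagged people: ~{implied_wrongly_flagged:,.0f}")  # ~3,700
```

On those reported numbers alone, roughly 3,700 people could be wrongly flagged, which is what makes the "statistically it's fine" reasoning described below so consequential.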

But importantly, any accuracy rate number that sounds reasonably high makes it more likely that algorithmic targeting will be relied on, as it allows trust to be delegated to the AI system. As one IDF officer told +972 Magazine: "Because of the scope and magnitude, the protocol was that even if you don't know for sure that the machine is right, you know that statistically it's fine. So you go for it."

The IDF denied these revelations in an official statement to The Guardian. A spokesperson said that while the IDF does use "information management tools [...] in order to help intelligence analysts to gather and optimally analyse the intelligence, obtained from a variety of sources," it does not use "an AI system that identifies terrorist operatives."

The Guardian has since, however, published a video of a senior official of the Israeli elite intelligence Unit 8200 talking last year about the use of machine learning "magic powder" to help identify Hamas targets in Gaza. The newspaper has also confirmed that the commander of the same unit wrote in 2021, under a pseudonym, that such AI technologies would resolve the "human bottleneck for both locating the new targets and decision-making to approve the targets."

AI accelerates the speed of warfare in terms of the number of targets produced and the time to decide on them. While these systems inherently decrease the ability of humans to control the validity of computer-generated targets, they simultaneously make these decisions appear more objective and statistically correct due to the value that we generally ascribe to computer-based systems and their outcome.

This allows for the further normalisation of machine-directed killing, amounting to more violence, not less.

While media reports often focus on the number of casualties, body counts, much like computer-generated targets, have the tendency to present victims as objects that can be counted. This reinforces a very sterile image of war. It glosses over the reality of more than 34,000 people dead and 766,000 injured, the destruction of or damage to 60% of Gaza's buildings, the displaced persons, and the lack of access to electricity, food, water and medicine.

It fails to emphasise the horrific stories of how these things tend to compound each other. For example, one civilian, Shorouk al-Rantisi, was reportedly found under the rubble after an airstrike on Jabalia refugee camp. She had to wait 12 days to be operated on without painkillers and now resides in another refugee camp with no running water to tend to her wounds.

Aside from increasing the speed of targeting and therefore exacerbating the predictable patterns of civilian harm in urban warfare, algorithmic warfare is likely to compound harm in new and under-researched ways. First, as civilians flee their destroyed homes, they frequently change addresses or give their phones to loved ones.

Such survival behaviour corresponds to what the reports on Lavender say the AI system has been programmed to identify as likely association with Hamas. These civilians thereby unknowingly make themselves suspect for lethal targeting.

Beyond targeting, these AI-enabled systems also inform additional forms of violence. An illustrative story is that of the fleeing poet Mosab Abu Toha, who was allegedly arrested and tortured at a military checkpoint. It was ultimately reported by the New York Times that he, along with hundreds of other Palestinians, was wrongfully identified as Hamas by the IDF's use of AI facial recognition and Google Photos.

Over and beyond the deaths, injuries and destruction, these are the compounding effects of algorithmic warfare. It becomes a psychic imprisonment where people know they are under constant surveillance, yet do not know which behavioural or physical features will be acted on by the machine.

From our work as analysts of the use of AI in warfare, it is apparent that our focus should not solely be on the technical prowess of AI systems or the figure of the human-in-the-loop as a failsafe. We must also consider these systems' ability to alter the human-machine-human interactions, where those executing algorithmic violence are merely rubber-stamping the output generated by the AI system, and those undergoing the violence are dehumanised in unprecedented ways.

Read more here:

Gaza war: artificial intelligence is changing the speed of targeting and scale of civilian harm in unprecedented ways - The Conversation

Meta Says It Plans to Spend Billions More on A.I. – The New York Times

Meta projected on Wednesday that revenue for the current quarter would be lower than what Wall Street anticipated and said it would spend billions of dollars more on its artificial intelligence efforts, even as it reported robust revenue and profits for the first three months of the year.

Revenue for the company, which owns Facebook, Instagram, WhatsApp and Messenger, was $36.5 billion in the first quarter, up 27 percent from $28.6 billion a year earlier and slightly above Wall Street estimates of $36.1 billion, according to data compiled by FactSet. Profit was $12.4 billion, more than double the $5.7 billion a year earlier.

But Meta's work on A.I., which requires substantial computing power, comes with a lofty price tag. The Silicon Valley company said it planned to raise its spending forecast for the year to $35 billion to $40 billion, up from a previous estimate of $30 billion to $37 billion. The move was driven by heavy investments in A.I. infrastructure, including data centers, chip designs, and research and development.

Meta also predicted that revenue for the current quarter would be $36.5 billion to $39 billion, lower than analysts' expectations.

The combination of higher spending and lighter-than-expected revenue spooked investors, who sent Meta's shares down more than 16 percent on Wednesday afternoon after they ended regular trading at $493.50.

"Meta's earnings should serve as a stark warning for companies reporting this earnings season," said Thomas Monteiro, a senior analyst at Investing.com. While the company's results were robust, they didn't matter as much as the lowered revenue expectations for the current quarter, he said, adding, "Investors are currently looking at the near future with heavy mistrust."

Continue reading here:

Meta Says It Plans to Spend Billions More on A.I. - The New York Times

Anatomy Of A Fall star Messi getting Cannes show – The A.V. Club

Messi at the Oscars. Photo: Rob Latour (Shutterstock)

Hollywood will do anything to avoid paying a real human actor, huh? Just kidding! We are all very happy for the success of Messi the dog, who was the breakout star in the Academy Award-winning French film Anatomy Of A Fall. According to IndieWire, the canine actor is capitalizing on his success with a new series called Messi: The Cannes Film Festival From A Dog's Eye View.

Messi, a.k.a. "the canine George Clooney," as producer D18 Paris calls him, will host the series of eight episodes, each running one minute. According to a statement from D18 (via IndieWire), "This will be an opportunity for Messi to ask his guest any questions with the innocence of a dog. When you're the current international star, you can do anything and Messi dares to do it all!"

Surely, Messi's team didn't mean to paraphrase Donald Trump on the infamous Access Hollywood tapes? What kind of nefarious shit will Messi be getting into over at Cannes? He already caused enough of a stir by allegedly unfairly swaying the vote in Anatomy's favor by being a Very Good Boy at the Oscars Nominee Luncheon, nearly getting himself banned from the Academy Awards ceremony. He doesn't need to give the studio system any more ammunition to have him canceled and put out to pasture. Better keep him away from any cats, just in case.

All jokes aside, Messi's show is a French program running on French channels (France 2, France 3, Culturebox, and TV5 Monde) from May 13 through the end of the festival, May 25. However, it's sponsored by TikTok, so you'll probably see it online somewhere. (You still have several months of the app before the ban goes into effect.) Tim Newman, who conceived the series, will produce for D18, while Raphaël Mezrahi, who incarnated the show, will write and provide the voice of Messi. Loïc Pourageaux will direct, and then of course there's Laura Martin, Messi's loyal trainer.

Visit link:
Anatomy Of A Fall star Messi getting Cannes show - The A.V. Club

Red Sox Pitcher Tanner Houck Has A New Arsenal – Anatomy of an Inning – Over The Monster

Welcome back to another edition of The Anatomy of An Inning. My name is Jacob Roy, and I pretend to know pitching better than the pitchers themselves. If you're new here or need a reminder of what this is all about, I take an inning from the previous week or so and break it down, one pitch at a time. Each pitch should have a purpose, so I'm looking at each one individually to try to go beyond the box score and tell the full story.

It's Tanner Houck week inside my brain! That's right, I was so convinced Houck would never excel as a starting pitcher that his newfound success in the rotation has consumed my every waking thought. I already covered many of the changes Houck has made thus far in theory, but today I'm taking a look at how it works in practice. Let's finish the Tanner Houck analysis saga so I can go back to obsessing over the rest of the pitching staff instead of just one guy.

We'll go to the second inning of Houck's start on Tuesday against the Cleveland Guardians. Houck retired the side in order in the first inning and is returning to the mound in a tie game. Keep in mind that this is Houck's second meeting with the Guardians in as many weeks, so the hitters have some level of familiarity with his arsenal.

Houck starts Naylor off with an offspeed pitch (I'm calling it a changeup until someone confirms otherwise) for called strike one. I adore this pitch call. Last season, you would have almost never seen Tanner Houck throw a changeup behind or even in the count. First-pitch strikes tilt the tables in favor of the pitcher, and he's able to get one here with an offspeed pitch. This isn't the best execution as it's right down the pipe, but hitters aren't typically looking for first-pitch changeups so he's able to get away with it.

Beautiful slider for strike two. This time, Houck starts the pitch in a similar spot to the previous one, but instead of dropping straight down, it runs in on the hitter. It's also about five miles per hour slower than the changeup, giving Houck three velocity bands to work with. Here, Naylor is a bit early and a bit over the ball as he swings for strike two. At 0-2, I'd continue to work down in the zone, with either another slider even further away from the strike zone or a changeup low and away.

Perfection. Houck goes to the changeup, and Naylor can't make contact. To me, the swing looks like Naylor is expecting a slider as he falls away from the ball before lunging at the pitch as it breaks away from him. Houck's newfound ability to locate his offspeed pitch makes all the difference here. Previously, lefties didn't have to think about the slow ball as, more often than not, it wasn't landing in the zone. Now, he has pitches breaking both ways to give lefties pause in bad counts. Great execution by Houck to start the inning.

Houck again tries to execute the first pitch changeup for a strike, but this one falls below the zone for ball one.

Here's a good slider for strike one. Brennan fouls it off, but it's a weird, almost defensive swing from Brennan, whose timing is off. 1-1.

At 1-1, Houck goes back to his changeup and locates it nicely on the outer part of the plate. Much like Naylor, Brennan has to lunge at this one and can only foul it off. In a 1-2 count with Brennan's timing off, I'd continue to mix my pitches and avoid doubling up.

Houck does double up and it gets away from him to even the count at 2-2.

Here's a slider that gets away from him and runs the count full. To this point in the season, Brennan has seen three fastballs from Houck, none of which were in two-strike counts. After letting two off-speed pitches get away from him, this would be an opportunity to throw one past him.

Here's another slider off the plate, but Brennan reaches out and fouls it off. I still like the sinker here.

Oops. I guess that's why I'm behind a keyboard and not calling pitches. That and I lack the arm strength, hand-eye coordination, and overall athleticism required to play catcher at a Major League level. Anyways, this pitch is well located, but Brennan is ready for it and punches it to left field.

Well, that works. It's a sinker on the hands of Freeman, and all he can do is hit it into a fielder's choice for the second out. Houck's sinker has a nice deviation between the observed and spin-based movement, meaning it drops more than one might expect. Over the last three years, when he gets the sinker near this spot to righties, it ends up on the ground fairly frequently.

That's the rule for most same-handed matchups; sinkers inside typically result in ground balls. Houck is no exception, and he executes the book perfectly here to get a quick out.

Florial had a doozy of a time matching up with Houck in the last meeting, striking out in all three of his at-bats. He saw one sinker in three trips to the plate. I'd expect more of the same from Houck in this meeting.

He starts off with the first cutter of the inning for a questionably called strike one. It's in a good spot where even if Florial does swing and connect, he'll have a hard time keeping it fair. Given Houck's matchups with Florial last week, I wouldn't throw any fastballs now that Houck is ahead in the count.

Here's a changeup from Houck that Florial fouls off. This is probably the worst pitch so far with the ball being left middle-in. Fortunately, it still counts as a strike and Houck is ahead 0-2. Again, I wouldn't throw any fastballs.

Houck opts to go with his slider and leaves it up slightly. Florial again fouls it off to keep the count at 0-2. The story remains the same in my eyes; Houck should go to either his slider inside or his changeup away.

It's a slider this time, and it's a very good one. Florial thinks about it but ultimately holds up. Again, I would go with the slider inside or the changeup away. I prefer the slider because Florial just spat on one and is probably thinking he won't get another, but either is a good option.

Beautifully executed. Houck buries a changeup low and away and Florial is completely fooled. It's nearly a perfect mirror for his slider, and it makes it incredibly difficult for hitters because Houck can start the pitches in the same spot and have them break in opposite directions, at different velocities. It's almost a guessing game for lefties, and Florial guesses wrong this time.

I've written about Tanner Houck more in two weeks than you should read in a season. If you've made it this far, thank you. At the end of this outing, things got away from Houck a bit, and the Guardians broke through for a couple of runs. Still, Houck managed to make it through six innings and keep the Red Sox in the game, something he rarely did in past seasons as a starter.

I'll reiterate one more time: the keys to Houck's success this season are his changeup and his mechanical changes. The changeup gives him a new velocity band to work with and a pitch to keep lefties off balance. The improved command keeps him in control of at-bats and allows him to execute pitches like the above sinker to generate quick outs. I was a Houck doubter before, but these adjustments have changed my tune in a hurry.

Read more:
Red Sox Pitcher Tanner Houck Has A New Arsenal - Anatomy of an Inning - Over The Monster

‘Grey’s Anatomy’: When Did Fans Stop Hating and Start Loving Amelia Shepherd? – The Daily Beast

The Grey's Anatomy fandom has never lacked for controversial characters. In fact, Grey Sloan Memorial Hospital seems to hire specifically for that quality. That said, I must ask: Have we ever seen a character redemption arc as pronounced as Amelia Shepherd's (Caterina Scorsone)? Once perhaps the worst character wandering this hospital's hallowed, absolutely uninsurable halls, she's spent 13 seasons and counting becoming one of its most compelling.

ABC's chief medical drama is all about stirring up our emotions and, sometimes, our angry keyboards. From the ever-complicated Meredith Grey (Ellen Pompeo), to the lovable but rarely logical Izzie Stevens (Katherine Heigl), to the well-meaning but absolutely toxic lover Owen Hunt (Kevin McKidd), Amelia's peers have all had their less-than-stellar moments. For a while, however, everyone in the fandom seemed to hate her with a burning passion.

Maybe it was her ongoing competition with Derek that put people off, or maybe it was her short temper, but the hatred has been intense. Those who've been hanging out with Amelia since her time on the Grey's spin-off Private Practice might have a deeper appreciation for her absurdly traumatic backstory. (For those who need a refresher: She watched her dad get murdered when she was just 5 years old; as an adult, she woke up one morning to find her fiancé dead from an overdose beside her; and she carried her son to term knowing he would not survive, just so that his death could become meaningful through organ donation.)

That said, some viewers simply could not stomach Amelia's impulsivity, her horrific behavior in pretty much all of her romantic relationships, and her admittedly kinda corny superhero pose. To each their own!

But what changed? During the show's most recent episode earlier this month, I saw multiple posts about Amelia's "princessing." This was where my curiosity began. Then, I dove down a deep social media rabbit hole and started watching the fan videos on YouTube. A theory began to take shape.

It should come as no surprise that a big part of Amelia's appeal seems to come down to Scorsone's performance. Amelia is bold and expressive, if often self-centered, and while plenty of Grey's actors have seemingly started phoning it in over the years, Scorsone always comes ready to Act with a capital A, adding small flourishes to her delivery that make all of Amelia's lines feel like Amelia-isms. It also doesn't hurt that a lot of people simply think that Scorsone herself is pretty adorable and also cool as hell. (If only all of us could hang with Hayley Kiyoko!)

Then, there's Amelia's actual arc, which, when you take a step back and look at it, has also been a pretty redemptive one. She's overcome multiple potentially devastating experiences and uses her experience managing her addiction to support colleagues like Dr. Webber when they're struggling. Also, not for nothing, she saved Geena Davis' life (OK, Dr. Herman's life) through an impossible brain surgery that pretty much no one thought she could pull off. Later, she herself survived a brain tumor, further solidifying her resilience for any remaining haters and losers who might've doubted her.

Sure, Amelia's relationship with Owen was kind of a mess, as are most of her romantic relationships. And yes, she handled that whole paternity test situation with Link (Chris Carmack) all wrong. And yes, OK, she can often struggle to take accountability during interpersonal disputes. But none of us are perfect, right? On a show like Grey's Anatomy, especially, where everyone has their disastrous moments of disgrace, being a fuck-up is always kinda relative.

Come to think of it, maybe one of the biggest reasons for Amelia's rise from the deepest depths of most-hated territory is that she's actually stuck around long enough to grow on people. On a storytelling level, the massive turnover we've seen on Grey's likely also plays a role here. At this point, 20 seasons in (God, does that make me feel old to type), Amelia is one of the longest-running characters we see on screen each week. Apart from Meredith, who only pops in periodically, our only remaining original characters are Chandra Wilson's Miranda Bailey and James Pickens Jr.'s Richard Webber.

When people come and go so often, there's some inherent comfort to seeing someone you've spent a lot of time with already, even if she used to annoy the hell out of you. Let's just hope that if she really does get with Natalie Morales' Monica Beltran, she gets out of her own way and lets us all have the ship we deserve.

Read more from the original source:
'Grey's Anatomy': When Did Fans Stop Hating and Start Loving Amelia Shepherd? - The Daily Beast

Interstitium: A Network of Living Spaces Supports Anatomical Interconnectedness – The Scientist

The human body is enmeshed in an intricate internal web of living spaces known as the interstitium.1 These fractal-like structures create a vast honeycomb network of fluid-filled openings within and between tissues and organs that spans the body and acts as a thoroughfare. A sophisticated system of connective tissue, including collagen and various other extracellular matrix proteins, supports the continuity of this network. The interstitium is increasingly being recognized as a fundamental anatomical structure and body-wide communication system.

The discovery of the interstitium in 2018 made waves, with many questioning whether scientists had discovered a new organ.1 "It's actually not an organ. It's a system," said Neil Theise, a medical doctor and professor of pathology at NYU Grossman School of Medicine, whose team made the discovery. "The space itself may be as large as 100 to 200 microns. It's grossly macroscopic, you can see it when you look at any connective tissue in the body, and you can pull it apart with tweezers. That's not because the collagen easily shreds, but because it's actually a net," said Theise.

There is a Crack in Everything; That's How the Light Gets in

Remarkably, doctors and scientists routinely encountered the interstitium but were taught to ignore it. Surgeons regularly removed and discarded portions of this body-wide net and pathologists wrote it off as an artifact of tissue processing. In the latter case, preparing tissue samples for microscope viewing involves a series of steps that include fixation and dehydration. The fluid is removed from the spaces of the interstitium, and the structures collapse down on themselves. "You see these cracks, these little openings in the collagen," Theise said. "For decades, what I've been taught and what I've taught people is just ignore that because collagen is so stiff that when you try to section [the tissue] it cracks." When Theise and his colleagues made their ground-breaking discovery in 2018, they realized that the spaces in living tissue corresponded with the cracks routinely seen in fixed tissue sections on microscope slides. "It turns out those are the remnants of the living spaces," Theise said.

With this realization, the cracks in contemporary science and medicine were exposed. Despite the vast scientific knowledge that exists about the human body, the picture remains profoundly incomplete. But as poet, singer, and Zen Buddhist Leonard Cohen famously sang, "There is a crack in everything, that's how the light gets in." The interstitium may be the missing piece of the puzzle that helps explain the interconnectivity between every cell, tissue, organ, and hidden crevice in the body. "There isn't a tissue that isn't riddled with the spaces. The interstitium has the ability to communicate through the body across every scale, from the quantum electromagnetic level, all the way up to the cellular level," Theise said.

Because the interstitium is a fibrous network, mechanical stimuli that affect a fiber in one area also affect other regions of the body, creating a network of mechanical connectivity. "If you want to communicate a signal, mechanics are so efficient," said Andrew Pelling, a professor of physics and biology at the University of Ottawa. "It's no surprise that there are all these highly evolved systems to sense and transmit mechanical information."

Theise explained further that the collagen that makes up the interstitium is piezoelectric.2,3 It can convert mechanical force into electrical currents that may carry charged molecules through the interstitium. "Collagen, when you stack it up high enough, becomes a piezo crystal. Any movement of the collagen will generate electrical energy," Theise said. This may have far-reaching implications, from tissue and organ regeneration to gastrointestinal function.4-6
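For readers who want the relation behind that claim, the direct piezoelectric effect is usually written in its simplest scalar form as below; this is a standard textbook expression, not one given in the article, and the note on collagen's coefficient is an approximate literature value.

```latex
% Direct piezoelectric effect, simplest scalar form (textbook relation, not from the article):
% D: electric displacement (surface charge density) generated by the material
% T: applied mechanical stress
% d: piezoelectric coefficient; for collagen-rich tissue it is reported to be small,
%    on the order of picocoulombs per newton.
D = d\,T
```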

The interstitium is continuous throughout tissue, in this case, the human pancreas, as well as the entire body. The image on the left shows a cross section through the draining duct of a human pancreas surrounded by thick bundles of supportive collagen networks in red. On the right, a hyaluronic acid stain in brown demonstrates how the interstitial spaces between the collagen bundles and filaments are filled with hyaluronic acid.

The interstitium also acts like a sieve in other ways. The spaces of the interstitium are filled with hyaluronic acid, which has a high capacity to hold water, creating a gel. Hyaluronic acid is also highly charged, meaning it can preferentially allow access to certain molecules depending on their charges. In doing so, the interstitium has the potential to modulate the movement of large and small molecules, as well as cells. Although it is not clear how and where they move, the mechanisms may relate to signaling molecules like growth factors, chemokines, and cytokines that create chemical gradients to guide movement. This is particularly relevant for cell migration in the context of cancer cell metastasis through the interstitium.7 "I can show you a tumor marching through these spaces," Theise said, referring to histopathological tissue slides of cancerous tissue. The interstitium is also believed to be involved in sepsis and fluid balance.8-10

You Can Add up the Parts, You Won't Have the Sum

The Buddhist tenet of interconnectedness plays a strong role in Theise's life and work. A practicing Zen Buddhist, Theise described the interstitium and his work as a pathologist in almost mystical terms. "My Zen practice is about cultivating beginner's mind, to just be witness to the present moment and not attached to preconceived notions, not anticipating what you think you know will happen. My practice of pathology uses the same method, which is probably not a coincidence."

Science and spiritual practice came together unexpectedly for Theise one day during a particularly distracted meditative practice. He was ruminating on whether the body is a unique entity or a conglomeration of cells, when he noticed an incense stick turn to smoke on the altar. "Suddenly, there was an instant bridge between the scientific and spiritual side in a way I was not looking for. But once you see it, you've seen it; like once you've seen the interstitium, you can't unsee it," he said.

The interstitium was long overlooked by surgeons and pathologists as a byproduct of biology and tissue processing.

Yet the interstitium's extraordinarily complicated net is far from easy to visualize. Theise and his colleagues have broken down the task, mapping the interstitium organ by organ to reveal the continuity of spaces within and between tissues and organs.7 In doing so, Theise also creates interconnections across scientific and medical subspecialities. "It makes sense to me, at least conceptually, that it is such an important space. It's everywhere, it's the interface between all of these discrete systems," Pelling said. "Biology doesn't tend to create structures that are not important in some way. It's the same as those older notions about junk DNA that are starting to crumble. Biology is extremely efficient."

Contemporary science is successful because of its reductive approach. The human body is, in many ways, like a fine-tuned machine. But, as Cohen sang, "you can add up the parts, you won't have the sum." Understanding how the interstitium works will define more of the rules about how the trillions of cells in the human body communicate across vast distances to create the exquisitely complex system that is the body. How these things all add up are vast scientific questions that will require a meticulously reductive approach as well as cultivation of a beginner's mind. "If you do any kind of work with dedicated focus, the secrets of the universe are there in what you're doing," Theise said.

View original post here:
Interstitium: A Network of Living Spaces Supports Anatomical Interconnectedness - The Scientist

Messi the Dog From ‘Anatomy of a Fall’ Is Getting His Own TV Show – Collider

The French film Anatomy of a Fall received five nominations at the Academy Awards, but it was another aspect of the film that has kept everyone talking about it: Messi, the Border Collie who starred in Anatomy of a Fall. Messi's performance was highly lauded as one of the best animal roles ever put on-screen, and now the four-legged actor is going in front of the camera again. Messi will star in his own television show set at the upcoming Cannes Film Festival, IndieWire reported. It seems that this year's installment of the iconic festival will truly be a dog-eat-dog world.

Messi's show will be set at the festival, fitting as Messi himself was born in France and lives with his handler near Paris. The show will actually be a short program that allows viewers to see the festival through the eyes and, somehow, the voice of Messi, according to Cannes production company D18 Paris. Details remain slim, but the show will reportedly run from dawn to late night and "will be an opportunity for Messi to ask his guest any questions with the innocence of a dog," D18 said. "When you're the current international star, you can do anything and Messi dares to do it all!" the production company added.

The show, which is officially called Messi: The Cannes Film Festival from a Dog's Eye View, will be a series of eight one-minute episodes and will be broadcast on a variety of French TV channels. The idea for the show came from Tim Newman, who will produce the show for D18, while Loïc Pourageaux will direct. Raphaël Mezrahi will provide Messi's voice.

Anatomy of a Fall featured standout performances from Sandra Hüller and Milo Machado-Graner, but Messi became a superstar in his own right when the film was released in 2023. Messi portrayed Snoop, and he stole the film during a scene in which he must act as a dog that has overdosed on aspirin, before Sandra (Hüller) and Daniel (Machado-Graner) step in and perform life-saving measures. Though he may be just a dog, Messi's performance got everyone in Hollywood talking, and while the film itself became an Academy Award nominee, Messi won a special prize himself: the 2023 Palm Dog, given out by Cannes to the best dog performance of the year.

Messi became such a star that he even attended the Oscars for a brief period, despite initial reports that he wouldn't make an appearance. The dog filmed a series of reaction shots prior to the start of the ceremony, and Oscars host Jimmy Kimmel even mentioned Messi's popularity during his opening monologue. A clip of Messi appearing to "clap" for other nominees also went viral.

Messi's show will run for the duration of Cannes, from May 13 to May 25. Anatomy of a Fall is streaming now on Hulu.

Read more from the original source:
Messi the Dog From 'Anatomy of a Fall' Is Getting His Own TV Show - Collider

Ellen Pompeo Jokes ‘Grey’s Anatomy’ Success Is Due to Taylor Swift (Exclusive) – PEOPLE

Ellen Pompeo has an inkling as to why Grey's Anatomy has been so successful, and it might come down to music superstar Taylor Swift.

In PEOPLE's new Grey's Anatomy special edition issue, on newsstands now, the 54-year-old actress shares a few reasons why she thinks the show has remained on the air for 20 seasons and counting.

"[Creator] Shonda Rhimes is a great writer. Second, Taylor Swift named her cat after Meredith Grey! Just kidding. It is because of our awesome fans," says Pompeo.

The music superstar, who released her surprise double album The Tortured Poets Department on April 19, has long been a fan of the series and tapped Pompeo to be in her star-studded "Bad Blood" music video in 2015.

"Her people called my people and said, 'Would you like to be in this video?' And I was like, 'Of course, that would be so fun. There's an old lady section? I'm down,'" Pompeo recalled on Jimmy Kimmel Live the following year.

However, fans shouldn't hold their breath on Pompeo meeting the feline named after her Grey's Anatomy character.

"I'm super allergic to cats. It's going to be awkward," she said of meeting Swift's cat at the time.

In 2022, Pompeo was asked about the possibility of Swift making a cameo on the ABC medical drama. The star told Extra at the time: "I think she's pretty busy, but that would be fun. I would love it."

While Swift has yet to appear on the show, her song "White Horse" was featured in a season 5 episode, which she called "basically the best thing ever" in a clip that resurfaced on TikTok.

"I love Grey's Anatomy because I think it's the best example of dry, sarcastic humor I've ever seen mixed with drama, because in life there's humor and there's drama," she said. "You know, when you're watching comedies, it's like, that's awesome and makes you laugh and it's light, but you know, when a breakup happens on a comedy and they're all just laughing about it all the time, and they don't ever feel it, it's hard for me to buy that because in real life, like you laugh about some things, but you cry about other things."

She added: "So I think that Grey's Anatomy has a great balance of real emotions slash dry humor."

For more on Grey's Anatomy and its cast, pick up PEOPLE's special issue, out Friday.

PEOPLE's Grey's Anatomy special edition issue is on newsstands now. The magazine retails for $14.99. Grey's Anatomy airs Thursdays at 9 p.m. ET on ABC.

View original post here:
Ellen Pompeo Jokes 'Grey's Anatomy' Success Is Due to Taylor Swift (Exclusive) - PEOPLE