Daily Archives: January 18, 2024

Google.org Grants $2.8M to WFP USA in Support of Innovation – World Food Program USA

Posted: January 18, 2024 at 6:09 pm

WASHINGTON, D.C. (January 18, 2024) – World Food Program USA has received a $2.8 million grant from Google.org, marking a significant milestone in the fight against global hunger. This grant supports the United Nations World Food Programme (WFP) Innovation Accelerator, a cutting-edge initiative aimed at leveraging innovation and technology to tackle food insecurity worldwide.

"We consider the private sector, particularly Google.org, to be best-in-class innovators and visionaries, and we rely on their expertise to help us create systemic innovations that will transform our efforts to eradicate global hunger. We are very appreciative of Google.org's support," said Barron Segar, World Food Program USA President and CEO.

Stuart McLaughlin, director of advocacy and strategic partnerships at Google.org, made the announcement during "Galvanizing Impact Innovation for Zero Hunger," an official side event of the 2024 World Economic Forum hosted by the WFP Innovation Accelerator, which highlighted Google.org's commitment to empowering communities and advancing innovative solutions to global challenges.

"By harnessing the power of innovation and technology, together we aspire to drive meaningful change and address global food insecurity. This grant exemplifies our commitment to creating a sustainable and hunger-free future for all," said McLaughlin.

The WFP Innovation Accelerator focuses on integrating advanced technologies such as Artificial Intelligence (AI), data analytics, blockchain and new business models to improve humanitarian response and address the root causes of hunger. The grant will fund an acceleration program aimed at developing and scaling 10 ventures created within WFP. These solutions, anchored in technological innovation, aim to improve emergency response, supply chain efficiency, and support for small-scale farmers in many of the world's most vulnerable communities, including in Yemen, Afghanistan, Somalia, and Haiti. The program includes a comprehensive package of cutting-edge business support, scaling strategies and grant funding.

"This partnership amplifies our capacity to leverage cutting-edge technology with WFP's global operational capacity. Google stepping up its support in times of a global food crisis is critical to scaling game-changing solutions that create more impact for some of the most vulnerable people across the world," said Bernhard Kowatsch, head of the WFP Innovation Accelerator.

This significant investment by Google.org is part of its ongoing commitment to creating sustainable and impactful solutions in global communities. It also marks a key moment in the collaboration between the technology sector and humanitarian organizations.

About Google.org
Google.org, Google's philanthropy, brings the best of Google to help solve some of humanity's biggest challenges, combining funding, product donations and technical expertise to support underserved communities and provide opportunity for everyone. We engage nonprofits, social enterprises and civic entities that make a significant impact on the communities they serve, and whose work has the potential to produce scalable, meaningful change.

About World Food Program USA
World Food Program USA, a 501(c)(3) organization based in Washington, D.C., proudly supports the mission of the United Nations World Food Programme by mobilizing American policymakers, businesses and individuals to advance the global movement to end hunger. Our leadership and support help to bolster an enduring American legacy of feeding families in need around the world. To learn more, please visit wfpusa.org.

About the United Nations World Food Programme Innovation Accelerator
The WFP Innovation Accelerator, launched in 2015, is one of the world's biggest social impact startup accelerators. It offers 16 annual programs to the broader ecosystem on multiple social impact and sustainability issues, such as hunger, climate change, primary healthcare, gender equality, and emergency response. In 2022 alone, the portfolio of 150+ startups and innovations impacted 37 million people globally. Since launch, the WFP Innovation Accelerator has maintained a 2X growth rate every year and raised over $200 million in grants for innovations. For more information, visit https://innovation.wfp.org/.

Media Contacts
Toula Athas, World Food Program USA, tathas@wfpusa.org
Google inquiries: Press@Google.com

Go here to read the rest:

Google.org Grants $2.8M to WFP USA in Support of Innovation - World Food Program USA


Google CEO: Job Cuts Will Continue Through 2024 – The Information

Posted: at 6:09 pm

Google CEO Sundar Pichai on Wednesday acknowledged the company's recent wave of layoffs for the first time, warning staff in an email that cuts would continue throughout the year, according to a person with direct knowledge of the email.

Google in the past week laid off more than 1,000 people across teams including hardware, ad sales and Google Assistant, mirroring recent job cuts at other major companies such as Amazon and Meta Platforms. Google had 182,381 employees as of September, meaning the latest cuts amount to less than 1% of its workforce. A year ago, it cut 6% of its staff at the time, or 12,000 people, but Pichai said cuts this year wouldn't rise to that level.

See the article here:

Google CEO: Job Cuts Will Continue Through 2024 - The Information


AST SpaceMobile gets strategic investment from AT&T, Google and Vodafone (NASDAQ:ASTS) – Seeking Alpha

Posted: at 6:09 pm

AST SpaceMobile (NASDAQ:ASTS) has secured a strategic investment from AT&T (T), Google and Vodafone, along with aggregate new financing of up to $206.5 million in gross proceeds.

In addition to the $155 million strategic investment, the company also plans to draw up to $51.5 million from its existing senior secured credit facility.

The strategic investment is expected to support the commercial roll-out of ASTS' network and comprises a mix of equity-linked capital and non-dilutive commercial payments.

The investment includes $110 million of 10-year subordinated convertible notes from AT&T, Google and Vodafone, bearing 5.50% interest with a conversion price of $5.75 per share, as well as a $20 million revenue commitment from AT&T.

The investment also includes a $25 million minimum revenue commitment from Vodafone.
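For readers who want to check the math, here is a small, purely illustrative Python sketch (not from the company's filing or the article) that reconciles the headline figures: the strategic investment plus the credit-facility draw gives the stated gross proceeds, and the note amount divided by the conversion price implies the share count on full conversion.

```python
# Illustrative arithmetic only; figures are taken from the article above.
strategic_investment_m = 155.0   # $M strategic investment from AT&T, Google and Vodafone
credit_facility_draw_m = 51.5    # $M planned draw on the senior secured credit facility
print(strategic_investment_m + credit_facility_draw_m)  # 206.5 ($M gross proceeds)

convertible_notes = 110_000_000  # $ of 10-year subordinated convertible notes
conversion_price = 5.75          # $ per share
# Shares issued if the notes were fully converted at the stated price:
print(round(convertible_notes / conversion_price))      # ~19,130,435 shares
```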

Go here to read the rest:

AST SpaceMobile gets strategic investment from AT&T, Google and Vodafone (NASDAQ:ASTS) - Seeking Alpha


Google’s CEO, Sundar Pichai, Finally Responded to the Company’s Layoffs. It’s a Lesson for Every Leader – Inc.

Posted: at 6:09 pm


More here:

Google's CEO, Sundar Pichai, Finally Responded to the Company's Layoffs. It's a Lesson for Every Leader - Inc.


As Google Pivots to AI, CEO Warns He Will Fire Even More Staff – Futurism

Posted: at 6:09 pm

How many people are going to get the axe in favor of AI?
Be Evil

Google has laid off more than a thousand people since January 10, and according to CEO Sundar Pichai, there will be more firings in the future as the company pushes forward with its AI pivot.

"We have ambitious goals and will be investing in our big priorities this year," Pichai told Google's staff in a company-wide email reviewed byThe Verge. "The reality is that to create the capacity for this investment, we have to make tough choices."

Those layoffs, as prior reporting from Business Insider and Axios indicates, took place across several departments. Hundreds were in the company's advertising and customer sales teams, while others affected the Google Assistant, Fitbit, and hardware verticals.

In the more recent memo, the CEO said that although the latest and forthcoming "role eliminations" will not be "at the scale of last year's reductions" – a reference to the 12,000 jobs Google cut around this time last year – the company will continue down the path of "removing layers to simplify execution and drive velocity in some areas."

Google's forthcoming "layer removal" comes on the heels of the company's tumultuous 2023, which, along with the 12,000 cuts at the beginning of the year, saw it pivot to AI much like its peers and competitors.

Indeed, as the New York Times reported based on insider interviews late last year, the company's pivot to AI was made hastily as OpenAI's ChatGPT and its alliance with Microsoft threatened to leave more established tech players in the dust.

Just before Christmas 2022, the NYT's reporting explains, the company's top lawyer summoned executives to deliver a directive: their teams were to drop whatever they had previously been working on and begin developing a slate of AI products immediately, per Pichai's orders. Just over a month later, those 12,000 employees were axed, and now we appear to be seeing the continued toll of that rapid decision-making process.

As Axios aptly pointed out in its own analysis of the 2024 Google layoffs, it seems less that people are being replaced by AI itself and more that they are being replaced with smaller teams of people who are good with the technology as the company adjusts course.

It's hard to say how those AI-adept new workers feel about their roles, but it seems likely they're essentially building the tech that will eventually replace them – and that can't feel good.

More on Google: Image Database Powering Google's AI Contains Explicit Images of Children

See the original post:

As Google Pivots to AI, CEO Warns He Will Fire Even More Staff - Futurism


Introducing ASPIRE for selective prediction in LLMs – Google Research Blog – Google Research

Posted: at 6:09 pm

Posted by Jiefeng Chen, Student Researcher, and Jinsung Yoon, Research Scientist, Cloud AI Team

In the fast-evolving landscape of artificial intelligence, large language models (LLMs) have revolutionized the way we interact with machines, pushing the boundaries of natural language understanding and generation to unprecedented heights. Yet, the leap into high-stakes decision-making applications remains a chasm too wide, primarily due to the inherent uncertainty of model predictions. Traditional LLMs generate responses recursively, yet they lack an intrinsic mechanism to assign a confidence score to these responses. Although one can derive a confidence score by summing up the probabilities of individual tokens in the sequence, traditional approaches typically fall short in reliably distinguishing between correct and incorrect answers. But what if LLMs could gauge their own confidence and only make predictions when they're sure?

Selective prediction aims to do this by enabling LLMs to output an answer along with a selection score, which indicates the probability that the answer is correct. With selective prediction, one can better understand the reliability of LLMs deployed in a variety of applications. Prior research, such as semantic uncertainty and self-evaluation, has attempted to enable selective prediction in LLMs. A typical approach is to use heuristic prompts like "Is the proposed answer True or False?" to trigger self-evaluation in LLMs. However, this approach may not work well on challenging question answering (QA) tasks.

In "Adaptation with Self-Evaluation to Improve Selective Prediction in LLMs", presented at Findings of EMNLP 2023, we introduce ASPIRE a novel framework meticulously designed to enhance the selective prediction capabilities of LLMs. ASPIRE fine-tunes LLMs on QA tasks via parameter-efficient fine-tuning, and trains them to evaluate whether their generated answers are correct. ASPIRE allows LLMs to output an answer along with a confidence score for that answer. Our experimental results demonstrate that ASPIRE significantly outperforms state-of-the-art selective prediction methods on a variety of QA datasets, such as the CoQA benchmark.

Imagine teaching an LLM to not only answer questions but also evaluate those answers, akin to a student checking their answers against the back of the textbook. That's the essence of ASPIRE, which involves three stages: (1) task-specific tuning, (2) answer sampling, and (3) self-evaluation learning.

Task-specific tuning: ASPIRE performs task-specific tuning to train adaptable parameters (θp) while freezing the LLM. Given a training dataset for a generative task, it fine-tunes the pre-trained LLM to improve its prediction performance. Towards this end, parameter-efficient tuning techniques (e.g., soft prompt tuning and LoRA) might be employed to adapt the pre-trained LLM on the task, given their effectiveness in obtaining strong generalization with small amounts of target task data. Specifically, the LLM parameters (θ) are frozen and adaptable parameters (θp) are added for fine-tuning. Only θp are updated to minimize the standard LLM training loss (e.g., cross-entropy). Such fine-tuning can improve selective prediction performance because it not only improves the prediction accuracy, but also enhances the likelihood of correct output sequences.
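As a rough sketch of what this stage might look like in code (an assumption-laden illustration, not the authors' released implementation), the PyTorch snippet below wraps a frozen, Hugging Face-style causal LM and prepends a trainable soft prompt, so that only the prompt parameters (θp) receive gradients from the standard cross-entropy loss. The class name, prompt length and embedding size are hypothetical.

```python
import torch
import torch.nn as nn

class SoftPromptLM(nn.Module):
    """Frozen causal LM (theta) plus a trainable soft prompt (theta_p)."""

    def __init__(self, base_lm, prompt_length=20, embed_dim=2560):
        super().__init__()
        self.base_lm = base_lm
        for p in self.base_lm.parameters():
            p.requires_grad = False                      # theta stays frozen
        # theta_p: learned prompt vectors prepended to the input embeddings
        self.soft_prompt = nn.Parameter(0.02 * torch.randn(prompt_length, embed_dim))

    def forward(self, input_embeds, labels=None):
        batch = input_embeds.size(0)
        prompt = self.soft_prompt.unsqueeze(0).expand(batch, -1, -1)
        full_embeds = torch.cat([prompt, input_embeds], dim=1)
        if labels is not None:
            # Mask the soft-prompt positions (-100 is the usual ignore index),
            # so only the target tokens contribute to the cross-entropy loss.
            pad = torch.full((batch, prompt.size(1)), -100,
                             dtype=labels.dtype, device=labels.device)
            labels = torch.cat([pad, labels], dim=1)
        # Assumes a Hugging Face-style interface accepting inputs_embeds/labels.
        return self.base_lm(inputs_embeds=full_embeds, labels=labels)
```

Only `soft_prompt` would be handed to the optimizer, which is what makes this stage parameter-efficient.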

Answer sampling: After task-specific tuning, ASPIRE uses the LLM with the learned θp to generate different answers for each training question and create a dataset for self-evaluation learning. We aim to generate output sequences that have a high likelihood. We use beam search as the decoding algorithm to generate high-likelihood output sequences and the Rouge-L metric to determine if the generated output sequence is correct.
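A minimal sketch of this sampling step is shown below, assuming a Hugging Face-style model and tokenizer plus the open-source rouge_score package; the Rouge-L threshold is an illustrative value, not one taken from the paper.

```python
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)
ROUGE_THRESHOLD = 0.7  # illustrative cutoff for calling an answer "correct"

def sample_answers(model, tokenizer, question, reference, num_beams=5):
    """Decode several high-likelihood answers and pseudo-label them via Rouge-L."""
    inputs = tokenizer(question, return_tensors="pt")
    outputs = model.generate(
        **inputs,
        num_beams=num_beams,
        num_return_sequences=num_beams,  # keep several beams per question
        max_new_tokens=64,
    )
    labeled = []
    prompt_len = inputs["input_ids"].shape[1]
    for seq in outputs:
        answer = tokenizer.decode(seq[prompt_len:], skip_special_tokens=True)
        # Rouge-L F-measure against the reference decides the pseudo-label
        # used for self-evaluation training in the next stage.
        f_measure = scorer.score(reference, answer)["rougeL"].fmeasure
        labeled.append((answer, f_measure >= ROUGE_THRESHOLD))
    return labeled
```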

Self-evaluation learning: After sampling high-likelihood outputs for each query, ASPIRE adds adaptable parameters (θs) and only fine-tunes θs for learning self-evaluation. Since the output sequence generation only depends on θ and θp, freezing θ and the learned θp can avoid changing the prediction behaviors of the LLM when learning self-evaluation. We optimize θs such that the adapted LLM can distinguish between correct and incorrect answers on its own.
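The snippet below sketches one way this stage could be set up (again an assumption, not the released code): with θ and θp frozen and a second soft prompt θs already wired into the model, the loss asks the LLM to emit a verdict token for each sampled question-answer pair, using the Rouge-L pseudo-label from the previous step as the target. The prompt template and verdict words are hypothetical.

```python
def self_eval_loss(model, tokenizer, question, answer, is_correct):
    """Cross-entropy on the verdict tokens only; gradients flow into theta_s."""
    verdict = " correct" if is_correct else " incorrect"
    text = (f"Question: {question}\n"
            f"Proposed answer: {answer}\n"
            f"The proposed answer is{verdict}")
    enc = tokenizer(text, return_tensors="pt")
    labels = enc["input_ids"].clone()
    # Mask everything except the verdict tokens (-100 is the ignore index).
    verdict_len = len(tokenizer(verdict, add_special_tokens=False)["input_ids"])
    labels[:, :-verdict_len] = -100
    out = model(**enc, labels=labels)  # theta and theta_p are frozen elsewhere
    return out.loss
```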

In the proposed framework, θp and θs can be trained using any parameter-efficient tuning approach. In this work, we use soft prompt tuning, a simple yet effective mechanism for learning soft prompts to condition frozen language models to perform specific downstream tasks more effectively than traditional discrete text prompts. The driving force behind this approach lies in the recognition that if we can develop prompts that effectively stimulate self-evaluation, it should be possible to discover these prompts through soft prompt tuning in conjunction with targeted training objectives.

After training θp and θs, we obtain the prediction for the query via beam search decoding. We then define a selection score that combines the likelihood of the generated answer with the learned self-evaluation score (i.e., the likelihood of the prediction being correct for the query) to make selective predictions.
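As an illustration of how such a selection score might be computed at inference time (the particular weighting and threshold below are assumptions, not values from the paper), one can blend the length-normalized likelihood of the decoded answer with the learned self-evaluation probability and abstain when the combined score is too low.

```python
import math

def selection_score(answer_logprob, num_answer_tokens, self_eval_prob, alpha=0.5):
    """Blend normalized answer likelihood with the learned self-evaluation score."""
    norm_likelihood = math.exp(answer_logprob / max(num_answer_tokens, 1))
    return alpha * norm_likelihood + (1.0 - alpha) * self_eval_prob

def selective_predict(answer, score, threshold=0.6):
    """Return the answer only when confident enough; otherwise abstain (None)."""
    return answer if score >= threshold else None
```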

To demonstrate ASPIRE's efficacy, we evaluate it across three question-answering datasets – CoQA, TriviaQA, and SQuAD – using various open pre-trained transformer (OPT) models. By training θp with soft prompt tuning, we observed a substantial hike in the LLMs' accuracy. For example, the OPT-2.7B model adapted with ASPIRE demonstrated improved performance over the larger, pre-trained OPT-30B model using the CoQA and SQuAD datasets. These results suggest that with suitable adaptations, smaller LLMs might have the capability to match or potentially surpass the accuracy of larger models in some scenarios.

When delving into the computation of selection scores with fixed model predictions, ASPIRE received a higher AUROC score (the probability that a randomly chosen correct output sequence has a higher selection score than a randomly chosen incorrect output sequence) than baseline methods across all datasets. For example, on the CoQA benchmark, ASPIRE improves the AUROC from 51.3% to 80.3% compared to the baselines.
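For readers unfamiliar with the metric, the toy snippet below shows how such an AUROC can be computed with scikit-learn; the labels and scores are made up purely for illustration and are not results from the paper.

```python
from sklearn.metrics import roc_auc_score

is_correct = [1, 0, 1, 1, 0]                       # toy correctness labels
selection_scores = [0.91, 0.35, 0.77, 0.64, 0.70]  # toy selection scores
# AUROC = probability that a random correct answer outscores a random incorrect one.
print(roc_auc_score(is_correct, selection_scores))  # ~0.83 on this toy data
```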

An intriguing pattern emerged from the TriviaQA dataset evaluations. While the pre-trained OPT-30B model demonstrated higher baseline accuracy, its performance in selective prediction did not improve significantly when traditional self-evaluation methods – Self-eval and P(True) – were applied. In contrast, the smaller OPT-2.7B model, when enhanced with ASPIRE, outperformed in this aspect. This discrepancy underscores a vital insight: larger LLMs utilizing conventional self-evaluation techniques may not be as effective in selective prediction as smaller, ASPIRE-enhanced models.

Our experimental journey with ASPIRE underscores a pivotal shift in the landscape of LLMs: The capacity of a language model is not the be-all and end-all of its performance. Instead, the effectiveness of models can be drastically improved through strategic adaptations, allowing for more precise, confident predictions even in smaller models. As a result, ASPIRE stands as a testament to the potential of LLMs that can judiciously ascertain their own certainty and decisively outperform larger counterparts in selective prediction tasks.

In conclusion, ASPIRE is not just another framework; it's a vision of a future where LLMs can be trusted partners in decision-making. By honing the selective prediction performance, we're inching closer to realizing the full potential of AI in critical applications.

Our research has opened new doors, and we invite the community to build upon this foundation. We're excited to see how ASPIRE will inspire the next generation of LLMs and beyond. To learn more about our findings, we encourage you to read our paper and join us in this thrilling journey towards creating a more reliable and self-aware AI.

We gratefully acknowledge the contributions of Sayna Ebrahimi, Sercan O Arik, Tomas Pfister, and Somesh Jha.

View original post here:

Introducing ASPIRE for selective prediction in LLMs – Google Research Blog - Google Research


Alphabet CEO tells Googlers: More job cuts on the way – The Register

Posted: at 6:09 pm


Go here to see the original:

Alphabet CEO tells Googlers: More job cuts on the way - The Register


Google Breaks Another Promise About Tracking Your Location History – Gizmodo

Posted: at 6:09 pm

Google pledged to stop tracking user visits to abortion clinics shortly after Roe v. Wade was overturned in June 2022, which killed off the United States' largest federal abortion protection. The measure aimed to protect abortion seekers from prosecutors, especially in the 14 states where abortion is now banned. Roughly 18 months later, Google has not followed through on its promise, according to a new study from Accountable Tech, and the company still tracks visits to abortion clinics.


Google's location data is one of law enforcement's favorite tools in the United States. The U.S. government requested the company's data over 63,000 times in the first half of 2023, reaching an all-time high, and Google handed over data 85% of the time. That's why the company's tracking of visits to abortion clinics is a big deal, and why its promise to stop was so celebrated. However, a new study published on Thursday ran tests in seven states across the country and found that Google is still collecting Location History data for visits to abortion clinics, despite its promise. Accountable Tech says this report makes it clear that Google cannot be trusted to follow through on its privacy promises.

The study ran experiments in Pennsylvania, Texas, Nevada, Florida, New York, Georgia, and North Carolina. In 50% of its tests, Accountable Tech found that routes to and from Planned Parenthood locations were saved in Google's Location History. The actual name, Planned Parenthood, was scrubbed from Google's data, which mirrors Accountable Tech's study from a year ago, but the data still clearly points to abortion clinic visits.

Google announced last December that it would no longer hold onto people's location data, which would mean it can't turn that information over to the police. Accountable Tech says this was a positive development, but that Google can't be trusted at face value on promises like this. The company received 5,764 geofence warrants in states where abortion is banned between 2018 and 2020, according to Politico.

Google did not immediately respond to Gizmodos request for comment.

Google notes that its Location History feature is turned off by default, and you can turn it on if you choose. It's unclear how many users Google collects location data on, but the company is estimated to be one of the world's largest data trackers, with over a billion users of Google Maps globally.

Follow this link:

Google Breaks Another Promise About Tracking Your Location History - Gizmodo


Google, Amazon, Duolingo Cut Hundreds Of Jobs: Here Are The 2024 Tech Layoffs – Forbes

Posted: at 6:09 pm


Read more:

Google, Amazon, Duolingo Cut Hundreds Of Jobs: Here Are The 2024 Tech Layoffs - Forbes
