NASA Is Asking for Help to Return Samples That Could Uncover Life on Mars – FLYING

NASA Administrator Bill Nelson has shared the space agency's revised path forward for the Mars Sample Return program, a proposed NASA-European Space Agency (ESA) mission to return Martian rock and soil samples to Earth. NASA's Perseverance rover has been collecting rock and soil samples on the Red Planet since 2021.

The agency is asking the NASA community, including its Jet Propulsion Laboratory and other agency centers, to collaborate on out-of-the-box designs, using existing technology, that could return the samples.

NASA on Monday released its response to a September 2023 Independent Review Board (IRB) report analyzing Mars Sample Return and its costs. The report estimated the mission's budget at $8 billion to $11 billion, with the high end of that range being more than double previous estimates of $4.4 billion.

Under those constraints, Nelson said, the mission would not return samples until 2040, which he said is unacceptable.

"Mars Sample Return will be one of the most complex missions NASA has ever undertaken," said Nelson. "The bottom line is, an $11 billion budget is too expensive, and a 2040 return date is too far away. Safely landing and collecting the samples, launching a rocket with the samples off another planet, which has never been done before, and safely transporting the samples more than 33 million miles back to Earth is no small task. We need to look outside the box to find a way ahead that is both affordable and returns samples in a reasonable timeframe."

Nelson also pointed to Congress's recent budget cuts as a contributing factor in the agency's current challenges.

The agency's response to the IRB report includes an updated mission design with reduced complexity, improved resiliency and risk posture, and stronger accountability and coordination.

It said it will solicit proposals from industry that could return samples in the 2030s, with responses expected in the fall. These alternative mission designs, NASA said, would reduce cost, risk, and mission complexity. It is unclear exactly what kind of solution the agency is seeking. But it emphasized leveraging existing technologies that do not require large amounts of time and money to develop.

Without more funding, according to NASA, Mars Sample Return could dip into money allocated for projects at the Goddard Space Flight Center, Jet Propulsion Laboratory, and other centers. Projects such as Dragonfly, a mission to Saturn's largest moon, Titan, could be discontinued, warned Nicola Fox, associate administrator of NASA's Science Mission Directorate.

Plans for a Mars sample return mission have been proposed by the Jet Propulsion Laboratory since 2001. The samples are expected to help researchers understand the formation and evolution of the solar system and habitable worlds, including our own. They could be used to learn whether there was ancient life on Mars and aid in the search for life elsewhere in the universe.

NASA's Perseverance rover landed on Mars in 2021 and has been collecting samples since. Originally, the plan was to return them to Earth in 2033 using a rocket, orbiter, and lander. However, the IRB report found that the orbiter and lander likely would not leave Earth until that year.

A Sample Retrieval Lander would deploy a small rocket to collect samples from Perseverance, using an ESA-provided robotic arm. Sample recovery helicopters, based on the successful Ingenuity autonomous Mars helicopter and also capable of collecting samples, would serve as backup.

A Mars Ascent Vehicle, which would be the first rocket to launch off the Martian surface, would carry samples to the planet's orbit, where they would be captured by an Earth Return Orbiter, also designed by ESA, and brought back to Earth.

The initiative would be the first international, interplanetary mission to return samples from another planet and, according to NASA, would return the most carefully selected and well-documented set of samples ever delivered from another planet.

Earlier this year, the space agency marked the 20-year anniversary of its twin Spirit and Opportunity rovers' arrival on the Martian surface, where they provided the first compelling evidence that the Red Planet once held water.

NASA's Curiosity rover is currently surveying a region of the planet thought to have been carved by a river billions of years ago. Its explorations could lead to further discoveries about life on Mars.


SpaceX launches U.S. military weather monitoring satellite – SpaceNews

COLORADO SPRINGS – A SpaceX Falcon 9 rocket on April 11 launched a U.S. Space Force weather monitoring satellite. The vehicle lifted off from Vandenberg Space Force Base, California, at 7:25 a.m. Pacific.

The USSF-62 mission carried to orbit the U.S. military's first Weather System Follow-on Microwave (WSF-M) satellite.

Made by Ball Aerospace, a company recently acquired by BAE Systems, WSF-M has a microwave imager instrument to collect weather data, including measurements of ocean surface wind speed and direction, ice thickness, snow depth, soil moisture and local space weather.

The spacecraft will operate in a low polar orbit. The Space Force has ordered a second WSF-M satellite, projected to be delivered by 2028. These satellites are part of a broader effort to modernize the military's space-based environmental monitoring assets.

Data used for military planning

The data gathered by WSF-M will be provided to meteorologists "in support of the generation of a wide variety of weather products necessary to conduct mission planning and operations globally every day," the U.S. Space Force said.

Just under eight minutes after liftoff, the Falcon 9's first stage flew back to Earth and landed at Vandenberg's Landing Zone 4.

USSF-62 is the 37th launch performed by SpaceX so far in 2024 and its second national security space launch mission of the year. In February SpaceX launched the USSF-124 mission from Cape Canaveral Space Force Station, Florida, deploying six U.S. missile defense satellites for the Space Development Agency and the Missile Defense Agency.

Sandra Erwin writes about military space programs, policy, technology and the industry that supports this sector. She has covered the military, the Pentagon, Congress and the defense industry for nearly two decades as editor of NDIA's National Defense.


SpaceX launches Space Force weather satellite designed to take over for a program with roots to the 1960s … – Spaceflight Now

The Weather System Follow-on Microwave (WSF-M) space vehicle was successfully encapsulated April 8, 2024, ahead of its scheduled launch as the U.S. Space Force (USSF)-62 mission from Vandenberg Space Force Base, Calif., a major milestone ahead of its launch into low Earth orbit. Image: SpaceX

SpaceX launched a military weather satellite designed to replace aging satellites from a program dating back to the 1960s. The United States Space Force-62 (USSF-62) mission featured the launch of the first Weather System Follow-on Microwave (WSF-M) spacecraft.

Liftoff of the Falcon 9 rocket from Space Launch Complex 4 East (SLC-4E) at Vandenberg Space Force Base happened at 7:25 a.m. PDT (10:25 a.m. EDT / 1425 UTC), which was the opening of a 10-minute launch window.

The booster supporting this National Security Space Launch (NSSL) mission, B1082 in the SpaceX fleet, made its third flight after previously launching the Starlink 7-9 and 7-14 missions this year.

"We're absolutely thrilled to be out here on the Central Coast, with a superb team primed and ready to launch the USSF-62 satellite. It has an important mission ahead of it and we're excited for flight-proven Falcon 9 to deliver the satellite to orbit," said Col. Jim Horne, senior materiel leader for the Space Systems Command's Launch Execution Delta, in a statement. "And on this mission, we're using a first-stage booster whose history is purely commercial."

About eight minutes after liftoff, B1082 touched down at Landing Zone 4 (LZ-4). This was the 17th land landing in California and the 295th booster landing for SpaceX.

A significant milestone for the company on the USSF-62 mission was the use of flight-proven payload fairings, a first for an NSSL mission. They previously flew on the USSF-52 mission, which featured the launch of the X-37B spaceplane from NASA's Kennedy Space Center in December 2023.

"With each national security launch, we add to America's capabilities and improve its deterrence in the face of growing threats," Horne stated.

USSF-62 was one of three missions granted to SpaceX in May 2022 as part of the NSSL Phase 2 Order Year 3 award, which collectively are valued at $309.7 million. SpaceX launched USSF-124 in February 2024 and will likely launch the SDA-Tranche 1 satellites later this year.

Ball Aerospace, the manufacturer of the WSF-M, said the spacecraft's primary payload is a passive microwave radiometer, which has been demonstrated on previous spacecraft. It also boasts a 1.8-meter antenna, which, combined with the primary instrument, allows the spacecraft to address so-called space-based environmental monitoring (SBEM) gaps.

Its capabilities will provide valuable information for protecting the assets of the United States and its allies, primarily in ocean settings.

"The WSF-M satellite is a strategic solution tailored to address three high-priority Department of Defense SBEM gaps: specifically, ocean surface vector winds, tropical cyclone intensity, and energetic charged particles in low Earth orbit," said David Betz, WSF-M program manager, SSC Space Sensing, in a statement. "Beyond these primary capabilities, our instruments also provide vital data on sea ice characterization, soil moisture, and snow depth."

The spacecraft is based on the Ball Configurable Platform and includes a Global Precipitation Measurement (GPM) Microwave Imager (GMI) sensor and an Energetic Charged Particle sensor. Ball Aerospace has been involved with other, similar spacecraft, including the Suomi National Polar-orbiting Partnership (Suomi-NPP) and the Joint Polar Satellite System-1 (JPSS-1).

According to a public FY2024 Department of Defense budget document, the WSF-M system will consist of two spacecraft. Once the first is on orbit, it will assess the level of Ocean Surface Vector Wind (OSVW) measurement uncertainty and Tropical Cyclone Intensity (TCI) latency.

The first seeds of the program were planted back in October 2012 during what's called the Materiel Solution Analysis phase. That resulted in the Department of the Air Force issuing a request for proposals from companies in January 2017.

In November 2017, the Space and Missile Systems Center (now Space Systems Command) awarded a $93.7 million firm-fixed-price contract to Ball Aerospace for the WSF-M project with an expected completion date of Nov. 15, 2019.

"This is an exciting win for us, and we're looking forward to expanding our work with the Air Force and continuing to support warfighters and allies around the world," said Rob Strain, then president of Ball Aerospace, in a 2017 statement. "WSF-M extends Ball's legacy of providing precise measurements from space to enable more accurate weather forecasting."

Roughly a year later, Ball received a $255.4 million contract modification, which "provides for the exercise of an option for development and fabrication of the [WSF-M] Space Vehicle 1." This new contract also pushed out the expected completion date to Jan. 15, 2023.

In May 2020, the U.S. Space Force's Space and Missile Systems Center noted the completion of the WSF-M system's critical design review that April, which opened the door to the beginning of fabrication.

Over the following year, the spacecraft went through a series of tests, putting both the software and hardware through their paces. The primary bus structure was completed by August 2021, and by October 2022 the spacecraft entered its integration readiness review (IRR) and test readiness review (TRR).

Before that, though, in May 2022, Ball was awarded a $16.6 million cost-plus-incentive-fee contract modification for the exercise of an option for integration, test and operational work on the spacecraft. That brought the cumulative face value of the contract to about $417.4 million.

Shortly before the end of that year, in November 2022, Ball received a $78.3 million firm-fixed-price contract modification to develop the second WSF-M spacecraft. That work is expected to be completed by Nov. 15, 2027, which would set up a launch opportunity no earlier than January 2028.

The first WSF-M satellite was delivered from Ball's facilities in Boulder, Colorado, to Vandenberg Space Force Base for pre-launch processing in February 2024.

"This delivery represents a major milestone for the WSF-M program and is a critical step towards putting the first WSF-M satellite on-orbit for the warfighter," said Col. Daniel Visosky, senior materiel leader of SSC's Space Sensing Environmental and Tactical Surveillance program office, in a statement. "It represents a long-term collaboration and unity of effort between the Space Force and our combined teams at Ball Aerospace, support contractors and government personnel."

This first WSF-M satellite, and eventually the second, will take the place of the legacy Defense Meteorological Satellite Program (DMSP) satellites, which have roots going back to the 1960s. The program features two primary satellites, which operate in sun-synchronous polar orbits at about 450 nautical miles in altitude.

The program, originally known as the Defense Satellite Applications Program (DSAP), launched the first of these legacy satellites in 1962, and they were classified under the purview of the National Reconnaissance Office (NRO) as part of the Corona Program. The DMSP was declassified in 1972 to allow data to be used by non-governmental scientists and civilians.

According to a Space Force historical accounting, a tri-agency organizational agreement was forged between the DoD, the Department of Commerce and NASA following President Bill Clinton's directive for the DOC and the DoD to converge their separate polar-orbiting weather satellite programs. Funding responsibility stayed with the DoD, but by June 1998, operational responsibility for the DMSP transferred to the Department of Commerce.

Satellite operations for the DMSP then became the responsibility of the National Oceanic and Atmospheric Administration (NOAA) Office of Satellite and Product Operations (OSPO).

The program was not without issue over the years. In 2004, the DMSP-F11 satellite, launched in 1991 and retired in 1995, disintegrated and created dozens of pieces of orbital debris. In 2015, a faulty battery was blamed for a similar disintegration of DMSP-F13, which resulted in 147 pieces of debris.

That year, Congress ordered an end to the DMSP program and the yet-to-launch F20 satellite was to be scrapped.

In February 2016, the DMSP-F19 had its planned five-year mission cut short less than two years after launch. The satellite suffered a power anomaly that caused engineers to lose control of it. The spacecraft was declared lost in March.

The DMSP-F17 satellite, launched in 2006, was then relocated to the primary position vacated by F19. According to the Observing Systems Capability Analysis and Review (OSCAR), a tool developed by the World Meteorological Organization, there are three DMSP satellites still in service: F16, F17 and F18. They launched in 2003, 2006 and 2009 respectively.

The latter two have expected end-of-life dates of 2025, while F16 was intended to conclude its mission in December 2023, according to the Committee on Earth Observation Satellites (CEOS). However, that expiration has been extended as the WSF-M replacements are still on the way.

It's unclear if F17 and F18 can hang on until the second WSF-M spacecraft is completed and launched in 2028.


Esther Peterson, the woman who advocated for the Equal Pay Act and made it possible in 1963 – GOOD

In a rendition of "Santa Baby," Miley Cyrus sings, "A girl's best friend is equal pay." The remix might be new but debates and discussions about equal pay have been quite long-standing. Esther Peterson was the woman who pushed the Equal Pay Act in 1963 and paved the way for discussions and actions around the same. The bill was signed by President John F. Kennedy on June 10, 1963, to ensure that there was no sex-based wage discrimination.

Peterson was the driving force behind the act and the highest-ranking woman in Kennedy's administration. The president appointed her head of the Women's Bureau at the beginning of his term, and she was later promoted to Assistant Secretary of Labor in 1963. As per History TV 18, Peterson recalled in a 1970 interview advocating for the Equal Pay Act even when it was not a top agenda item at the White House.

"Equal pay was never a top priority," she said in the interview, adding, "[The White House] helped me at certain times, but I've literally carried that bill up." However, the Equal Pay Act was not the first time someone had pushed for equal pay for women. As per the source, interest in the topic began in 1896, when it was first brought to the Republican party platform. As a senator, Kennedy co-sponsored the Equal Pay bill in 1957 but never held much discussion around it. Although he supported equal pay, it was not a priority for him. Peterson confirmed this in the 1970 interview and shared that the White House didn't intervene much in the work of the Women's Bureau on the Equal Pay Bill. She said, "We were given the responsibility and we lobbied it through." Asked if the bill was a top priority at the White House, she replied, "No. We didn't get help from them… We got the bill through ourselves, frankly."

She played a key role in putting together the testimony for the hearing on the Equal Pay Bill in 1962. She also liaised with other groups to lobby members of Congress to support the bill. The next year, Congress passed the bill through amendments to the Fair Labor Standards Act of 1938 to protect against wage-based discrimination. However, the bill was slightly different from what Peterson had advocated for: she pushed for "equal pay for comparable work," while the bill that passed called for "equal pay for equal work." Peterson believed that the bill needed some strengthening and work.

Peterson was proven right because, as per a Pew Research Center report in 2022, gender-based and ethnicity-based discrimination in wages still exists. As per the report, Black women earned 70 percent of what white men earned, while Hispanic women earned 65 percent of what white men earned.

In 2023, Congress even considered the Paycheck Fairness Act to strengthen the Equal Pay Act but didn't pass it. Peterson's contribution was not restricted to this single act, though. After JFK's assassination in 1963, she continued to work in Lyndon B. Johnson's administration. He appointed her Special Assistant to the President for Consumer Affairs, a role she returned to during Jimmy Carter's term. She advocated for food labels to list nutritional information and for grocery stores to list prices per unit so consumers could make better decisions. She also advocated for better child care. Her work on the act is an inspiration for women in power and for women around the world to keep pushing for their rights.


Microsoft’s AI Access Principles: Our commitments to promote innovation and competition in the new AI economy … – Microsoft

As we enter a new era based on artificial intelligence, we believe this is the best time to articulate principles that will govern how we will operate our AI datacenter infrastructure and other important AI assets around the world. We are announcing and publishing these principles, our AI Access Principles, today at the Mobile World Congress in Barcelona in part to address Microsoft's growing role and responsibility as an AI innovator and a market leader.

Like other general-purpose technologies in the past, AI is creating a new sector of the economy. This new AI economy is creating not just new opportunities for existing enterprises, but new companies and entirely new business categories. The principles we're announcing today commit Microsoft to bigger investments, more business partnerships, and broader programs to promote innovation and competition than any prior initiative in the company's 49-year history. By publishing these principles, we are committing ourselves to providing the broad technology access needed to empower organizations and individuals around the world to develop and use AI in ways that will serve the public good.

These new principles help put in context the new investments and programs we've announced and launched across Europe over the past two weeks, including $5.6 billion in new AI datacenter investments and new AI skilling programs that will reach more than a million people. We've also launched new public-private partnerships to advance responsible AI adoption and protect cybersecurity, new AI technology services to support network operators, and a new partnership with France's leading AI company, Mistral AI. As much as anything, these investments and programs make clear how we will put these principles into practice, not just in Europe, but in the United States and around the world.

These principles also reflect the responsible and important role we must play as a company. They build in part on the lessons we have learned from our experiences with previous technology developments. In 2006, after more than 15 years of controversies and litigation relating to Microsoft Windows and the company's market position in the PC operating system market, we published a set of Windows Principles. Their purpose was to govern the company's practices in a manner that would both promote continued software innovation and foster free and open competition.

I'll never forget the reaction of an FTC Commissioner who came up to me after I concluded the speech I gave in Washington, D.C. to launch these principles. He said, "If you had done this 10 years ago, I think you all probably would have avoided a lot of problems."

Close to two decades have gone by since that moment, and both the world of technology and the AI era we are entering are radically different. Then, Windows was the computing platform of the moment. Today, mobile platforms are the most popular gateway to consumers, and exponential advances in generative AI are driving a tectonic shift in digital markets and beyond. But there is wisdom in that FTC Commissioner's reaction that has stood the test of time: As a leading IT company, we do our best work when we govern our business in a principled manner that provides broad opportunities for others.

The new AI era requires enormous computational power to train, build, and deploy the most advanced AI models. Historically, such power could only be found in a handful of government-funded national laboratories and research institutions, and it was available only to a select few. But the advent of the public cloud has changed that. Much like steel did for skyscrapers, the public cloud enables generative AI.

Today, datacenters around the world house millions of servers and make vast computing power broadly available to organizations large and small and even to individuals as well. Already, many thousands of AI developers in startups, enterprises, government agencies, research labs, and non-profit organizations around the world are using the technology in these datacenters to create new AI foundation models and applications.

These datacenters are owned and operated by cloud providers, which include larger established firms such as Microsoft, Amazon, Google, Oracle, and IBM, as well as large firms from China like Alibaba, Huawei, Tencent, and Baidu. There are also smaller specialized entrants such as Coreweave, OVH, Aruba, and Denvr Dataworks Corporation, just to mention a few. And government-funded computing centers clearly will play a role as well, including with support for academic research. But building and operating those datacenters is expensive. And the semiconductors or graphical processing units (GPUs) that are essential to power the servers for AI workloads remain costly and in short supply. Although governments and companies are working hard to fill the gap, doing so will take some time.

With this reality in mind, regulators around the world are asking important questions about who can compete in the AI era. Will it create new opportunities and lead to the emergence of new companies? Or will it simply reinforce existing positions and leaders in digital markets?

I am optimistic that the changes driven by the new AI era will extend into the technology industry itself. After all, how many readers of this paragraph had, two years ago, even heard of OpenAI and many other new AI entrants like Anthropic, Cohere, Aleph Alpha, and Mistral AI? In addition, Microsoft, along with other large technology firms, is dynamically pivoting to meet the AI era. The competitive pressure is fierce, and the pace of innovation is dizzying. As a leading cloud provider and an innovator in AI models ourselves and through our partnership with OpenAI, we are mindful of our role and responsibilities in the evolution of this AI era.

Throughout the past decade, we've typically found it helpful to define the tenets, in effect the goals, that guide our thinking and drive our actions as we navigate a complex topic. We then apply these tenets by articulating the principles we will apply as we make the decisions needed to govern the development and use of technology. I share below the new tenets on which we are basing our thinking on this topic, followed by our 11 AI Access Principles.

Fundamentally, there are five tenets that define Microsoft's goals as we focus on AI access, including our role as an infrastructure and platforms provider.

First, we have a responsibility to enable innovation and foster competition. We believe that AI is a foundational technology with a transformative capability to help solve societal problems, improve human productivity, and make companies and countries more competitive. As with prior general-purpose technologies, from the printing press to electricity, railroads, and the internet itself, the AI era is not based on a single technology component or advance. We have a responsibility to help spur innovation and competition across the new AI economy that is rapidly emerging.

AI is a dynamic field, with many active participants, based on a technology stack that starts with electricity, connectivity, and the world's most advanced semiconductor chips at the base. It then runs up through the compute power of the public cloud, public and proprietary data for training foundation models, the foundation models themselves, tooling to manage and orchestrate the models, and AI-powered software applications. In short, the success of an AI-based economy requires the success of many different participants across numerous interconnected markets.

You can see here the technology stack that defines the new AI era. While one company currently produces and supplies most of the GPUs being used for AI today, as one moves incrementally up the stack, the number of participants expands. And each layer enables and facilitates innovation and competition in the layers above. In multiple ways, to succeed, participants at every layer of the technology stack need to move forward together. This means, for Microsoft, that we need to stay focused not just on our own success, but on enabling the success of others.

Second, our responsibilities begin by meeting our obligations under the law. While the principles we are launching today represent a self-regulatory initiative, they in no way are meant to suggest a lack of respect for the rule of law or the role of regulators. We fully appreciate that legislators, competition authorities, regulators, enforcers, and judges will continue to evolve the competition rules and other laws and regulations relevant to AI. That's the way it should be.

Technology laws and rules are changing rapidly. The European Union is implementing its Digital Markets Act and completing its AI Act, while the United States is moving quickly with a new AI Executive Order. Similar laws and initiatives are moving forward in the United Kingdom, Canada, Japan, India, and many other countries. We recognize that we, like all participants in this new AI market, have a responsibility to live up to our obligations under the law, to engage constructively with regulators when obligations are not yet clear, and to contribute to the public dialogue around policy. We take these obligations seriously.

Third, we need to advance a broad array of AI partnerships. Today, only one company is vertically integrated in a manner that includes every AI layer from chips to a thriving mobile app store. As noted at a recent meeting of tech leaders and government officials, "The rest of us, Microsoft included, live in the land of partnerships."

People today are benefiting from the AI advances that the partnership between OpenAI and Microsoft has created. Since 2019, Microsoft has collaborated with OpenAI on the research and development of OpenAI's generative AI models, developing the unique supercomputers needed to train those models. The ground-breaking technology ushered in by our partnership has unleashed a groundswell of innovation across the industry. And over the past five years, OpenAI has become a significant new competitor in the technology industry. It has expanded its focus, commercializing its technologies with the launch of ChatGPT and the GPT Store and providing its models for commercial use by third-party developers.

Innovation and competition will require an extensive array of similar support for proprietary and open-source AI models, large and small, including the type of partnership we are announcing today with Mistral AI, the leading open-source AI developer based in France. We have also invested in a broad range of other diverse generative AI startups. In some instances, those investments have provided seed funding to finance day-to-day operations. In other instances, those investments have been more focused on paying the expenses for the use of the computational infrastructure needed to train and deploy generative AI models and applications. We are committed to partnering well with market participants around the world and in ways that will accelerate local AI innovations.

Fourth, our commitment to partnership extends to customers, communities, and countries. More than for prior generations of digital technology, our investments in AI and datacenters must sustain the competitive strengths of customers and national economies and address broad societal needs. This has been at the core of the multi-billion-dollar investments we recently have announced in Australia, the United Kingdom, Germany, and Spain. We need constantly to be mindful of the community needs AI advances must support, and we must pursue a spirit of partnership not only with others in our industry, but with customers, governments, and civil society. We are building the infrastructure that will support the AI economy, and we need the opportunities provided by that infrastructure to be widely available.

Fifth, we need to be proactive and constructive, as a matter of process, in working with governments and the IT industry in the design and release of new versions of AI infrastructure and platforms. We believe it is critical for companies and regulators to engage in open dialogue, with a goal of resolving issues as quickly as possible, ideally while a new product is still under development. For our part, we understand that Microsoft must respond fully and cooperatively to regulatory inquiries so that we can have an informed discussion with regulators about the virtues of various approaches. We need to be good listeners and constructive problem solvers in sorting through issues of concern and identifying practical steps and solutions before a new product is completed and launched.

The foregoing tenets come together to shape the new principles we are announcing below. It's important to note that, given the safety, security, privacy, and other issues relating to responsible AI, we need to apply all these principles subject to objective and effective standards to comply with our legal obligations and protect the public. These are discussed further below. Subject to these requirements, we are committed to the following 11 principles:

We are committed to enabling AI innovation and fostering competition by making our cloud computing and AI infrastructure, platforms, tools, and services broadly available and accessible to software developers around the world. We want Microsoft Azure to be the best place for developers to train, build, and deploy AI models and to use those models safely and securely in applications and solutions. This means:

Today, our partnership with OpenAI is supporting the training of the next generation of OpenAI models and increasingly enabling customers to access and use these models and Microsoft's Copilot applications in local datacenters. At the same time, we are committed to supporting other developers training and deploying proprietary and open-source AI models, both large and small.

Today's important announcement with Mistral AI launches a new generation of Microsoft's support for technology development in Europe. It enables Mistral AI to accelerate the development and deployment of its next-generation Large Language Models (LLMs) with access to Azure's cutting-edge AI infrastructure. It also makes Mistral AI's premium models available to customers through our Models-as-a-Service (MaaS) offering on Microsoft Azure, which model developers can use to publish and monetize their AI models. By providing a unified platform for AI model management, we aim to lower the barriers and costs of AI model development around the world for both open-source and proprietary development. In addition to Mistral AI, this service is already hosting more than 1,600 open-source and proprietary models from companies and organizations such as Meta, Nvidia, Deci, and Hugging Face, with more models coming soon from Cohere and G42.

We are committed to expanding this type of support for additional models in the months and years ahead.

As reflected in Microsoft's Copilots and OpenAI's ChatGPT itself, the world is rapidly benefiting from the use of a new generation of software applications that access and use the power of AI models. But our applications will represent just a small percentage of the AI-powered applications the world will need and create. For this reason, we're committed to ongoing and innovative steps to make the AI models we host and the development tools we create broadly available to AI software application developers around the world in ways that are consistent with responsible AI principles.

This includes the Azure OpenAI service, which enables software developers who work at start-ups, established IT companies, and in-house IT departments to build software applications that call on and make use of OpenAIs most powerful models. It extends through Models as a Service to the use of other open source and proprietary AI models from other companies, including Mistral AI, Meta, and others.

We are also committed to empowering developers to build customized AI solutions by enabling them to fine-tune existing models based on their own unique data sets and for their specific needs and scenarios. With Azure Machine Learning, developers can easily access state-of-the-art pre-trained models and customize them with their own data and parameters, using a simple drag-and-drop interface or code-based notebooks. This helps companies, governments, and non-profits create AI applications that help advance their goals and solve their challenges, such as improving customer service, enhancing public safety, or promoting social good. This is rapidly democratizing AI and fostering a culture of even broader innovation and collaboration among developers.

We are also providing developers with tools and repositories on GitHub that enable them to create, share, and learn from AI solutions. GitHub is the world's largest and most trusted platform for software development, hosting over 100 million repositories and supporting more than 40 million developers. We are committed to supporting the AI developer community by making our AI tools and resources available on GitHub, giving developers access to the latest innovations and best practices in AI development, as well as the opportunity to collaborate with other developers and contribute to the open source community. As one example, just last week we made available an open automation framework to help red team generative AI systems.

Ensure choice and fairness across the AI economy

We understand that AI innovation and competition require choice and fair dealing. We are committed to providing organizations, AI developers, and data scientists with the flexibility to choose which AI models to use wherever they are building solutions. For developers who choose to use Microsoft Azure, we want to make sure they are confident we will not tilt the playing field to our advantage. This means:

The AI models that we host on Azure, including the Microsoft Azure OpenAI API service, are all accessible via public APIs. Microsoft publishes documentation on its website explaining how developers can call these APIs and use the underlying models. This enables any application, whether it is built and deployed on Azure or other private and public clouds, to call these APIs and access the underlying models.
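To make the "accessible via public APIs" point concrete, the sketch below assembles the kind of REST request any application, on any cloud, could send to a hosted chat model. The endpoint, deployment name, and API version are illustrative placeholders, not documented Microsoft values; consult the published API documentation for the real ones.

```python
import json

# Placeholder names for illustration only -- substitute real values
# from the provider's published API documentation.
ENDPOINT = "https://example-resource.openai.azure.com"
DEPLOYMENT = "example-deployment"
API_VERSION = "2024-02-01"  # assumed version string


def build_chat_request(prompt: str) -> tuple[str, dict]:
    """Assemble the URL and JSON body for a chat-completion call
    against a hosted model's public REST API. Any HTTP client,
    running on any cloud, could then POST this request."""
    url = (
        f"{ENDPOINT}/openai/deployments/{DEPLOYMENT}"
        f"/chat/completions?api-version={API_VERSION}"
    )
    body = {"messages": [{"role": "user", "content": prompt}]}
    return url, body


url, body = build_chat_request("Summarize the AI Access Principles.")
print(url)
print(json.dumps(body))
```

Because the request is plain HTTPS plus JSON, nothing about it is specific to where the calling application is deployed, which is the substance of the claim above.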

Network operators are playing a vital role in accelerating the AI transformation of customers around the world, including for many national and regional governments. This is one reason we are supporting a common public API through the Open Gateway initiative driven by the GSM Association, which advances innovation in the mobile ecosystem. The initiative is aligning all operators with a common API for exposing advanced capabilities provided by their networks, including authentication, location, and quality of service. It's an indispensable step forward in enabling network operators to offer their advanced capabilities to a new generation of AI-enabled software developers. We have believed in the potential of this initiative since its inception at GSMA, and we have partnered with operators around the world to help bring it to life.

Today at Mobile World Congress, we are launching the Public Preview of Azure Programmable Connectivity (APC). This is a first-class service in Azure, completely integrated with the rest of our services, that seamlessly provides access to Open Gateway for developers. It means software developers can use the capabilities provided by the operator network directly from Azure, like any other service, without requiring specific work for each operator.

We are committed to maintaining Microsoft Azure as an open cloud platform, much as Windows has been for decades and continues to be. That means in part ensuring that developers can choose how they want to distribute and sell their AI software to customers for deployment and use on Microsoft Azure. We provide a marketplace on Azure through which developers can list and sell their AI software to Azure customers under a variety of supported business models. Developers who choose to use the Azure Marketplace are also free to decide whether to use the transaction capabilities offered by the marketplace (at a modest fee) or whether to sell licenses to customers outside of the marketplace (at no fee). And, of course, developers remain free to sell and distribute AI software to Azure customers however they choose, and those customers can then upload, deploy, and use that software on Azure.

We believe that trust is central to the success of Microsoft Azure. We build this trust by serving the interests of AI developers and customers who choose Microsoft Azure to train, build, and deploy foundation models. In practice, this also means that we avoid using any non-public information or data from the training, building, deployment, or use of developers' AI models to compete against them.

We know that customers can and do use multiple cloud providers to meet their AI and other computing needs. And we understand that the data our customers store on Microsoft Azure is their data. So, we are committed to enabling customers to easily export and transfer their data if they choose to switch to another cloud provider. We recognize that different countries are considering or have enacted laws limiting the extent to which we can pass along the costs of such export and transfer. We will comply with those laws.

We recognize that new AI technologies raise an extraordinary array of critical questions. These involve important societal issues such as privacy, safety, security, the protection of children, and the safeguarding of elections from deepfake manipulation, to name just a few. These and other issues require that tech companies create guardrails for their AI services, adapt to new legal and regulatory requirements, and work proactively in multistakeholder efforts to meet broad societal needs. We're committed to fulfilling these responsibilities, including through the following priorities:

We are committed to safeguarding the physical security of our AI datacenters, as they host the infrastructure and data that power AI solutions. We follow strict security protocols and standards to ensure that our datacenters are protected from unauthorized access, theft, vandalism, fire, or natural disasters. We monitor and audit our datacenters to detect and prevent any potential threats or breaches. Our datacenter staff are trained and certified in security best practices and are required to adhere to a code of conduct that respects the privacy and confidentiality of our customers' data.

We are also committed to safeguarding the cybersecurity of our AI models and applications, as they process and generate sensitive information for our customers and society. We use state-of-the-art encryption, authentication, and authorization mechanisms to protect data in transit and at rest, as well as the integrity and confidentiality of AI models and applications. We also use AI to enhance our cybersecurity capabilities, such as detecting and mitigating cyberattacks, identifying and resolving vulnerabilities, and improving our security posture and resilience.

We're building on these efforts with our new Secure Future Initiative (SFI). This brings together every part of Microsoft and has three pillars. It focuses on AI-based cyber defenses, advances in fundamental software engineering, and advocacy for stronger application of international norms to protect civilians from cyber threats.

As AI becomes more pervasive and impactful, we recognize the need to ensure that our technology is developed and deployed in a way that is ethical, trustworthy, and aligned with human values. That is why we have created the Microsoft Responsible AI Standard, a comprehensive framework that guides our teams on how to build and use AI responsibly.

The standard covers six key dimensions of responsible AI: fairness; reliability and safety; privacy and security; inclusiveness; transparency; and accountability. For each dimension, we define what these values mean and how to achieve our goals in practice. We also provide tools, processes, and best practices to help our teams implement the standard throughout the AI lifecycle, from design and development to deployment and monitoring. The approach that the standard establishes is not static, but instead evolves and improves based on the latest research, feedback, and learnings.

We recognize that countries need more than advanced AI chips and datacenters to sustain their competitive edge and unlock economic growth. AI is changing jobs and the way people work, requiring that people master new skills to advance their careers. That's why we're committed to marrying AI infrastructure capacity with AI skilling capability, combining the two to advance innovation.

In just the past few months, we've combined billions of dollars of infrastructure investments with new programs to bring AI skills to millions of people in countries like Australia, the United Kingdom, Germany, and Spain. We're launching training programs focused on building AI fluency, developing AI technical skills, supporting AI business transformation, and promoting safe and responsible AI development. Our work includes the first Professional Certificate on Generative AI.

Typically, our skilling programs involve a professional network of Microsoft certified training services partners and multiple industry partners, universities, and nonprofit organizations. Increasingly, we find that major employers want to launch new AI skilling programs for their employees, and we are working with them actively to provide curricular materials and support these efforts.

One of our most recent and important partnerships is with the AFL-CIO, the largest federation of labor unions in the United States. It's the first of its kind between a labor organization and a technology company to focus on AI and will deliver on three goals: (1) sharing in-depth information with labor leaders and workers on AI technology trends; (2) incorporating worker perspectives and expertise in the development of AI technology; and (3) helping shape public policy that supports the technology skills and needs of frontline workers.

We've learned that government institutions and associations can typically bring AI skilling programs to scale. At the national and regional levels, government employment and educational agencies have the personnel, programs, and expertise to reach hundreds of thousands or even millions of people. We're committed to working with and supporting these efforts.

Through these and other initiatives, we aim to democratize access to AI education and enable everyone to harness the potential of AI for their own lives and careers.

In 2020, Microsoft set ambitious goals to be carbon negative, water positive and zero waste by 2030. We recognize that our datacenters play a key part in achieving these goals. Being responsible and sustainable by design also has led us to take a first-mover approach, making long-term investments to bring as much carbon-free electricity as we will consume, or more, onto the grids where we build and operate datacenters.

We also apply a holistic approach to the Scope 3 emissions relating to our investments in AI infrastructure, from the construction of our datacenters to engaging our supply chain. This includes supporting innovation to reduce the embodied carbon in our supply chain and advancing our water positive and zero waste goals throughout our operations.

At the same time, we recognize that AI can be a vital tool to help accelerate the deployment of sustainability solutions from the discovery of new materials to better predicting and responding to extreme weather events. This is why we continue to partner with others to use AI to help advance breakthroughs that previously would have taken decades, underscoring the important role AI technology can play in addressing some of our most critical challenges to realizing a more sustainable future.


Read the original post:

Microsoft's AI Access Principles: Our commitments to promote innovation and competition in the new AI economy ... - Microsoft

Oppo’s Air Glass 3 Smart Glasses Have an AI Assistant and Better Visuals – CNET

Oppo is emphasizing the "smart" aspect of smart glasses with its latest prototype, the Air Glass 3, which the Chinese tech giant announced Monday at Mobile World Congress 2024.

The new glasses can be used to interact with Oppo's AI assistant, signaling yet another effort by a major tech company to integrate generative AI into more gadgets following the success of ChatGPT. The Air Glass 3 prototype is compatible with Oppo phones running the company's ColorOS 13 operating system and later, meaning it'll probably be exclusive to the company's own phones. Oppo didn't mention pricing or a potential release date for the Air Glass 3 in its press release, which is typical of gadgets that are in the prototype stage.


The glasses can access a voice assistant that's based on Oppo's AndesGPT large language model, which is essentially the company's answer to ChatGPT. But the eyewear will need to be connected to a smartphone app in order for it to work, likely because the processing power is too demanding to be executed on a lightweight pair of glasses. Users would be able to use the voice assistant to ask questions and perform searches, although Oppo notes that the AI helper is only available in China.

Following the rapid rise of OpenAI's ChatGPT, generative AI has begun to show up in everything from productivity apps to search engines to smartphone software. Oppo is one of several companies -- along with TCL and Meta -- that believe smart glasses are the next place users will want to engage with AI-powered helpers. Mixed reality has been in the spotlight thanks to the launch of Apple's Vision Pro headset in early 2024.

Like the company's previous smart glasses, the Air Glass 3 looks just like a pair of spectacles, according to images provided by Oppo. But the company says it's developed a new resin waveguide that it claims can reduce the so-called "rainbow effect" that can occur when light refracts as it passes through.

Waveguides are the part of the smart glasses that relays virtual images to the eye, as smart glasses maker Vuzix explains. If the glasses live up to Oppo's claims, they should offer improved color and clarity. The glasses can also reach over 1,000 nits at peak brightness, Oppo says, which is almost as bright as some smartphone displays.


Oppo's Air Glass 3 prototype weighs 50 grams, making it similar to a pair of standard glasses, although on the heavier side. According to glasses retailer Glasses.com, the majority of glasses weigh between 25 and 50 grams, with lightweight models weighing as little as 6 grams.

Oppo is also touting the glasses' audio quality, saying it uses a technique known as reverse sound field technology to prevent sound leakage in order to keep calls private. There are also four microphones embedded in the glasses -- which Oppo says is a first -- for capturing the user's voice more clearly during phone calls.

There are touch sensors along the side of the glasses for navigation, and Oppo says you'll be able to use the glasses for tasks like viewing photos, making calls and playing music. New features will be added in the future, such as viewing health information and language translation.

With the Air Glass 3, Oppo is betting big on two major technologies gaining a lot of buzz in the tech world right now: generative AI and smart glasses. Like many of its competitors, it'll have to prove that high-tech glasses are useful enough to earn their place on your face. And judging by the Air Glass 3, it sees AI as being part of that.

Editors' note: CNET is using an AI engine to help create some stories. For more, see this post.

See more here:

Oppo's Air Glass 3 Smart Glasses Have an AI Assistant and Better Visuals - CNET

DeepMind Chief Says Google’s Bungled AI Faces Feature Is Returning Soon – Bloomberg

Google plans to resume a paused artificial intelligence feature that generates images of people in the next couple of weeks, according to the company's top AI executive.

"We hope to have that back online in a very short order," Demis Hassabis, head of the research division Google DeepMind, said on Monday at the Mobile World Congress in Barcelona.

Read more:

DeepMind Chief Says Google's Bungled AI Faces Feature Is Returning Soon - Bloomberg

MWC 2024: Microsoft to open up access to its AI models to allow countries to build own AI economies – Euronews

Monday was a big day for announcements from tech giant Microsoft, unveiling new guiding principles for AI governance and a multi-year deal with Mistral AI.

Tech behemoth Microsoft has unveiled a new set of guiding principles on how it will govern its artificial intelligence (AI) infrastructure, effectively further opening up access to its technology to developers.

The announcement came at the Mobile World Congress tech fair in Barcelona on Monday, where AI is a key theme of this year's event.

One of the key planks of its newly published "AI Access Principles" is the democratisation of AI through the company's open-source models.

The company said it plans to do this by expanding access to its cloud computing AI infrastructure.

Speaking to Euronews Next in Barcelona, Brad Smith, Microsoft's vice chair and president, also said the company wanted to make its AI models and development tools more widely available to developers around the world, allowing countries to build their own AI economies.

"I think it's extremely important because we're investing enormous amounts of money, frankly, more than any government on the planet, to build out the AI data centres so that in every country people can use this technology," Smith said.

"They can create their AI software, their applications, they can use them for companies, for consumer services and the like".

The "AI Access Principles" underscore the company's commitment to open-source models. Open source means the source code is publicly available for anyone to use, modify, and distribute.

"Fundamentally, it [the principles] says we are not just building this for ourselves. We are making it accessible for companies around the world to use so that they can invest in their own AI inventions," Smith told Euronews Next.

"Second, we have a set of principles. It's very important, I think, that we treat people fairly. Yes, that as they use this technology, they understand how we're making available the building blocks so they know it, they can use it," he added.

"We're not going to take the data that they're developing for themselves and access it to compete against them. We're not going to try to require them to reach consumers or their customers only through an app store where we exact control".

The announcement of its AI governance guidelines comes as the Big Tech company struck a deal with Mistral AI, the French company revealed on Monday, signalling Microsoft's intent to branch out in the burgeoning AI market beyond its current involvement with OpenAI.

Microsoft has already heavily invested in OpenAI, the creator of wildly popular AI chatbot ChatGPT. Its $13 billion (€11.9 billion) investment, however, is currently under review by regulators in the EU, the UK and the US.

Widely cited as a growing rival for OpenAI, 10-month-old Mistral reached unicorn status in December after being valued at more than €2 billion, far surpassing the €1 billion threshold to be considered one.

The new multi-year partnership will see Microsoft giving Mistral access to its Azure cloud platform to help bring its large language model (LLM), Mistral Large, to market.

LLMs are AI programmes that recognise and generate text and are commonly used to power generative AI tools like chatbots.

"Their [Mistral's] commitment to fostering the open-source community and achieving exceptional performance aligns harmoniously with Microsoft's commitment to develop trustworthy, scalable, and responsible AI solutions," Eric Boyd, Corporate Vice President, Azure AI Platform at Microsoft, wrote in a blog post.

The move is in keeping with Microsoft's commitment to open up its cloud-based AI infrastructure.

In the past week, as well as its partnership with Mistral AI, Microsoft has committed to investing billions of euros over two years in its AI infrastructure in Europe, including €1.9 billion in Spain and €3.2 billion in Germany.

See the original post here:

MWC 2024: Microsoft to open up access to its AI models to allow countries to build own AI economies - Euronews

Ensuring access to health care – Southeast Iowa Union

Rep. Ashley Hinson

One of my top priorities in Congress is ensuring Iowans in rural areas have the same access to health care as those in urban areas. As I've traveled throughout Eastern Iowa, I have seen firsthand the ways rural Iowans are stepping up to improve access to quality health care, especially maternal care.

As a mom, expanding access to maternal care is personal to me. When you're pregnant, you have a million thoughts going through your head. Stressing about being able to see your doctor or get the care you need should not have to be one of them.

I am committed to advancing bipartisan initiatives to ensure women in rural areas have the support and care they need for their health and the health of their babies.

The Midwives for Moms Act: Legislation that will help increase the number of trained midwives in the United States to help fill gaps in maternity care and improve birth outcomes.

The No Surprise Bills for New Moms Act: Legislation that prevents new parents from receiving surprise medical bills for their newborn babies.

The Maternal and Child Health Stillbirth Prevention Act: Legislation that helps prevent the all-too-frequent, but often silent, tragedy of stillbirth and saves the lives of mothers and babies.

Last week, I spent time with maternal care providers and with women who shared their experiences with high risk pregnancies and stillbirths. I will bring their stories with me to Washington as I continue the fight for better care.

It was great to join the Mason City Chamber for their Breaking Glass Leadership series last week. From the importance of prioritizing family time to sharing how I stay grounded in D.C., I enjoyed sharing my experience as a working mom and the lessons I've learned throughout my time in public service.

It was great to meet with All About Cheesesteaks owner, Joe. I heard about how he successfully grew his operation from food truck to storefront, or as he calls it, "wheels to walls."

If you're in Charles City, be sure to check them out and grab a delicious cheesesteak.

The team at the MercyOne New Hampton Medical Center is exceptional. We discussed the steps they've taken to ensure expecting mothers can receive prenatal care and the importance of expanding access to health care for new moms and their babies.

Peoples Clinic provides a wide array of health care services to residents of Butler County. It was great to learn more about their successful model and see firsthand their passion for providing high-quality care to everyone who walks in their doors.

Iowa raises 12 million turkeys each year, ranking 7th nationwide for turkey production. Thank you to The Iowa Turkey Federation, Kim Reis, and USDA Wildlife Services for facilitating a tour and the informative discussion about the importance of mitigating bird flu.

I sat down with women who have endured high-risk pregnancies and the tragedy of stillbirth in Grundy County. These women have leaned on each other throughout their difficult journeys to motherhood, and I am so inspired by their strength.

It was heartbreaking to hear their stories of immense loss, but I'm so grateful they are willing to lend their voices and experiences to the fight for better maternal care.

I'm more motivated than ever to fight for healthy moms and healthy babies.

Link:

Ensuring access to health care - Southeast Iowa Union

Beach towns gear up for ambitious state and federal lobbying effort, firm on $9K monthly retainer – Port City Daily

The Topsail Island Shoreline Protection Commission discussed federal and state lobbying initiatives in a meeting last week.(Port City Daily photo/Mark Darrough)

TOPSAIL ISLAND - Three coastal municipalities in the Cape Fear region are aiming for big shoreline preservation goals in 2024.


The Topsail Island Shoreline Protection Commission, consisting of representatives from the towns of Topsail Beach, North Topsail Beach, and Surf City, as well as Pender and Onslow counties, put forward state and federal legislative goals for the new year at a meeting last week. They agreed to employ a new lobbyist for their state agenda and bumped pay for their federal lobbyist, Ward & Smith P.A., to aid with the campaign.

Chair Steve Smith, the mayor of Topsail Beach, told Port City Daily the commission's budget is around $130,000 per year, funded through matching 33% contributions from the three coastal towns. Smith said the vast majority of TISPC's resources are used to pay lobbyists, with some funds covering related fees, such as transportation.

TISPC's original charter was established in 2005. In addition to lobbying for state and federal policies to benefit the coastal towns, the organization provides information to the county and town governments it serves. Smith said this allows other coastal communities to stay up to date with federal and state policies related to beach management and water quality.

The commission agreed to a contract amendment with Ward & Smith, which the group has worked with since 2016, to increase the law firm's retainer by $250 per month, for a new rate of $9,225.

Smith described Mike McIntyre, who served as representative for North Carolina's 7th district from 1997 to 2015, as the primary Ward & Smith employee involved with TISPC. He's worked as the law firm's senior adviser for government relations since 2020.

Smith has worked for Topsail Beach in various capacities for at least a decade and became mayor four years ago. He said TISPC has strived to receive federal funding for at least 15 years and described Ward & Smith's services as helpful in advancing several long-term goals, such as federal support for the Surf City storm mitigation project.

At the Jan. 25 meeting, TISPC concluded a four-month search by selecting Raleigh-based lobbyist David Farrell of Maynard Nexsen P.C. to take over state duties from former lobbyist Connie Wilson of Connie Wilson Consulting, Inc. She retired last year after 12 years of work with the commission. Farrell will be paid $4,000 a month.

Smith said it would be difficult to give a figure of how many hours per week the lobbyists work, as it fluctuates based upon activities within the legislature. At the state level, the group will lobby to maintain funding for the Shallow Draft Inlet Dredging Fund, which will cover $16.8 million of a $22 million contract signed with Norfolk Dredging Company in October 2023; FEMA is providing the remaining costs.

The project aims to renourish Topsail Beach by excavating between 1.6 million and 1.9 million cubic yards of sand from inlet channels and placing it on the beachfront. However, a November 2023 review of inlet depths found some areas shallower than anticipated, potentially increasing the project's cost by $3.5 million; Topsail Beach may seek state support on any cost increases.
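For a rough sense of the funding split, the short sketch below works only from the figures reported in this article (the $22 million contract, the $16.8 million state share, and the potential $3.5 million increase); the computed amounts are arithmetic, not reported numbers.

```python
# Figures as reported in the article.
contract_total = 22_000_000      # Norfolk Dredging contract, Oct. 2023
state_share = 16_800_000         # Shallow Draft Inlet Dredging Fund
potential_increase = 3_500_000   # from the Nov. 2023 depth review

# FEMA covers the remainder of the original contract.
fema_share = contract_total - state_share

# Worst case if the depth-review increase materializes in full.
revised_total = contract_total + potential_increase

print(f"FEMA share: ${fema_share:,}")            # $5,200,000
print(f"Revised total, worst case: ${revised_total:,}")  # $25,500,000
```

That implied FEMA share of about $5.2 million is what "FEMA is providing the remaining costs" amounts to, before any cost increase is allocated.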

"When the communities need the shallow draft fund to keep the inlets open, they're not talking about $500,000; we're talking about anywhere from $10 to $20 million," Smith said at the meeting.

The commission will also advocate for recurring patronage from the Coastal Storm Damage Mitigation Fund at $10 million per year. The fund provides grants to local governments to mitigate and remediate storm damage to beaches and dunes.

Other state priorities include shellfish lease management changes to provide public access to state waters and lobbying to keep home insurance rates affordable. Just this month, the North Carolina Rate Bureau requested rate increases as high as 99% in coastal counties, although experts told Port City Daily the final figure will likely be significantly lower.

NCRB chief operating officer Jared Chappell told PCD that storm risk is the primary reason for the heightened insurance costs on beachfront homes; North Carolina experienced five hurricanes of varying intensity from 2016 to 2022.

The National Oceanic and Atmospheric Administration anticipates sea levels will rise several feet in the coming decades; one of TISPC's roles is to stay up to date with sea level rise studies and integrate new data into federal and state legislative goals.

On the federal level, TISPC will ask Congress to direct the U.S. Army Corps of Engineers to conduct a full review of past expenditures through the Flood Control and Coastal Emergencies program, which assists disaster-impacted communities with recovery and repairs to critical infrastructure. This review would use forward-looking data to estimate future expenses and ensure budgetary flexibility for the program.

Additionally, the group will lobby Congress to establish a FEMA team with one representative for the island to determine losses when making disaster recovery recommendations.

TISPC will request legislation to allow sand to be used for Coastal Barrier Resources Act (CBRA) beach renourishment projects on non-CBRA-designated beaches and to amend the region's CBRA-designated areas.

The commission also wants to consider involvement with initiatives such as the RISEE Act to acquire federal funds for offshore wind energy projects. The group is looking into studies on offshore wind production's impact on commercial fishing.

Vice Chair Mike Benson, who is mayor pro tem of the town of North Topsail, noted TISPC is starting the year with momentum, having passed some of its top legislative priorities last year. These include granting local authority to remove deserted vessels and banning non-encapsulated polystyrene foam in docks for environmental preservation.

"I think we got along further in this legislative cycle than ever before," he said at the meeting.

Benson also noted TISPC's efforts could serve as a model for other coastal communities without a coastal protection policy organization. Smith told PCD Carteret County has a similar body but said other nearby municipalities have not established a group to carry out the same breadth of initiatives as TISPC.

PCD reached out to Benson to ask if he had any other goals for shoreline protection, but he deferred to the chairman.

Although it was not on the agenda, Smith raised the idea of considering new tree protections akin to Oak Island's vegetation ordinance amendment, passed last week. He cited the stormwater absorption benefits of high tree volume in coastal areas.

Most notably, Oak Island broadened the definition of "heritage trees," which refers to trees considered particularly valuable for their rarity, age, or size, from encompassing trees 30 inches in diameter to trees 15 inches in diameter, making significantly more trees require a permit for removal.

"We haven't had any conversations about doing a total canopy coverage," Smith told PCD, in reference to Oak Island's tree canopy study, carried out by urban forestry consulting firm PlanIt Geo, which was published in November and helped inform the town's new vegetation policy.

Smith noted Topsail and Surf City already have tree protection ordinances, but said he is interested in taking new measures to preserve vegetation.

"I think as we move down through the year, it will become important to understand how they reached the size of a tree and a few other issues there at Oak Island," he said.

Editor's note: This article has been updated to change the word "expand" to "amend" for CBRA-designated areas in Topsail Island, to change the phrase "Topsail Island" to "Topsail Beach," and to specify shellfish leases for public uses of state waters. Port City Daily regrets these errors.

Tips or comments? Email journalist Peter Castagno at peter@localdailymedia.com.

View post: Beach towns gear up for ambitious state and federal lobbying effort, firm on $9K monthly retainer - Port City Daily