NASA to look for new options to carry out Mars Sample Return program – SpaceNews

WASHINGTON – NASA will seek out-of-the-box ideas in a bid to reduce the costs and shorten the schedule for returning samples from Mars.

In an April 15 briefing, agency officials announced they would solicit proposals from NASA centers and from industry on innovative designs to reshape its Mars Sample Return (MSR) effort after an internal review confirmed the ballooning costs of the overall program.

That review found that the current program would cost between $8 billion and $11 billion, the same range offered by an independent assessment completed last September. To fit that into the overall planetary science budget without affecting other programs would delay the return of samples from the early 2030s to 2040.

"The bottom line is that $11 billion is too expensive and not returning samples until 2040 is unacceptably too long," NASA Administrator Bill Nelson said at the briefing.

To try to reduce costs and shorten the schedule, NASA will issue a request for proposals April 16 seeking ideas on alternative approaches for the overall MSR architecture or specific elements of it, like the sample retrieval lander or the Mars Ascent Vehicle (MAV) rocket that would place the collected samples into orbit. Proposals would be due to NASA May 17, with the agency issuing contracts for 90-day studies shortly thereafter.

"I'm expecting to get everybody in high gear and that we have the answers to this by this fall," Nelson said.

While NASA is looking for innovative approaches, it is not necessarily looking for new technologies. "What we're looking for is heritage," said Nicola Fox, NASA associate administrator for science. "What we're hoping is that we'll be able to get back to some more traditional, tried-and-true architectures, things that do not require huge technological leaps."

One example she gave is technology that enables a smaller, and presumably less expensive, MAV. The studies, she said, will seek proposals that could return an unspecified number of samples, and not necessarily all the roughly 30 samples that the Perseverance rover will have on board when it completes its work.

NASA's hope is that the studies can significantly reduce the cost and schedule for MSR, but officials did not set a specific goal. "We're definitely going to try," Nelson said, adding he was counting on the expertise of NASA personnel and those in industry to find a solution.

The goal is to do better than a revised version of the baseline architecture for MSR that NASA developed in response to the independent report last fall. That architecture would see the launch of the ESA-developed Earth Return Orbiter in 2030, slightly later than currently planned, said Sandra Connelly, NASA deputy associate administrator for science, during a town hall meeting after the briefing. That would be followed by the sample retrieval lander with the MAV in 2035, allowing samples to make it back to Earth in 2040.

One issue is the longevity of Perseverance. Connelly said the new plan would have Perseverance complete its exploration of terrain outside Jezero Crater and return to the crater floor in 2028. Once there, it would go into a quiescent state until the sample retrieval lander arrived.

Fox suggested in the town hall meeting that this baseline concept would not fly given its projected high cost. "In the current budget climate that we have, $11 billion, as the administrator said, is too much," she said. "I wouldn't say the entire thing is dead on arrival. What we're looking at is how we can infuse some innovation and heritage and simplification."

MSR, though, will be on a fiscal diet for the next two years. Fox said that NASA plans to spend $310 million on MSR in the current fiscal year, near the low end of the range offered by congressional appropriators in the final omnibus spending bill last month. That is a little less than one third of the $949.3 million that NASA originally requested for MSR in its 2024 budget proposal.

NASA's fiscal year 2025 budget request left funding for MSR as "TBD," or to be determined. NASA now says it will seek $200 million for the program. Lori Glaze, director of NASA's planetary science division, said at the town hall meeting that the $200 million will come from a line for "Planetary Decadal Future" in the original budget proposal, avoiding taking money away from existing missions or research programs. It would, though, further delay new missions, like a proposed Uranus mission recommended by the latest planetary science decadal survey.

Nelson said he has had extensive discussions about NASA's MSR plans with members of Congress, including senators and representatives from California worried about the effects of the changes on the Jet Propulsion Laboratory, which laid off 8% of its workforce in February in response to reductions in spending on MSR. "They seem to be quite understanding of the predicament we're in," he said.

However, in a statement a few hours after the briefing, Sens. Alex Padilla (D-Calif.) and Laphonza Butler (D-Calif.) criticized the budget reductions. "These funding levels are woefully short for a mission that NASA itself identified as its highest priority in planetary science and that has been decades in the making," they stated, asking Nelson to work with Congress to better balance the cuts to protect the JPL workforce.

NASA officials said at the briefing and town hall that there was no discussion of suspending or even canceling MSR, citing its high ranking among flagship-class missions in the last two planetary science decadal surveys. "Returning these samples from Mars is such a huge priority for us. That is why we're doing all of these things," Fox said.

"Returning the samples from Mars remains an important operation," Nelson said.


Cloud Native Efficient Computing is the Way in 2024 and Beyond – ServeTheHome

Today we wanted to discuss cloud native and efficient computing. Many have different names for this, but it is going to be the second most important computing trend in 2024, behind the AI boom. Modern performance cores have gotten so big and fast that there is a new trend in the data center: using smaller and more efficient cores. Over the next few months, we are going to be doing a series on this trend.

As a quick note: We get CPUs from all of the major silicon players. Also, since we have tested these CPUs in Supermicro systems, we are going to say that they are all sponsors of this, but it is our own idea and content.

Let us get to the basics. Once AMD re-entered the server market (and desktop) with a competitive performance core in 2017, performance per core and core counts exploded almost as fast as pre-AI boom slideware on the deluge of data. As a result, cores got bigger, cache sizes expanded, and chips got larger. Each generation of chips got faster.

Soon, folks figured out a dirty secret in the server industry: faster per-core performance is good if you license software by core, but there are a wide variety of applications that need cores, just not fast ones. Today's smaller efficient cores tend to be on the order of performance of a mainstream Skylake/Cascade Lake Xeon from 2017-2021, yet they can be packed more densely into systems.

Consider this illustrative scenario that is far too common in the industry:

Here, we have several apps built by developers over the years. Each needs its own VM, and each VM is generally between 2-8 cores. These are applications that need to be online 24/7 but are not ones that need massive amounts of compute. Good examples are websites that serve a specific line-of-business function but do not have hundreds of thousands of visitors. Also, these tend to be workloads that are already in cloud instances, VMs, or containers. As the industry has started to move away from hypervisors with per-core licensing or per-socket license constraints, scaling up to bigger, faster cores that are going underutilized makes little sense.

As a result, the industry realized it needed lower-cost chips that chase density instead of per-core performance. An awesome way to think about this is to imagine trying to fit the maximum number of instances of those small line-of-business applications developed over the years, sitting in 2-8 core VMs, into as few servers as possible, as in the sketch below. There are other applications like this that are commonly shown, such as nginx web servers, redis servers, and so forth. Another great example is that some online game instances require one core per user in the data center, even if that core is relatively meager. Sometimes just having more cores is, well, more cores = more better.
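To make that packing idea concrete, here is a minimal first-fit-decreasing bin-packing sketch. The VM sizes and the 128-core host capacity are hypothetical, picked only to mirror the 2-8 core VMs discussed above:

```python
import random

# First-fit decreasing bin packing: squeeze many small 2-8 core VMs into as
# few hosts as possible. VM sizes and host capacity here are hypothetical.
def pack_vms(vm_cores, cores_per_host=128):
    hosts = []  # free cores remaining on each open host
    for need in sorted(vm_cores, reverse=True):
        for i, free in enumerate(hosts):
            if free >= need:
                hosts[i] -= need
                break
        else:
            hosts.append(cores_per_host - need)  # open a new host
    return len(hosts)

random.seed(0)
vms = [random.choice([2, 4, 6, 8]) for _ in range(200)]  # 200 small LOB VMs
print(f"{sum(vms)} vCPUs across {len(vms)} VMs fit in {pack_vms(vms)} hosts")
```

Real schedulers add memory, anti-affinity, and headroom constraints, but the core observation holds: dense, cheap cores shrink the host count.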

Once the constraints of legacy hypervisor per-core/per-socket licensing are removed, the question becomes how to fit as many cores on a package, and then how densely those packages can be deployed in a rack. One other trend we are seeing is not just more cores, but also lower clock speed cores. CPUs that have a maximum frequency in the 2-3GHz range today tend to be considerably more power efficient than P-core-only server parts in the 4GHz+ range, let alone desktop CPUs now pushing well over 5GHz. This is the voltage-frequency curve at work. If your goal is to have more cores, but you do not need maximum per-core performance, then lowering performance per core by 25% while decreasing power by 40% or more means that all of those applications are being serviced with less power.
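As a rough sanity check on those numbers, here is a sketch of the voltage-frequency intuition. It assumes dynamic power scales as P ∝ f·V² and that voltage falls roughly with frequency over the operating range; both are simplifications, and the figures are illustrative rather than measured:

```python
# Rough dynamic power model: P ~ f * V^2. If voltage tracks frequency over
# the operating range, power scales close to f^3. Illustrative, not measured.
def relative_power(freq_scale, voltage_scale=None):
    if voltage_scale is None:
        voltage_scale = freq_scale  # assume V falls linearly with f
    return freq_scale * voltage_scale**2

# Give up ~25% per-core performance (frequency 1.0 -> 0.75):
p = relative_power(0.75)
print(f"relative power {p:.2f}, a {1 - p:.0%} reduction")  # ~58% under f^3

# Even if voltage only falls half as far as frequency, savings top 40%:
p = relative_power(0.75, voltage_scale=0.875)
print(f"relative power {p:.2f}, a {1 - p:.0%} reduction")  # ~43%
```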

Less power is important for a number of reasons. Today, the biggest reason is the AI infrastructure build-out. If you, for example, saw our 49ers Levi's Stadium tour video, that is a perfect example of a data center that is not going to expand in footprint and can only expand cooling so much. It also is a prime example of a location that needs AI servers for sports analytics.

That type of constraint where the same traditional work needs to get done, in a data center footprint that is not changing, while adding more high-power AI servers is a key reason cloud-native compute is moving beyond the cloud. Transitioning applications running on 2017-2021 era Xeon servers to modern cloud-native cores with approximately the same performance per core can mean 4-5x the density per system at ~2x the power consumption. As companies release new generations of CPUs, the density figures are increasing at a steep rate.
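A quick back-of-the-envelope check on what that claim implies; the server counts and wattages below are hypothetical, chosen only to match the 4-5x density and ~2x power ratios above:

```python
# Hypothetical consolidation math matching the 4-5x density / ~2x power claim.
old_servers = 20      # 2017-2021 era Xeon hosts (illustrative)
old_power_w = 500     # draw per old server, illustrative
density_gain = 4      # one cloud-native server replaces four old ones
power_ratio = 2       # each new server draws ~2x an old one

new_servers = old_servers // density_gain
old_total = old_servers * old_power_w
new_total = new_servers * old_power_w * power_ratio

print(f"{old_servers} servers -> {new_servers} "
      f"({1 - new_servers / old_servers:.0%} less rack space)")
print(f"{old_total} W -> {new_total} W "
      f"({1 - new_total / old_total:.0%} less power, same work)")
```

Under these assumptions, the same work fits in 75% less rack space at half the power, which is where the freed-up budget for AI servers comes from.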

We showed this at play with the same era of servers and modern P-core servers in our 5th Gen Intel Xeon Processors Emerald Rapids review.

We also covered the consolidation just between P-core generations in the accompanying video. We are going to have an article with the current AMD EPYC Bergamo parts very soon in a similar vein.

If you are not familiar with the current players in the cloud-native CPU market, the ones you can buy for your own data centers or colocation, here is a quick run-down.

The AMD EPYC Bergamo was AMD's first foray into cloud-native compute. Onboard, it has up to 128 cores/256 threads, making it the densest publicly available x86 server CPU today.

AMD removed L3 cache from its P-core design, lowered the maximum all-core frequencies to decrease the overall power, and did extra work to decrease the core size. The result is the same Zen 4 core IP, with less L3 cache and less die area. Less die area means more cores can be packaged together onto a CPU.

Some stop with Bergamo, but AMD has another Zen 4c chip in the market. The AMD EPYC 8004 series, codenamed Siena, also uses Zen 4c, but with half the memory channels, less PCIe Gen5 I/O, and single-socket-only operation.

Some organizations that are upgrading from popular dual 16 core Xeon servers can move to single socket 64-core Siena platforms and stay within a similar power budget per U while doubling the core count per U using 1U servers.

AMD markets Siena as the edge/embedded part, but we need to recognize that it is in the vein of current-gen cloud-native processors.

Arm has been making a huge splash in the space. The only Arm server CPU vendor out there for those buying their own servers is Ampere, led by many of the former Intel Xeon team.

Ampere has two main chips, the Ampere Altra (up to 80 cores) and Altra Max (up to 128 cores). These use the same socket, so most servers can support either; the Max just came out later to support up to 128 cores.

Here, the focus on cloud-native compute is even more pronounced. Instead of having beefy floating-point compute capabilities, Ampere is using Arm Neoverse N1 cores that focus on low-power integer performance. It turns out a huge number of workloads, like serving web pages, are mostly driven by integer performance. While these may not be the cores you would pick to build a Linpack Top500 supercomputer, they are great for web servers. Since the cloud-native compute idea was to build cores and servers that can run workloads with little to no compromise, but at lower power, that is what Arm and Ampere built.

Next up will be the AmpereOne. This is already shipping, but we have yet to get one in the lab.

AmpereOne uses a custom designed core for up to 192 cores per socket.

Assuming you could buy a server with AmpereOne, you would get more core density than an AMD EPYC Bergamo server (192 vs. 128 cores), but you would get fewer threads (192 vs. 256 threads). If you had 1 vCPU VMs, AmpereOne would be denser. If you had 2 vCPU VMs, Bergamo would be denser. SMT has been a challenge in the cloud due to some of the security surfaces it exposes.
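Here is a small sketch of that vCPU math. It assumes a policy, common among clouds precisely because of those SMT security surfaces, that no two VMs ever share a physical core; the policy is our assumption, while the core and thread counts come from the text above:

```python
import math

# vCPU packing under a "no two VMs share a physical core" policy (assumed).
# A VM may still use both SMT siblings of a core it owns outright.
def max_vms(cores, threads_per_core, vcpus_per_vm):
    cores_per_vm = math.ceil(vcpus_per_vm / threads_per_core)
    return cores // cores_per_vm

bergamo = (128, 2)    # 128 Zen 4c cores, 2-way SMT = 256 threads
ampereone = (192, 1)  # 192 custom cores, no SMT

for vcpus in (1, 2):
    print(f"{vcpus} vCPU VMs: Bergamo {max_vms(*bergamo, vcpus)}, "
          f"AmpereOne {max_vms(*ampereone, vcpus)}")
# 1 vCPU VMs: Bergamo 128, AmpereOne 192  -> AmpereOne denser
# 2 vCPU VMs: Bergamo 128, AmpereOne 96   -> Bergamo denser
```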

Next in the market will be the Intel Sierra Forest. Intel's new cloud-native processor will offer up to 144/288 cores. Perhaps most importantly, it is aiming for a low power-per-core metric while also maintaining x86 compatibility.

Intel is taking its efficient E-core line and bringing it to the Xeon market. We have seen massive gains in E-core performance in both embedded and lower-power lines like the Alder Lake-N, where we saw greater than 2x generational performance per chip. Now, Intel is splitting its line into P-cores for compute-intensive workloads and E-cores for high-density scale-out compute.

Intel will offer Granite Rapids, the all P-core update to the current 5th Gen Xeon Emerald Rapids, later in 2024. Sierra Forest will be the first-generation all E-core design and is planned for the first half of 2024. Intel has already announced that the next-generation Clearwater Forest will continue the all E-core line. As a full disclosure, this is a launch I have been excited about for years.

We are going to quickly mention the NVIDIA Grace Superchip here, with up to 144 cores across two dies packaged along with LPDDR memory.

While at 500W and using Arm Neoverse V2 performance cores one would not think of this as a cloud-native processor, it does have something really different. The Grace Superchip has onboard memory packaged alongside its Arm CPUs. As a result, that 500W is actually for CPU and memory. There are applications that are primarily memory-bandwidth bound, not necessarily core-count bound. For those applications, something like a Grace Superchip can actually end up being a lower-power solution than some of the other cloud-native offerings. These are also not the easiest to get, and they are priced at a significant premium. One could easily argue these are not cloud-native, but if our definition is doing the same work in a smaller, more efficient footprint, then the Grace Superchip might actually fall into that category for a subset of workloads.

If you were excited for our 2nd to 5th Gen Intel Xeon server consolidation piece, get ready. To say that the piece we did in late 2023 was just the beginning would be an understatement.

While many are focused on AI build-outs, projects to shrink portions of existing compute footprints by 75% or more are certainly possible, making more space, power, and cooling available for new AI servers. Also, just from a carbon footprint perspective, using newer and significantly more power-efficient architectures to do baseline application hosting makes a lot of sense.

The big question in the industry right now on CPU compute is whether cloud native energy-efficient computing is going to be 25% of the server CPU market in 3-5 years, or if it is going to be 75%. My sense is that it likely could be 75%, or perhaps should be 75%, but organizations are slow to move. So at STH, we are going to be doing a series to help overcome that organizational inertia and get compute on the right-sized platforms.


The Latest News on Meme Coins | Navigating the Meme Coin Hype in 2023 with Dogecoin, Shiba Inu, Pepe Coin, and … – Finbold – Finance in Bold

This post is sponsored and not a part of Finbold's editorial content. If you encounter any issues, kindly report them to [email protected]. Crypto assets/products can be highly risky. Never invest unless you're prepared to lose all the money you invest.

The cryptocurrency market is experiencing a surge of optimism, fueled by recent positive developments, while the ever-growing popularity of meme coins persists. These digital tokens, inspired by internet memes and cultural trends, are captivating experienced crypto users and newcomers alike.

This article aims to demystify the world of meme coins, providing an informative overview for crypto users of all levels. We'll explore the current market landscape, analyze the top contenders, and dissect the risks and potential rewards associated with these unique crypto coins.

What are Meme Coins? | Understanding the Hype

Unlike Bitcoin or other altcoins, meme coins ride the wave of humor and community. While first-wave meme coins such as Dogecoin may have lacked inherent utility, more recent altcoins such as ApeMax stand out; ApeMax's Boost-to-Earn staking utility allows coin holders to boost projects or their favorite entities and potentially earn rewards. It's important to note that while all meme coins mix blockchain and fun, they are all different from one another in many ways. In this article, we'll explore some of the particularities of these different meme cryptos.

Meme Coin Market Overview | Current Landscape and Trends

The meme coin market has experienced explosive growth in recent years, with the total market capitalization reaching over $22 billion as of December 2023. This surge is attributed to several factors.

The Leading Meme Coins of 2023:

ApeMax (APEMAX)

ApeMax is exploding onto the crypto scene, grabbing people's attention with its innovative approach to staking. This exciting new meme coin introduces Boost-to-Earn tokenomics, a revolutionary concept that lets holders potentially earn rewards while simultaneously boosting the projects they're fans of.

Dogecoin (DOGE)

Dogecoin (DOGE) is the grandfather of all meme coins, predating the current trend by several years. Launched in 2013 as a joke, it quickly captured the hearts and minds of internet users with its Shiba Inu mascot and lighthearted approach to cryptocurrency.

Shiba Inu (SHIB)

Often called the "Dogecoin killer," Shiba Inu has captured significant attention through its aggressive marketing and loyal community. SHIB has developed a wide ecosystem with several sub-tokens with specific purposes, and it has also recently launched its own Shibarium Layer-2 chain.

Pepe Coin (PEPE)

Pepe Coin is a popular new meme coin, inspired by the iconic Pepe the Frog meme. Pepe Coin implements a token burning mechanism which takes place on each on-chain transaction. This, combined with its strong community and unique branding, has propelled it to become a popular choice in the meme coin space.
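For readers new to the concept, here is a generic sketch of how a burn-on-transfer mechanism works. This is not Pepe Coin's actual contract; the 1% burn rate and the balances are made up for illustration:

```python
# Generic burn-on-transfer sketch: a fixed fraction of every transfer is
# destroyed, shrinking circulating supply over time. NOT Pepe Coin's actual
# contract; the 1% rate and balances below are hypothetical.
BURN_RATE = 0.01

balances = {"alice": 1_000_000, "bob": 0}
total_supply = sum(balances.values())

def transfer(sender, recipient, amount):
    global total_supply
    if balances.get(sender, 0) < amount:
        raise ValueError("insufficient balance")
    burned = amount * BURN_RATE
    balances[sender] -= amount
    balances[recipient] = balances.get(recipient, 0) + (amount - burned)
    total_supply -= burned  # removed from circulation permanently

transfer("alice", "bob", 100_000)
print(balances)                # {'alice': 900000, 'bob': 99000.0}
print(f"{total_supply:,.0f}")  # 999,000 -- supply shrank by 1,000
```

The intended effect is deflationary: the more the token trades, the smaller the supply becomes.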

What is ApeMax? | A Potential Game Changer in the Meme Coin Space?

With a rapidly growing community, ApeMax has quickly become a hot topic in presale discussions, gaining traction as a potential game-changer in the blockchain and presale space. In its current presale phase, ApeMax boasts an impressive 7,800+ token holders and a staggering 3.5 billion tokens staked. Moreover, eligible buyers during the presale can acquire exclusive discount Loot Boxes for a limited period.

Here's what makes ApeMax stand out.

ApeMax positions itself as a leader in the evolution of crypto presales and staking. Its innovative approach, combined with its passionate community and exciting features, makes it a coin that new buyers are watching.

What's the Future of Crypto Coins?

The short answer to this complex question is: it's impossible to predict with certainty. Crypto is a notoriously volatile and constantly changing market, with new trends and use cases arising every year. However, 2023 has been a recovery year for crypto following a bearish 2022. Institutional adoption and a move into the mainstream could be in sight, according to some crypto fans. For example, PayPal will soon allow its users to make transactions using cryptocurrencies, and that includes meme coins.

While the future is always unpredictable, blockchain technology has the power to disrupt various industries, from finance and supply chain to healthcare and governance. These factors, combined with the current bullish market sentiment, could mean that 2024 will be an intriguing year for crypto with new surprises in store. As 2024 approaches, larger meme coins such as Dogecoin and Shiba Inu also continue to remain popular among meme coin fans.

Wrapping up 2023 | Positivity Towards a Bullish Market?

The meme coin market has experienced a whirlwind of activity in 2023, capturing people's hearts and sparking discussions about the future of cryptocurrency. While volatility remains a defining feature, the year has also been marked by new developments pointing towards a possible crypto bull run.

Looking forward, meme coins seem likely to remain a crypto staple within the wider altcoin family. The increasing mainstream awareness, adoption by major platforms, and integration with real-world applications are creating fertile ground for adoption. New meme coins like ApeMax, with their innovative features, are paving the way for a future where meme coins go beyond the realm of hype to offer new and creative forms of utility. It should be noted that this article doesn't serve as financial advice. Thorough personal research is crucial when engaging with meme coins or any kind of cryptocurrency. It's essential to be aware of the high risks associated with them, as well as the volatility inherent in all crypto coins. For those interested in ApeMax coins, verify regional restrictions and buying eligibility rules on the official ApeMax website before proceeding.


Sometimes buses take the old route, sometimes the new one, so it's roulette getting a bus at either stop right now. – PoPville

Dear PoPville,

I wanted to flag WMATA's clusterf**k of a rollout of the new (old) S2 and S9 routes and hopefully save people from waiting for a bus that may never come. This week, WMATA rolled the S2/S9 routes back to 16th Street between K and M after about a year of running that strip on 15th. However, they didn't inform passengers or drivers clearly of this change. Sometimes buses take the old route, sometimes the new one, so it's roulette getting a bus at either stop right now.

There are also no signs at the old stop to let people know where they should go now, so only people with smartphones can track the buses, and even then it's not clear. For example, the current sign at 16th and L says only S2, but Google and the Transit app say the S9 stops there too. Which is it?

My advice for bus riders is to go to Franklin Square or P Street (or just take the 50 buses) until WMATA figures this out in a week or two.

I remember similar chaos the last time this route changed, so it seems like this isn't just a fluke, and better planning and communication from WMATA are needed.
