The evolution of security analytics – Help Net Security

As networks continue to evolve and security threats get more complex, security analytics plays an increasingly critical role in securing the enterprise. By combining software, algorithms and analytic processes, security analytics helps IT and security teams proactively (and reactively) detect threats before they result in data loss or other harmful outcomes.

Given that the average time to identify and contain a data breach in 2021 was 287 days, it's more important than ever for organizations to include security analytics in their threat detection and response programs. But how has this technology changed over the last decade? In this article, I will explore the evolution and importance of security analytics.

This evolution has been driven by two main trends.

First, security analytics is becoming more sophisticated. In the last 10 years the industry has transitioned from rule-based alerting to big data and machine learning analysis. Second, products have become more open and customizable.

As these technologies have advanced, so too have their use cases, with organizations applying them to identity analytics (examining authentication, authorization and access for anomalies), fraud detection (finding anomalous transactions), and more. Today, security analytics plays a central role in Security Information and Event Management (SIEM) solutions and Network Detection and Response products (not to mention standalone security analytics software).

To better understand this evolution and the capabilities of current security analytics solutions, let's dive into the three primary generations of security analytics advancement.

Traditional security analytics focused on correlation and rules within a proprietary platform.

Users imported data into a closed database, the data was normalized and run through a correlation engine, and then the system produced alerts based on rules. Products typically included alert enrichment, which provided more useful context along with an alert, such as linking it to a specific user, host, or IP address.
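
To make that first-generation pipeline concrete, here is a minimal, hypothetical sketch of how a rule-based correlation engine might turn normalized log events into enriched alerts. The field names, the brute-force rule, the threshold, and the asset lookup table are all illustrative assumptions rather than the workings of any particular product.

```python
# Hypothetical first-generation flow: normalize -> correlate against a rule -> enrich the alert.
from collections import defaultdict

# Illustrative enrichment table mapping an IP address to asset context.
ASSET_CONTEXT = {"10.0.0.5": {"owner": "alice", "host": "hr-db-01", "criticality": "high"}}

def normalize(raw_event):
    """Map raw log fields onto a common schema (greatly simplified)."""
    return {
        "src_ip": raw_event.get("ip"),
        "user": raw_event.get("username", "unknown"),
        "action": raw_event.get("action"),
    }

def correlate(raw_events, threshold=5):
    """Fire one enriched alert when a source IP exceeds a failed-login threshold."""
    failures = defaultdict(int)
    alerts = []
    for event in map(normalize, raw_events):
        if event["action"] == "login_failure":
            failures[event["src_ip"]] += 1
            if failures[event["src_ip"]] == threshold:
                enrichment = ASSET_CONTEXT.get(event["src_ip"], {})  # add asset context
                alerts.append({"rule": "possible_brute_force", **event, **enrichment})
    return alerts

sample = [{"ip": "10.0.0.5", "username": "alice", "action": "login_failure"}] * 6
print(correlate(sample))  # one alert, enriched with owner, host and criticality
```

Static, per-rule thresholds like this are exactly what produced the alert fatigue described next: every rule fires on its own, with no ranking across them.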

However, this era often suffered from alert fatigue, where the analytic solution produced more alerts than the security team could investigate, including high numbers of false positives. Sorting out which alerts were important and which ones weren't involved a great deal of manual work. Furthermore, these solutions were often entirely proprietary, with few or no options for customization. This prevented the security team from tweaking rules to cut down on the number of bad alerts. They were stuck with the alert fatigue issue.

The second generation of security analytics began to incorporate big data and statistical analysis, while remaining a black box to users.

These solutions offered data lakes instead of databases, which allowed for a greater variety of data to be gathered and analyzed, but they were still proprietary. New analytics capabilities emerged, such as the ability to include cloud data, network packets and flow data, but users still couldn't see how they worked or verify the results.

Data enrichment was better, but users largely could not customize the contextual data they wanted with their alerts. For example, a security team might want to add asset criticality data so they can prioritize events that affect key pieces of their infrastructure or include information from external sources like VirusTotal.

Many solutions started offering threat hunting capabilities as well, which made it easier for security teams to proactively search for suspicious activity that evaded perimeter security controls.

But false positives and the limited bandwidth of security teams continued to be a major challenge. In fact, this remains a challenge today. According to the 2021 Insider Threat Report from Cybersecurity Insiders, 33% of respondents said the biggest hurdle to maximizing the value of their SIEM was not having enough resources, while 20% said it was too many false positives.

The third generation of security analytics technologies brings us to the current day, where machine learning, behavioral analysis and customization are driving innovation.

There are now SIEM products that allow organizations to use their existing data lakes, rather than forcing customers to use proprietary ones. And some solutions have opened their analytics, enrichment, and machine learning models so users can better understand them and modify as needed.

Today, powerful algorithms find patterns in data, set baselines and identify outliers. There's also a greater focus on identifying anomalous behavior (a user taking suspicious actions) and on prioritizing and ranking the risk of alerts based on contextual information like data from SharePoint or IAM systems. For example, a user accessing source code with legitimate credentials might be a low-priority alert at best, but that user doing so in the middle of the night for the first time in weeks from a suspicious location should trigger a high-priority alert. Thanks to these capabilities, analytic solutions are reaching the point where they can trigger remediation actions automatically.
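
As a rough illustration of that behavioral approach, the hedged sketch below baselines a user's typical login hours, measures how far a new login deviates, and then raises the priority when contextual signals (a sensitive asset, a suspicious location) are present. The thresholds, weights, and field names are assumptions for illustration, not any vendor's scoring model.

```python
# Illustrative behavioral scoring: baseline, deviation, then context-based prioritization.
from statistics import mean, pstdev

def hour_deviation(history_hours, new_hour):
    """How many standard deviations the new login hour sits from the user's baseline."""
    mu = mean(history_hours)
    sigma = pstdev(history_hours) or 1.0  # guard against a zero-variance history
    return abs(new_hour - mu) / sigma

def score_alert(history_hours, new_hour, touches_source_code, suspicious_location):
    score = hour_deviation(history_hours, new_hour)
    if touches_source_code:
        score += 1.0   # sensitive asset raises priority (assumed weight)
    if suspicious_location:
        score += 2.0   # enrichment from a geo/threat feed raises it further (assumed weight)
    return "high" if score >= 3.0 else "low"

# A 3 a.m. login against source code from an odd location scores high;
# the same access during normal working hours would not.
print(score_alert([9, 10, 9, 11, 10], new_hour=3,
                  touches_source_code=True, suspicious_location=True))
```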

Security analytics has evolved quickly in recent years, and as we look ahead, the industry is starting to combine SIEM, User and Entity Behavior Analytics (UEBA), Security Orchestration, Automation and Response (SOAR) and Extended Detection and Response (XDR) for a more automated and telemetry-rich approach to threat detection and response.

But today, the latest advancements are helping to reduce the workload on security teams, allowing them to better detect and contain both known and unknown threats more quickly. Open access to security analytics is also a monumental shift that helps teams better understand and tweak these solutions so they can verify models and generate better results.

Ideally, analytics solutions should have strong pre-built libraries of machine learning models that don't require users to be data scientists to edit them (but give them the editing option if needed). As these capabilities continue to develop, I believe they'll be a key factor in helping security teams reduce that 287-day average time to contain a breach in the coming years.

Emo to e-boy: the evolution of a subculture – Campus Times

The emo subculture had teens by the throat during the late 2000s. Unfortunately, when the great MySpace-to-Facebook migration happened, the emo subculture that had flourished there lost a lot of its members and soon faded into the background. Now, most would say that emo is dead, and that is true to an extent: the genre itself and its standing in modern pop culture are practically on life support. That being said, however, emo has evolved into a new, possibly more popular subculture.

The emo subculture stemmed directly from the music of its namesake, which featured the likes of My Chemical Romance, Fall Out Boy, and Panic! at the Disco. Its fashion was characterized by skinny jeans, eyeliner, painted nails, band t-shirts, studded belts, wristbands, and the iconic straight jet-black hair with an asymmetrical fringe. Emo managed to become an influential subculture through MySpace, which allowed young people to interact with each other without having to leave home, giving young emos easy access to like-minded people across the world. But as the subculture found mainstream popularity, so did the negative connotations it carried, often being associated with depression, self-harm, and suicide. These stereotypes led to a lot of backlash against the emo subculture, and consequently caused Panic! at the Disco and My Chemical Romance to deny being emo. This negative reputation and the eventual migration from MySpace to Facebook spelled the end for the emo subculture in its original form.

Luckily for emo, before the end of its original run, it had already evolved into a new subculture, known as scene. Scene saw emo expand its musical repertoire to include metal, crunk, electronic, indie rock, emo pop, and pop-punk, taking a detour away from emotional emphasis while still leaning towards rock influences. Fashion-wise, scene took the core of emo fashion and added more color and accessorization to it. Unfortunately, the popularity that emo found in its new life as scene wouldn't last much longer. By the late 2010s, scene began losing its popularity and eventually faded away completely.

However, scene wasn't the end of emo's evolution; the two would further evolve into a new subculture. E-kids, the collective term for e-boys and e-girls, are the most recent iteration of emo. The e-kid subculture started in 2018, and quickly rose to popularity following the worldwide release of TikTok in the same year. Unlike scene, the e-kid subculture continued to pull away from its rock-based roots while also returning to the emotional emphasis of the emo genre. E-kids are strongly associated with "sad boy" music, which is music that focuses on sadness and mental illness, such as emo rap.

With e-kids being the most recent iteration of emo, their success in infiltrating pop culture raises an important question: why are e-kids so popular while emo and scene kids weren't? E-kids beat out emo and scene in popularity due to various reasons, such as being associated with more mainstream music and more fashionable styling, making the subculture easier to get into. This new iteration of emo is primarily known for fashion and thirst traps, which are popular both within the subculture and on TikTok's platform as a whole. On top of this, e-kid fashion draws considerable influence from K-pop fashion, which started becoming mainstream in western media around the same time. Simply put, a lot of the subculture's popularity comes down to the timing of its emergence and its fresh spin on what its predecessors left it with.

SOC 2025: The Coming SOC Evolution – Security Boulevard

Posted under: Research and Analysis

It's brutal running a security operations center (SOC) today. The attack surface continues to expand, in a lot of cases exponentially, as data moves to SaaS, applications move to containers, and the infrastructure moves to the cloud. The tools used by the SOC analysts are improving, but not fast enough. It seems adversaries remain one (or more) steps ahead. There aren't enough people to get the job done. Those that you can hire typically need a lot of training, and retaining them continues to be problematic. As soon as they are decent, they head off to their next gig for a huge bump in pay.

At the same time, security is under the spotlight like never before. Remember the old days when no one knew about security? Those days are long gone, and they arent coming back. Thus, many organizations embrace managed services for detection and response, mostly because they have to.

Something has to change. Actually, a lot has to change. That's what this series, entitled SOC 2025, is about. How can we evolve the SOC over the next few years to address the challenges of dealing with today's security issues, across the expanded attack surface, with far fewer skilled people, while positioning for tomorrow?

We want to thank Splunk (you may have heard of them) for agreeing to be the preliminary licensee for the research. That means when we finish up the research and assemble it as a paper, they will have an opportunity to license it. Or not. There are no commitments until the paper is done, in accordance with our Totally Transparent Research methodology.

There tend to be two main use cases for the SOC: detecting, investigating, and remediating attacks, and substantiating the controls for audit/compliance purposes. We are not going to cover the compliance use case in this series. Not because it isn't important; audits are still a thing, and audit preparation should still be done in as efficient and effective a manner as possible. But in this series, we're tackling the evolution of the Security OPERATIONS Center, so we're going to focus on the detection, investigation, and remediation aspects of the SOC's job.

You can't say (for most organizations anyway) there hasn't been significant investment in security tooling over the past five years. Or ten years. Whatever your timeframe, security budgets have increased dramatically. Of course, there was no choice given the expansion of the attack surface and the complexity of the technology environment. But if the finance people objectively look at the spending on security, they can (and should) ask some tough questions about the value the organization receives from those significant investments.

And there is the rub. We, as security professionals, know that there is no 100% security. That no matter how much you spend, you can (and will) be breached. We can throw out platitudes about reducing the dwell time or make the case that the attack would have been much worse without the investment. And you're probably right. But as my driver's education teacher told me over 35 years ago, you may be right, but you'll still be dead.

What we haven't done very well is manage to Security Outcomes and communicate the achievements. What do we need the outcome to be for our security efforts? Our mindset needs to shift from activity to outcomes. So what is the outcome we need from the SOC? We need to find and fix security issues before data loss. That means we have to sharpen our detection capabilities and dramatically improve and streamline our operational motions. There is no prize for finding all the vulnerabilities, just as there are no penalties for missing them. The SOC needs to master detecting, investigating, and turning that information into effective remediation before data is lost.

Once we've gotten our arms around the mindset shift to focusing on security outcomes, we can focus on the how. How is the SOC going to get better at detecting, investigating, and remediating attacks? That's where better tooling comes into play. The good news is that SOC tools are much better than even five years ago. Innovations like improved analytics and security automation give SOCs far better capabilities. But only if the SOC uses them.

What SOC leader in their right mind wouldn't take advantage of these new capabilities? In concept, they all would and should. In reality, far too many haven't and can't. The problem is one of culture and evolution. The security team can handle detection and even investigation. But remediation is a cross-functional effort. And what do security outcomes depend on? You guessed it: remediation. So at its root, security is a team sport, and the SOC is one part of the team.

This means addressing security issues needs to fit into the operational motions of the rest of the organization. The SOC can and should automate where possible, especially the things within their control. But most automation requires buy-in from the other operational teams. Ultimately, if the information doesn't consistently and effectively turn into action, the SOC fails in its mission.

In this series, we will deal with both internal and external evolution. We'll start by turning inward and spending time understanding the evolution of how the SOC collects security telemetry from both internal and external sources. Given the sheer number of new data sources that must be considered (IaaS, PaaS, SaaS, containers, DevOps, etc.), making sure the right data is aggregated is the first step in the battle.

Next, we'll tackle detection and analytics since that is the lifeblood of the SOC. Again, you get no points for detecting things, but you've got no chance of achieving desired security outcomes if you miss attacks. The analytics area is where the most innovation has happened over the past few years, so we'll dig into some use cases and help you understand how frameworks like ATT&CK and buzzy marketing terms like eXtended Detection and Response (XDR) should influence your SOC plans.
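
As a loose illustration of how a framework like ATT&CK can shape those plans, the sketch below maps a handful of hypothetical detection rules to published ATT&CK technique IDs and reports the coverage gap against a planned set. The rule names and the planned list are assumptions; only the technique IDs are real ATT&CK identifiers.

```python
# Hypothetical ATT&CK coverage check: which planned techniques do current detections cover?
DETECTIONS = {
    "possible_brute_force": "T1110",       # Brute Force
    "suspicious_powershell": "T1059.001",  # Command and Scripting Interpreter: PowerShell
    "odd_hours_code_access": "T1078",      # Valid Accounts
}

PLANNED = {"T1110", "T1059.001", "T1078", "T1566"}  # T1566: Phishing

covered = set(DETECTIONS.values())
gaps = sorted(PLANNED - covered)
print(f"Covered {len(covered & PLANNED)}/{len(PLANNED)} planned techniques; gaps: {gaps}")
```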

Finally, we'll wrap up the series by taking the what (accurate detections) and turning it into the how (effective remediation), resulting in positive security outcomes. Operationalizing is a key concept in that context. So buckle up and come along on the SOC evolution ride as we define SOC 2025.

Mike Rothman


Kona bound? Rudy Von Berg on the evolution of long-course racing – Tri247.com

In part one of our interview with Rudy Von Berg, we kicked off by looking back at the 2019 IRONMAN 70.3 World Championship in Nice, an event he described as a "dream race."

A venue which means a lot to him, being brought up riding on the hills and roads around the Alpes-Maritimes, it's probably no surprise to find out that IRONMAN France this year, scheduled for June 26, will be the venue for his full-distance debut.

In part two, we turn our attentions to those longer distances and what could lead to the IRONMAN World Championship in Hawaii. While Nice could open the door to Kona, it's not a given just yet that it would be an automatically accepted invitation.

As Von Berg explains, the landscape is changing for professional athletes.

The overall idea was that I didn't want to start doing Ironman races too early. I wanted to develop at 70.3, and reach my potential, and even though I don't think I've reached my very best at 70.3 yet, I'll be in the year of turning 29 and so that feels like it's old enough that I can start doing Ironman.

I always wanted to do France, for the reasons I mentioned earlier, growing up there. I was going to do it last year, but due to COVID it was postponed to a week before or after St George (70.3 World Championship), so that was out, and I didn't want to scramble to find another Ironman and so I thought I'll just do it in 2022 and properly prepare for it.

The only thing is I hope I didn't lose too much shape kind of medium term with my mono, because I lost quite a bit of muscle in my legs when I was sick. I lost a lot of weight. I just hope that didn't set me back too much, especially for an Ironman, when it's really the years of training before that count. That's my only question mark. But I'll put in six good months of training now, and hopefully be at my very best.

The typical assumption is that if you earn a Kona slot (for October 2022), you take it. The IRONMAN World Championship is part of the Von Berg family DNA (his father, Rodolphe senior, has been a Kona Age-Group World Champion himself), but the decision on whether junior will be there this October isn't clear yet. Would he take a Kona slot, if France goes well?

Likely, but the problem is that the calendar is quite difficult. Up to Nice I will probably do two half races before the IRONMAN. Then a month later there's the PTO Canadian Open, then a month later the Collins Cup, then a month later the PTO US Open, and then there's the two World Champs in October.

It's not possible to be your best at all of those, not even three of those. Usually I can peak in June and peak again in September and then be close to peaking for the last race in November or something. But that's going to be tough.

Given that Von Berg's earliest triathlon memory is watching his father racing in Kona almost 25 years ago, the pull towards the Big Island is strong. It's clear this won't be an easy decision either way:

So, I haven't decided yet exactly what's going to happen for that. If I qualify for Kona, I'll see what I want to focus on. I can't not do these PTO Tour races because these are the type of races that we've been waiting for as Pros for many years; some big prize money races, something like Regional Champs where all the best athletes will be at for many years to come, rather than at diluted races usually.

So, the short answer is we'll have to see as it's kind of tough. I don't want to be average at Kona and the 70.3 St. George World Champs, I want to be really good at one of the two.

I've been thinking about Kona for so long that if I qualify it would be kind of dumb not to do it, but also I have to think about my career in the big picture. There's still time to focus on the 70.3 Worlds for example and then try to go for Kona the next year, but then also a career goes by fast and when you have opportunities, they won't always be next year.

Results in Nice, of course, will determine whether those considerations need to be resolved. For this year, at least.

As well as new events creating decisions for athletes to make, and perhaps a "choose your battles wisely" situation, the PTO Tour could also impact the distance focus of an athlete's career. IRONMAN France will represent a full-distance debut for Rudy, but not necessarily the beginnings of an all-in move towards that seven-hour-plus format:

Things are changing a little with the PTO Tour races for example. Out of the four PTO Tour races [Ed. The European Open and Asian Open will be added in 2023], there will be three 100km races and one 200km race, so pretty much three halfs and one Ironman. So, the focus for that is a little more on the shorter distance, so I don't think I will ever go to just be a full Ironman athlete. I'll definitely still want to perform really well at half distance, and so I think I'll max do two Ironman races per year and then there's still room to do really well at half with that.

The PTO is kind of changing that, in a good way, because I think the 70.3 is a really good distance and makes it a good mix of the endurance and the speed.

We love to race. I like to be more of a Frodeno type where I want to prepare and do a race only if I'm going to be really good at it. I'll race slightly less, but I still always have that urge to add races into the calendar. It's just my rational part that says that's a little too much. We love the process of training, but the only reason we do it is because of racing.

It's long been a part of their mission, and was reiterated in our discussions with the PTO's CEO Sam Renouf before Christmas: the best to race the best. That aim is in line with the direction that Rudy sees the sport moving, talking us through his potential 2022 schedule as an example:

I think more and more now it's going to be championship-type races, because even some of those Regional Championship races I did, they didn't have quite the fields that the PTO Tours will have, which is literally 40 of the top 50 guys in the world. It's going to be world champ events every time.

For me it's only going to be the big races. I mean Oceanside 70.3 (April) in North America is the first big race of the year, then Chattanooga 70.3, North American Champs in May, then IRONMAN France.

That might have actually a slightly weaker field maybe, IRONMAN France, even though it's a race that's more and more on the map and I wouldn't be surprised if a Norwegian goes, or some top guys like that, or a Cam Wurf type.

After that it's just all World Champs events: two PTO Tours, Collins Cup, Kona if qualified and 70.3 Worlds in St. George. That's why I was saying that you really want to be at your best in these events. If you are just at 90% then you are going to be 15th.

Something we've certainly referenced many times over the last two years is the impact of the pandemic. With limited racing opportunities, those events that have gone ahead have regularly featured pro fields with notable depth. While that, perhaps, indirectly gave a glimpse of the future, Von Berg is clear where the driver of change will continue to come from:

That's true, COVID definitely created that a little, but I think the bigger reason now and moving forward is the PTO for sure.

Creating these big events and that 100km distance, which is as short of a long distance as they could for TV, and putting these million-dollar prize purses up. I think this is what is going to really develop the sport of triathlon professionally, and just kind of like in tennis, it's a familiar notion to have these grand slam / regional champs type of events, and the PTO is going to focus on these main races plus the Collins Cup and develop that.

Hopefully that PTO Tour Series will become a really interesting series for triathlon and fans of triathlon.

A Stellar Merger’s Astrophysical Evolution in the Blink of an Eye – SciTechDaily

SOFIA FORCAST measurements (orange) of the V838 Mon spectrum, and the best-fit composite model of SOFIA data with a silicate-to-alumina ratio of 50:50 (yellow), overlaid atop an image of V838 Mon obtained by the Hubble Space Telescope, which shows the light echo illuminating circumstellar material. Credit: V838 Mon: ESA/Hubble & NASA; Spectra: Woodward et al.

Everything we see in the universe is a snapshot of the past: As light takes its time to reach our telescopes, the system we're observing continues to evolve, and what we end up seeing is a moment in its history. By revisiting an object over the course of decades, we can look not only into its past, but can watch its history unfold.

Eleven years after it was last observed and 17 years after a stellar merger occurred, SOFIA looked at V838 Monocerotis, or V838 Mon, a binary star system about 19 thousand light-years away from Earth, capturing a snapshot in time of its makeup. This confirmed that the dust chemistry of the system has changed significantly over the course of nearly two decades following the merger, particularly over the past decade. This provided a history we otherwise cannot look at and offered an archaeological view of its evolution.

Because V838 Mon is quite bright and can saturate other telescopes, SOFIA is the only observatory capable of observing it at the infrared wavelengths required to monitor this dust process. The researchers used SOFIA's FORCAST camera, which allows for low-resolution spectroscopy and deep imaging of bright objects.

It's very rare to see this progression of dust transformation in objects that is predicted to happen, said Charles Woodward, astrophysicist at the University of Minnesota and lead author on the paper describing the observation. To catch one is pretty cool.

An Armstrong F/A-18 flying safety and photo chase for NASA's SOFIA 747. Credit: NASA / Jim Ross

Material expelled as a result of a merger may provide hints about how our own early solar system evolved. Understanding how dust condensation occurs from material originally in a hot gas phase is related to how rocky planets, like Earth, form out of the gas and debris that surround young stars.

It's these small, micron-sized pieces of material that eventually build into planets like the one we sit on, Woodward said.

In environments like this that are conducive to forming dust, the way that the different materials are incorporated and condense affects the geology of the final product. This is especially true when aluminum, which is very chemically active and can quickly deplete its surrounding oxygen, is involved. In V838 Mon, the chemical composition of the dust has changed from being composed primarily of alumina components in 2008 to being dominated by silicates, as the alumina bond with their oxygen neighbors. Notably, this progression can be seen in real time.

If we look at theoretical condensation sequences for how this is supposed to work, this is an example of us being able to test those hypotheses, Woodward said.

While most astronomical events occur on a timescale of millions of years, this is one example of human-timescale astronomy, reminding us that immense changes can occur in a very short period of time.

Often when people think about astronomy, things are in stasis and they take millions and billions of years to occur. This was in the blink of an eye that the source went through evolution, Woodward said. Certain astrophysical phenomena are really dynamic.

Reference: "The Infrared Evolution of Dust in V838 Monocerotis" by C. E. Woodward, A. Evans, D. P. K. Banerjee, T. Liimets, A. A. Djupvik, S. Starrfield, G. C. Clayton, S. P. S. Eyres, R. D. Gehrz and R. M. Wagner, 7 October 2021, The Astronomical Journal. DOI: 10.3847/1538-3881/ac1f1e

SOFIA is a joint project of NASA and the German Space Agency at DLR. DLR provides the telescope, scheduled aircraft maintenance, and other support for the mission. NASA's Ames Research Center in California's Silicon Valley manages the SOFIA program, science, and mission operations in cooperation with the Universities Space Research Association, headquartered in Columbia, Maryland, and the German SOFIA Institute at the University of Stuttgart. The aircraft is maintained and operated by NASA's Armstrong Flight Research Center Building 703, in Palmdale, California.

Usman: Ngannou Showed The Evolution of Heavyweights At UFC 270 – MMA News

UFC Welterweight Champion Kamaru Usman has praised heavyweight king Francis Ngannou for his adaptability at UFC 270, branding him the evolution of the heavyweights.

At the opening pay-per-view of 2022 this past weekend, Ngannou returned to defend his title for the first time since winning it at UFC 260 last March. Ahead of his unification showdown with former teammate Gane, a lot was being made about his future, preparation, and mindset.

Would his ongoing contractual dispute with the UFC affect his performance? Would his desire for a crossover to boxing distract him from the threat of Bon Gamin? Would Gane's technical style and fast movement nullify his power? Was a knockout his only path to victory?

When the iconic voice of Bruce Buffer called out "and still" after 25 minutes of action, Ngannou had successfully answered all of those questions.

After struggling on the feet for the opening two rounds, it appeared The Predator was on his way to a first defeat since 2018 and a potential departure from the promotion. But in the third frame, a momentous takedown changed the game.

After seeing the control he could employ on the ground, the UFC's hardest-hitting knockout artist put his grappling improvements on full display, earning the nickname "Francis Ngannoumedov" from some fans with the performance.

One man who had a front-row seat for Ngannou's impressive strategy towards the end of the UFC 270 main event, and who knows a thing or two about wrestling, was reigning welterweight king Usman.

Speaking to BT Sport in the aftermath of his fellow African champ's victory, The Nigerian Nightmare described Ngannou as the evolution of the heavyweights and suggested even he doesn't perform the sweep The Predator employed while on his back in the fifth and final frame.

Francis, that's the thing about him, he's one of those special athletes that he takes everything as it comes, said Usman. He was gonna be able to deal with whatever was coming at him. He didn't initially engage in the clinch or the wrestling the first round. That came from Gane, which I thought was an excellent game plan.

But we're just seeing the evolution of heavyweights. I mean, did you see that sweep in the fifth? I mean, damn. Even I don't do that one. So you're seeing the evolution of the game, and Francis is a scary man.

While an Ngannou prediction was hardly left field prior to UFC 270, the manner in which he defeated the previously unbeaten Gane was one that not many, if anybody, had seen coming.

With a clearly developed ground game to go along with the immense KO power that has left the likes of Jairzinho Rozenstruik and Stipe Miocic unconscious, the champion is a scary prospect for the rest of the division, if he remains in the promotion beyond 2022, that is.

What did you make of Francis Ngannou's performance at UFC 270?

The Evolution of Skate Videos, From VHS to TikTok – VICE

This article originally appeared on VICE Belgium.

When it comes to skateboarding, the only thing more important than actually going skating is making sure that you have footage of you doing it. You can tell people you've pulled off this, or jumped that, but without actual evidence of those particular alleged achievements, people will take you as seriously as Boris Johnson's apologies.

Skating owes much of its enduring popularity precisely to these videos. This has been the case for the past half a century, with the first ever skateboarding video dating back to 1965. Titled Skaterdater, this dialogue-free, coming-of-age short film shot in sunny California focused on a group of downhill skaters known as the Imperial Skate Board Club as they hoped to impress local girls with their prowess.

The film won the Palme d'Or for Best Short Film at the 1966 Cannes Film Festival and has proved to have a long shelf-life, having been the subject of both academic study and extreme sports fandom. Skaterdater is still of cultural interest, even if it presents us with a vision of skate videos that looks nothing like the ones that aficionados like myself and my friends sit down and enjoy together today.

As skateboarding became increasingly popular amongst young people the world over, Hollywood cottoned on to the fact, featuring skating in cult movies like Back to the Future and Gleaming the Cube. This was, as skate historians might remind you, a moment when the sport was still largely confined to pools, bowls, and ramps. The Californian surf-inspired skating scene of the 1970s was immortalised for younger skaters in the 2001 documentary Dogtown and Z-Boys, directed by skate supremo Stacy Peralta.

That's not to say that skating was the sole preserve of pool-plunging ex-surfers. By the mid-80s, the likes of Rodney Mullen and Mark Gonzales were laying the foundations of what we now know as street skating. They just weren't dragging cameramen along with them for the ride.

In 1988, the movie Shackle Me Not came out. This hour-long video was released by H-Street, a skate team based in San Diego, California, founded by pro skateboarders Tony Mag and Mike Ternasky in 1986, and arguably marked the birth of the modern skate video. Its combination of gritty footage and a punk soundtrack set the tone for the avalanche of skate videos that were to follow in its wake.

These releases included the legendary Video Days, the 1991 classic released by the skateboarding brand Blind. Directed by future Hollywood darling Spike Jonze, and featuring the skating talents of Jason Lee amongst others, Video Days was dynamic, action-oriented, and in your face. Other big hitters in the heady days of the 1990s include Plan B's Questionable (1992), Girl Skateboards' Mouse (1996) and Toy Machine's Welcome to Hell (also 1996).

The thing that made this explosion of skate videos possible was Sony's era-defining VX1000 camcorder, the first device to use MiniDV tapes, which were much smaller than previous tapes. The camcorder's relative affordability, portability and ease of use made it an essential on the skate scene and led to a standardisation of a skate video aesthetic. One of the defining visual characteristics of that aesthetic is the fish-eye lens, which shows up everywhere in skate videos of the 1990s and 2000s and still features in tapes released today.

Another technological advance, the internet, has allowed skaters to delve into the history of their hobby. File-sharing services like Limewire gave people the opportunity to fill their hard drives with all manner of skate footage. The ability to easily and freely consume those videos allowed people like me and my friends to develop a serious interest in skate culture.

Then, YouTube came along in the latter half of the 2000s and killed off the skate DVD, which had already replaced skate VHS. Magazines like Thrasher and Transworld, which attempted to bolster sales by bundling DVDs with their latest issues, had to find new ways to stay relevant in a context where audiences didn't need to spend money for content.

Thrasher managed to drag themselves into the digital age of skateboarding pretty quickly, joining YouTube back in 2006 and amassing nearly three million subscribers along the way. They're also about to celebrate the publication of their 500th issue, a testament to their work and to their fans' eagerness to preserve their culture, even for a price.

If YouTube shortened skateboarders' attention spans, then Instagram decimated them, ushering in the era of the minute-long video. Suddenly, videos that were 10 or 20 minutes long were considered excessive, and while this led to a proliferation of free content, something was lost, too. Inviting friends over, grabbing a pizza and settling down on the sofa to watch a 15-minute video always felt a little lacking in the old magic.

Things got even shorter when TikTok launched in 2016. The super-short videos hosted on the platform opened up the skate scene to more people than ever, with skater girls, queer and non-binary skaters finally finding their online home.

For skaters themselves, the rise of social media opened up new avenues for self-promotion. It was now easier than ever to try and catch the eye of professional skate teams. Rather than having to mail out physical evidence of one's abilities, you could just upload it for the whole world in a matter of minutes. This gave skaters a sense of independence, putting (some of) the power in their hands. In addition to their finely-honed array of tricks, skaters increasingly learned how best to get eyes on their videos, understanding that big brands are impressed by viewers, followers, and subscribers.

Not everyone abandoned the traditional video formats, and by the mid-2010s things were getting longer once again. Brands like Palace and Bronze 56K extended the length of their releases, giving the worlds skaters something to really sink their teeth into on the sofa in the evening.

If the modern skate film has a superstar director, it might well be William Strobeck, the American cinematographer best known for his work with Supreme. The 2014 film Cherry marked a return to the old school full-length video format, and he did it again in 2018 with the instant classic Blessed. Both films have made a mark on a new generation of skaters who never knew they were looking for a long-form video to change their lives.

The evolution of Remco Evenepoel: ‘He has learned he cannot take five steps forward in a row forever’ – Cycling Weekly

If Remco Evenepoel completes a stage race, he wins it.

At least that is how it works if his staggering form of the last two seasons is a correct indication of how a race will unfold.

In the last six stage races that the Belgian superstar has started and finished, he has topped the general classification in them all, picking up eight stage wins en route.

Yet for a man blessed with such extraordinary talent and self-belief, there is already an asterisk hovering over his results: he has yet to do it in a Grand Tour or a truly big stage race, such as the Critérium du Dauphiné.

He made his three-week race debut at last spring's Giro d'Italia, sitting second on GC for a period of time until stage 11, before eventually withdrawing before stage 18. This season he will target the Vuelta a España as he seeks to prove that he can transform one-week dominance into three-week superiority.

The season that just passed proved one of maturation for Evenepoel, who turns 22 on January 25. His QuickStep-AlphaVinyl sports director Tom Steels told Cycling Weekly: I think last year for Remco, and for everybody else too I think, was a good year in the sense of learning that not everything comes easy.

He is the biggest of talents, but they all have to be prepared that they cannot take five steps forward in a row forever.

Despite his tender years and only having completed three professional seasons, Evenepoel has grown into a natural leader, a fierce winner who demands nothing less than the best from himself and others around him.

He has also caused controversy with other riders, notably his compatriot Wout van Aert, who publicly voiced his disappointment after Evenepoel questioned Belgium's tactics at the World Championships in September.

Steels acknowledged his young rider's temperament but views it as a positive. That winning mentality I see as an advantage, he continued.

We all know the guys who really cannot stand losing after a race are quite outspoken, but I must say I always see it as a quality.

Of course, you have to manage it after a race as frustration itself comes from losing, but it also means you gave everything to win the race. That's the balance you have to find, although it's not easy.

With Remco, if he gets frustrated with another rider it can be headline news. You have to manage that so it's not a real problem.

After a race on the bus, sometimes you're wondering how the windows are still in because the tension can get so high.

Evenepoel will begin his season at the Volta a Valenciana, having enjoyed a full block of uninterrupted winter training, something he was deprived of last year thanks to a slower than expected recovery from a crash he sustained at Il Lombardia that resulted in a fractured pelvis.

At the end of the year, we saw once again the Remco we wanted to see, Steels added. The way he rode the Europeans and the Worlds, but we also saw that at the end of the season his basic condition was not at the best. It was a difficult year for him.

He has trained this winter without problems and is by far in a better place than last year, even two years ago.

This year needs to be one of procurement evolution – New Civil Engineer

This is a year of opportunity for contractors working in the built environment. Regenerative infrastructure projects including innovations in highways, rail and flood defences will play a vital role in creating stronger local economies in a post-pandemic world. But as well as being a year of opportunity for contractors, it should be seen as an opportunity to evolve procurement.

Mark Robinson is group chief executive at leading procurement authority Scape

As part of the UK's recovery efforts, government departments and local authorities have been tasked with delivering such high-quality projects at speed in order to drive better outcomes for communities across the UK.

Yet we still live in turbulent times. The industry has had every challenge thrown at it over the last two years and while the end of the Covid pandemic might be in sight, the ongoing squeeze on the supply of labour and concerns around inflation and supply chain disruption are likely to affect the speed at which future projects get off the ground.

While many contractors remain upbeat about the outlook ahead, changes within the governance of public sector procurement have paved the way for a transformation in the way in which these future projects will be delivered, and, more importantly, what is now expected from civil delivery partners.

Three recent developments will reshape public procurement in the UK. All civils projects in 2022 must follow the principles of the Construction Playbook, the government's blueprint for best practice, and take into account the findings of the independent Cabinet Office construction frameworks review, led by former King's College London director of construction law David Mosey. Both developments are spearheaded by the Procurement Bill, which will appear before parliament this year to underline the reforms to public procurement regulations.

When taxpayers' money is concerned, procurement must be best-in-class. These legislative and policy developments have not only set a new gold standard for public sector clients, framework providers and contractors but will go a long way in helping to deliver projects with strong social value and green credentials.

As one of the organisations consulted on how we should define this gold standard, I welcome the findings of the Mosey review and the long-term direction that government is taking to drive value, whether social, environmental or economic.

These plans will be further shaped by the 24 recommendations set out by the review, which include extensive support and accountability in relation to helping the public estate achieve net zero status, generating social value, stimulating innovation through modern methods of construction, minimising or eradicating waste, connecting supply chains and ensuring that they are treated fairly. Critically, Mosey calls for contractually binding action plans around these objectives, something that, again, many have long been implementing.

Ultimately, we need to see greater consistency in the outcomes created by publicly funded civil engineering frameworks. As the Mosey Review highlights, bid costs are no small undertaking for contractors, so it is vital that framework providers offer robust support to those securing places on them, as well as the supply chain.

The best outcomes can be achieved where there is active management of frameworks to produce tangible outcomes. We have in-built standards that ensure a constant focus on value in all its varied forms, and every £1M spent on our frameworks generates £300,000 of social value for the local community. This can only be done with a programme of early engagement, which we enable with our direct award approach and local supply chain delivery.

Where we as procurement specialists, and those using our services, should take heart is in the component parts that the Mosey Review expects the gold standard to be made of. Indeed, I would go as far as to argue that many frameworks are already meeting or exceeding these standards. The review's recommendations to set standards will raise the bar across the sector, while driving further innovation among those already operating at or beyond them.

The key now is to take these pockets of innovation and turn them into business as usual. If more projects and contractors can adopt best-practice behaviours and processes, then there's no reason we can't deliver on the UK's infrastructure needs in the coming years.

We are already seeing some significant innovation from our own civil delivery partners from the Scape Civil Engineering framework, using technology, creativity and a real commitment to improve the design, delivery and whole life cost performance of our national infrastructure. In most parts, the Whitehall recommendations seek to rubber-stamp a way forward where many in the industry were already leading by example.

In terms of the opportunities ahead, Scape is preparing to open bidding in February for spots on its next generation civil engineering framework. The re-procurement is a suite worth £4bn, including a £3.25bn framework for England, Wales and Northern Ireland and a separate £750M framework for Scotland, managed and operated by Scape Scotland.

Scape's existing frameworks, both secured by Balfour Beatty, have delivered more than 250 projects to date for public sector clients and are due to expire in January 2023.

Cliff’s Edge — The Evolution of the Pomegranate – Adventist Review

At every meal, especially breakfast, the absurdity, the outrageous absurdity of evolution becomes frighteningly obvious. Take the humble pomegranate. It evolved? How? Did a single pomegranate seed evolve first? If so, starting as some early life form, how could a seed, containing the concept of a pomegranate tree, along with the contents to grow one, have been formed, step by step, with no direction imposed on it?

Or instead of the seed, did the pomegranate itself, a single pomegranate, evolve first? But how could a pomegranate with skin, seeds, and fruit on the seeds come into existence through a long, slow process of evolution? How many endless proto-pomegranates sitting on the ground (where else?) over millions of years came and went until one, finally, became a functioning and edible pomegranate (seeds, skin, and fruit together)?

Or maybe the pomegranate tree began it all? But what evolved first: the roots, the trunk, the branches, the leaf, or the pomegranate itself with seeds within it? Or did they all start evolving at once: a partial root, a partial trunk, a partial branch, a partial leaf, and a partial pomegranate with partial seeds until, finally, after millions of inchoate and evolving proto-pomegranate trunks and roots and leaves and seeds arising, dying, rotting, one, the fittest, survived into the first full-fledged pomegranate tree, the progenitor of all other pomegranates? (How, though, does the nutritive value of the pomegranate, along with its appealing taste, smell, and texture, fit in with this survival-of-the-fittest story, anyway? Would not an uglier, unhealthier, and more tasteless pomegranate add to its survivability?)

Also, where did the idea of a pomegranate, or a pomegranate seed, or a pomegranate tree come from to begin with? In evolutionary theory, there was never an idea of anything pomegranatey at all. Just wait long enough and, sooner or later, thanks to random mutation and natural selection, a pomegranate tree (seeds, trunk, leaves, root and fruit) will just happen. That's, at least, the narrative.

Evolutionists who want a Christian spin on creation would answer, of course, that Jesus, the Creator (see John 1:1-3), did it.

OK. But how?

Did Jesus first put the idea of a pomegranate seed in some very early life form, and then let that life form over millions of years (with a divine tweak every now and then) evolve into a pomegranate seed, which spawned the first pomegranate tree?

Or did He put into this early life the idea of a pomegranate and then said, And let it evolve into a pomegranate, from whose seeds the tree, bearing its own seed, will come. And (millions of years later) it was so?

Or did Jesus put the idea of a pomegranate tree into that early life form first? And, then simply let nature take its course until, eons later, the first pomegranate tree emerged?

However Jesus supposedly did it, evolution still demands millions of years of pre-pomegranate seeds, pre-pomegranate trees, and pre-pomegranates themselves fading in and out, step by step, until (again, maybe with fine-tuning) the first pomegranate tree, with seeds, leaves, trunk, branches and pomegranates, finally arrived as a functioning and reproducing whole.

What other options are there? Evolutionary biologists tell us that "Then God said, 'Let the earth bring forth grass, the herb that yields seed, and the fruit tree that yields fruit according to its kind, whose seed is in itself, on the earth'; and it was so" (Genesis 1:11) cannot be true. But the pomegranate is still here, and because it had to come from somewhere, I humbly ask, From where?

If any of the above scenarios are off, could someone, too enlightened to believe in Genesis 1:11, explain how the pomegranate evolved? And if they don't know how the pomegranate did, how about the blueberry, the avocado, the apple, the melon, the radish, the peach, the almond, the cherry, the tomato, or even the potato? How did any of these, or their first progenitor, step by step, slowly evolve into existence?

In stunning contrast, there is the six-day creation (Genesis 1-2), in which the love and power of God, tasted in every plant-based bite, reveals the wisdom of the world (1 Corinthians 3:19) as obviously, even outrageously, wrong.

Clifford Goldstein is the editor of Adult Bible Study Guides at the General Conference of Seventh-day Adventists, and a longtime columnist for Adventist Review.

Episode 138: National Food Recovery Evolution: MealConnect and Feeding America – waste360

In our latest episode of NothingWasted!, we bring you a dynamic session from WasteExpo Together Online 2021, "National Food Recovery Evolution: MealConnect and Feeding America." This session features speakers Justin Block, Managing Director of Digital Platform Technology, and Nathan Crone, Senior Account Manager of Agri Sourcing Partnerships, both at Feeding America, the largest domestic hunger-relief organization in the U.S. Dr. Stuart Buckner of Buckner Environmental Associates, LLC, served as moderator.

Here's a sneak peek into the presentation:

Block set the stage by talking about Feeding America's engagement in the issues of hunger and food waste. He noted that these are urgent threats, particularly with over $218 billion worth of food being thrown away each year. He also pointed out that 275 U.S. counties have food insecurity rates over 20%. The Feeding America approach and process relies on a wide array of food donors, its network of more than 200 certified member food banks, and the agencies that utilize the food in helping to feed the 37 million hungry Americans.

Block went on to talk about MealConnect, the first nationally available food-donation app. It's free to use and allows organizations to post donations; trucks can also reroute rejected loads to food banks. Since its launch in 2014, the app has facilitated the rescuing of 2.9 billion pounds of food, which has helped 10,200 hunger relief organizations. Feeding America is continuing to expand MealConnect as well as access better food-waste data through it. Its 2.0 release is coming soon, which will enable users to better find the produce they need, better match supply and demand, and will feature a mobile, more user-friendly design.

Crone went on to elaborate on the emphasis on produce. On top of the fact that billions of pounds of produce go to waste each year, there is of course great nutritional value in such, and providing food-insecure people with produce encourages more balanced diets and helps to stretch food budgets. He also outlined some of the key challenges his team is working on.

To wrap things up, Block showed off some of the marketing materials related to MealConnect's new campaign.

After the presentation, Buckner posed questions including, "How can consumers help reduce food waste?" Block encouraged becoming champions and advocates and encouraging businesses to be more mindful and intentional about unsalable product. Taking that one step further, consumers can visit FeedingAmerica.org's Food Bank Finder and help local food businesses further up the supply chain (packers, distributors, wholesalers); they may not even realize they can donate their extra food. So if you can help make the initial connection, just letting them know that a food bank is in the community, it is a big help.

#NothingWastedPodcast

Lionel Messi’s evolution at PSG: After 20 years, is he finally learning not to do it all himself? – ESPN

Do you remember Lionel Messi? Little guy, used to have a bowl cut? Started off by dribbling past everyone, but getting fouled all the time? Then he turned into the best passer in the world and also the best goal scorer? Finally cut his hair, but bleached it blonde? Rinsed the peroxide out and grew a beard? Seemed like he'd washed out that crisis, only to then tattoo his entire leg in black ink? Won the Ballon d'Or seven times? Ever heard of him?

He used to be everywhere, always. To watch soccer for the past 15 years was to try to pay attention to someone else on a given Saturday or Sunday, only for Messi to remind you that you were wasting your time not watching him.

Enjoying that Bayern Munich match? Hey, I just dribbled through Getafe's entire team. Oh man, this Zlatan guy is pretty interesting, huh? You know what's interesting? I just scored 50 goals and tossed in 16 assists in 38 La Liga games. Whoa, is Manchester City ever going to lose a match? Uh, my team just took 20 shots in a game and I attempted or assisted every single one. All right, seems like it's time to enjoy some of this Erling Haaland business? I'm 32 years old and I'm going to put up a 20-20 goals and assists season just for fun.

Every month during the season, European Sports Media -- a group of 14 European magazines -- votes on a Team of the Month. There's an archive of their selections going all the way back to the 1995-96 season. Since then, Messi has been voted in 84 times. No other player has made more than 51 appearances (Cristiano Ronaldo). Put another way, over his 16 seasons with Barcelona, Messi was voted into the Team of the Month an absurd 60% of the time. For more than a decade-and-a-half, Messi's average month was better than everyone else's best. He's the only player who was more likely to be in the team than not.

Then, all of a sudden, he disappeared.


There's just ... nothing. No breathtaking runs, no physics-defying free kicks, and barely any goals. Twenty-one matches into his first Ligue 1 season for Paris Saint-Germain, the greatest soccer player of all time has scored one time. In the 2012-13 season, he'd already scored 33 goals at this point in the domestic campaign. In his last three years at Barcelona, he'd averaged 16 goals through the first 21 matches of the La Liga season. Let me repeat: Lionel Messi has scored ONE GOAL in Ligue 1 this season. Unsurprisingly, given that -- and shockingly, given everything else -- he's yet to be selected to the ESM Team of the Month this season. He's been so absent from the everyday rhythms of European soccer that some people actually got mad when he won the Ballon d'Or.

It sure feels like the beginning of the end -- or maybe it's the start of something new.

The rest is here:

Lionel Messi's evolution at PSG: After 20 years, is he finally learning not to do it all himself? - ESPN

Negative observational learning might play a limited role in the cultural evolution of technology | Scientific Reports – Nature.com

Read the original post:

Negative observational learning might play a limited role in the cultural evolution of technology | Scientific Reports - Nature.com

Is the 49ers evolution since Week 3 enough to beat the Packers? – Niners Nation

Sitting at 2-0, the 49ers hosted the Packers in their first home game with fans since the 2019 NFC Championship game. The 49ers paid homage to the 1994 team by donning the red throwback uniforms for the first time. It was a nationally televised game on Sunday Night Football.

The table was set for the 49ers, but they came out flat, fell behind 17-0, and their comeback fell short, as Aaron Rodgers and the Packers drove the field for a game-winning field goal in 37 seconds.

If you're a Packers fan, your natural inclination is to use that Packers win as evidence for why they will repeat that performance on Saturday. If you're a 49ers fan, you're hoping the outcome will be different during this weekend's NFC Divisional battle.

But how much have the 49ers evolved since that Week 3 loss, and are those differences between the teams enough for San Francisco to advance to the NFC Championship game?

The development of the 49ers' pass rush and their improved run defense

In Week 3, the 49ers generated seven pressures against Aaron Rodgers and only sacked the Packers' signal-caller once. In the last two weeks, San Francisco's defense has generated 27 pressures and sacked the opposing quarterback 10 times.

Arden Key played three snaps as an edge rusher in that first matchup, whereas now Key is rushing from the inside as an extremely valuable piece on this defensive line.

Samson Ebukam has really developed into a capable rusher off the edge, which simply wasn't the case early in the season as he was still adapting to the role. Arik Armstead played 27 snaps at defensive end in Week 3. Since Week 9, he's moved exclusively inside as a 3T and been dominating. He finished with a season-high six pressures last week vs. the Cowboys.

San Francisco's run defense hasn't been emphasized enough, but since Week 10, they have been the best in the NFL. Their rushing defense is No. 1 in the following categories: DVOA, EPA per play, Success Rate, and Explosive Runs allowed.

They'll face a strong rushing attack, as the Packers are No. 1 in rushing success rate on offense. Aaron Jones and A.J. Dillon will be a load to tackle in the freezing temperatures of Wisconsin, but the 49ers' improved defense should be up to the task.

Kyle Shanahan's mid-season discovery of the 49ers' offensive identity

The 49ers were a highly-efficient offense all season long, but they really discovered their offensive identity mid-season in Chicago. Early in the year, it felt like Shanahan was struggling to find a rhythm as a play-caller, balancing Jimmy Garoppolo and Trey Lance.

Halfway through 2021, Shanahan punted the Trey Lance package into the sun, moved Deebo Samuel into his wide-back position, and emphasized a run-heavy attack with Elijah Mitchell at the forefront.

Since Week 10, the 49ers' offense has taken off to a whole other level. Their offense is second in passing DVOA and sixth in rushing DVOA. The 49ers' offense also has the highest rate of explosive passing plays during this span. Shanahan's bunch is also fourth in EPA per play and sixth in success rate. All of the advanced metrics show that the 49ers have assembled a Top-5 offense (based on efficiency) ahead of the Packers matchup.

Samuel has come into his own as a true running back, Jauan Jennings has developed into a legitimate third-down threat, and Brandon Aiyuk has become the 49ers' best route runner. Not to mention George Kittle's duality as a receiving or blocking tight end depending on the matchup.

Green Bay's abysmal run defense

It doesn't make sense given their personnel, but all the advanced numbers show that the Packers' run defense is one of the worst units in football.

Since Week 10, the Packers' rushing defense is 27th in DVOA, 27th in EPA per play, 32nd in Success Rate, and 32nd in Explosive Run Plays allowed. They get gashed between every gap and haven't been able to contain opposing running backs.

The Browns provided the blueprint for how to attack this Packers rushing defense, gashing them for 219 yards on 25 carries (8.8 yards per attempt). That's similar to Raheem Mostert's box score from the 2019 NFC Title game.

It's clear how the 49ers are going to attack; it's just a matter of winning in the trenches and dominating the blocks up front for San Francisco. If they can control the line of scrimmage, they'll have success running the ball against this Green Bay front.

San Francisco's dominance in the Red Zone

It was pretty clear early on in the season that the 49ers' red-zone offense was dramatically improved this season. It's been an area of struggle the last few seasons under Kyle Shanahan for whatever reason. Between George Kittle, Deebo Samuel, and the emergence of Jauan Jennings, the 49ers have some legitimate red-zone threats that should keep defensive coordinators awake at night.

Shanahan's red-zone offense ranks No. 1 in the NFL this season at 67 percent, while the Packers' red-zone defense ranks No. 28. I think it's a significant advantage because every time the 49ers get into the red area, they'll look to punch it in for six. They've had success all year long doing it, and it seems like the Packers' defense has struggled to stop opponents.

Will this be a George Kittle game?

There was a three-week stretch where George Kittle reminded everyone in the National Football League who the most dominant tight end was. He had back-to-back games of at least 150 receiving yards, with three touchdowns, followed up by a 93-yard performance.

Kittle's dominance in the run game as a blocker is widely known, but he's been a force as a receiving threat whenever the 49ers have needed it, especially on the road.

The Packers have struggled to cover tight ends all season long. They're 28th in DVOA when covering opposing tight ends. Kittle caught seven passes for 93 yards in the first meeting this season and has generally had a ton of success against the Packers.

I'd expect Kittle to be a major factor over the middle in this game, especially as a big, easy target for quarterback Jimmy Garoppolo.

Prediction: Green Bay 31, San Francisco 27

I think the 49ers are the toughest matchup for anyone in the NFL right now. They play a brand of football that travels anywhere and is uncommon in this day and age. San Francisco's physical rushing defense and pass rush should wreak more havoc than they did in Week 3. Their rushing attack should have success against the Packers' front and be able to control this game.

The biggest questions for me heading into a game are the same as always.

It's been the same questions with this team all year long. They've generally been able to manage them in wins, and when they have lost, it's typically been because of one of these three things.

I think they match up very well with the Packers, and I can see them winning this game and advancing to the NFC Championship game. However, at the same time, I don't trust the 49ers' offense (especially their quarterback) to put together four quarters of high-level football on the road, and that's the difference in this game.

Read the original post:

Is the 49ers evolution since Week 3 enough to beat the Packers? - Niners Nation

January: dinosaur movement evolution | News and features – University of Bristol

New research led by the University of Bristol has revealed how giant 50-tonne sauropod dinosaurs, like Diplodocus, evolved from much smaller ancestors, like the wolf-sized Thecodontosaurus.

In a new study published today in the journal Royal Society Open Science, researchers present a reconstruction of the limb muscles of Thecodontosaurus, detailing the anatomy of the most important muscles involved in movement.

Thecodontosaurus was a small to medium sized two-legged dinosaur that roamed around what today is the United Kingdom during the Triassic period (around 205 million years ago).

This dinosaur was one of the first ever to be discovered and named by scientists, in 1836, but it still surprises scientists with new information about how the earliest dinosaurs lived and evolved.

Antonio Ballell, PhD student in Bristol's School of Earth Sciences and lead author of the study, said: "The University of Bristol houses a huge collection of beautifully preserved Thecodontosaurus fossils that were discovered around Bristol. The amazing thing about these fossilised bones is that many preserve the scars and rugosities that the limb musculature left on them with its attachment."

"These features are extremely valuable in scientific terms to infer the shape and direction of the limb muscles. Reconstructing muscles in extinct species requires this kind of exceptional preservation of fossils, but also a good understanding of the muscle anatomy of living, closely related species."

Antonio Ballell added: "In the case of dinosaurs, we have to look at modern crocodilians and birds, which form a group that we call archosaurs, meaning 'ruling reptiles'. Dinosaurs are extinct members of this lineage, and due to evolutionary resemblance, we can compare the muscle anatomy in crocodiles and birds and study the scars that they leave on bones to identify and reconstruct the position of those muscles in dinosaurs."

Professor Emily Rayfield, co-author of the study, said: "These kinds of muscular reconstructions are fundamental to understand functional aspects of the life of extinct organisms. We can use this information to simulate how these animals walked and ran with computational tools."

From the size and orientation of its limb muscles, the authors argue that Thecodontosaurus was quite agile and probably used its forelimbs to grasp objects instead of walking.

This contrasts with its later relatives, the giant sauropods, which partly achieved these huge body sizes by shifting to a quadrupedal posture. The muscular anatomy of Thecodontosaurus seems to indicate that key features of later sauropod-line dinosaurs had already evolved in this early species.

Professor Mike Benton, another co-author, said: "From an evolutionary perspective, our study adds more pieces to the puzzle of how the locomotion and posture changed during the evolution of dinosaurs and in the line to the giant sauropods."

"How were limb muscles modified in the evolution of multi-ton quadrupeds from tiny bipeds? Reconstructing the limb muscles of Thecodontosaurus gives us new information on the early stages of that important evolutionary transition."

This research was funded by the Natural Environment Research Council (NERC).

Paper:

Walking with early dinosaurs: appendicular myology of the Late Triassic sauropodomorph Thecodontosaurus antiquus by A. Ballell, E. J. Rayfield and M. J. Benton in Royal Society Open Science.

See the original post here:

January: dinosaur movement evolution | News and features - University of Bristol

Making Sense of the Interest Rate Evolution – Planadviser.com

News headlines in both financial services trade publications and national media outlets alike have homed in over the past several weeks on the topic of interest rates: where they have come from, where they stand now and what level rates may reach in the new year.

As often happens in such situations, PLANADVISER has received an extensive amount of written commentary from investment experts on the interrelated subjects of interest rates, inflation and economic growth. They offer viewpoints that seek to go beyond the headlines and illuminate the underlying market forces defining the day.

In the analysis of Brad McMillan, chief investment officer (CIO) for Commonwealth Financial Network, market watchers may be feeling an undue sense of panic about the current interest rate situation.

"The panic of the day is the news about interest rates," he writes. "The headlines state (correctly) that rates have moved up sharply in recent days. They state (correctly) that stocks have pulled back, noting this fact is due to that increase (which is possibly but not necessarily true). And they state (incorrectly, I believe) that higher rates are going to derail the economy and the markets, in that order."

McMillan says this narrative is pretty standard for this stage of the economic cycle.

"The economy is growing, so the Fed, more worried about inflation than employment, starts to raise interest rates," he notes. "Higher rates, mathematically, will mean slow growth and lower stock valuations. Cue the headlines. What is missing, as usual, is context."

In McMillan's view, the growing concerns about the recent rise in interest rates are based on a couple of assumptions. First is the assumption that the increase reflects a problem with the financial markets.

"Second, there is the thinking that current rates (from which we see the increase) are, by definition, correct, and the increase, therefore, represents a change from the correct rate levels," McMillan writes. "Both assumptions are wrong."

For context, McMillan looks at the past 10 years of interest rates for the 10-year Treasury note. The current yield is about 1.8%, up in recent days from around 1.5%. McMillan agrees with the broader headlines that this is a sharp and sizable increase.

"But this rate increase is dwarfed by the ones we saw in 2020 and 2021," he points out. "Neither of those increases derailed the recovery, despite the headlines at the time. And, looking back before the pandemic, the interest rates take us back only to the lower end of the range in the latter part of the last decade. In other words, the recent spike is simply reversing part of the drop during the pandemic, a drop caused by extreme fiscal and monetary policy actions."

Put another way, McMillan argues, rates right now are moving back to the lower end of the normal range for the past decade. He says this should give individual and institutional investors some solace amid the frightening headlines.

Comments sent from investment management firm Ninety One, penned by strategist Russell Silberston, sound a decidedly different note. Silberston argues investors may actually be underestimating how far interest rates will rise, meaning bond yields have much further to rise (and bond prices to fall) than commonly expected. He says his argument is based on some basic market history from the past 10 years.

"In December 2015, six years after the global financial crisis overwhelmed the global economy and caused interest rates around the world to be slashed, the U.S. Federal Reserve raised the target for its benchmark federal funds rate by 0.25% to 0.5%," Silberston recalls. "However, it then took a year for the tightening cycle to kick off in earnest, with another 25 basis point [bps] hike in December 2016, which, in turn, was followed by a series of 25-point hikes each calendar quarter that followed."

This took the Fed's overnight rate to 2.5% by December 2018, Silberston explains, and, within seven months, the Fed was forced to partially reverse some of this tightening, reducing its rate to 1.75% over the second half of 2019 as financial markets wobbled badly despite the economy performing well.

"With the Federal Reserve again on the verge of a tightening cycle, financial markets are replaying the post-crisis playbook and assuming the Fed is only going to be able to raise its rate to around 1.75%," Silberston says. "This is well short of any assessment of the economically neutral level of interest rates, as they will be stymied by the desire to shrink their balance sheet, too. Why, then, in the face of multi-decade highs in inflation, are markets so sanguine about the interest rate outlook? The answer lies in the Fed's balance sheet, and in particular the level of excess reserves placed there by commercial banks."

As Silberston observes, when a central bank undertakes quantitative easing, it creates reserves for itself and, with these, buys government bonds and other assets. These sit as an asset on the central bank's balance sheet.

"The money they created to buy those assets ends up in the banking system, and in turn finds its way back to the central bank as excess reserves," he writes. "These, like any bank deposit, are a liability for the central bank. Thus, in accounting terms, both assets and liabilities at the central bank have grown. When it comes to quantitative tightening, the process is reversed. The central bank either sells or allows a bond to mature, thus shrinking their assets. However, their liabilities also shrink as commercial bank excess reserves fall in tandem."

Looking forward and comparing the Federal Reserve's policy options that are available today relative to what was possible in the wake of the Great Recession, Silberston says the situation is quite different, more different than some market watchers appear to realize. His arguments are fairly technical and have to do with the way the Federal Reserve estimates its liabilities: how it did so in the period before the pandemic and how it is doing so now.

"Whatever the reason, [in the prior cycle] the Fed's compass was on the wrong setting and it overdid quantitative tightening and withdrew far more liquidity than the banking sector needed," Silberston writes. "It is this rather technical aspect of the Fed's operations that we believe was behind the aborted tightening cycle in 2016 to 2018, rather than the federal funds rate being driven to a level that the economy could not withstand."

This time, in Silberston's view, is different. He warns that, to avoid the same whipsaw happening again when it embarks on quantitative tightening in this cycle, the Fed has introduced new on-demand tools to control overnight interest rates, both to the upside and downside. In theory, at least, policymakers should be able to run the balance sheet down more quickly without causing the liquidity shortages that characterized the last tightening cycle.

"Again, if this view is correct, the market is underestimating how far interest rates will rise, meaning bond yields have much further to rise (and bond prices to fall) than hitherto," Silberston concludes.

For his part, McMillan does not fully agree with that take, but he also acknowledges that investors may be overlooking some potential risks.

"Let's look at a few assumptions. The first one says the current spike is a problem in financial markets," McMillan suggests. "Looking at the [historical rate] chart, however, the problem seems to have come from the pandemic. Now, from an economic perspective, this problem is starting to fade. In this sense, the recent increase is a recovery from a problem, not an indicator of one."

"The second assumption says recent rates are the correct and normal ones," McMillan writes.

"Yet here again, due to the pandemic, this is definitely not the case," he argues. "If both of these assumptions are wrong (and they are), the narrative we are seeing in the headlines must be wrong as well. This logic would also extend to further rate increases. If rates for the 10-year Treasury notes go to 2.5%, they would be within the central range over the pre-pandemic years. It is only when rates begin to rise above 3% for a sustained period, not briefly, that the prospects of significant economic damage will start to get material. The years from 2013 to 2019 show that the economy and the markets can do quite well with rates between 2% and 3%."

After making that point, McMillan is quick to point out that significant risks remain.

"Growth stocks are showing the strain, and this has had a disproportionate impact on the market," he observes. "The housing sector might slow down as mortgage rates increase, but again this trend would be an adjustment, not a wholesale change. The economy and markets can and do adjust to changes in interest rates. This environment is a normal part of the cycle and one we see on a regular basis. The current trend is perhaps a bit faster than we've been seeing, but it is a response to real economic factors and, therefore, normal in context. That is why there is no need to panic."

Link:

Making Sense of the Interest Rate Evolution - Planadviser.com

Nutritional Products International’s Evolution of Distribution Platform Helps Health and Wellness Brands Enter the U.S. Market – Digital Journal

Mitch Gould Developed a System to Centralize Essential Services that New Products Need to Thrive in America

This press release was originally distributed by ReleaseWire.

Boca Raton, FL (ReleaseWire) 01/24/2022 Product manufacturers have many obstacles when they decide to launch a new product to American consumers.

A launch campaign needs at least a sales staff, logistical and operational support, and marketing expertise.

"Everything costs money, especially if you are an international health and wellness brand," said Mitch Gould, founder and CEO of Nutritional Products International, a global brand management firm based in Boca Raton, FL. "International brands often don't understand the American retail industry or our culture."

Gould said he developed the "Evolution of Distribution" system to streamline the product launch process and keep costs down.

"I brought all the services involved in a product launch under the NPI banner," Gould said. "NPI provides a seasoned sales staff, warehousing, logistics, regulatory compliance, and specialized marketing services."

With NPI, Gould said domestic and international product manufacturers don't have to rent office or warehouse space.

"They don't have to hire a sales team with support staff. We have a Food Scientist to make sure their labels are FDA approved," he added. "We have the knowledge and experience our clients are seeking."

Gould said he also founded InHealthMedia, a marketing agency that specializes in the health and wellness sector.

"You have to understand the products and the industry to market them effectively," he said.

The marketing plan can include social media influencers, strategic professionally written press releases, TV segments that can reach more than 100 million households, and media outreach.

"We also have gotten major general and trade publications to cover our clients," Gould said.

For more information, visit nutricompany.com.

About NPI and Its Founder

NPI is a privately-held company specializing in the retail distribution of nutraceuticals, dietary supplements, functional beverages, and skin-care products. NPI offers a unique, proven approach for product manufacturers worldwide seeking to launch or expand their products' distribution in the U.S. retail market.

Mitch Gould, the founder of NPI, is a third-generation retail distribution and manufacturing professional. Gould developed the "Evolution of Distribution" platform, which provides domestic and international product manufacturers with the sales, marketing, and product distribution expertise required to succeed in the world's largest market, the United States. In the early 2000s, Gould was part of a "Powerhouse Trifecta" that placed more than 150 products in Amazon's new health and wellness category.

Gould, known as a global marketing guru, also has represented icons from the sports and entertainment worlds such as Steven Seagal, Hulk Hogan, Ronnie Coleman, Roberto Clemente Jr., Chuck Liddell, and Wayne Gretzky.

For more information on this press release visit: http://www.releasewire.com/press-releases/nutritional-products-internationals-evolution-of-distribution-platform-helps-health-and-wellness-brands-enter-the-us-market-1352240.htm

Read the rest here:

Nutritional Products International's Evolution of Distribution Platform Helps Health and Wellness Brands Enter the U.S. Market - Digital Journal

Spatial structure governs the mode of tumour evolution – Nature.com

Previous mathematical models of tumour population genetics

Many previous studies of tumour population genetics have used non-spatial branching processes21, in which cancer clones grow exponentially without interacting. Unless driver mutations increase cell fitness by less than 1%, these models predict lower clonal diversity and lower numbers of driver mutations than typically observed in solid tumours46. Among spatial models, a popular option is the Eden growth model (or boundary-growth model), in which cells are located on a regular grid with a maximum of one cell per site, and a cell can divide only if an unoccupied neighbouring site is available to receive the new daughter cell32,47,61. Other methods with one cell per site include the voter model32,62,63 (in which cells can invade neighbouring occupied sites) and the spatial branching process47 (in which cells budge each other to make space to divide). Further mathematical models have been designed to recapitulate glandular tumour structure by allowing each grid site or deme to contain multiple cells and by simulating tumour growth via deme fission throughout the tumour5,26 or only at the tumour boundary27. A class of models in which cancer cells are organized into demes and disperse into empty space has also been proposed36,52,64. Supplementary Table 2 summarizes selected studies representing the state of the art of stochastic modelling of tumour population genetics.

Our main methodological innovations are to implement all these distinct model structures, and additional models of invasive tumours, within a common framework, and to combine them with methods for tracking driver and passenger mutations at single-cell resolution. The result is a highly flexible framework for modelling tumour population genetics that can be used to examine consequences of variation not only in mutation rates and selection coefficients, but also in spatial structure and manner of cell dispersal65.

Simulated tumours in our models are made up of patches of interacting cells located on a regular grid of sites. In keeping with the population genetics literature, we refer to these patches as demes. All demes within a model have the same carrying capacity, which can be set to any positive integer. Each cell belongs to both a deme and a genotype. If two cells belong to the same deme and the same genotype then they are identical in every respect, and hence the model state is recorded in terms of such subpopulations rather than in terms of individual cells. For the sake of simplicity, computational efficiency and mathematical tractability, we assume that cells within a deme form a well-mixed population. The well-mixed assumption is consistent with previous mathematical models of tumour evolution5,26,27,36,64 and with experimental evidence in the case of stem cells within colonic crypts66.

A simulation begins with a single tumour cell located in a deme at the centre of the grid. If the model is parameterized to include normal cells, then these are initially distributed throughout the grid such that each deme's population size is equal to its carrying capacity. Otherwise, if normal cells are absent, then the demes surrounding the tumour are initially unoccupied.

The simulation stops when the number of tumour cells reaches a threshold value. Because we are interested only in tumours that reach a large size, if the tumour cell population succumbs to stochastic extinction, then results are discarded and the simulation is restarted (with a different seed for the pseudo-random number generator).

Tumour cells undergo stochastic division, death, dispersal and mutation events, whereas normal cells undergo only division and death. The within-deme death rate is density-dependent. When the deme population size is less than or equal to the carrying capacity, the death rate takes a fixed value d0 that is less than the initial division rate. When the deme population size exceeds carrying capacity, the death rate takes a different fixed value d1 that is much greater than the largest attainable division rate. Hence, all genotypes grow approximately exponentially until the carrying capacity is attained, after which point the within-deme dynamics resemble a birth-death Moran process, a standard, well-characterized model of population genetics.
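
As a minimal illustration of the density-dependent death rate described above, the following Python sketch returns d0 while a deme is at or below its carrying capacity and d1 once it exceeds it; the function and parameter names are illustrative assumptions, not the authors' implementation.

def within_deme_death_rate(num_cells, carrying_capacity, d0, d1):
    """Density-dependent death rate: d0 at or below carrying capacity, d1 above it."""
    # d1 is assumed to be much larger than any attainable division rate,
    # so demes that overshoot the carrying capacity quickly shrink back,
    # giving Moran-like dynamics around the carrying capacity.
    return d0 if num_cells <= carrying_capacity else d1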

In all spatially structured simulations, we set d0=0 to prevent demes from becoming empty. For the non-spatial (branching process) model, we set d0>0 and dispersal rate equal to zero, so that all cells always belong to a single deme (with carrying capacity greater than the maximum tumour population size).

When a cell divides, each daughter cell inherits its parent's genotype plus a number of additional mutations drawn from a Poisson distribution. Each mutation is unique, consistent with the infinite-sites assumption of canonical population genetics models. Whereas some previous studies have examined the effects of only a single driver mutation (Supplementary Table 2), in our model there is no limit on the number of mutations a cell can acquire. Most mutations are passenger mutations with no phenotypic effect. The remainder are drivers, each of which increases the cell division or dispersal rate.
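
A sketch of how mutations might be drawn at cell division under the Poisson, infinite-sites scheme described above follows; the parameter names and the simple Bernoulli split into drivers and passengers are assumptions made for illustration, not details taken from the paper.

import numpy as np

rng = np.random.default_rng(1)

def draw_new_mutations(mean_mutations_per_division, driver_probability, next_mutation_id):
    """Return a list of (mutation_id, is_driver) tuples and the updated id counter."""
    n_new = rng.poisson(mean_mutations_per_division)     # Poisson number of new mutations
    mutations = []
    for _ in range(n_new):
        is_driver = rng.random() < driver_probability    # most mutations are passengers
        mutations.append((next_mutation_id, is_driver))  # unique id: infinite-sites assumption
        next_mutation_id += 1
    return mutations, next_mutation_id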

The programme records the immediate ancestor of each clone (defined in terms of driver mutations) and the matrix of Hamming distances between clones (that is, for each pair of clones, how many driver mutations are found in only one clone), which together allow us to reconstruct driver phylogenetic trees. To improve efficiency, the distance matrix excludes clones that failed to grow to more than ten cells and failed to produce any other clone before becoming extinct.

Whereas previous models have typically assumed that the effects of driver mutations combine multiplicatively, this can potentially result in implausible trait values (especially in the case of division rate, if the rate of acquiring drivers scales with the division rate). To remain biologically realistic, our model invokes diminishing returns epistasis, such that the average effect of driver mutations on a trait value r decreases as r increases. Specifically, the effect of a driver is to multiply the trait value r by a factor of 1 + s(1 − r/m), where s > 0 is the mutation effect and m is an upper bound. Nevertheless, because we set m to be much larger than the initial value of r, the combined effect of drivers in all models in the current study is approximately multiplicative. For each mutation, the value of the selection coefficient s is drawn from an exponential distribution.
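
The diminishing-returns update can be written directly from the formula above; in this sketch the exponential draw of the selection coefficient s and the multiplicative update 1 + s(1 − r/m) follow the text, while the function and parameter names are our own.

import numpy as np

rng = np.random.default_rng(2)

def apply_driver(trait_value, mean_selection_coefficient, upper_bound):
    """Apply one driver mutation with diminishing-returns epistasis.

    The selection coefficient s is exponentially distributed, and the trait r
    is multiplied by 1 + s * (1 - r / m), so the average benefit of each
    additional driver shrinks as r approaches the upper bound m.
    """
    s = rng.exponential(mean_selection_coefficient)
    return trait_value * (1.0 + s * (1.0 - trait_value / upper_bound))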

Depending on model parameterization, dispersal occurs via either invasion or deme fission (Supplementary Table 3). In the case of invasion, the dispersal rate corresponds to the probability that a cell newly created by a division event will immediately attempt to invade a neighbouring deme. This particular formulation ensures consistency with a standard population genetics model known as the spatial Moran process. The destination deme is chosen uniformly at random from the four nearest neighbours (von Neumann neighbourhood). Invasion can be restricted to the tumour boundary, in which case the probability that a deme can be invaded is 1 − N/K if N ≤ K and 0 otherwise, where N is the number of tumour cells in the deme and K is the carrying capacity. If a cell fails in an invasion attempt, then it remains in its original deme. If invasion is not restricted to the tumour boundary, then invasion attempts are always successful.
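
For the boundary-restricted invasion rule, the acceptance probability 1 − N/K can be evaluated as in the sketch below; the function and argument names are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(3)

def invasion_succeeds(n_tumour_cells_in_target, carrying_capacity, boundary_restricted=True):
    """Decide whether an invasion attempt into a neighbouring deme succeeds."""
    if not boundary_restricted:
        return True                      # unrestricted invasion always succeeds
    if n_tumour_cells_in_target > carrying_capacity:
        return False
    p_accept = 1.0 - n_tumour_cells_in_target / carrying_capacity
    return rng.random() < p_accept       # probability 1 - N/K when N <= K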

In fission models, a deme can undergo fission only if its population size is greater than or equal to carrying capacity. As with invasion, deme fission immediately follows cell division (so that results for the different dispersal types are readily comparable). The probability that a deme will attempt fission is equal to the sum of the dispersal rates of its constituent cells (up to a maximum of 1). Deme fission involves moving half of the cells from the original deme into a new deme, which is placed beside the original deme. If the dividing deme contains an odd number of cells, then the split is necessarily unequal, in which case each deme has a 50% chance of receiving the larger share. Genotypes are redistributed between the two demes without bias according to a multinomial distribution. Cell division rate has only a minor effect on deme fission rate because a deme created by fission takes only a single cell generation to attain carrying capacity.
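
One way to realise the unbiased redistribution of genotypes during deme fission is sketched below. It moves half of the cells (randomly assigning the larger share when the total is odd) and samples the moved cells without replacement via a multivariate hypergeometric draw; this without-replacement sampling is an implementation choice of ours, whereas the paper describes the redistribution in terms of a multinomial distribution.

import numpy as np

rng = np.random.default_rng(4)

def deme_fission(genotype_counts):
    """Split a deme in two, returning (counts_remaining, counts_in_new_deme)."""
    total = sum(genotype_counts)
    n_moving = total // 2
    if total % 2 == 1 and rng.random() < 0.5:
        n_moving += 1                    # odd totals: larger share assigned at random
    moved = rng.multivariate_hypergeometric(genotype_counts, n_moving)
    remaining = [c - m for c, m in zip(genotype_counts, moved)]
    return remaining, list(moved)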

If fission is restricted to the tumour boundary, then the new deme's assigned location is chosen uniformly at random from the four nearest neighbours, and if the assigned location already contains tumour cells, then the fission attempt fails. If fission is allowed throughout the tumour, then an angle is chosen uniformly at random, and demes are budged along a straight line at that angle to make space for the new deme beside the original deme.

Our particular method of cell dispersal was chosen to enable comparison between our results and those of previous studies and to facilitate mathematical analysis. In particular, when the deme carrying capacity is set to 1, our model approximates an Eden growth model (if fission is restricted to the tumour boundary, or if dispersal is restricted to the tumour boundary and normal cells are absent), a voter model (if invasion is allowed throughout the tumour) or a spatial branching process (if fission is allowed throughout).

To fairly compare different spatial structures and manners of cell dispersal, we set dispersal rates in each case such that the time taken for a tumour to grow from one cell to one million cells is approximately the same as in the neutral Eden growth model with maximal dispersal rate. This means that, across models, the cell dispersal rate decreases with increasing deme size. Given that tumour cell cycle times are on the order of a few days, the timespans of several hundred cell generations in our models realistically correspond to several years of tumour growth. More specifically, if we assume tumours take between 5 and 50 years to grow and the cell cycle time is between 1 and 10 days (both uniform priors), then the number of cell generations is between 400 and 8,000 in 95% of plausible cases. This order of magnitude is consistent with tumour ages inferred from molecular data67.

We note that, in addition to gland fission, gland fusion has been reported in normal human intestine68, which raises the possibility that gland fusion could occur during colorectal tumour development. However, the rate of crypt fission in tumours is much elevated relative to the rate in healthy tissue, and must exceed the rate of crypt fusion (or else the tumour would not grow). Therefore, even if crypt fusion occurs in human tumours, we do not expect it to have a substantial influence on evolutionary mode. This view is supported by previous computational modelling69.

We chose to conduct our study in two dimensions for two main reasons. First, the effects of deme carrying capacity on evolutionary dynamics are qualitatively similar in two and three dimensions, yet a two-dimensional model is simpler, easier to analyse, and easier to visualize. Second, we aimed to create a method that is readily reproducible using modest computational resources and yet can represent the long-term evolution of a reasonably large tumour at single-cell resolution.

One million cells in two dimensions corresponds to a cross-section of a three-dimensional tumour with many more than one million cells. Therefore, compared to a three-dimensional model, a two-dimensional model can provide richer insight into how evolutionary dynamics change over a large number of cell generations. Developing an approximate, coarse-grained analogue of our model that can efficiently simulate the population dynamics of very large tumours with different spatial structures in three dimensions is an important direction for future research.

The programme implemented Gillespie's exact stochastic simulation algorithm70 for statistically correct simulation of cell events. The order of event selection is (1) deme, (2) cell type (normal or tumour), (3) genotype, and (4) event type. At each stage, the probability of selecting an item (deme, cell type, genotype or event type) is proportional to the sum of event rates for that item, within the previous item. We measured elapsed time in terms of cell generations, where a generation is equal to the expected cell cycle time of the initial tumour cell.
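
The hierarchical event selection might be sketched as below for the direct Gillespie method: time advances by an exponential waiting time with rate equal to the total event rate, and the event is then located by descending the hierarchy, with each level chosen in proportion to the sum of rates beneath it. The nested-dictionary layout and the omission of the cell-type level are simplifying assumptions of this sketch, not the paper's data structures.

import numpy as np

rng = np.random.default_rng(5)

def choose_index(weights):
    """Pick an index with probability proportional to its weight (summed rates)."""
    w = np.asarray(weights, dtype=float)
    return rng.choice(len(w), p=w / w.sum())

def gillespie_step(demes):
    """One step of the direct method with hierarchical event selection.

    `demes` is assumed to be a list of dicts, each holding a list of genotype
    dicts with an 'event_rates' mapping such as {'division': ..., 'death': ...}.
    """
    deme_rates = [sum(sum(g["event_rates"].values()) for g in d["genotypes"]) for d in demes]
    total_rate = sum(deme_rates)
    dt = rng.exponential(1.0 / total_rate)                                   # waiting time to next event
    d = choose_index(deme_rates)                                             # (1) deme
    genotypes = demes[d]["genotypes"]
    g = choose_index([sum(gt["event_rates"].values()) for gt in genotypes])  # (2) genotype
    names, rates = zip(*genotypes[g]["event_rates"].items())
    event = names[choose_index(rates)]                                       # (3) event type
    return dt, d, g, event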

We surveyed the multi-region and single-cell tumour sequencing literature to identify data sets suitable for comparison with our model results. Studies published before 2015 (for example, refs. 71,72,73,74) were excluded as they were found to have insufficient sequencing depth for our purposes. We also excluded studies that reconstructed phylogenies using samples from metastases or from multifocal tumours (for example, refs. 75,76,77,78,79,80) because our model is not designed to correspond to such scenarios. The seven studies we chose to include in our comparison are characterized by either high-coverage multi-region sequencing or large-sample single-cell sequencing of several tumours.

The ccRCC investigation81 we selected involved multi-region deep sequencing, targeting a panel of more than 100 putative driver genes. Three studies of NSCLC10, mesothelioma40 and breast cancer39 conducted multi-region whole-exome sequencing (first two studies) or whole-genome sequencing (latter study), and reported putative driver mutations. We also used data from single-cell RNA sequencing studies of uveal melanoma42 and breast cancer41, in which chromosome copy number variations were used to infer clonal structure, and a study of acute myeloid leukaemia (AML) that used single-cell DNA sequencing24. All seven studies constructed phylogenetic trees, which are readily comparable to the trees predicted by our modelling. The methodological diversity of these studies contributes to demonstrating the robustness of the patterns we seek to explain.

From each of the seven cohorts, we obtained data for between three and eight tumours. In the ccRCC data set, we focused on the five tumours for which driver frequencies were reported in the original publication. For NSCLC, we used data for the five tumours for which at least six multi-region samples were sequenced. In mesothelioma, we selected the six tumours that had at least five samples taken. From the breast cancer multi-region study, we used data for the three untreated tumours that were subjected to multi-region sequencing. From the single-cell sequencing studies of uveal melanoma and breast cancer, we used all the published data (eight tumours in each case), and from the AML study, we selected a random sample of eight tumours.

In multi-region sequencing data sets, it is uncertain whether all putative driver mutations were true drivers of tumour progression. One way to interpret the data (interpretation I1) is to assume that all putative driver mutations were true drivers that occurred independently. Alternatively, the more conservative interpretation I2 assumes that each mutational cluster (a distinct peak in the variant allele frequency distribution) corresponds to exactly one driver mutation, while all other mutations are hitchhikers. Thus, I1 permits linear chains of nodes that in I2 are combined into single nodes (compare Supplementary Figs. 9 and 10), and I1 leads to a higher estimate of the mean number of driver mutations per cell (our summary index n). If both the fraction of putative driver mutations that are not true drivers (false positives) and the fraction of true driver mutations that are not counted as such (false negatives) are low, or if these fractions approximately cancel out, then interpretation I1 will give a good approximation of n whereas I2 will give a lower bound. For the ccRCC, NSCLC and breast cancer cases in our data set, I1 generates values of n in the range 3–10 (mean 6.1), consistent with estimates based on other methodologies13,51, whereas for I2 the range is only 1–4 (mean 2.5). Accordingly, we used interpretation I1.

To measure clonal diversity, we used the inverse Simpson index defined as \(D=1/{\sum }_{i}{p}_{i}^{2}\), where \({p}_{i}\) is the frequency of the ith combination of driver mutations. For example, if the population comprises k clones of equal size, then \({p}_{i}=1/k\) for every value of i, and so \(D=1/(k\cdot 1/{k}^{2})=k\). Clonal diversity has a lower bound D = 1. The inverse Simpson index is relatively robust to adding or removing rare types, which makes it appropriate for comparing data sets with differing sensitivity thresholds. Further examples are illustrated in Supplementary Fig. 11.
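
A direct implementation of the inverse Simpson index, with a check of the worked example above (k equal clones give D = k, and a single clone gives the lower bound D = 1):

def inverse_simpson(frequencies):
    """Inverse Simpson index D = 1 / sum(p_i^2) over clone frequencies p_i."""
    return 1.0 / sum(p * p for p in frequencies)

assert abs(inverse_simpson([0.25] * 4) - 4.0) < 1e-12   # k = 4 equal clones -> D = 4
assert inverse_simpson([1.0]) == 1.0                    # single clone -> lower bound D = 1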

D is constrained by an upper bound for trees with n < 2, where n is the mean number of driver mutations per cell. Indeed, \(n=\sum_{i}i{p}_{i}\ge {p}_{1}+2(1-{p}_{1})=2-{p}_{1}\), so \({p}_{1}\ge 2-n > 0\), since n < 2. Therefore,

$$D=\frac{1}{{\sum }_{i}{p}_{i}^{2}}\le \frac{1}{{p}_{1}^{2}}\le \frac{1}{{(2-n)}^{2}}.$$

To see that this bound is tight, assume \(1\le n < 2\) and consider a star-shaped tree with N nodes such that \({p}_{1}=2-n\) and other nodes have equal weights \({p}_{i}=(1-{p}_{1})/(N-1)=(n-1)/(N-1)\) for \(i\ge 2\). The mean number of driver mutations per cell is \({p}_{1}+2(1-{p}_{1})=2-{p}_{1}=n\), and the inverse Simpson index is

$$\begin{array}{l}D=\frac{1}{\mathop{\sum }\nolimits_{i = 1}^{N}{p}_{i}^{2}}=\frac{1}{{p}_{1}^{2}+\mathop{\sum }\nolimits_{i = 2}^{N}{p}_{i}^{2}}\\=\frac{1}{{(2-n)}^{2}+(N-1){((n-1)/(N-1))}^{2}}=\frac{1}{{(2-n)}^{2}+{(n-1)}^{2}/(N-1)}.\end{array}$$

This quantity goes to \(1/{(2-n)}^{2}\) as the number of nodes N goes to infinity, so the bound \(1/{(2-n)}^{2}\) may be approached arbitrarily closely.

It is informative to derive the relationship between D and n for a population that evolves via a sequence of clonal sweeps, such that each new sweep begins only after the previous sweep is complete. For a given value of n, our simulations rarely produce trees with D values below the curves of this trajectory. Suppose that a population comprises a parent type and a daughter type, with frequencies p and 1 − p, respectively. If the daughter has m driver mutations, then the parent must have m − 1 driver mutations and n must satisfy \(m-1\le n\le m\). More specifically,

$$n=(m-1)p+m(1-p)=m-p\ \Rightarrow\ p=m-n=1-\{n\},$$

where \(\{n\}\) denotes the fractional part of n (or 1 if n = m). The trajectory is therefore described by

$$D=\frac{1}{{p}^{2}+{(1-p)}^{2}}=\frac{1}{{(1-\{n\})}^{2}+{\{n\}}^{2}}.$$

We additionally calculated a curve representing the maximum possible diversity of linear trees. In the main text and below, we refer to this curve as corresponding to trees with an intermediate degree of branching. Specifically, this intermediate-branching curve is defined such that for every point below the curve (and with D > 1), there exist both linear trees and branching trees that have the corresponding values of n and D, whereas for every point above the curve there exist only branching trees. Derivation of the curve's equation is provided in Supplementary Information. A first-order approximation (accurate within 1% for \(n\le 2.2\)) is \(D\approx 9(2n-1)/8\).
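
The first-order approximation can be evaluated directly; the small check below is ours and simply confirms that at n = 2 the approximation gives 27/8 = 3.375, close to the D = 10/3 boundary used in the region definitions below.

def intermediate_branching_approx(n):
    """First-order approximation D ~ 9(2n - 1)/8 of the intermediate-branching curve (n <= 2.2)."""
    return 9.0 * (2.0 * n - 1.0) / 8.0

print(intermediate_branching_approx(2.0))   # 3.375, close to the D = 10/3 boundary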

To assess the extent to which clusters of points (n, D) are well separated, we calculated silhouette widths using the cluster R package82. A positive mean silhouette width indicates that clusters are distinct.

Our diversity index fulfills the same purpose as the intratumour heterogeneity (ITH) index used in the TRACERx Renal study9, defined as the ratio of the number of subclonal driver mutations to the number of clonal driver mutations. However, compared to ITH, our index has the advantages of being a continuous variable and being robust to methodological differences that affect ability to detect low-frequency mutations. In calculating ITH from sequencing data, we included all putative driver mutations, whereas ref. 9 used only a subset of mutations. For model output, we classified mutations with frequency above 99% as clonal and we excluded mutations with frequency less than 1%. ITH and the inverse Simpson index are strongly correlated across our models (Spearman's ρ = 0.98, or ρ = 0.81 for cases with D > 2; Extended Data Fig. 9c).
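
The ITH index, computed as described above for model output (mutations above 99% frequency counted as clonal, those below 1% excluded), might be sketched as follows; the handling of the zero-clonal edge case is our own assumption.

def ith_index(driver_frequencies, clonal_threshold=0.99, detection_limit=0.01):
    """ITH = (# subclonal driver mutations) / (# clonal driver mutations)."""
    kept = [f for f in driver_frequencies if f >= detection_limit]   # drop frequencies below 1%
    clonal = sum(f > clonal_threshold for f in kept)                 # frequencies above 99%
    subclonal = len(kept) - clonal
    return subclonal / clonal if clonal else float("inf")            # edge case: no clonal drivers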

The Shannon index, defined as \(-{\sum }_{i}{p}_{i}\,{\mathrm{log}}\,{p}_{i}\), is another alternative to the Simpson index. The exponential of this index has the same units as the inverse Simpson index (equivalent number of types). Compared to the Simpson index, the Shannon index gives more weight to rare types, which makes it somewhat less suitable for comparing data sets with differing sensitivity thresholds.

In defining regions in terms of indices D and n (Table 1 and Fig. 3c), we first noted that if a population undergoes a succession of non-overlapping clonal sweeps, then at most two clones coexist at any time, and hence \(D\le 2\). Allowing for some overlap between sweeps, we defined the selective sweeps region as having D < 10/3 and D below the intermediate-branching curve. We put the upper boundary at D = 10/3 because this intersects with the intermediate-branching curve at n = 2.

We used D=20 to define the boundary between the branching and progressive diversification regions. The TRACERx Renal study9 instead categorized trees containing more than 10 clones as highly branched, as opposed to branched. It is appropriate for us to use a higher threshold because our regions are based on true tumour diversity values, rather than the typically lower values inferred from multi-region sequencing data. Finally, we defined an effectively almost neutral region containing star-shaped trees with n<2 and D above the intermediate-branching curve.

It is possible to construct trees that do not fit the labels we have assigned to regions. For example (as shown in Supplementary Information), there exist linear trees within the branching and progressive diversification regions. Such exceptions are an unavoidable consequence of representing high-dimensional objects, such as phylogenetic trees, in terms of a small number of summary indices. Our labels are appropriate for the subset of trees that we have shown to arise from tumour evolution.

Conventionally, the balance of a tree is the degree to which branching events split the tree into subtrees with the same number of leaves, or terminal nodes. A balanced tree thus indicates more equal extinction and speciation rates than an unbalanced tree83. Tree balance indices are commonly used to assert the correctness of tree reconstruction methods and to classify trees. We considered three previously defined indices, all of which are imbalance indices, which means that more balanced trees are assigned smaller values. We subtracted each of these indices from 1 to obtain measurements of tree balance.

Let T = (V, E) be a tree with a set of nodes V and edges E. Let \(|V|=N\), and hence \(|E|=N-1\) (since each node has exactly one parent, except the root). We defined l as the number of leaves of the tree. The root is labelled 1 and the leaves are numbered from N − l + 1 to N. There is only one cladogram with two leaves, which is maximally balanced according to all the previously defined indices discussed below. We also considered the single-node tree to be maximally balanced with respect to these previously defined indices. The following definitions then apply when \(l\ge 3\).

For each leaf j, we defined ν_j as the number of interior nodes between j and the root, with the root included in the count. Then a normalized version of Sackin's index, originally introduced in ref. 84, is defined as

$$I_{S,\mathrm{norm}}(T)=\frac{\sum_{j=N-l+1}^{N}\nu_{j}-l}{\frac{1}{2}(l+2)(l-1)-l},$$

where, to be able to compare indices of trees with different numbers of leaves l, we subtracted the minimal value for a given l and divided by the range of the index over all trees with l leaves, as in ref. 85.
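For concreteness, a minimal Python sketch of this normalized Sackin index is given below; it is our own illustration, and the input is assumed to be the list of ν_j values for the leaves.

```python
def sackin_normalized(leaf_depths):
    """Normalized Sackin index, where `leaf_depths` lists, for each leaf j,
    the number of interior nodes nu_j between the leaf and the root
    (root included in the count)."""
    l = len(leaf_depths)
    if l < 3:
        return 0.0  # trees with one or two leaves are treated as maximally balanced
    raw = sum(leaf_depths)
    min_val = l                        # minimal raw value (star tree with l leaves)
    max_val = 0.5 * (l + 2) * (l - 1)  # maximal raw value (caterpillar tree)
    return (raw - min_val) / (max_val - min_val)
```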

For an interior node i of a binary tree T, we defined T_L(i) as the number of leaves subtended by the left branch of T_i, the subtree rooted at i, and T_R(i) as the number of leaves subtended by its right branch. Then the unnormalized Colless index (ref. 86) of T is

$$I_{C}(T)=\sum_{i=1}^{N-l}\left|T_{L}(i)-T_{R}(i)\right|.$$

Since the Colless index is defined only for bifurcating trees, we used the default normalized Colless-like index $\mathfrak{C}_{\mathrm{MDM},\,\ln(l+e),\,\mathrm{norm}}$ defined in ref. 85. This consisted of measuring the dissimilarity between the subtrees $T'$ rooted at a given internal node by computing the mean deviation from the median (MDM) of the f-sizes of these subtrees. In this case, $f(l)=\ln(l+e)$ and the f-size of $T'$ is defined as

$$\sum_{v\in V(T')}\ln\left(\deg(v)+e\right).$$

These dissimilarities were then summed and the result was normalized as for Sackin's index.
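As a point of reference, the basic unnormalized Colless index for a strictly bifurcating tree can be sketched as follows. This is our own illustration with hypothetical node labels; it does not reproduce the Colless-like MDM variant or its normalization.

```python
def colless_unnormalized(children):
    """Unnormalized Colless index of a strictly bifurcating tree.
    `children` maps each internal node to its (left, right) children;
    leaves are absent from the mapping."""
    def n_leaves(v):
        kids = children.get(v, ())
        if not kids:
            return 1
        return sum(n_leaves(c) for c in kids)

    return sum(abs(n_leaves(left) - n_leaves(right))
               for left, right in children.values())

# Example: caterpillar tree with four leaves a, b, c, d -> |1-3| + |1-2| + |1-1| = 3
print(colless_unnormalized({"r": ("a", "x"), "x": ("b", "y"), "y": ("c", "d")}))
```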

The cophenetic value φ(i, j) of a pair of leaves i, j is the depth of their lowest common ancestor (such that the root has depth 0). The total cophenetic index (ref. 87) of T is then the sum of the cophenetic values over all pairs of leaves, and a normalized version is

$$I_{\Phi,\mathrm{norm}}(T)=\frac{\sum_{N-l+1\le i<j\le N}\phi(i,j)}{\binom{l}{3}},$$

where the minimal value of the cophenetic index is 0 for all l (attained by a star-shaped tree with l leaves).
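A minimal sketch of this normalized index, assuming the tree is supplied as parent pointers and node depths (a representation of our own choosing), is:

```python
from itertools import combinations
from math import comb

def total_cophenetic_normalized(parent, depth, leaves):
    """Normalized total cophenetic index, applicable when there are at
    least three leaves. `parent` maps each node to its parent (root maps
    to None), `depth` gives each node's depth with the root at 0, and
    `leaves` lists the leaf nodes."""
    def ancestors(v):
        path = set()
        while v is not None:
            path.add(v)
            v = parent[v]
        return path

    total = 0
    for i, j in combinations(leaves, 2):
        anc_i = ancestors(i)
        v = j
        # walk up from j until a shared ancestor is reached: the lowest common ancestor
        while v not in anc_i:
            v = parent[v]
        total += depth[v]
    return total / comb(len(leaves), 3)
```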

These three balance indices were designed for analysing species phylogenies and are thus defined on cladograms, which are trees in which leaves correspond to extant species and internal nodes are hypothetical common ancestors. Conventional cladograms have no notion of node size. Cladograms also lack linear components as each internal node necessarily corresponds to a branching event. The driver phylogenetic trees reported in multi-region sequencing studies and generated by our models are instead clone trees (also known as mutation trees), in which all nodes of non-zero size represent extant clones. To apply previous balance indices to driver phylogenetic trees, we first converted the trees to cladograms by adding a leaf to each non-zero-sized internal node and collapsing linear chains of zero-sized nodes.
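This conversion can be sketched as follows, under the assumption that the clone tree is stored as a child dictionary and a node-size dictionary; the data structures and names are ours, not the authors' implementation.

```python
def clone_tree_to_cladogram(root, tree_children, size):
    """Convert a clone tree to a cladogram as described in the text:
    attach a leaf to every non-zero-sized internal node and collapse
    linear chains of zero-sized nodes. `tree_children` maps each node to
    its children; `size` gives each clone's population (0 if extinct)."""
    children = {v: list(tree_children.get(v, [])) for v in size}

    # Step 1: add a leaf for every internal node that represents an extant clone.
    for v, kids in list(children.items()):
        if kids and size.get(v, 0) > 0:
            kids.append(v + "_leaf")

    # Step 2: skip over any zero-sized node that has exactly one child.
    def skip(v):
        while len(children.get(v, [])) == 1 and size.get(v, 0) == 0:
            v = children[v][0]
        return v

    cladogram = {}

    def build(v):
        v = skip(v)
        kids = [build(c) for c in children.get(v, [])]
        if kids:
            cladogram[v] = kids
        return v

    return build(root), cladogram

# Example: extant clone 'A', extinct intermediate 'B', extant descendant 'C'
print(clone_tree_to_cladogram("A", {"A": ["B"], "B": ["C"]}, {"A": 100, "B": 0, "C": 50}))
```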

Whereas diversity indices such as D are relatively robust to the addition or removal of rare clones, the balance indices described above are much less robust because they treat all clones equally, regardless of population size (Supplementary Figs. 6, 7 and 8). This hampered comparison between model results and data for two reasons. First, due to sampling error, even high-quality multi-region sequencing studies underestimate the number of subclonal, locally abundant driver mutations by approximately 25% (ref. 81). Second, bulk sequencing cannot detect driver mutations present in only a very small fraction of cells.

To overcome the shortcomings of previous indices, we have developed a more robust tree balance index based on an extended definition: tree balance is the degree to which internal nodes split the tree into subtrees of equal size, where size refers to the sum of all node populations.

Let f(v)>0 denote the size of node v. For an internal node i, let V(Ti) denote the set of nodes of Ti, the subtree rooted at i. We then define

$$\begin{array}{l}S_{i}=\sum_{v\in V(T_{i})}f(v)=\text{the size of }T_{i},\\[4pt] S_{i}^{*}=\sum_{v\in V(T_{i}),\,v\ne i}f(v)=\text{the size of }T_{i}\text{ without its root }i.\end{array}$$

For i in the set of internal nodes $\widetilde{V}$, and j in the set C(i) of children of i, we define $p_{ij}=S_{j}/S_{i}^{*}$. We then computed the balance score $W_{i}^{1}$ of a node $i\in\widetilde{V}$ as the normalized Shannon entropy of the sizes of the subtrees rooted at the children of i:

$$W_{i}^{1}=\sum_{j\in C(i)}W_{ij}^{1},\quad \text{with}\quad W_{ij}^{1}=\begin{cases}-p_{ij}\log_{d^{+}(i)}p_{ij} & \text{if }p_{ij}>0\text{ and }d^{+}(i)\ge 2,\\ 0 & \text{otherwise,}\end{cases}$$

where $d^{+}(i)$ is the out-degree (the number of children) of node i. Finally, for each node i, we weighted the balance score by the product of $S_{i}^{*}$ and a non-root dominance factor $S_{i}^{*}/S_{i}$. Our normalized balance index is then

$$J^{1}:=\frac{1}{\sum_{k\in\widetilde{V}}S_{k}^{*}}\sum_{i\in\widetilde{V}}S_{i}^{*}\,\frac{S_{i}^{*}}{S_{i}}\,W_{i}^{1}.$$

Supplementary Fig. 11 illustrates the calculation of J1 for four exemplary trees. We further describe the desirable properties of this index, and its relationship to other tree balance indices, in another article (ref. 43).
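For readers who wish to compute J1 on their own trees, the definition above translates directly into the following Python sketch; this is our own illustration, assuming the tree is given as child and size dictionaries, and is not the authors' released code.

```python
import math

def j_one(children, size, root):
    """J1 balance index. `children` maps nodes to lists of children,
    `size` gives f(v) > 0 for every node, `root` is the root label."""
    subtree_size = {}

    def fill_sizes(v):
        s = size[v] + sum(fill_sizes(c) for c in children.get(v, []))
        subtree_size[v] = s
        return s

    fill_sizes(root)

    numerator = 0.0
    denominator = 0.0
    for i, kids in children.items():
        if not kids:
            continue                      # leaves are not internal nodes
        S_i = subtree_size[i]
        S_i_star = S_i - size[i]          # size of the subtree without its root
        d = len(kids)                     # out-degree of i
        W = 0.0
        if d >= 2:
            for j in kids:
                p = subtree_size[j] / S_i_star
                if p > 0:
                    W -= p * math.log(p, d)   # entropy with base d+(i)
        numerator += S_i_star * (S_i_star / S_i) * W
        denominator += S_i_star
    return numerator / denominator if denominator > 0 else 0.0
```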

When n ≤ 2 (where n is the mean number of driver mutations per cell), the non-root dominance factor cannot exceed n − 1, while the other factors in J1 are at most 1, which implies J1 ≤ n − 1 for all n ≤ 2. Also, for n > 2, we have J1 ≤ 1 ≤ n − 1, as shown in Fig. 4a.

For each time point t ≥ τ, we defined a clonal turnover index as

$$\Theta(t)=\sum_{i}\left(f_{i}(t)-f_{i}(t-\tau)\right)^{2},$$

where f_i(t) is the frequency of clone i at time t, and τ is 10% of the total simulation time, measured in cell generations. The mean value $\overline{\Theta}$ over time measures the total extent of clonal turnover.

To determine whether clonal turnover mostly occurred early, late or throughout tumour evolution, we calculated the weighted average

$$\overline{T}_{\Theta}=\frac{1}{\max(t)}\left(\sum_{t}\Theta(t)\,t\Big/\sum_{t}\Theta(t)\right),$$

where max(t) denotes the final time of the simulation. This quantity takes values between 0 and 1, and is higher if clonal turnover occurs mostly late during tumour growth. If the rate of clonal turnover is constant over time, then $\overline{T}_{\Theta}\approx 0.55$.
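Both quantities can be computed from a time series of clone frequencies along the lines of the following sketch (ours, with an assumed array layout); the lag t − τ is matched to the nearest recorded time point.

```python
import numpy as np

def clonal_turnover(freqs, times, tau_fraction=0.1):
    """Clonal turnover index Theta(t) and its weighted average T-bar.
    `freqs` is a (time, clone) array of clone frequencies recorded at the
    times in `times` (in cell generations); tau is 10% of the total
    simulated time, as in the text."""
    freqs = np.asarray(freqs, dtype=float)
    times = np.asarray(times, dtype=float)
    tau = tau_fraction * times[-1]

    theta, theta_times = [], []
    for k, t in enumerate(times):
        if t < tau:
            continue
        k_lag = int(np.argmin(np.abs(times - (t - tau))))  # nearest recorded time to t - tau
        theta.append(np.sum((freqs[k] - freqs[k_lag]) ** 2))
        theta_times.append(t)
    theta = np.array(theta)
    theta_times = np.array(theta_times)

    mean_theta = theta.mean()
    t_bar = (theta * theta_times).sum() / (theta.sum() * times[-1])
    return mean_theta, t_bar
```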

We randomly selected five tumours of each of four cancer types (colorectal cancer, clear cell renal cancer, lung adenocarcinoma and breast cancer) from The Cancer Genome Atlas (TCGA) reference database (http://portal.gdc.cancer.gov). Using QuPath v0.2.0m4 (ref. 88), we manually delineated five representative groups of tumour cells in each image and automatically counted the number of cells in each group. We defined a group as a set of tumour cells directly touching each other, separated from other groups by stroma or other non-tumour tissue (Extended Data Fig. 3).

The number of cells per group ranged from 5 to 8,485, with 50% of cases having between 53 and 387 cells (Extended Data Fig. 4a). Variation in the number of cells per group was larger between tumours than within them, whereas cell density was relatively consistent between tumours (Extended Data Fig. 4b). Because our cell counts were derived from cross sections, they would underestimate the true populations of three-dimensional glands. On the other hand, it is unknown what proportion of cells are able to self-renew and contribute to long-term tumour growth and evolution (ref. 89). On balance, therefore, it is reasonable to assume that each gland of an invasive glandular tumour can contain between a few hundred and a few thousand interacting cells. This range of values is, moreover, remarkably consistent with the results of a recent study that used a very different method to infer the number of cells in tumour-originating niches. Across a range of tissue types, this study concluded that cells typically interact in communities of 300–1,900 cells (ref. 30). Another recent study of breast cancer applied the Louvain method for community detection to identify two-dimensional tumour communities typically in the range of 10–100 cells (ref. 29).

Further information on research design is available in the Nature Research Reporting Summary linked to this article.

Read the original here:

Spatial structure governs the mode of tumour evolution - Nature.com

An Evolutionary Ancestor of Arthropods? – Discovery Institute

Photo: Spriggina, by Daderot, CC0, via Wikimedia Commons.

A commenter at our new Science Uprising video on the fossil record asks whether a Precambrian fossil from the Ediacaran fauna called Spriggina could have been an evolutionary ancestor of arthropods, purportedly contradicting a claim by Stephen Meyer. In fact, this is a claim Meyer addressed long ago in Darwin's Doubt, where he explained why various authorities do not believe it was an evolutionary ancestor of arthropods or other Cambrian animal phyla:

Similar disputes have characterized attempts to classify Spriggina. In 1976, Martin Glaessner, the first paleontologist to study the Ediacaran in detail, described Spriggina as a possible annelid polychaete worm based largely upon its segmented body. Nevertheless, Simon Conway Morris later rejected that hypothesis because Spriggina shows no evidence of the distinguishing chaetes, leg-like bristled protrusions that polychaete worms possess. Glaessner himself later repudiated his original hypothesis that Spriggina was ancestral to polychaetes, noting that Spriggina cannot be considered as a primitive polychaete, having none of the possible ancestral characters indicated . . . by specialists on the systematics and evolution of this group.

In 1981, paleontologist Sven Jorgen Birket-Smith produced a reconstruction of a Spriggina fossil showing that it possessed a head and legs similar to those of trilobites, though examinations of subsequent Spriggina specimens have shown no evidence of it possessing limbs of any kind. In 1984, Glaessner weighed in on this discussion as well. He argued that Spriggina shows no specific characters of the arthropods, particularly of the trilobites. He also noted that the body segmentation of Spriggina, and its known appendages are at the level of polychaete annelids (although, as noted, by this time he had rejected Spriggina as a possible polychaete ancestor). Instead, he proposed that Spriggina represented a side branch on the animal tree of life, one that resulted, metaphorically perhaps, in an unsuccessful attempt to make an arthropod.

In a presentation to the Geological Society of America in 2003, geologist Mark McMenamin revived the idea that Spriggina might represent a trilobite ancestor. He argued that several features present in Spriggina fossils are comparable to those in trilobites, such as the presence of genal spines and an effaced head or cephalic region. Nevertheless, many Ediacaran experts, including McMenamin, have also noted that Spriggina specimens show no evidence of eyes, limbs, mouths, or anuses, most of which are known from fossil trilobites. Other paleontologists remain skeptical about whether Spriggina does in fact exhibit genal spines, noting that good specimens seem to show relatively smooth edges with no protruding spines. In addition, analysis of the best recent specimens of Spriggina shows that it does not exhibit bilateral symmetry, undermining earlier attempts to classify it as a bilaterian animal, and by implication an arthropod. Instead, Spriggina exhibits something called glide symmetry, in which the body segments on either side of its midline are offset rather than aligned. As geologist Loren Babcock of Ohio State University notes, The zipper-like body plans of some Ediacaran (Proterozoic) animals such as Dickinsonia and Spriggina involve right and left halves that are not perfect mirror images of each other. The lack of such symmetry, a distinctive feature of all bilaterian animals, and the absence in Spriggina specimens of many other distinguishing features of trilobites, has left the classification of this enigmatic organism uncertain.

That was published in 2013. Five years later, Günter Bechly noted a paper published by Daley et al. (2018) which vindicated Meyer's point that the strange non-bilateral symmetry of Spriggina makes it a thoroughly implausible ancestor to arthropods. That paper stated:

Spriggina, for example, does not possess bilateral symmetry, but instead has a marked offset along the midline, and this alone is sufficient to reject a euarthropod affinity … No euarthropod claim from the Ediacaran biota can therefore be substantiated.

Daley et al. (2018) further found that Precambrian strata should have been capable of preserving stem arthropods that were ancestors to the true arthropods that appear in the Cambrian. Yet arthropod ancestors are missing:

Modes of Fossil Preservation Are Comparable in the Cambrian and Precambrian

Hypotheses that regard Precambrian preservation as insufficient to preserve euarthropods can no longer be sustained, given the abundant lagerstätten from the Ediacaran Period. Similarly, claims that euarthropods evolved as a tiny and soft-bodied meiofauna that escaped preservation cannot be substantiated because of how commonly the phosphate window is found in the Ediacaran and lower Cambrian, with microscopic euarthropods not appearing until 514 Ma.

An accompanying Oxford University news release at Science Daily emphasized this point in plain language:

The idea that arthropods are missing from the Precambrian fossil record because of biases in how fossils are preserved can now be rejected, says Dr. Greg Edgecombe FRS from the Natural History Museum, London, who was not involved in the study. The authors make a very compelling case that the late Precambrian and Cambrian are in fact very similar in terms of how fossils preserve. There is really just one plausible explanation: arthropods hadn't yet evolved.

All of this confirms what the Dutch evolutionary ecologist Marten Scheffer wrote in a Princeton University Press book in 2009:

The collapse of the Ediacaran fauna is followed by the spectacular radiation of novel life-forms known as the Cambrian explosion. All of the main body plans that we know now evolved in as little as about 10 million years. It might have been thought that this apparent explosion of diversity might be an artifact. For instance, it could be that earlier rocks were not as good for preserving fossils. However, very well preserved fossils do exist from earlier periods, and it is now generally accepted that the Cambrian explosion was real.

While analyzing Daley et al. (2017), Bechly shows that we're left with a situation where arthropods appear abruptly in the Cambrian period, without evidence of evolutionary precursors, and on a timeline too short for arthropods to evolve by standard neo-Darwinian mechanisms:

[T]he paper by Daley et al. confirms that the Cambrian explosion implies a very acute waiting time problem, again as elaborated by Meyer (2013). Based on their postulated ghost lineages and on molecular clock data, the authors suggest that euarthropods originated about 541 million years ago. They conclude, Rather than being a sudden event, this diversification unfolded gradually over the 40 million years of the lower to middle Cambrian, with no evidence of a deep Precambrian history. However, this conclusion is totally speculative and an artifact of their methodological assumptions. It is not based on actual fossil evidence (see above). The latter indeed suggests that the euarthropod body plan appeared with trilobites in the Lower Cambrian, as if out of thin air, without any known precursors and without any fossil evidence for a gradual step-wise generation of this body plan.

Far from being a refutation of the abruptness of the Cambrian explosion, this study actually confirms it and makes the abruptness of the event even more acute. Here is why: since the authors refute the existence of stem group arthropods in the Ediacaran period before 550 million years, and euarthropods are documented already for the Lower Cambrian at 537 million years, there remains a window of time of only 13 million years to evolve the stem arthropod body plan from unknown ecdysozoan worm-like ancestors and to make the transition from lobopodian pro-arthropods to the fully developed euarthropod body plan, with exoskeleton, articulated legs, compound eyes, etc. Since the average longevity of a single marine invertebrate species is about 5-10 million years (Levinton 2001: 384, table 7.2), this available window of time equals only about two successive species. Considering the implied enormous re-engineering involved, this time is much too short to accommodate the waiting times for the necessary genetic changes to occur and spread according to the laws of population genetics.

For those wedded to an evolutionary interpretation of life's history, the fossil and genetic evidence leave the origin of arthropods a major mystery.

Continue reading here:

An Evolutionary Ancestor of Arthropods? - Discovery Institute

#10 Story of 2021: A War Against the Truth – Discovery Institute

Image source: Wikimedia Commons.

Editor's note: Welcome to an Evolution News tradition: a countdown of our Top 10 favorite stories of the past year, concluding on New Year's Day. Our staff are enjoying the holidays, as we hope that you are, too! Help keep the daily voice of intelligent design going strong. Please give whatever you can to support the Center for Science & Culture before the end of the year!

The following was originally published on July 8, 2021.

Given evolution's racist baggage, you might think the theory's proponents would be somewhat abashed to accuse the critics of Darwin of white supremacy. Apparently not. Writing in Scientific American, Allison Hopper goes there: Denial of Evolution Is a Form of White Supremacy. Who is Allison Hopper? She is a white lady, a filmmaker and designer with a master's degree in educational design from New York University. Early in her career, she worked on PBS documentaries. Ms. Hopper has presented on evolution at the Big History Conference in Amsterdam and Chautauqua, among other places. Having been handed a platform by America's foremost popular science publication, she writes:

I want to unmask the lie that evolution denial is about religion and recognize that at its core, it is a form of white supremacy that perpetuates segregation and violence against Black bodies.

White people like this always talk about Black bodies instead of Black (or black) people. The idea here is that our human ancestors, who created the first cultures, came out of Africa and were dark-skinned. Supposedly evolution skeptics wish to deny this history, holding that a continuous line of white descendants segregates white heritage from Black bodies. In the real world, this mythology translates into lethal effects on people who are Black. Fundamentalist interpretations of the Bible are part of the fake news epidemic that feeds the racial divide in our country.

She concludes,

As we move forward to undo systemic racism in every aspect of business, society, academia and life, let's be sure to do so in science education as well.

Of course there have been, and still are, religious people who doubted evolution for religious rather than scientific reasons while at the same time holding racist views. The idea, though, that racism can be logically supported from the Bible is ludicrous. As the biblical story goes, writes Ms. Hopper, the curse or mark of Cain for killing his brother was a darkening of his descendants' skin. There's nothing whatsoever in the biblical story to that effect. Handed a copy of the Bible, no reasonable person would come away with a conclusion of white supremacy.

A person who absorbed the history of evolutionary thinking from Charles Darwin to today, and took it all as inerrant, would be an entirely different story. If you had nothing more to go on than Darwin's legacy, a conclusion of white supremacy would follow as a matter of course.

Ms. Hopper is concerned about children and their education, but, in concealing Darwinism's foul past, her version of history is wildly inaccurate. From not long after the theory of evolution by natural selection was first proposed by Darwin and Alfred Russel Wallace, evolution took two different paths. That of Wallace, who split with Darwin over human exceptionalism and came to espouse a proto-intelligent design view, supported equal human dignity regardless of skin color.

That of Darwin followed the pseudo-logic of the purposelessly branching tree. Humanity did not advance all as one, equally, Darwin taught. Instead, as he explained in the Descent of Man, Africans were caught somewhere between ape and human, destined to be liquidated by the more advanced peoples: The civilized races of man will almost certainly exterminate and replace the savage races throughout the world. Darwin did not celebrate this, but he recognized it as what he saw to be a fact.

His cousin Francis Galton drew from Darwin's work the pseudo-scientific idea that races could be improved through eugenics. That became mainstream science right up until it was embraced and put into practice by the Nazis, who justified a Final Solution with scientific evolutionary arguments. Eugenic solutions put into place in the United States against African-Americans, and others, including mass forced sterilizations, provided a warm-up and education for the Nazis.

In the U.S. from the start of the 20th century, respectable scientists at top universities, echoed by the New York Times, supported caging and displaying Africans and others to educate the public about the truths of Darwinism. Before Hitler, Germans committed genocide in Africa, citing Darwinian theory as their justification. Political scientist John West tells these stories in a pair of widely viewed and critically recognized documentaries, Human Zoos and Darwin, Africa, and Genocide. Speaking of racism and eugenics, West has also traced The Line Running from Charles Darwin through Margaret Sanger to Planned Parenthood. As to education, the biology textbook at the center of the 1925 Scopes trial taught both Darwinism and white supremacy.

Today's actual white supremacists, represented by the Alt-Right and various neo-Nazi groups, are warmly disposed to Darwinism, as a glance at their websites will show. Like Hitler before them, they see in evolutionary theory a justification for racial hatred. Allison Hopper leaves ALL OF THIS OUT, both from her Scientific American article and from a simplistic video on YouTube, aimed at kids, Human Evolution and YOU! And she has the nerve to smear skepticism about Darwinian theory as white supremacist.

I am only skimming through a few points of the relevant history. There is much more. Ms. Hopper is either deeply ignorant or deeply dishonest. I'll assume the former. Her concern for Black bodies is well and good. What about a concern for the truth, which matters, or should matter, to people of all skin colors?

This is important. In coming days at Evolution News, we will be sharing some of our past coverage of evolution and its racist past and present. The phrase white supremacy has already been weaponized in politics. Now it is going to war in science education. The aim is to feed children (their minds, not their bodies) a massive falsehood. This must be resisted.

Continued here:

#10 Story of 2021: A War Against the Truth - Discovery Institute