His greatest hope for freedom is escaping the US and being arrested – CNN

"I have lots of family in Haiti and wanted to bring them to the United States, but I don't have residency," Frederic says. "I thought about them every day, my wife and kids."

At a dead end called Roxham Road, Frederic is crossing a narrow ditch that separates the United States and Canada.

Canadian police wait patiently on the other side. They warn anyone who approaches that what they're about to do is illegal, that they'll be arrested.

But that arrest is only the first step. Once arrested, Frederic, and the thousands of others who have made this journey across to Quebec in the past few weeks, can apply for asylum in Canada. He hopes that would mean a chance at reuniting with the family that remains in Haiti after 17 years apart. Then, he hopes, his family could apply for asylum and become Canadian residents too.

In the past month, Greyhound and other bus lines have been packed with immigrants -- primarily Haitians -- making this exact trip from the United States into Canada. They have taken trains and buses, often several of each, to get to Plattsburgh, New York.

From there, they hail a taxi to the border. On this day, Frederic is one of a stream of almost 300 people crossing, dragging whatever belongings they can with them. Haitians flooded into the United States after the devastating 2010 earthquake and the cholera outbreak that followed it the same year.

Frederic, like 59,000 other Haitians in the United States, has "temporary protected status," known as TPS, given to Haitians after the earthquake.

Frederic is fearful that status could end and that he would be kicked out of the US.

"I'm scared because every day I hear different news," Frederic says. "That's why I'm leaving the United States for Canada."

"We've never seen those numbers," said Royal Canadian Mounted Police (RCMP) spokesman Claude Castonguay. "Even though our officers are patrolling 24 hours a day all year long, we've never seen such numbers coming in."

The RCMP intercepted almost 7,000 asylum seekers in Quebec over the last six weeks: about 3,000 in July, the force says, and almost 4,000 in just the first half of August.

Broadly, asylum seekers point to their growing unease about the Trump administration's attitudes toward immigrants. They also point to the racism they say was unleashed after President Trump's election as what drove them to pick up and head to Canada.

Mimose Joseph and her 13-year-old daughter, Melissa Paul, are trying to find their taxi for the ride from Plattsburgh to the border.

They had taken a series of trains and buses from Belle Glade, Florida, the state Joseph has called home since 2002.

Joseph does not speak any English, but her daughter Melissa was born in Florida and is a US citizen. The 13-year-old explains that the pair made this trip to Canada, uprooting her adolescence, because of the growing pressure on her mother.

"She's been through a lot and has stayed here for almost 15 years, and she doesn't want any stress anymore," Paul says.

They, too, hope Canada will take them in permanently and allow Joseph's brothers and sisters to join them. But for Melissa, it means leaving the only country she has ever known.

"It's kind of shocking and a little bit sad," she says.

Hundreds have been crossing the border each day in the last two months, according to PRAIDA, a provincial government agency that works under Quebec's Immigration Ministry and focuses on helping new arrivals resettle. Immigration officials say 250 people are coming across the border illegally each day.

"Definitely, there is a movement. People are talking to one another and they are suggesting that it is very easy to cross the border and they think that they will automatically become Canadian," PRAIDA Associate CEO Francine Dupuis says.

Canada has already done away with its version of TPS for Haitians, making it more difficult to claim asylum, Dupuis says.

Just because some asylum seekers are poor, or come from poverty-stricken countries, she says, that does not automatically make them refugees or guarantee them asylum.

"It's not going to be an open door," Dupuis says. "That's definitely not (the case) and it's sad because we do think that many of them believe that they are here to stay, which is not necessarily true."

So many asylum seekers now see Canada as a more welcoming country in which to find refuge and rebuild their lives that the traditional sheltering options used during slower times are overflowing. Some 3,200 people are in temporary housing in Montreal, Dupuis says.

The numbers have grown so much that Montreal's Olympic Stadium, which hosted the 1976 Summer Games, is now housing about 700 newcomers, Dupuis says.

The idea, Dupuis says, is to get them into comfortable temporary housing but to move them through the system as quickly as possible into more permanent lodging.

The vast majority of asylum seekers these days are Haitian, officials say. There are others from Syria and Yemen, fleeing the wars in their countries.

"What they want is a normal life, they want to study, they want to work, they want to have their families with a perspective of stability and this isn't something they seem to be getting now in the (United) States, unfortunately," Dupuis says. "They don't know what is going to happen and that creates anxiety, a lot of anxiety."

The YMCA on Montreal's Tupper Street has long served as the first stop for asylum seekers coming from the United States, but these days it is bursting. Its 600 beds are all full.

Nidal al-Yamani, 26, is standing outside. The Yemeni was living in Alabama on a student visa before he crossed into Canada on July 4. Yemen is one of the six Muslim-majority countries covered by the Trump administration's travel ban.

"After the ban, everybody knows Yemen. Only the bad things about Yemen," al-Yamani says.

He says he no longer felt comfortable in America and experienced several racist incidents.

"The mood changed and the new administration, they give the green light to the people who were racist (who weren't previously) showing it," al-Yamani says.

Al-Yamani has since moved out of the YMCA into more permanent lodgings as his case is processed. He has a higher chance of succeeding than the Haitians, coming from a country wracked by war, immigration officials say. Already, he says, he feels more at home and accepted in Canada.

"I still love USA. As a people, as a community, as everything. It's just the administration, and maybe the system, that affected me," he says. "Even if I try to go back to the United States I don't think I'm welcome anymore."

Canadian Prime Minister Justin Trudeau and his government are bracing for more people like al-Yamani to make their way into Canada. On Wednesday, he met with a task force of federal and provincial officials charged with managing the influx of asylum seekers.

Those from nine other countries besides Haiti may begin to make their way north too, as their TPS is currently set to expire within the next year. Among them are Honduras, Syria, and al-Yamani's home country of Yemen. It is unclear whether the US will extend TPS for those countries.

Trudeau said immigrants were a positive for Canada: "Being welcoming and opening is a source of strength," he told reporters.

But he stressed no one was getting a free pass by entering Canada, especially at unauthorized crossing points.

"There are no advantages in terms of the immigration system to arrive irregularly versus arriving regularly," he says. "The same systems will be followed whether it's the very strong and rigorous immediate security checks or whether it's the careful evaluation of their file."


How to Make a Supercomputer? – TrendinTech

Scientists have been trying to build the ultimate supercomputer for a while now, but it's no easy feat, as I'm sure you can imagine. There are currently three Department of Energy (DOE) Office of Science supercomputing user facilities: California's National Energy Research Scientific Computing Center (NERSC), Tennessee's Oak Ridge Leadership Computing Facility (OLCF), and Illinois' Argonne Leadership Computing Facility (ALCF). All three facilities took years of planning and a lot of work to get to where they are now, but it's all been worth it, as they provide researchers with the computing power needed to tackle some of the nation's biggest issues.

There are two main challenges that supercomputers solve: they can analyze large amounts of data, and they can model very complex systems. Some of the machines about to go online are capable of producing more than 1 terabyte of data per second, which, to put it in layman's terms, is nearly enough to fill around 13,000 DVDs every minute. Supercomputers are also far more efficient than conventional computers: calculations they can carry out in just one day would take a conventional computer 20 years to complete.
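As a quick back-of-the-envelope check, that DVD comparison holds up; the sketch below assumes a standard 4.7 GB single-layer DVD, a detail the article does not specify:

    # Back-of-the-envelope check of the "13,000 DVDs every minute" figure.
    # The 4.7 GB single-layer DVD capacity is an assumption, not from the article.
    data_rate_tb_per_s = 1.0        # "more than 1 terabyte of data per second"
    seconds_per_minute = 60
    dvd_capacity_gb = 4.7

    data_per_minute_gb = data_rate_tb_per_s * 1000 * seconds_per_minute
    dvds_per_minute = data_per_minute_gb / dvd_capacity_gb
    print(f"{dvds_per_minute:,.0f} DVDs per minute")   # ~12,766, i.e. "around 13,000"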

As mentioned earlier, the planning of a new supercomputer takes years and often starts before the last one has even finished being set up. Because technology moves so quickly, it works out cheaper to build a new machine than to redesign the existing one. At the ALCF, for example, staff began planning the current machine in 2008, but it wasn't launched until 2013. Planning involves not only deciding when and where the system will be built and installed, but also deciding what capabilities it should have to help with future research efforts.

When the OLCF began planning its current supercomputer, the project director, Buddy Bland, said, "It was not obvious how we were going to get the kind of performance increase our users said they needed using the standard way we had been doing it." OLCF launched its supercomputer, Titan, in 2012, combining CPUs (central processing units) with GPUs (graphics processing units). Using GPUs allows Titan to handle many instructions at once and run 10 times faster than OLCF's previous supercomputer. It's also five times more energy-efficient.

Even getting the site ready to house a supercomputer takes time. When NERSC installed its supercomputer, Cori, staff had to lay new piping underneath the floor to connect the cooling system to the cabinets. Theta, Argonne's latest supercomputer to go live, launched in July 2017.

There are many challenges that come with supercomputers too, unfortunately. One is that a supercomputer has thousands of processors, so programs have to break problems into smaller chunks and distribute them across the units. Another is designing programs that can manage failures. To help pave the way for future research, and to stress-test the computers, early users are granted special access in exchange for dealing with these new-machine issues, and they can attend workshops and get hands-on help when needed.

"Dungeon Sessions" were held at NERSC while preparing for Cori. These were effectively three-day workshops, often in windowless rooms, where engineers would come together from Intel and Cray to improve their code. Some programs ran 10 times faster after these sessions. "What's so valuable is the knowledge and strategies not only to fix the bottlenecks we discovered when we were there but other problems that we find as we transfer the program to Cori," said Brian Friesen of NERSC.

But even when the supercomputer is delivered, it's still a long way from being ready to work. First, the team receiving it has to ensure that it meets all the performance requirements. Then, to stress-test it fully, they load it with the most demanding, complex programs and let it run for weeks on end. Susan Coghlan, ALCF's project director, commented, "There's a lot of things that can go wrong, from the very mundane to the very esoteric." She knows this firsthand: when ALCF launched Mira, the team discovered that the water being used to cool the computer wasn't pure enough, and as a result bacteria and particles were causing issues in the pipes.

"Scaling up these applications is heroic. It's horrific and heroic," said Jeff Nichols, Oak Ridge National Laboratory's associate director for computing and computational sciences. Luckily, the early users program gives exclusive access for several months before eventually opening up to take requests from the wider scientific community. Whatever scientists learn from these supercomputers will feed into the Office of Science's next challenge: exascale computers, machines that will be at least 50 times faster than any computer around today. Even though exascale computers aren't expected to be ready until 2021, they are being planned for now at the facilities, and managers are already busy conjuring up just what they can achieve with them.




Intel Spills Details on Knights Mill Processor – TOP500 News

At the Hot Chips conference this week, Intel lifted the curtain a little higher on Knights Mill, a Xeon Phi processor tweaked for machine learning applications.

As part of Intel's multi-pronged approach to AI, Knights Mill represents the chipmaker's first Xeon Phi offering aimed exclusively at the machine learning market, specifically for the training of deep neural networks. For the inferencing side of deep learning, Intel points to its Altera-based FPGA products, which are being used extensively by Microsoft in its Azure cloud (for both AI and network acceleration). Intel is also developing other machine learning products for training work, which will be derived from the Nervana technology the company acquired last year. These will include a Lake Crest coprocessor, and, further down the road, a standalone Knights Crest processor.

In the meantime, it will be up to Knights Mill to fill the gap between the current Knights Landing processor, a Xeon Phi chip designed for HPC work, and the future Nervana-based products. In this case, Knights Mill will inherit most of its design from Knights Landing, the most obvious modification being the amount of silicon devoted to lower precision math, the kind best suited for crunching on neural networks.

Essentially, Knights Mill replaces the two large double precision/single precision floating point (64-bit/32-bit) ports on Knights Landing's vector processing unit (VPU) with one smaller double precision port and four Vector Neural Network Instruction (VNNI) ports. The latter support single precision floating point and mixed precision integers (16-bit input/32-bit output). As such, it looks to be Intel's version of a tensor processing unit, which has its counterpart in the Tensor Cores on NVIDIA's new V100 GPU. That one, though, sticks with the more traditional 16/32-bit floating point math.

The end result is that, compared to Knights Landing, Knights Mill will provide half the double precision floating point performance and twice the single precision floating point performance. With the added VNNI integer support in the VPU (256 ops/cycle), Intel is claiming Knights Mill will deliver up to four times the performance for deep learning applications.

The use of integer units to beef up deep learning performance is somewhat unconventional, since most of these applications are used to employing floating point math. Intel, however, maintains that floating point offers little advantage in regard to accuracy, and is significantly more computationally expensive. Whether this tradeoff pans out or not remains to be seen.

Knights Mill will also support 16 GB of MCDRAM, Intel's version of on-package high bandwidth memory assembled in a 3D stack, as well as six channels of DDR4 memory. From the graphic Intel presented at Hot Chips, the design appears to support 72 cores, at least for this particular configuration. Given the 256 ops/cycle value for the VPU, that would mean Knights Mill could deliver more than 27 teraops of deep learning performance for, say, a 1.5 GHz processor.
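The arithmetic behind that estimate is easy to reproduce; this sketch simply multiplies the figures quoted above (72 cores, 256 ops/cycle, and the illustrative 1.5 GHz clock):

    # Reproduce the rough deep learning throughput estimate for Knights Mill
    # from the figures quoted in the article.
    cores = 72                    # from the Hot Chips graphic, per the article
    vnni_ops_per_cycle = 256      # ops/cycle quoted for the VPU
    clock_ghz = 1.5               # the article's "say, a 1.5 GHz processor"

    teraops = cores * vnni_ops_per_cycle * clock_ghz * 1e9 / 1e12
    print(f"{teraops:.1f} teraops")   # ~27.6, matching "more than 27 teraops"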

We'll find out what actual performance can be delivered once Intel starts cranking out the chips. Knights Mill is scheduled for launch in Q4 of this year.


Next Big Academic Supercomputer Set for Debut in 2018 – TOP500 News

The National Science Foundation (NSF) is soliciting proposals from US universities to acquire a $60 million next-generation supercomputer two to three times as powerful as Blue Waters.

The request for proposal (RFP) was originally published in May, and, as of July 14, all interested institutions were supposed to have sent the NSF a letter of intent, registering their interest. Final proposals are due on November 20. It's safe to assume that most, if not all, of the major academic supercomputing centers in the US will be vying for the NSF grant.

The Pittsburgh Supercomputing Center (PSC), which collaborates with Carnegie Mellon University and the University of Pittsburgh, has gone on record about its intent to secure the funding for the Phase 1 system. An article published this week in the local Pittsburgh Post-Gazette reports that PSC would like to use such a machine to help spur the area's economy. Although the supercomputer would primarily be used by academic researchers in the science community, interim PSC director Nick Nystrom thinks the machine could also be a boon to the area's startup businesses, manufacturers and other industry players.

From the Post-Gazette report:

"Everybody has big data, but big data has no value unless you can learn something from it," Mr. Nystrom said. "We have a convergence in Pittsburgh: artificial intelligence, big data, health care, and these are things PSC is already doing."

According to the Phase 1 RFP, the new system will be two to three times faster at running applications than the Blue Waters supercomputer, an NSF-funded machine installed at the National Center for Supercomputing Applications (NCSA), at the University of Illinois at Urbana-Champaign. Blue Waters is a Cray XE/XK system, powered by AMD "Interlagos" CPUs and NVIDIA K20X GPUs. It became operational in 2013.

Although Blue Waters has a peak speed of over 13 petaflops, NCSA never submitted a Linpack result for it. However, based on its peak performance, Blue Waters would almost certainly qualify as a top 10 system on the current TOP500 list. NCSA says a number of applications are able to run at a sustained speed of more than one petaflop, with a plasma physics code attaining 2.2 petaflops. Given that the Phase 1 machine is supposed to be at least twice as powerful as Blue Waters, it should provide its users a significant boost in application performance.

This Phase 1 effort is also supposed to include an extra $2 million that will go toward the design of the Phase 2 system, which will be funded separately. That system is expected to be 10 times as fast as the Phase 1 machine, from which it will draw at least some of its technology and architecture. No hard dates have been set for this project.

The Phase 1 winner is anticipated to be announced in the first half of 2018, with the system expected to go into production by the end of FY 2019.


UT Austin supercomputer helping agencies track Harvey – KXAN.com


AUSTIN (KXAN) -- From his desk at an engineering building in Austin, University of Texas professor Clint Dawson has been tracking Hurricane Harvey with the help of a supercomputer.

Dawson is part of a team of academics, from Louisiana to North Carolina, who have been trying to create better systems for compiling hurricane data since the '90s. It's a system Dawson says they can count on to work every time.

Using the strength of the Lonestar 5 supercomputer at the Texas Advanced Computing Center, Dawson and his colleagues download information from the National Hurricane Center and are able to update and automate it with high-resolution data. The information they gather is not a forecast, but it provides very detailed guidance that allows state agencies to see models for things like rising water levels with accuracy down to the neighborhood level.

"As far as academic computing, this is the best available that we have to us in the country," Dawson said of the supercomputer they are using.

Dawson explained this is a level of detail that even National Hurricane Center forecasts don't achieve.

He added that agencies like the Texas State Operations Center, TxDOT, NOAA, and the National Hurricane Center consult the data his team produces when making decisions like where to evacuate and where to send resources.

"The State Operations Center can decide to use our guidance or they can decide to not use it," Dawson said. "They tend to look at our results very carefully because we have done a good job of predicting hurricanes in the past."

Dawson is monitoring the data hourly; he says it's exciting to provide information that is helpful to people on the ground.

"This is what we live for, just like a storm chaser, like if you chase tornadoes you know you live for tornadoes. The flip side is people get hurt in these things, and that's what we're trying to do is prevent that," he said.

He added that he was surprised to note how much Harvey has escalated since Wednesday.

"Yesterday it wasn't much and now today it's something, you know it's something big," he said. "And it's happening more and more, that hurricanes like this, they blow up overnight and suddenly we have a major event to deal with."

His team will be tracking the hurricane at least through Saturday and posting their latest updates about the storm online. As of Thursday afternoon, their models anticipated a maximum water surge level of 12 feet in areas around where Harvey will make landfall.


Pittsburgh stepping up to try to win competition for supercomputer project – Pittsburgh Post-Gazette


Pittsburgh is competing to build the fastest nongovernmental computer in the country, with an economic impact to the region that could run up some big numbers, possibly exceeding $1 billion, according to one backer. It would also need a lot of power ...


Inside View: Tokyo Tech’s Massive Tsubame 3 Supercomputer – The Next Platform

August 22, 2017 Ken Strandberg

Professor Satoshi Matsuoka of the Tokyo Institute of Technology (Tokyo Tech) researches and designs large-scale supercomputers and similar infrastructures. More recently, he has worked on the convergence of big data, machine/deep learning, and AI with traditional HPC, as well as investigating post-Moore technologies toward 2025.

He has designed supercomputers for years and has collaborated on projects involving basic elements for current and, more importantly, future exascale systems. I talked with him recently about his work with the Tsubame supercomputers at Tokyo Tech. This is the first of a two-part article. For background on the Tsubame 3 system, we have an in-depth article from earlier this year.

TNP: Your new Tsubame 3 supercomputer is quite a heterogeneous architecture of technologies. Have you always built heterogeneous machines?

Satoshi Matsuoka, professor of the High Performance Computing Systems Group, GSIC, at the Tokyo Institute of Technology, showing off the Tsubame 3.0 server node.

Matsuoka: I've been building clusters for about 20 years now, from research to production clusters, all in various generations, sizes, and forms. We built our first very large-scale production cluster for Tokyo Tech's supercomputing center back in 2006. We called it Tsubame 1, and it beat the then-fastest supercomputer in Japan, the Earth Simulator.

We built Tsubame 1 as a general-purpose cluster, instead of a dedicated, specialized system, as the Earth Simulator was. But even as a cluster, it beat the Earth Simulator on various metrics, including the Top 500, for the first time in Japan. It instantly became the fastest supercomputer in the country and held that position for the next two years.

I think we are the pioneer of heterogeneous computing. Tsubame 1 was a heterogeneous cluster, because it had some of the earliest incarnations of accelerators. Not GPUs, but a more dedicated accelerator called ClearSpeed. And, although they had a minor impact, they did help boost some application performance. From that experience, we realized that heterogeneous computing with acceleration was the way to go.

TNP: You seem to also be a pioneer in power efficiency, with three wins on the Green 500 list. Congratulations. Can you elaborate a little on that?

Matsuoka: As we were designing Tsubame 1, it was very clear that, to hit the next target of performance for Tsubame 2, which we anticipated would come in 2010, we would also need to plan on reducing overall power. We've been doing a lot of research in power-efficient computing. At that time, we had tested various methodologies for saving power while also hitting our performance targets. By 2008, we had tried using small, low-power processors in lab experiments. But, it was very clear that those types of methodologies would not work. To build a high-performance supercomputer that was very green, we needed some sort of a large accelerator chip to accompany the main processor, which is x86.

We knew that the accelerator would have to be a many-core architecture chip, and GPUs were finally becoming usable as a programming device. So, in 2008, we worked with Nvidia to populate Tsubame 1 with 648 third-generation Tesla GPUs. And we got very good results on many of our applications. So, in 2010, we built Tsubame 2 as a fully heterogeneous supercomputer. This was the first petascale system in Japan. It became #1 in Japan and #4 in the world, proving the success of a heterogeneous architecture. But it was also one of the greenest machines, at #3 on the Green 500, and the top production machine on the Green 500. The leading two in 2010 were prototype machines. We won the Gordon Bell Prize in 2011 for the configuration, and we received many other awards and accolades.

It was natural that when we were designing Tsubame 3, we would continue our heterogeneous computing and power efficiency efforts. So, Tsubame 3 is the second-generation, large-scale production heterogeneous machine at Tokyo Tech. It contains 540 nodes, each with four Nvidia Tesla P100 GPUs (2,160 total), two 14-core Intel Xeon E5-2680 v4 processors (15,120 cores total), two dual-port Intel Omni-Path Architecture (Intel OPA) 100 Series host fabric adapters (2,160 ports total), and 2 TB of Intel SSD DC Product Family for NVMe storage devices, all in an HPE Apollo 8600 blade, which is smaller than a 1U server.

A lot of the enhancements that went into the machine are there specifically to make it more efficient as well as higher performance. The result is that Tsubame 3, although at the time of measurement for the June 2017 lists we only ran on a small subset of the full configuration, is #61 on the Top500 and #1 on the Green500 with 14.11 gigaflops/watt, an Rmax of just under 2 petaflops, and a theoretical peak of over 3 petaflops. Tsubame 3 just became operational August 1, with its full 12.1 petaflops configuration, and we hope to have the scores for the full configuration for the November benchmark lists, including the Top500 and the Green500.
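Those two figures together imply the approximate measured power draw of the benchmarked subset; a rough sketch follows, in which the exact Rmax behind "just under 2 petaflops" is an assumed value:

    # Infer the approximate power of the measured (partial) Tsubame 3 configuration
    # from the quoted Green500 efficiency and Rmax. The precise Rmax is assumed.
    rmax_petaflops = 1.998              # "just under 2 petaflops" (assumed exact value)
    efficiency_gflops_per_watt = 14.11  # quoted Green500 number

    power_kw = rmax_petaflops * 1e6 / efficiency_gflops_per_watt / 1e3
    print(f"~{power_kw:.0f} kW")        # roughly 140 kW for the benchmarked subset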

TNP: Tsubame 3 is not only a heterogeneous machine, but built with a novel interconnect architecture. Why did you choose the architecture in Tsubame 3?

Matsuoka: That is one area where Tsubame 3 is different, because it touches on the design principles of the machine. With Tsubame 2, many applications experienced bottlenecks, because they couldn't fully utilize all the interconnect capability in the node. As we were designing Tsubame 3, we took a different approach. Obviously, we were planning on a 100-gigabit inter-node interconnect, but we also needed to think beyond just speed considerations and beyond just the node-to-node interconnect. We needed massive interconnect capability, considering we had six very high-performance processors that supported a wide range of workloads, from traditional HPC simulation to big data analytics and artificial intelligence, all potentially running as co-located workloads.

For the network, we learned from the Earth Simulator back in 2002 that to maintain application efficiency, we needed to sustain a good ratio between memory bandwidth and injection bandwidth. For the Earth Simulator, that ratio was about 20:1. So, over the years, I've tried to maintain a similar ratio in the clusters we've built, or set 20:1 as a goal if it was not possible to reach it. Of course, we also needed to have high bisection bandwidth for many workloads.

Today's processors, both the CPUs and GPUs, have significantly accelerated FLOPS and memory bandwidth. For Tsubame 3, we were anticipating certain metrics of memory bandwidth in our new GPU, plus the four GPUs were connected in their own network. So, we required a network that would have a significant injection bandwidth. Our solution was to use multiple interconnect rails. We wanted at least one 100 gigabit injection port per GPU, if not more.

For high PCIe throughput, instead of running everything through the processors, we decided to go with a direct-attached architecture using PCIe switches between the GPUs, CPUs, and Intel OPA host adapters. So, we have full PCIe bandwidth between all devices in the node. Then, the GPUs have their own interconnect between themselves. That's three different interconnects within a single node.

If you look at the bandwidth of these links, they're not all that different. Intel OPA is 100 gigabits/s, or 12.5 GB/s. PCIe is 16 GB/s. NVLink is 20 GB/s. So, there's less than a 2:1 difference between the bandwidth of these links. As much as possible we are fully switched within the node, so we have full bandwidth point to point across interconnected components. That means that under normal circumstances, any two components within the system, be it processor, GPU, or storage, are fully connected at a minimum of 12.5 GB/s. We believe that this will serve our Tsubame 2 workloads very well and support new, emerging applications in artificial intelligence and other big data analytics.
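The "less than a 2:1 difference" is easy to verify from the per-link numbers cited above; in the sketch below the PCIe entry uses the quoted 16 GB/s figure (which corresponds to an assumed x16 Gen3 link):

    # Compare the per-link bandwidths cited for the three intra-node interconnects.
    links_gb_per_s = {
        "Intel OPA (100 Gb/s)": 12.5,
        "PCIe (16 GB/s as quoted; x16 Gen3 assumed)": 16.0,
        "NVLink": 20.0,
    }

    ratio = max(links_gb_per_s.values()) / min(links_gb_per_s.values())
    print(f"widest/narrowest = {ratio:.2f}:1")   # 1.60:1, i.e. less than 2:1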

TNP: Why did you go with the Intel Omni-Path fabric?

Matsuoka: As I mentioned, we always focus on power as well as performance. With a very extensive fabric and a high number of ports and optical cables, power was a key consideration. We worked with our vendor, HPE, to run many tests. The Intel OPA host fabric adapter proved to run at lower power compared to InfiniBand. But as important, if not more important, was thermal stability. In Tsubame 2, we experienced some issues around interconnect instability over its long operational period. Tsubame 3 nodes are very dense with a lot of high-power devices, so we wanted to make sure we had a very stable system.

A third consideration was Intel OPA's adaptive routing capability. We've run some of our own limited-scale tests. And, although we haven't tested it extensively at scale, we saw results from the University of Tokyo's very large Oakforest-PACS machine with Intel OPA. Those indicate that the adaptive routing of OPA works very, very well. And this is critically important, because one of our biggest pain points with Tsubame 2 was the lack of proper adaptive routing, especially when dealing with the degenerative effects of optical cable aging. Over time, AOCs (active optical cables) die, and there is some delay between detecting a bad cable and replacing it or deprecating it. We anticipated Intel OPA, with its end-to-end adaptive routing, would help us a lot. So, all of these effects combined gave the edge to Intel OPA. It wasn't just the speed. There were many more salient points by which we chose the Intel fabric.

TNP: With this multi-interconnect architecture, will you have to do a lot of software optimization for the different interconnects?

Matsuoka: In an ideal world, we will have only one interconnect, everything will be switched, and all the protocols will be hidden underneath an existing software stack. But this machine is very new, and the fact that we have three different interconnects reflects the reality within the system. Currently, except for very few cases, there is no comprehensive catchall software stack to allow all of these to be exploited at the same time. There are some limited cases where this is covered, but not for everything. So, we do need the software to exploit all the capability of the network, including turning on and configuring some appropriate DMA engines, or some pass-through, because with Intel OPA you need some CPU involvement for portions of the processing.

So, getting everything to work in sync to allow for this all-to-all connectivity will require some work. That's the nature of the research portion of our work on Tsubame 3. But we are also collaborating with people like a team at The Ohio State University.

We have to work with some algorithms to deal with this connectivity, because it goes both horizontally and vertically. The algorithms have to adapt. We do have several ongoing projects, but we need to generalize this to be able to exploit the characteristics of both horizontal and vertical communications between the nodes and the memory hierarchy. So far, it's very promising. Even out of the box, we think the machine will work very well. But as we enhance the software portion of the capabilities, we believe the efficiency of the machine will become higher as we go along.

In the next article in this two-part series later this week, Professor Matsuoka talks about co-located workloads on Tsubame 3.

Ken Strandberg is a technical storyteller. He writes articles, white papers, seminars, web-based training, video and animation scripts, and technical marketing and interactive collateral for emerging technology companies, Fortune 100 enterprises, and multinational corporations. Mr. Strandberg's technology areas include Software, HPC, Industrial Technologies, Design Automation, Networking, Medical Technologies, Semiconductor, and Telecom. He can be reached at ken@catlowcommunications.com.



Using Genetics to Uncover Human History – JD Supra (press release)

Human history is often something modern man only sees as through a glass, darkly. This is particularly the case when that history did not occur in the Mediterranean, the Nile Valley, India, or China, or when there is no written record on which scholars can rely. The disrupting effects of time on history are exacerbated when that history occurs in a region where extensive migration has repeatedly displaced whatever temporarily stable civilization happened to have taken root at that place at any particular time.

But humans leave traces of themselves in their history, and a variety of such traces have been the source of reconstructions outside conventional sources. Luigi Cavalli-Sforza began the study of human population genetics as a way to understand this history in 1971 with The Genetics of Human Populations, and later extended these studies to include language and how it influences gene flow between human populations. More recent efforts to use genetics to reconstruct history include Deep Ancestry: The Landmark DNA Quest to Decipher Our Distant Past by Spencer Wells (National Geographic: 2006), and The Seven Daughters of Eve: The Science that Reveals our Genetic Ancestry by Brian Sykes (Carroll & Graf: 2002). And even more recently, genetic studies have illuminated the "fine structure" of human populations in England (see "Fine-structure Genetic Mapping of Human Population in Britain").

Two recent reports illustrate how genetics can inform history: the first, in the American Journal of Human Genetics, entitled "Continuity and Admixture in the Last Five Millennia of Levantine History from Ancient Canaanite and Present-Day Lebanese Genome Sequences"; and the second, in the Proceedings of the National Academy of Sciences USA, entitled "Genomic landscape of human diversity across Madagascar." In the first study, authors* from The Wellcome Trust Sanger Institute, University of Cambridge, University of Zurich, University of Otago, Bournemouth University, Lebanese American University, and Harvard University found evidence of genetic admixture, over 5,000 years, in a Canaanite population whose ancestry has persisted in Lebanese populations into the modern era. This population is interesting for historians in view of the central location of the ancestral home of the Canaanites, the Levant, in the Fertile Crescent that ran from Egypt through Mesopotamia. The Canaanites also inhabited the Levant during the Bronze Age and provide a critical link to the Neolithic transition from hunter-gatherer societies to agriculture. This group (known to the ancient Greeks as the Phoenicians) is also a link to the great early societies recognized through their historical writings and civilizations (including the Egyptians, Assyrians, Babylonians, Persians, Greeks, and Romans); if the Canaanites had any such texts or other writings, they have not survived. In addition, the type of genetic analysis that has been done for European populations has not been done for descendants of inhabitants of the Levant from this historical period. This paper uses genetic comparisons between 99 modern-day residents of Lebanon (specifically, from Sidon and the Lebanese interior) and ancient DNA (aDNA) from ~3,700-year-old genomes recovered from the petrous bone of individuals interred in gravesites in Sidon. For the aDNA, these analyses yielded 0.4-2.3-fold genomic DNA coverage and 53-264-fold mitochondrial DNA coverage; the authors also compared Y chromosome sequences from two Canaanite males with present-day Lebanese samples and samples from the 1000 Genomes Project. Over one million single nucleotide polymorphisms (SNPs) were used for comparison.

These results indicated that Canaanite ancestry was an admixture of local Neolithic populations and migrants from Chalcolithic (Copper Age) Iran. The authors estimate from linkage disequilibrium studies that this admixture occurred between 6,600 and 3,550 years ago, a date that is consistent with recorded mass migrations in the region during that time. Perhaps surprisingly, their results also show that the majority of the present-day Lebanese population has inherited most of its genomic DNA from these Canaanite ancestors. The researchers also found traces of Eurasian ancestry consistent with conquests by outside populations during the period from 3,750-2,170 years ago, as well as with the expansion of the Phoenician maritime trade network, which extended in historical times to the Iberian Peninsula.

The second paper arose from genetic studies of an Asian/African admixed population on Madagascar. This group** from the University of Toulouse, INSERM, the University of Bordeaux, the University of Indonesia, the Max Planck Institute for Evolutionary Anthropology, the Institut Génomique, Centre National de Génotypage, the University of Melbourne, and the Université de La Rochelle showed geographic stratification between ancestral African (mostly Bantu) and Asian (Austronesian) ancestors. Cultural, historical, linguistic, ethnographic, archeological, and genetic studies support the conclusion that Madagascar residents have traits from both populations, but the effects of settlement history are termed "contentious" by these authors. Various competing putative "founder" populations (including Arabic, Indian, Papuan, and/or Jewish populations, as well as first settlers found only in legend, under names like "Vazimba," "Kimosy," and "Gola") have been posited as initial settlers. These researchers report an attempt to illuminate the ancestry of the Malagasy by a study of human genetics.

These results showed common Bantu and Austronesian descent for the population, with what the authors termed "limited" paternal contributions from European and Middle Eastern populations. The admixture of African and Austronesian populations occurred "recently" (i.e., over the past millennium) but was gender-biased and heterogeneous, which these researchers interpret as reflecting independent colonization by the two groups. The results also indicated that detectable genetic structure can be imposed on human populations over a relatively brief time (a few centuries).

Using a "grid-based approach" the researchers performed a high-resolution genetic diversity study that included maternal and paternal lineages as well as genome-wide data from 257 villages and over 2,700 Malagasy individuals. Maternal inheritance patterns were interrogated using mitochondrial DNA and patterns of paternity assayed using Y chromosomal sequences. Non-gender specific relationships were assessed through 2.5 million SNPs. Mitochondrial DNA analyses showed maternal inheritance from either African or East Asian origins (with one unique Madagascar variant termed M23) in roughly equal amounts, with no evidence of maternal gene flow from Europe or the Middle East. The M23 variant shows evidence of recent (within 900-1500 years) origin. Y chromosomal sequences, in contrast are much more prevalent from African origins (70.7% Africa:20.7% East Asia); the authors hypothesize that the remainder may reflect Muslim influences, with evidence of but little European ancestry.

Admixture assessments support Southeast Asian (Indonesian) and East African source populations for the Malagasy admixture. These results put the frequency of the African component at ~59%, the Asian component at ~37%, and the Western European component at about 4% (albeit with considerable variation; African ancestry, for example, can range from ~26% to almost 93%). Similar results were obtained when the frequency of chromosomal fragments shared with other populations was compared to the Malagasy population (finding the closest link to Asian populations from south Borneo, and excluding Indian, Somali, and Ethiopian populations, although the analysis was sensitive enough to detect French Basque ancestry in one individual). The split from ancestral Asian populations either occurred ~2,500 years ago or by slower divergence between ~2,000-3,000 years ago, while divergence from Bantu populations occurred more recently (~1,500 years ago).

There were also significant differences in geographic distribution between descendants of these ancestral populations. Maternal African lineages were found predominantly in north Madagascar, with maternal Asian lineages found in central and southern Madagascar (from mtDNA analyses). Paternal lineages of Asian descent were generally much lower overall (~30% in central Madagascar), based on Y chromosome analyses. Genome-wide analyses showed "highlanders" had predominantly Asian ancestry (~65%), while coastal inhabitants had predominantly (~65%) African ancestry; these results depended greatly on the method of performing the analyses, which affected the granularity of the geographic correlates. Finally, assessing admixture patterns indicated that the genetic results are consistent with a single intermixing event (500-900 years ago) for all but one geographic area, which may have seen a first event 28 generations ago and a second one only 4 generations ago. These researchers also found evidence of at least one population bottleneck, where the number of individuals dropped to a few hundred people about 1,000-800 years ago.

These results are represented pictorially in the paper.

In view of the current political climate, the eloquent opening of the paper deserves attention:

"Ancient long-distance voyaging between continents stimulates the imagination, raises questions about the circumstances surrounding such voyages, and reminds us that globalization is not a recent phenomenon. Moreover, populations which thereby come into contact can exchange genes, goods, ideas and technologies."

* Marc Haber, Claude Doumet-Serhal, Christiana Scheib, Yali Xue, Petr Danecek, Massimo Mezzavilla, Sonia Youhanna, Rui Martiniano, Javier Prado-Martinez, Michał Szpak, Elizabeth Matisoo-Smith, Holger Schutkowski, Richard Mikulski, Pierre Zalloua, Toomas Kivisild, Chris Tyler-Smith

** Denis Pierron, Margit Heiske, Harilanto Razafindrazaka, Ignace Rakoto, Nelly Rabetokotany, Bodo Ravololomanga, Lucien M.-A. Rakotozafy, Mireille Mialy Rakotomalala, Michel Razafiarivony, Bako Rasoarifetra, Miakabola Andriamampianina Raharijesy, Lolona Razafindralambo, Ramilisonina, Fulgence Fanony, Sendra Lejamble, Olivier Thomas, Ahmed Mohamed Abdallah, Christophe Rocher, Amal Arachiche, Laure Tonaso, Veronica Pereda-Loth, Stéphanie Schiavinato, Nicolas Brucato, François-Xavier Ricaut, Pradiptajati Kusuma, Herawati Sudoyo, Shengyu Ni, Anne Boland, Jean-François Deleuze, Philippe Beaujard, Philippe Grange, Sander Adelaar, Mark Stoneking, Jean-Aimé Rakotoarisoa, Chantal Radimilahy, and Thierry Letellier


Test reveals possible treatments for disorders involving MeCP2 – Baylor College of Medicine News (press release)

The first step consisted of genetically modifying a laboratory cell line in which the researchers could monitor the levels of fluorescent MeCP2 as they inhibited molecules that might be involved in its regulation. First author Dr. Laura Lombardi, a postdoctoral researcher in the Zoghbi lab at the Howard Hughes Medical Institute, developed this cell line and then used it to systematically inhibit, one by one, the nearly 900 kinase and phosphatase genes whose activity could potentially be inhibited with drugs.

"We wanted to determine which ones of those hundreds of genes would reduce the level of MeCP2 when inhibited," Lombardi said. "If we found one whose inhibition would result in a reduction of MeCP2 levels, then we would look for a drug that we could use."

The researchers identified four genes that, when inhibited, lowered MeCP2 levels. Then, Lombardi and her colleagues moved on to the next step, testing how reduction of one or more of these genes would affect MeCP2 levels in mice. They showed that mice lacking the gene for the kinase HIPK2 or having reduced phosphatase PP2A had decreased levels of MeCP2 in the brain.
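To make the screening logic concrete, here is a toy sketch of how hits from such a knockdown screen might be flagged. It is not the authors' pipeline; the readout values, the 70 percent threshold, and the placeholder gene names are all invented for illustration:

    # Toy hit-calling for a fluorescence-based knockdown screen (illustrative only).
    # Readouts are MeCP2-reporter fluorescence relative to untreated controls;
    # every value and threshold below is made up.
    normalized_fluorescence = {
        "HIPK2": 0.55,           # kinase named in the article; value invented
        "PPP2CA": 0.62,          # a PP2A catalytic subunit; value invented
        "KINASE_X": 0.97,        # hypothetical non-hit
        "PHOSPHATASE_Y": 1.04,   # hypothetical non-hit
    }

    HIT_THRESHOLD = 0.70  # call a hit if knockdown drops MeCP2 below 70% of control

    hits = [gene for gene, level in normalized_fluorescence.items()
            if level < HIT_THRESHOLD]
    print("Candidate MeCP2 regulators:", hits)   # ['HIPK2', 'PPP2CA']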

"These results gave us the proof of principle that it is possible to go from screening in a cell line to find something that would work in the brain," Lombardi said.

Most interestingly, treating animal models of MECP2 duplication syndrome with drugs that inhibit phosphatase PP2A was sufficient to partially rescue some of the motor abnormalities in the mouse model of the disease.

"This strategy would allow us to find more regulators of MeCP2," Zoghbi said. "We cannot rely on just one. If we have several to choose from, we can select the best and safest ones to move to the clinic."

Beyond MeCP2, there are many other genes that cause a medical condition because they are either duplicated or decreased. The strategy Zoghbi and her colleagues used here also can be applied to these other conditions to try to restore the normal levels of the affected proteins and possibly reduce or eliminate the symptoms.

Other contributors to this work include Manar Zaghlula, Yehezkel Sztainberg, Steven A. Baker, Tiemo J. Klisch, Amy A. Tang and Eric J. Huang.

This project was funded by the National Institutes of Health (5R01NS057819), the Rett Syndrome Research Trust and 401K Project from MECP2 duplication syndrome families, and the Howard Hughes Medical Institute. This work also was made possible by the following Baylor College of Medicine core facilities: Cell-Based Assay Screening Service (NIH, P30 CA125123), Cytometry and Cell Sorting Core (National Institute of Allergy and Infectious Diseases, P30AI036211; National Cancer Institute P30CA125123; and National Center for Research Resources, S10RR024574), Pathway Discovery Proteomics Core, the DNA Sequencing and Gene Vector Core (Diabetes and Endocrinology Research Center, DK079638), and the mouse behavioral core of the Intellectual and Developmental Disabilities Research Center (NIH, U54 HD083092 from the National Institute of Child Health and Human Development).

The full study can be found in Science Translational Medicine.


Web Extras – LWW Journals (blog)

BY LISA COLLIER COOL

Vincent Van Gogh ranks as one of the most brilliant, and prolific, artists of all time, painting hundreds of masterpieces ablaze with vivid colors, bold brushstrokes, and swirling coronas. He also experienced seizures, hallucinations, and other symptoms throughout his short life that many historians, his own doctors, and Van Gogh himself attributed to a neurologic disease: epilepsy.

Other famous artists, including Willem de Kooning, who developed Alzheimer's disease, created masterful works of enduring genius while living with neurologic conditions. More recently, Chuck Close, an American painter and photographer, has talked about how his various neurologic conditions both enhance and limit his artistic output (bit.ly/NN-ChuckClose).

We spoke with John McNeil, a jazz trumpeter, to find out how a diagnosis of Charcot-Marie-Tooth disease in childhood influenced his career.

A trumpet player and bandleader who has performed with many of the greats of the music world and recorded more than a dozen critically acclaimed albums, John McNeil has been called "one of the best improvisers working in jazz" by Ben Ratliff, music critic for the New York Times. What makes his success particularly remarkable is that McNeil, 69, has a neurologic disorder that affects his breathing, facial muscles, and finger control, all of which are essential for his art.

Born Different

McNeil was born with Charcot-Marie-Tooth disease (CMT), an inherited condition that affects about one in 2,500 Americans. Named after the three doctors who discovered it, CMT damages peripheral nerves, disrupting signals from the brain to muscles, much like static on a phone line. Over time, this causes muscles to weaken and start to shrink, says Stephan Züchner, MD, PhD, professor of human genetics and neurology, chair of the department of human genetics, and co-director of the John P. Hussman Institute for Human Genomics at University of Miami Health System. "Often CMT symptoms begin in the feet, which have the longest nerves, while the hands and other parts of the body can be affected later in the disease."

In McNeil's case, the symptoms started in childhood. "By age 3, I had trouble with motor skills, and I was falling a lot because my feet had started to deform from the disease," he recalls. This common early symptom often causes people to develop very high arches that impair walking because of weakness in foot muscles. "By the time I was 11, my spine started to get twisted, and I had to wear braces on my legs and body," he adds.

A Sudden Inspiration

When he was 10, McNeil saw a TV show that sparked a lifelong passion. "I watched Louis Armstrong playing the trumpet on a variety show and thought, 'Man, that looks like fun!' I bugged my parents to get me a trumpet, and I'm pretty sure the only reason they agreed was that they'd been told my disease was progressing so fast I might not live past age 13 or 14. Not only did they get me a trumpet, but they also gave me a bunch of Louis Armstrong records that I used to teach myself how to play."

"CMT is rarely fatal," says Dr. Züchner. "There are a few extreme cases when patients die at an early age while other people have very mild problems that may not start until they are middle-aged. There are more than 100 subtypes of CMT, and it's very difficult to predict how an individual patient will be affected except that people typically start with a few symptoms and over time, develop more."

Remission

At first, muscle and coordination problems made playing the trumpet difficult for McNeil, but he persisted. Then at age 16, he had a dramatic health turnaround. "The disease suddenly stopped progressing. I worked out every day, and my strength exploded. Within a year, I gained nearly 50 pounds of muscle and felt great." Soon the Yreka, CA, native had more good news to trumpet. He'd become so skilled at playing his instrument that he was invited to play first chair in the Northern California All-Star Concert Band. By the time he graduated from high school, he was playing jazz trumpet professionally.

Relapse

In the 1970s, after getting a degree in music and playing professionally around the country, he moved to New York City and began working as a freelance musician. He also began playing jazz and eventually started recording albums and touring internationally with his band. Then his disease flared up. "I started stumbling, sometimes with no warning, and dropping things. I couldn't get enough air out. Once, in the middle of recording a live album, I had trouble getting air out. I played so poorly that I begged the record company not to release it."

After several years and through sheer determination, he staged a comeback, only to be hit with an even more devastating setback. "I got my band on the road and then this disease really whacked me. I lost control of my right hand and couldn't move my fingers well enough to play the trumpet." Refusing to give up, McNeil spent the next yearand more than 1,000 hours of practiceteaching himself to play left-handed, then formed a new band called Lefty.

A Clinical Trial

However, he continued to struggle with CMT symptoms and, despite daily workouts at the gym, became increasingly frail and disabled. "I was having so much trouble walking that the doctor said I needed a wheelchair. I said no and looked around for something, anything, that might help." He enrolled in a small clinical study of human growth hormone, a drug approved by the US Food and Drug Administration (FDA) for certain medical conditions, but not CMT. "Within three months, I threw my cane away," McNeil says.

He was eventually able to resume playing the trumpet right-handed, aided by custom finger braces. "When I was playing left-handed, my style and musical phrasing became more economical since I couldn't rely on music memory and was learning to play all over again. When I switched back to playing right-handed, I found I carried some of this increased clarity with me, making me a much better player," he recalls. "The improvement was amazing!"

"It's extremely unusual for someone with CMT to regain any lost function," says Dr. Zchner. "However, since there's no FDA-approved treatment for this disease, if patients find any therapy they consider helpful and it isn't causing any major side effects, then I wouldn't tell them to stop using it. Exercise, such as swimming or biking, is generally advised, not to reverse the disease, but to make the body more resilient to the loss of muscular strength." Patients with CMT should also ask their neurologists about clinical trials of new treatments, he adds. "Some very promising research programs from the Charcot-Marie-Tooth Association (cmtausa.org) are expected to lead to clinical trials in the near future."

Winning Battle

Although CMT has repeatedly interrupted McNeil's career, often for years at a time, and he continues to battle a wide range of complications, including joint problems, lung infections, and chronic shortness of breath, he's now in a band called Hush Point and performs regularly at New York City clubs with a group of much younger musicians. "Without CMT, I wouldn't be the musician I am today," he says.

"Because I've had to work so hard on my body and concentration to continue playing at a professional level, I find I've become more perceptive musically: I have to completely see, feel, and hear what each note is going to sound like before I play it. While it's a continuing battle to stay at this level, I'm determined to keep fighting this disease. Every time I go out on stage, pick up my trumpet, and start improvising, I've won."

To learn more about John McNeil and his music, go to McNeilJazz.com. To listen to a clip of McNeil playing a traditional Scottish folk song called "The Water Is Wide," by an unknown composer, click on the box below. To order the full CD, Sleep Won't Come, go to bit.ly/SleepWontCome. For interviews of artists with other neurologic conditions, go to bit.ly/NN-TheArtOfIllness.

See the original post:

Web Extras - LWW Journals (blog)

To Protect Genetic Privacy, Encrypt Your DNA – WIRED

In 2007, DNA pioneer James Watson became the first person to have his entire genome sequenced, making all of his 6 billion base pairs publicly available for research. Well, almost all of them. He left one spot blank, on the long arm of chromosome 19, where a gene called APOE lives. Certain variations in APOE increase your chances of developing Alzheimer's, and Watson wanted to keep that information private.

Except it wasn't. Researchers quickly pointed out you could predict Watson's APOE variant based on signatures in the surrounding DNA. They didn't actually do it, but database managers wasted no time in redacting another two million base pairs surrounding the APOE gene.

This is the dilemma at the heart of precision medicine: It requires people to give up some of their privacy in service of the greater scientific good. To completely eliminate the risk of outing an individual based on their DNA records, you'd have to strip it of the same identifying details that make it scientifically useful. But now, computer scientists and mathematicians are working toward an alternative solution. Instead of stripping genomic data, they're encrypting it.

Gill Bejerano leads a developmental biology lab at Stanford that investigates the genetic roots of human disease. In 2013, when he realized he needed more genomic data, his lab joined Stanford Hospital's Pediatrics Department, an arduous process that required extensive vetting and training of all his staff and equipment. This is how most institutions solve the privacy perils of data sharing. They limit who can access all the genomes in their possession to a trusted few, and only share obfuscated summary statistics more widely.

So when Bejerano found himself sitting in on a faculty talk given by Dan Boneh, head of the applied cryptography group at Stanford, he was struck with an idea. He scribbled down a mathematical formula for one of the genetic computations he uses often in his work. Afterward, he approached Boneh and showed it to him. "Could you compute these outputs without knowing the inputs?" he asked. "Sure," said Boneh.

Last week, Bejerano and Boneh published a paper in Science that did just that. Using a cryptographic genome cloaking method, the scientists were able to do things like identify responsible mutations in groups of patients with rare diseases and compare groups of patients at two medical centers to find shared mutations associated with shared symptoms, all while keeping 97 percent of each participant's unique genetic information completely hidden. They accomplished this by converting variations in each genome into a linear series of values. That allowed them to conduct any analyses they needed while only revealing genes relevant to that particular investigation.
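
The intuition of revealing only the positions a study actually needs can be sketched without any real cryptography. The toy Python example below is not the protocol from the Science paper: the genomic positions, the per-position one-time pads, and the query are all illustrative assumptions, meant only to show how a patient could keep most variants masked while opening the few that matter to an analysis.

    import secrets

    # Toy illustration only: encode each genomic position's variant as an integer
    # (e.g., 0 = reference allele, 1 = alternate allele) and mask it with a
    # per-position random pad. The positions here are hypothetical.
    variants = {"chr19:45411941": 1, "chr7:117559590": 0, "chr1:155235000": 1}

    MOD = 256  # small modulus, fine for a toy with values in {0, 1}

    pads = {pos: secrets.randbelow(MOD) for pos in variants}                 # patient keeps these
    masked = {pos: (val + pads[pos]) % MOD for pos, val in variants.items()} # shared with analyst

    # The analyst's study concerns only one position, so the patient releases
    # only that position's pad; every other position stays hidden.
    query = ["chr19:45411941"]
    released_pads = {pos: pads[pos] for pos in query}

    revealed = {pos: (masked[pos] - released_pads[pos]) % MOD for pos in query}
    print(revealed)  # {'chr19:45411941': 1}; the unqueried positions remain masked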

"Just like programs have bugs, people have bugs," says Bejerano. Finding disease-causing genetic traits is a lot like spotting flaws in computer code. You have to compare code that works to code that doesn't. But genetic data is much more sensitive, and people (rightly) worry that it might be used against them by insurers, or even stolen by hackers. If a patient held the cryptographic key to their data, they could get a valuable medical diagnosis while not exposing the rest of their genome to outside threats. "You can make rules about not discriminating on the basis of genetics, or you can provide technology where you can't discriminate against people even if you wanted to," says Bejerano. "That's a much stronger statement."

The National Institutes of Health have been working toward such a technology since reidentification researchers first began connecting the dots in anonymous genomics data. In 2010, the agency founded a national center for Integrating Data for Analysis, Anonymization and Sharing housed on the campus of UC San Diego. And since 2015, iDash has been funding annual competitions to develop privacy-preserving genomics protocols. Another promising approach iDash has supported is something called fully homomorphic encryption, which allows users to run any computation they want on totally encrypted data without losing years of computing time.

Kristen Lauter, head of cryptography research at Microsoft, focuses on this form of encryption, and her team has taken home the iDash prize two years running. Critically, the method encodes the data in such a way that scientists don't lose the flexibility to perform medically useful genetic tests. Unlike previous encryption schemes, Lauter's tool preserves the underlying mathematical structure of the data. That allows computers to do the math that delivers genetic diagnoses, for example, on totally encrypted data. Scientists get a key to decode the final results, but they never see the source.
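
As a rough intuition for computing on data you cannot read, here is a minimal Python sketch. It is not homomorphic encryption as Lauter's team builds it (real schemes rest on lattice-based cryptography); it uses simple additive masking, which happens to be homomorphic for addition, and all values and keys are made up for illustration. A server can sum the "ciphertexts" without learning any input, and only the holder of the combined key can decode the total.

    import secrets

    MOD = 2**61 - 1  # toy modulus

    def encrypt(value, key):
        # "Ciphertext" is the value additively masked mod MOD.
        return (value + key) % MOD

    def decrypt(ciphertext, key):
        return (ciphertext - key) % MOD

    # Hypothetical per-patient measurements (e.g., variant counts in a gene region).
    plaintexts = [3, 0, 7, 2]
    keys = [secrets.randbelow(MOD) for _ in plaintexts]
    ciphertexts = [encrypt(m, k) for m, k in zip(plaintexts, keys)]

    # An untrusted server can sum the ciphertexts without learning any value...
    encrypted_sum = sum(ciphertexts) % MOD

    # ...and only the key holder, who knows the combined key, can decode the result.
    combined_key = sum(keys) % MOD
    print(decrypt(encrypted_sum, combined_key))  # 12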

This is extra important as more and more genetic data moves off local servers and into the cloud. The NIH lets users download human genomic data from its repositories, and in 2014, the agency started letting people store and analyze that data in private or commercial cloud environments. But under NIH's policy, it's the scientists using the data, not the cloud service provider, who are responsible for ensuring its security. Cloud providers can get hacked, or subpoenaed by law enforcement, something researchers have no control over. That is, unless there's viable encryption for data stored in the cloud.

"If we don't think about it now, in five to 10 years a lot of people's genomic information will be used in ways they did not intend," says Lauter. But encryption is a funny technology to work with, she says. One that requires building trust between researchers and consumers. "You can propose any crazy encryption you want and say it's secure. Why should anyone believe you?"

That's where federal review comes in. In July, Lauter's group, along with researchers from IBM and academic institutions around the world, launched a process to standardize homomorphic encryption protocols. The National Institute of Standards and Technology will now begin reviewing draft standards and collecting public comments. If all goes well, genomics researchers and privacy advocates might finally have something they can agree on.

See the rest here:

To Protect Genetic Privacy, Encrypt Your DNA - WIRED

Memes, memes everywhere | SunStar – Sun.Star

MEMES: An ongoing social phenomenon. These often come in the form of funny pictures and texts combined, creating jokes that are passed on across cultures throughout the world wide web.

One cannot possibly open social media or at the very least use the internet without coming across memes. For baby boomers (the generation born before the internet began), these things are mere silly distractions that take up most of generation Y's time. However, the truth is, there's more to it than meets the eye.

To address this misunderstanding between two different generations, Tropical Futures Institute (TFI) held a one-night-only, open-sourced exhibit of memes entitled "The Meme Show" last Aug. 18. TFI is a loose group of like-minded individuals, an arm of 856 G Gallery that focuses on neo-centric community shows centered on bringing people together, as emphasized by Anne Amores, assistant gallerist of 856 G Gallery.

"Anyone can join. It's a celebration of the meme culture and we're trying to elevate memes into an art form, which it arguably is," said Zach Aldave, meme enthusiast and a member of TFI.

"Memes relate to the Dada movement. Dada began as a reaction to the limitation of art. Dada started like that; it's anti-art art. We can relate that to memes, which are satirical social commentaries," he continued. "It's a super-mutated form of satire," added Anne.

The interrelation of cultures was once brought about by intercontinental travel and interracial marriages. Back in the day, globally educating oneself was expensive and required physically exposing oneself to another culture; in the present generation, this happens in a different way, one that is more accessible and easier.

"If you look at the meme and you strip all the unnecessary sh*t, all the irony and all the humor, it boils down to being just a pure form of social commentary," said Zach.

Memes are cultural symbols or social ideas in the form of jokes, transmitted virally through wires without one needing to leave the house. So despite the fact that one is just staring into a computer screen reading memes, one is actually being educated about the varying cultures from the different corners of the Earth.

As a form of art, memes are also forms of expression. Some memes exhibit dark humor that reflects the sector it comes from, and with which a lot of people surprisingly empathize.

"Some memes are also sort of expressing deeply seated feelings like depression. What's good about memes is that these are like an outlet for a lot of people who are struggling. Usually they're cloaked in irony or humor, and they empathize with each other through memes," said Anne.

Unknown to many, memes can be traced back in history. They are being brought to light as a science through a field of study called memetics, begun by evolutionary biologist Richard Dawkins. In this study, memes are understood to be cultural genes, carrying cultural information from one person to another, with human beings as the vehicles of their transmission.

Original post:

Memes, memes everywhere | SunStar - Sun.Star

The matter with memes – The GUIDON

Features

by Mikaela T. Bona and Joma M. Roble Published 20 August, 2017 at 1:01 AM from the April 2017 print issue

A meme is both the picture that is worth a thousand words and the few words that can make a thousand pictures, or not.

Like hungry brigands waiting by the side of a busy trade route, memes ambush and bombard many of us in our own journeys across the Internet, particularly when we travel by social media. They can strike our newsfeeds unexpectedly and boldly. However, unlike bandits out for bounty, Internet memes are seemingly a much more pleasant sight to encounter.

In her 2008 TED talk, memeticist Susan Blackmore explained that memes are bits of information that replicate themselves from person to person through imitation. Memeticists study memetics, a field which explores how ideas propagate among people. Blackmore then continued to say that we human beings have created a new meme: what she calls the technological meme, or the teme for short, which is a meme disseminated via technology. The teme is what is commonly known to be the meme with a comical picture and text shared on social networking sites like Facebook or Twitter.

This merry friend of ours still has much to share with us. As it acts as a mirror that can reflect our joys and sorrows in an instant, memes have also become a mouthpiece of a generation in constant flux.

To define it is to kill it

Ethologist and evolutionary biologist Richard Dawkins was the first to coin the term "meme," in his bestselling book The Selfish Gene. Deriving it from the Greek mimeme and the French même, which mean "imitated thing" and "memory," respectively, he defines the traditional meme as a living structure that transfers from brain to brain in the process of imitation.

According to Dawkins, memes could be "tunes, ideas, catch-phrases, clothes, fashions, ways of making pots, or [even ways] of building the arches." He states further that memes and genes are both meant to sustain as well as change humans, but while genes exist for biological evolution, memes, on the other hand, are replicators that allow for cultural transmission throughout generations.

Interestingly, Dawkins did not lay down specificities as to why memes proliferated. Internet memes are steadily reproduced for an unknown, and possibly nonexistent, reason. As The Atlantic writer Venkatesh Rao puts it, the Internet meme is a meme in the original sense intended by Richard Dawkins: "a cultural signifier that spreads simply because it is good at spreading." It pertains to something that is necessarily vague for it to be universally understood.

While a picture is often described as speaking a thousand words, the meme goes beyond interrelated ideas and events. A photo of a smirking man with his right index finger pointing at the right side of his forehead, for instance, would mean he's thinking of something clever. What that thing is, though, is uncannily up to all of us, making us not just observers, but active participants in the meme experience.

"When you speak of memes, you just feel that it's a meme. It takes its own being of being a meme in your mind and it can become as weird or not weird as your imagination wants. It's just what it is for you," shares Vince Nieva, of the meme page Ageless Ateneo Memes, in his talk for Areté 2017: Hayo held last April 5.

The ambiguous quality of Internet memes has been subject to research since 2011. This ambiguity is what paves the way for the designation of new meanings and creates a sense of flexibility. With every user able to add a new twist or plot to the meme, it becomes more amorphous and far-reaching, connecting seemingly disparate ideas into relational entities.

A language of its own

Rao believes that memes are an effect of the post-everything world we live in. He explains the complex intertwinement of ideas in our fast-paced world by emphasizing that there is a distinction between the Harambe meme and the actual slain zoo gorilla. This is an age wherein stories are captured while they are still unfolding.

Media technology is moving faster than humans can process it, which can warp and stunt emotional reactions to current news. The shock caused by the 2016 American election results led to the creation of many Donald Trump memes pre- and post-election, which have since been correlated with other memes. In a world freer than ever before, we are both repressed by our technological creations and freed by them.

The universality of meme sharing on social media platforms has made it difficult to continue a single train of thought. In his contribution to the book The Social Media Reader, Patrick Davison states that "viewing and linking...is part of the meme, as is saving and reposting." Ironically, the ability of anyone to take part in the dialogue, by a multitude of means through memes, has orchestrated cacophonies. However, genuine relationships can still be formed in the ruckus.

"Memes can prove to be a global inside joke amongst ourselves. They can be a way for us to make [some] sense [out] of confusing events and perhaps even cope with personal lost-ness. Memes are a way to get people to connect," says Alfred Marasigan, an Ateneo Fine Arts lecturer, during his talk in Areté 2017: Hayo.

The practice of meme creation draws up a vague sense of community among those who partake in meme sharing; this creates a mutual understanding of what the meme is, and principally, what it can be. People partake in the definition production that sustains the meme vogue for as long as possible until a new one comes along to dominate the cyber sphere, while the former eventually dies out.

As old memes die, strong emotions from people who share the same experience come together to form a new meme. Interestingly, it has also been a medium for cultural and socio-political critique. According to Know Your Meme, which tracks the origin of memes, the Evil Kermit meme is an image of Kermit and his nemesis Constantine, who is dressed like a Star Wars Sith lord and instructs Kermit to perform various indulgent, lazy, selfish and unethical acts.

The meme has been used to point out religion's underlying crusade tendencies and even question meme culture itself. Other examples include the nut button, which evolved from having sexual implications to anything that can trigger one to act strongly, Arthur's Fist, a reaction to situations that are frustrating or infuriating, and many more.

Show and tell

In the technologically-forward society we live in, the way culture is transferred from person to person is changing. Internet memes have revolutionized communication through their nature of transmuting meaning as they spread. As expressions of our alienation from what our traditional memes can normally keep up with, it is vital to note that we are satirizing something that we cannot fully understand. The world is perpetually moving, and memes are constantly angled towards a multitude of narratives.

"Memes are like junk food," says Andrew Ty, a lecturer at the Ateneo Department of Communication. "Their gratification is immediate and not long-lasting and you end up waiting for the next one very quickly. In the end, [memes] are just one part of this overall tendency nowadays towards viral communication."

A study conducted at the University of Bonn in Germany provided mathematical models to explain the temporality of memes. Internet memes are just fads, but they are ones that persist by coming back with the same vague appeal and rhetoric, albeit in different forms. Their vogue is infectious to the generation as of now. Soon, however, they'll be images of the past.

It may seem hard to see memes as something akin to Edo Japan's The Floating World of Ukiyo-e, or even Victorian-era post-mortem photographs, but they might just be one of our era's most distinguishing and awe-striking depictions. After all, the meme is representative of a world moving faster than we can understand. As its uncanniness pulls us in, it is likely that memes will one day be an iconic portrayal of our generation.

Read the rest here:

The matter with memes - The GUIDON

The Evolution Of My Dividend Growth Portfolio – Seeking Alpha

When I write articles about buying shares of companies like Alibaba (BABA) and Tencent (OTCPK:TCEHY), yet call myself a dividend growth investor, I'm sure that people begin to wonder what in the world is going on. I wouldn't blame them. Those singular pieces focused on high-growth stocks don't do my focus on income justice. I've had several people reach out to me, asking about my current holdings. It's been a while since I've done a portfolio review piece, so I decided to spend some time putting this together so that followers, old and new, can stay up to date on my portfolio's construction.

We live in a different world now than investors did in the middle of the last century. Many of the markets that gave tremendous returns throughout much of the 20th century are very mature at this point, and therefore I don't expect a lot more growth coming out of them. I'm not saying that a company like Coca-Cola (KO) is going away. I expect continued large cash flows and capital returns, but I'm not satisfied with 100% exposure to slow-growth industries. I bet that KO will continue to post low-single-digit top-line growth on average over the long term as it adds brands and takes market share. That is all a very mature company needs to do: single-digit sales growth, combined with respectable margins and a sustainable share buyback funded by excess cash flows, is enough to produce respectable bottom-line growth. Because these companies are typically valued on a price-to-earnings ratio, this bodes well for their stock prices over the long term. With that being said, I don't expect many of the current dividend aristocrats to generate wealth for their investors over the next 50 years as they did during the last 50, and I've been willing to place riskier bets to attempt to capitalize on truly wealth-building opportunities elsewhere.

I don't think that any of this could come across as a controversial or revolutionary statement. As markets mature, growth slows, and expectations of future returns should change. When seeking that same sort of generational growth that early investors in the dividend aristocrats of today experienced, I'm looking at new growth frontiers. I'm looking for developing markets in terms of both sectors/industries and economies. In other words, when it comes to growth, I'm looking at things like software, and not soft drinks.

But before we get to the growth portion of my portfolio, let's take a look at the more conservative dividend growth portion, which makes up the vast majority of my holdings. I end up writing about my more speculative bets much more often than my conservative holdings, but that's sort of the point, isn't it? When I buy shares of a dividend aristocrat, I hope that I never have to write about them again. I don't want these companies in the news. I'm quite happy to sit back and watch as they slowly and steadily grow. I'm happy to watch their dividends compound via re-investment in an un-noteworthy fashion and track my monthly income, which is surely trending in the right direction. For the most part, I hope that my dividend growth holdings are boring. That would mean that they're meeting my expectations and goals.

My dividend growth investments make up nearly 75% of my portfolio. When you consider the fact that ~8.5% of my portfolio is currently in cash, this majority appears even larger. My speculative bets basket amounts to 16.7% of my portfolio, though 5 of the 11 companies that currently comprise that basket pay a growing dividend and I imagine that in a decade or so, their yields will be high enough for me to move them up into the main dividend growth category. So without further ado, here are the graphs I put together to break down my holdings.

DGI Holdings:

Speculative Growth Basket:

As you can see, I hold more holdings than many of the other DGI portfolios that are regularly tracked here on Seeking Alpha. I think only RoseNose owns more individual names. This highly diversified strategy may not be best for other investors; I'm essentially a full-time investor at this point, and I have the time/energy available to track a portfolio with 75 holdings. I know Jim Cramer says that retail investors shouldn't hold more than a dozen or so stocks because of the time it takes to properly track them. I don't think there is any one magic number in terms of the right amount of holdings. I imagine it comes down to investable capital, risk tolerance, and the aforementioned time, energy, and passion for the markets. I look at a lot of professionally managed funds/sovereign wealth funds, and these portfolios are typically highly diversified. While I ultimately make investment decisions based upon my own personal goals, I like to see what the big boys and girls are doing as I strive to become a better investor. I imagine that my portfolio will continue to grow to the point where it's 100 holdings or so and plateau there due to the fact that, for me at least, money doesn't grow on trees.

I understand that my industry/sector allocations are different from many of the other DGI portfolios that you'll see here on Seeking Alpha. Technology, not consumer staples, utilities, or real estate, is my largest sector allocation at more than 26% of my overall portfolio. Up next are consumer cyclical and healthcare, coming in at ~16.5% and ~15%, respectively. The rest of my major sectors/industries are currently weighted fairly equally in the 7-10% range. None of these sector allocations are set in stone. Over time I buy value where I see it (healthcare throughout 2016, for example), and I imagine these weightings will change as market sentiment ebbs and flows. When I take a step back and look at my portfolio through a wider lens, I'm happy with where they all currently sit.

Maybe the most glaring difference between my portfolio and others is the fact that I have basically zero exposure to energy and utilities. I've been a bear with regard to the energy space for some time now, having divested all of my oil/gas-related names in 2015/early 2016. I'd love to add exposure to the utilities back into my portfolio, but for the time being, I believe they're irrationally overpriced in this low-rate environment. These high valuations combined with the fact that utilities typically don't offer the dividend growth that I'd like to see have caused me to avoid the sector, in general.

77% of my holdings are of the large/mega cap variety. 6% are mid-caps and less than 1% are small caps. Generally, because of my focus on shareholder returns, reliable earnings, strong balance sheets and large cash reserves, I'm attracted to large-cap companies. Even when I look at growth names I find myself attracted to large-/mega-cap names because of my focus on best in breed names. The cream typically rises to the top in the markets, and once a growth company becomes profitable (something that I usually wait for before investing), their market caps are relatively large. I'm OK with this. I've seen amazing stories of investors who created generational wealth with relatively small investments in early stage companies that turned into the industry leaders that we see today, but for every one of these home runs I'm sure there were numerous strike outs, and sticking to the baseball metaphor, I'm content to bat for average rather than power.

Nearly 92.5% of my overall portfolio is comprised of companies domiciled in North America. I don't mind being so overweight with North American (primarily American) companies because many of them are multinationals and I'm getting exposure to foreign and emerging markets through their sales anyway. I have taken steps recently to reduce this vastly overweight exposure, hoping to become a bit more diversified internationally. European companies currently make up about ~5% of my portfolio and I'd probably like to see that figure rise to the 10% range. Asia makes up the last ~2.5% of my portfolio; as time moves forward, I'd like to see this figure rise as well, probably to the 5-10% range. Right now I'm seeing better value in Europe and Asia than I am in the U.S. markets, generally. I hope to take advantage of these value gaps as the world plays catch up to the American markets.

But here's the most important graph that I'll be including in this piece: my monthly dividend income totals. I'm quite pleased with the progress that I've made in this regard and I feel confident that I'm well on my way to financial freedom because of this passive income. Every few months it seems that I cross a new monthly income threshold with potentially impactful meaning to my life. I remember a few years ago when I was excited to know that my dividends could cover my utility bills if I needed them to. Now my utilities and both of our car payments could fall under the dividend income umbrella if I decided to spend the cash rather than re-invest it. I haven't had a month yet where my dividend income could have potentially covered my mortgage payment, but I expect to achieve that goal within the next year or so. Tracking dividend income, rather than the overall value of my portfolio, gives me an anchor to hold on to during market volatility. This is one of the reasons why I've become so attracted to the DGI portfolio management strategy.

Sure, if I was to eliminate my speculative growth basket and put those funds into a handful of stocks yielding 3%, my monthly income would be even higher. But since I'm still in the accumulation phase, I like having that growth exposure and the opportunity to generate outsized returns over the long-term. Although I like to focus on my income stream, I still track the major market indexes and attempt to beat them on an annual basis. The competition aspect of the stock market is a large part of what I love so much about it. Having exposure to companies like Facebook and Amazon and Alibaba and Tencent as a small piece of my portfolio gives me what I believe to be the best of both worlds: steady, reliable income and the potential to make large jumps up the social ladder due to the massive potential of a sub-set of my portfolio.
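
As a rough illustration of the trade-off being described, here is a minimal Python sketch of the income given up by keeping the speculative basket. The total portfolio value and the 3% reallocation yield are hypothetical assumptions for the example, not figures from the article; only the 16.7% basket weight comes from the text.

    # Hypothetical figures for illustration only.
    portfolio_value = 100_000      # assumed total portfolio value in dollars
    speculative_weight = 0.167     # speculative basket share cited in the article
    dividend_yield = 0.03          # assumed yield if that basket were reallocated

    extra_annual_income = portfolio_value * speculative_weight * dividend_yield
    extra_monthly_income = extra_annual_income / 12

    print(f"Extra annual income:  ${extra_annual_income:,.2f}")   # $501.00
    print(f"Extra monthly income: ${extra_monthly_income:,.2f}")  # $41.75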

All of this just goes to show that there are many ways to skin the cat in terms of a dividend growth portfolio. I don't think it matters much what one's sector/industry allocation weights look like so long as they stay true to standard value investing principles with an extra focus on shareholder return related metrics. I look forward to hearing what everyone has to say about the portfolio that I've constructed over the last 5 years or so. I look forward to the continued journey moving forward. Until next time, best wishes all!

Disclosure: I am/we are long AAPL, DIS, T, BA, AMGN, ABBV, BMY, MDT, MRK, PFE, NVO, JNJ, AMZN, FB, GOOGL, NVDA, MA, V, EXPE, REGN, CELG, BABA, TCEHY, KR, MKC, SJM, KO, PEP, MMM, MO, HASI, NNN, STOR, VER, VTR, SBRA, OHI, CMCSA, MSFT, DLR, AVGO, NKE, QCOM, CSCO, UPS, WHR, FDX, NSRGY, C, MS, GS, BAC, JPM, TRV, BRK.B, GILD, HON, UNP, BX, BLK, UL, XLF, XLK, EZU, IEUR, DEO, BUD, VZ, KMB, IBM.

I wrote this article myself, and it expresses my own opinions. I am not receiving compensation for it (other than from Seeking Alpha). I have no business relationship with any company whose stock is mentioned in this article.

Additional disclosure: Just in case I missed a long in the disclosure form, I am long every stock mentioned in the graphs posted in this article.

Editor's Note: This article discusses one or more securities that do not trade on a major U.S. exchange. Please be aware of the risks associated with these stocks.

Original post:

The Evolution Of My Dividend Growth Portfolio - Seeking Alpha

Interspecies Hybrids Play a Vital Role in Evolution – Quanta Magazine

Controversies like this one underscore the possibility that the bad reputation of naturally occurring hybrids is not entirely justified. Historically, hybrids have often been associated with the sterile or unfit offspring of maladaptive crossings (such as the mule, born of a female horse and a male donkey). Naturalists have traditionally viewed hybridization in the wild as a kind of irrelevant, mostly rare, dead-end fluke. If hybrids aren't viable or fertile or common, how could they have much influence on evolution? But as genomic studies provide new insights into how species evolve, biologists are now seeing that, surprisingly often, hybrids play a vital role in fortifying species and helping them take on useful genes from close relatives.

In short, maladaptive pairings don't tell the full story of interbreeding. The genetic transfer that takes place between organisms while their lineages are diverging has a hand in the emergence of adaptive traits and in the creation of new species altogether. According to Arnold, not only is it common for newly emerging species to reacquire genes through hybrid populations, but it's probably the most common way evolution proceeds, whether you're talking about viruses, plants, bacteria or animals.

Most recently, signatures of hybridization have turned up in studies on the evolution of the jaguar. In a paper published last month in Science Advances, a team of researchers from institutions spanning seven countries examined the genomes of the five members of the Panthera genus, often called the big cats: lions, leopards, tigers, jaguars and snow leopards. The scientists sequenced the genomes of the jaguar and leopard for the first time and compared them with the already existing genomes for the other three species, finding more than 13,000 genes that were shared across all five. This information helped them construct a phylogenetic tree (in essence, a family tree for species) to describe how the different animals diverged from a common ancestor approximately 4.6 million years ago.

Some of these adaptations, however, may not have originated in the jaguar lineage at all. Eizirik's team found evidence of many crossings between the different Panthera species. In one case, two genes found in the jaguar pointed to a past hybridization with the lion, which would have occurred after their phylogenetic paths had forked. Both genes turned out to be involved in optic nerve formation; Eizirik speculated that the genes encoded an improvement in vision the jaguars needed or could exploit. For whatever reasons, natural selection favored the lion's genes, which took the place of those the jaguar originally had for that trait.

Such hybridization illustrates why the Eizirik group's delineation of the Panthera evolutionary tree is so noteworthy. "The bottom line is that this has all become more complex," Eizirik said. "Species eventually do become separated, but it's not as immediate as people would frequently say." He added, "The genomes we studied reflected this mosaic of histories."

Although supporting data as detailed and as thoroughly analyzed as Eizirik's is rare, the underlying idea that hybridization contributes to species development is by no means new. Biologists have known since the 1930s that hybridization occurs frequently in plants (it's documented in about 25 percent of flowering plant species in the U.K. alone) and plays an important role in their evolution. In fact, it was a pair of botanists who, in 1938, coined the phrase "introgressive hybridization," or introgression, to describe the pattern of hybridization and gene flow they saw in their studies. Imagine members of two species, let's call them A and B, that cross to produce 50-50 hybrid offspring with equal shares of genes from each parent. Then picture those hybrids crossing back to breed with members of species A, and assume that their offspring do the same. Many generations later, nature is left with organisms from species A whose genomes have retained a few genes from species B. Studies have demonstrated that this process could yield entirely new plant species as well.
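
The backcrossing arithmetic in that thought experiment is easy to make concrete: each generation of crossing back to species A halves the expected share of the genome inherited from species B. The short Python sketch below simply prints that expectation; the number of generations is arbitrary, and drift, selection, and linkage are deliberately ignored.

    # Expected fraction of species-B ancestry under repeated backcrossing to species A.
    b_fraction = 0.5  # F1 hybrid: half its genome comes from species B
    print(f"F1 hybrid: ~{b_fraction * 100:.1f}% species-B ancestry")
    for generation in range(1, 7):
        b_fraction /= 2  # each backcross to species A halves the expected B share
        print(f"backcross generation {generation}: ~{b_fraction * 100:.2f}% species-B ancestry")
    # After six backcross generations the expected B share is below 1 percent,
    # yet a few introgressed genes can persist if selection favors them.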

But animal species seemed more discrete, at least for a while. Most zoologists supported the biological species concept proposed in 1942 by the legendary biologist Ernst Mayr, who was one of the architects of the modern synthesis, the version of evolution theory that combined Darwin's natural selection with the science of genetics. Mayr's biological species concept was based on reproductive isolation: A species was defined as a population that could not or did not breed with other populations. Even when exceptions to that rule started to emerge in the 1970s, many biologists considered hybridization to be too rare to be important in animals. "We had a blinkered attitude," said James Mallet, an evolutionary biologist at Harvard University. Today, he added, saying that such hybridizations don't affect reconstructions of evolutionary history, or that this wasn't useful in adaptive evolution: that's no longer tenable.

This is especially true now that computational and genomic tools prove just how prolific introgression is even in our own species. Since 2009, studies have revealed that approximately 50,000 to 60,000 years ago, some modern humans spreading out of Africa interbred with Neanderthals; they later did so with another ancestral human group, the Denisovans, as well. The children in both cases went on to mate with other modern humans, passing the genes they acquired down to us. At present, researchers estimate that some populations have inherited 1 to 2 percent of their DNA from Neanderthals, and up to 6 percent of it from Denisovans, fractions that amount to hundreds of genes.

In 2012, Mallet and his colleagues showed a large amount of gene flow between two hybridizing species of Heliconius butterfly. The following year, they determined that approximately 40 percent of the genes in one species had come from the other. Mallet's team is now working with another pair of butterfly species that exchange even more of their genes: something like 98 percent, he said. Only the remaining 2 percent of the genome carries the information that separates the species and reflects their true evolutionary trajectory. A similar blurring of species lines has already been found in malaria-carrying mosquitoes of the Anopheles genus.

Other types of organisms, from fish and birds to wolves and sheep, experience their share of introgression, too. "The boundaries between species are now known to be less rigid than previously thought," said Peter Grant, an evolutionary biologist at Princeton University who, along with his fellow Princeton biologist (and wife) Rosemary Grant, has been studying the evolution of Galápagos finches for decades. "Phylogenetic reconstructions depict treelike patterns as if there is a clear barrier between species that arises instantaneously and is never breached. This may be misleading."

Arnold concurred. "It's a web of life," he said, rather than a simple bifurcating tree of life. That also means it's more necessary than ever before to examine the entire genome, and not just selected genes, to understand a species' evolutionary relationships and generate the correct phylogeny. And even that might not be enough. "It may well be," Mallet said, "that some actual evolutionary patterns are still completely irrecoverable."

Genomic studies can't create a complete picture of the introgressive movements of genes. Whenever one species inherits genes from another, the outcome can be either deleterious, neutral or adaptive. Natural selection tends to weed out the first, although some of the genes we have inherited from Neanderthals, for example, may be involved in disorders such as diabetes, obesity or depression. Neutral introgressed regions drift, so it's possible for them to remain in the genome for very long periods of time without having an observable effect.

But it's the beneficial introgressions that particularly fascinate researchers. Take the Neanderthal and Denisovan DNA again: Those genes have allowed people to adapt to the harsh environs of places like the Tibetan plateau, protecting them against the harmful effects of high altitudes and low oxygen saturation, which in nonlocals can cause stroke, miscarriage and other health risks. Variants from interbreeding with archaic humans have also conferred immunity to certain infections and made skin and hair pigmentation more suitable for Eurasian climes.

Mallet's butterflies, too, reflect evidence of adaptive hybridization, particularly with traits involved in mimicry and predator avoidance. Researchers had observed that although most Heliconius species had highly divergent wing coloration and patterning, some bore a striking resemblance to one another. The researchers believed that these species had independently converged on these traits, but it turns out that's only partially correct. Mallet and others have found that introgression was also responsible. The same goes for Galápagos finches: Pieces of their genomes that control for features including beak size and shape were shared through hybridization. Once again, parallel evolution can't explain everything.

For these effects to occur, the rate of hybridization can be and most likely is very small. For Mallet's almost entirely hybridized butterflies, the occasional trickle of one hybrid mating every 1,000 normal matings "is sufficient to completely homogenize genes between the species," he said. "That's pretty exciting."
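
A minimal deterministic sketch in Python shows why even that tiny trickle is eventually enough, under simplifying assumptions that do not come from the article: a single neutral allele, symmetric gene flow at the quoted rate of one in a thousand, and no drift or selection.

    # Two populations start fully differentiated at a neutral locus.
    # Each generation, a fraction m of matings involve a partner from the other side.
    m = 0.001          # one hybrid mating per 1,000 matings
    p1, p2 = 1.0, 0.0  # allele frequencies in the two populations

    generations = 0
    while abs(p1 - p2) > 0.01:  # run until the populations are ~homogenized
        p1, p2 = (1 - m) * p1 + m * p2, (1 - m) * p2 + m * p1
        generations += 1

    print(f"Frequency difference falls below 1% after ~{generations} generations")
    # ~2,300 generations with these parameters: slow, but inexorable without
    # selection pushing the populations apart.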

As these patterns of introgression have become more and more predominant in the scientific literature, researchers have set out to uncover their evolutionary consequences. These go beyond the fact that speciation tends to be a much more gradual process than it's often made out to be. "Diversification, adaptation and adaptive evolution really do seem to be driven quite often by genes moving around," Arnold said.

The research done by Eizirik and his team makes a compelling case for this. Around the time when the gene introgressions they analyzed occurred, the populations of all five Panthera species are estimated to have declined, likely due to climate changes. The smaller a population is, the greater the probability that a harmful mutation will become fixed in its genome. Perhaps the gene flow found between the different species, then, rescued them from extinction, providing adaptive mutations and patching deleterious ones. "This kind of infusion of genetic mutations is so large that it can cause really rapid evolution," Arnold said.

And the process doesn't end with speeding up evolution in a single species. Adaptive introgression can in turn contribute significantly to adaptive radiation, a process by which one species rapidly diversifies into a large variety of types, which then form new lineages that continue to adapt independently. The textbook case can be found in the great lakes of East Africa, which are home to hundreds upon hundreds of cichlid species, a type of fish that diversified in explosive bursts (on the evolutionary timescale) from common ancestors, largely in response to climatic and tectonic shifts in their environment. Today, cichlids vary widely in form, behavior and ecology thanks in large part to introgressive hybridization.

Biologists will need many more years to understand the full importance of hybridization to evolution. For example, Arnold wants to see further investigations like the ones that have been done on the finches in the Galápagos and the wolves of Yellowstone National Park: behavioral, metabolic and other analyses that will reveal how much of introgression is adaptive and how much is deleterious or neutral, as well as whether adaptive introgression affects only particular kinds of genes, or if it acts in a more widespread manner.

Unfortunately, for conservationists and others challenged with managing the diversity of imperiled species, the absence of satisfactory answers poses more immediate problems. They must often weigh the value of protecting wild hybrid populations against the harm hybrids can do to established species, including the ones from which they emerged.

A case in point: In the 1950s, a pair of California bait dealers from the Salinas Valley, seeking to expand their business, hopped into a pickup truck and took off to central Texas and New Mexico. They brought back barred tiger salamanders, which could grow to more than double the size of California's native tiger salamander. The new species quickly proved to be good for the local fishermen but bad for the local ecosystem: The introduced salamanders mated with the natives, creating a hybrid breed that could outcompete its parent species. Soon the California tiger salamander found itself in danger of being wiped out entirely, and it remains a threatened species today.

Follow this link:

Interspecies Hybrids Play a Vital Role in Evolution - Quanta Magazine

Gone Hunting: Shotgun shells have undergone an evolution for a resolution on lead shot issue – Greeley Tribune

I am a staunch believer in the Book of Genesis and what it teaches us about how we arrived at where we are today.

However, when it comes to shotgun ammunition, evolution is the key to successful hunting.

Thirty years ago, in 1987, the Federal government began phasing in its ban on toxic lead shot for waterfowl/migratory bird hunting. This ban spread nationwide in 1991.

The reasoning behind this ban was that crippled birds flew off, died and were then ingested by birds of prey such as our national symbol, the Bald Eagle. There was evidence to support this theory, with several instances of birds of prey found dead or dying from lead poisoning.

Waterfowl hunters were sent scrambling for alternative ammunition. Even upland (pheasant/quail) hunters needed options if they were hunting on federal waterfowl production areas and national wildlife refuges.

The initial and most often used alternative to lead shot was steel shot. The results were not good. Unprepared for this new law, ammo manufacturers simply substituted steel for lead without changing much of anything else in the shell.

Steel shot is not nearly as heavy as lead shot and does not pack the same wallop, or shock, when it contacts the target. Steel shot also patterns more tightly, which reduces the "kill zone." Hunters crippled more birds but didn't kill them.

I can vividly remember hunting geese with my brother Jack in the cornfields north of Greeley back in the late '80s. The first morning, a flock of Canadian honkers was locked up, feet down and settling into our decoys. We emptied our shotguns on them.

It literally rained feathers on us as we watched that flock hurry into the sky and safety. Not one pellet penetrated enough to be lethal.

Ammo manufacturers tried alternative shot such as bismuth and tungsten, which were comparable to lead in weight and shocking power but not in price.

Manufacturers began to concentrate on making a better steel-shot shotgun shell. Evolution, trial and error, and test markets did their work, and finally we have a better product.

It began with the guts of the shotshell. The wad that cradles the tiny pellets was re-tooled. It became a bit shorter to accommodate more pellets.

The primers that ignite the powder were redesigned to burn slower and reduce chamber pressure. The steel shot remained spherical but some manufacturers experimented with different shapes of the tiny BB's. I likened this to the dimples on a golf ball. Ball manufacturers claim their dimple pattern is the best for straight flight or longer flight. The same claims were made by the shotshell makers. The results of this evolutionary period are shotgun shells that contain steel pellets that perform virtually as well as lead ammo.

My favorite lead ammo continues to be a Federal shotshell that contains 1 oz. of no. 4 lead pellets pushed by 3 drams of gunpowder at 1330 feet per second. I prefer this load for upland hunting because it has been my most consistently lethal load at all ranges and in any wind and weather conditions.

Federal, Remington, Fiocchi all make loads similar to what I have just described.

I know hunters that carry nothing but steel even when hunting upland/non-migratory birds just to avoid having to switch loads in the field.

A good example of an effective modern steel load is Federal's Prairie Storm Steel. It comes in a 3-inch shell (requiring at least a 3-inch chamber in your shotgun) and launches number 3 or 4 steel shot at 1600 feet per second. Sixteen hundred feet per second is fast and should deliver enough wallop out to 40 yards, in the killing zone.

There are also two shapes of pellets or BB's in the Prairie Storm shell. About half of the 170 pellets are spherical while the remaining pellets are spherical with a band (called Flitestoppers). They resemble the planet Saturn and help deliver a lethal punch.
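
A back-of-the-envelope kinetic energy comparison helps show why a faster steel pellet can roughly match the older lead load at the muzzle. The pellet counts per ounce below are approximate industry figures (roughly 135 lead No. 4 pellets and about 190 steel No. 4 pellets per ounce), not numbers from this column, so treat the Python sketch as illustrative only.

    OUNCE_G = 28.35      # grams per ounce
    FPS_TO_MS = 0.3048   # feet per second to meters per second

    def pellet_energy_joules(pellets_per_ounce, velocity_fps):
        mass_kg = (OUNCE_G / pellets_per_ounce) / 1000.0
        velocity_ms = velocity_fps * FPS_TO_MS
        return 0.5 * mass_kg * velocity_ms ** 2

    # Approximate pellet counts per ounce (assumed, not from the article).
    lead_no4 = pellet_energy_joules(pellets_per_ounce=135, velocity_fps=1330)
    steel_no4 = pellet_energy_joules(pellets_per_ounce=190, velocity_fps=1600)

    print(f"Lead No. 4 at 1330 fps:  ~{lead_no4:.1f} J per pellet at the muzzle")
    print(f"Steel No. 4 at 1600 fps: ~{steel_no4:.1f} J per pellet at the muzzle")
    # The lighter steel pellet sheds velocity faster downrange, but the higher
    # launch speed puts its muzzle energy in the same ballpark as lead.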

I don't hunt waterfowl much any more. I don't like to kill something that I don't like to eat. However, I do carry a box of steel shot along with me in my F-150 just in case I get the urge.

When you stop to think about it, steel shotgun shell evolution had to have a genesis, too.

Jim Vanek is a longtime hunter who lived in Greeley for many years. He can be reached at kimosabe14@msn.com.

View post:

Gone Hunting: Shotgun shells have undergone an evolution for a resolution on lead shot issue - Greeley Tribune

Justin Chon on YouTube’s evolution – Olean Times Herald

Justin Chon may have made it in Hollywood through a key role in the "Twilight" franchise, but he appreciates the "renegade" approach of YouTube. (Aug. 24)

Get fast, accurate coverage of every arts and entertainment story making headlines worldwide, from festivals and premieres to births, deaths, scandals and arrests, plus celebrity reactions to news events.

The Associated Press is the essential global news network, delivering fast, unbiased news from every corner of the world to all media platforms and formats.

AP's commitment to independent, comprehensive journalism has deep roots. Founded in 1846, AP has covered all the major news events of the past 165 years, providing high-quality, informed reporting of everything from wars and elections to championship games and royal weddings. AP is the largest and most trusted source of independent news and information.

Today, AP employs the latest technology to collect and distribute content - we have daily uploads covering the latest and breaking news in the world of politics, sport and entertainment. Join us in a conversation about world events, the newsgathering process or whatever aspect of the news universe you find interesting or important. Subscribe: http://smarturl.it/AssociatedPress

View post:

Justin Chon on YouTube's evolution - Olean Times Herald

Taylor Swift’s New Album Signals a Dark, Powerful Style She’s Never Shown Before – Glamour

Taylor Swift is a master of self-invention. It's been said before, but she's a success by design: behind the girl-next-door persona that's so incredibly easy to relate to (yeah! Eff that guy and his precious truck, we'll do better), there's a whole machine of thought that goes into her image. Since she burst onto the scene in 2006, we've lived through multiple incarnations, watching her style evolve at every big turning point in her career.

Country princess, pop star, retro babe, fashun lover: everyone's got their personal Swift era preference. Nowhere is each image so succinctly summed up as in the look that comes with a new album drop. So, with Reputation's cover reveal and a rumored new single on the way, we're taking a stroll back through her greatest beauty hits, and analyzing what this new era could signal.

Our first introduction to Swift was as a Nashville teen, and her country era was strongly, strongly boilerplate princess-themed. Innocence was the name of the game on her eponymous album, alongside songs like "Teardrops on My Guitar" (DREW!), "Picture to Burn" (still a banger), and "Our Song." With her naturally curly hair and love for the maximum amount of glitter on both her eyeshadow and dresses, it was very much a "this girl believes in fairytales and romance" moment, and one that both made her approachable to the middle school girl demo and set her apart from the rest of the country music scene.

"You Belong With Me" hit in 2008, and who could forget Swift pulling a Parent Trap and playing both the girl next door and the villainous popular girl. Truly, this woman contains multitudesbut the greatest trick of all was selling the idea that Swift was just an average girl looking for love. The sparkles got toned down, and while her curls were still going strong, they started to move into a more styled, barrel curl look. Spanning from Fearless to Speak Now , these were the years of her image as a lovelorn lady out for her Nicholas Sparks story. "Mine," "Dear John," "If This Was a Movie," "Better Than Revenge"there was drama, but Swift's persona was always squarely on the right of it, with her curls and lipgloss there to back her up.

And with Red the curls exited stage right, in favor of her now-trademark red lip and sleek bangs. This was Swift with more vindication and agency: if you wrong her, you're gonna get called out. Curls can have agency, but Swift's transition to a totally smooth style read like she was tightening her grip on deciding who the world saw. There was still the romance in her lyrics (and what's more romantic than a red lip?), but with Red's cover showing her face half in the shadows, only her lips and a shiny lock of hair in the light, Swift painted a narrative of a girl who'd been burnt, but was surviving. The vibe was cardigans and Keds, with red lipstick and cat-eye liner; a little kitschy, '50s nostalgia-cute.

Ah, the age of "Blank Space," "Shake It Off," and "Bad Blood." It was an aggressive time, matched by Swift's turn to chic, femme fatale looks without a single hair out of place. Her red lips went darker, with 1989's cover revolving around her fractured, above-the-fray self: lips-down on the cover, nose-up on the album liner, and a faded, Polaroid-from-a-distance aesthetic. Truly, she hit an insane balance between approachable BFF (I'm just a girl baking cookies and taking roadtrips with Karlie Kloss) and bombshell living above the rumors (those now-signature two-piece sets; "It had to do with business").

This was not an era of much new music for Swift. Her only release was "I Don't Want to Live Forever" with Zayn Malik for 50 Shades Darker. But personally, it was a huge one. With an abundance of think pieces surrounding the Kim/Kanye fiasco, at this point the world caught on to Swift's immaculate image control. And so she transformed again.

The first signal came at Coachella, when she debuted a new platinum dye job (which came at Vogue's persuasion). Then at the Met Gala, she channeled Debbie Harry's punk look with dark lips and a shaggier cut. This progressed into a few other decidedly less "safe" looks, including an unexpected rendezvous with contour and bubblegum pink gloss. That was in May 2016, and as you know, she's been out of the spotlight pretty much since. (Her break from the red carpet, of course, was hardly a vacation: during her sexual assault trial earlier this month she paved the way for anyone fuzzy on consent with her concrete, unyielding testimony.)

Everything from here on is speculation, though we'll surely be seeing plenty of Swift again soon enough. But what we can gather from her new album cover is that we're in for the singer's most powerful evolution (both personally and lyrically) yet. Significantly, the cover is black and white, and with headlines covering half Swift's face, fans are speculating that it implies we've only gotten half the story.

Her makeup is pared down and clean with the exception of a not just dark but jet-black lip, and her hair looks wet, which could allude to the concept of rebirth and renewal. The same could be said of the snake imagery, since a snake sheds its skin (and it would fit with her clean-slate social media strategy). The conclusion would be that she's had her persona (and thus, her style) built a certain way, and now the real her is coming out. It's not commercial, bubblegum, or high-fashion approved, but it's mature and authentic, a look worn with the confidence of coming into your own.

More here:

Taylor Swift's New Album Signals a Dark, Powerful Style She's Never Shown Before - Glamour

The more people know about climate change and evolution, the more they disagree – Cosmos

It seems the political hyper-partisanship engulfing the United States has found yet another victim: science. New research shows that political and religious orientations are strongly associated with polarized views of scientific consensus.

There's a twist, however: the more scientific education and literacy a person has, the more their views are likely to be polarized. These puzzling findings are outlined in a paper published in the Proceedings of the National Academy of Sciences, authored by Caitlin Drummond and Baruch Fischhoff of Carnegie Mellon University.

The pair studied data from the General Social Survey about Americans' views on six controversial topics: human evolution, the Big Bang, stem cell research, anthropogenic climate change, genetically modified foods and nanotechnology. For the first four issues there was significant polarization among respondents, while the last two showed little evidence of it.

Respondents who identified themselves as politically and religiously conservative were far more likely to reject scientific consensus on the polarised issues, while those who identified as liberal were more likely to accept it.

But for other subjects, such as genetically modified food, that are controversial but have not become part of these larger social conflicts in America, Drummond and Fischhoff found no connection between education and polarisation.

So how to explain this? One model the authors point to is known as motivated reasoning, which suggests that more knowledgeable individuals are more adept at interpreting evidence in support of their preferred conclusions. The authors also speculate that better educated people are more likely to know when political and religious communities have chosen sides on an issue, and hence what they should think (or say) in keeping with their identity.

This, of course, will have a substantial effect on science communications efforts. Drummond suggests that science communication on polarized topics should take into account not just science itself, but also its context and its implications for things people care about, such as their political and religious identities. While pragmatic, this may be a bitter pill to swallow for those who think that science should stand or fall on its epistemic merits.

There was one positive finding: greater trust in the scientific community meant greater agreement with the scientific consensus. Perhaps, then, scientists and science's advocates need to work on building such trust, on both sides of the aisle.

Read more:

The more people know about climate change and evolution, the more they disagree - Cosmos