New Engineering and Science Building Nearing Completion – UConn Today

The Engineering and Science Building will open in the fall, with researchers moving in during the summer. (Sean Flynn/UConn Photo)

When UConn's new Engineering and Science Building opens this fall, it will provide room for some of the university's fastest-growing research fields: systems genomics, biomedical sciences, robotics, cyber-physical systems (think drones), and virtual reality technology.

The five-story building, under construction since September 2015, is approximately 75 percent complete, according to Brian Gore, UConn's director of infrastructure and program management. Researchers will move into the new space this summer, beginning in July.

Located behind Student Health Services and the Chemistry Building in North Campus, the Engineering and Science Building will be the first structure on the Storrs campus to use an open lab concept for research. The shared research space and open floor plan are intended to make it easier for scientists from different disciplines to collaborate, fostering innovation.

The new structure also gives scientists access to a high-speed broadband network that delivers the capacity they need to process large amounts of data quickly, a necessity in many research fields today.

"It's exciting," says Professor Rachel O'Neill, a molecular genetics scientist and director of UConn's Center for Genome Innovation, which is moving into the new building. "We hope this will increase the already vibrant synergy among these faculty and foster strong, productive collaborations and interactions."

Read about O'Neill's research here.

There will be plenty of opportunities for graduate students and others pursuing advanced degrees to conduct research in the new building. The Engineering and Science Building's core mission is to support UConn's role as a vital state resource, fueling Connecticut's economy with innovative technologies and highly skilled graduates, and helping to create high-paying jobs.

The School of Engineering occupies three of the five floors. The second and third floors will house UConn's Institute for Systems Genomics and related programs.

The new building addresses a pressing need for space within the School of Engineering, where enrollment has doubled over the past decade. The school recently hired more than 30 faculty to expand its research efforts and teaching staff. In addition, UConn's School of Engineering supports numerous partnerships with world-class manufacturers, such as General Electric, Pratt & Whitney, Fraunhofer, Comcast, and FEI.

Here is a breakdown of the building's future tenants:

First Floor:

Robotics and Controls Lab. An advanced, interdisciplinary engineering lab developing tools to improve the efficiency and safety of robots used in manufacturing and other industries.

Computational Design Lab. A virtual reality research lab advancing new haptic technologies (haptics is the science of applying touch sensation and control in human interactions with computers, e.g. vibrations in smart phones and video games) and gesture recognition technologies for 3-D applications.

Adaptive Systems, Intelligence, and Mechatronics Lab and Laboratory of Intelligent Networks and Knowledge-Perception Systems. These labs focus on the development of new technologies and sensors for adaptive and intelligent autonomous vehicles (e.g. drones) and other systems.

Manufacturing Systems Laboratory. This lab's mission is to advance technologies toward the development of smart and green buildings that optimize energy consumption, conserve resources, and enhance efficiencies.

Second and Third Floors:

Institute for Systems Genomics. UConn's premier genomics research and training program. Includes faculty from multiple disciplines: molecular & cell biology, ecology and evolutionary biology, allied health sciences, and UConn Health. Offices for researchers from UConn Health's Department of Genetics and Genome Sciences will be included in this space, emphasizing the cross-campus collaborative nature of the research area.

Center for Genome Innovation. The core service and training center for UConn's genomics and cytogenetics programs. The new space will feature some of the latest instrumentation for next-generation genome sequencing, analysis, and genotyping. The CGI supports more than 120 labs at UConn campuses in Storrs, Farmington, and Avery Point and provides services for clients outside of UConn.

Microbial Analysis, Resources, and Services (MARS). This core facility assists researchers by performing microbiome, targeted amplicon, and small genome sequencing.

Computational Biology Core. This core group provides crucial computational power and technical support to UConn researchers and affiliates. The CBC is led by assistant professor Jill Wegrzyn, who recently helped decipher the largest genome sequenced and assembled to date: that of the sugar pine tree.

Professional Science Master's in Genetic and Genomic Counseling programs. Affiliated with Allied Health Sciences, these new programs will teach students how to interpret genetic testing results, a rapidly growing aspect of health care.

Fourth Floor:

Cellular Mechanics Laboratory. This lab investigates how changes in the biomechanical properties of cells influence the onset and progression of sickle cell disease.

Biointegrated Materials and Devices at Nano- and Micro-scales. Research here focuses on the development of materials, devices, and systems at extremely small scale for applications in biomedicine.

Neuroengineering and Pain Research. This lab focuses on sensory coding and processing in the peripheral nervous system, with the goal of developing next-generation strategies and devices for better management of chronic pain.

Microelectromechanical Systems for Biomedical Analysis. Researchers use nano- and micro-scale optical imaging and mechanical sensing for the biomedical analysis of cancer cells.

Smart Imaging. This lab's core mission is the development of novel imaging and sensing tools to tackle measurement problems in biology, medicine, and metrology, including lab-on-a-chip platforms.

Interdisciplinary Mechanics. This lab uses computational modeling and experimental testing to solve challenging problems in biomechanics and engineering related to soft biological tissues, new materials, and applications.

Fifth Floor:

Electrocatalysts and Fuels. Using electrochemistry, chemical engineering, and materials science, this lab designs and develops electroactive materials for use in such things as fuel cells and energy storage applications for batteries and supercapacitors.

Thermal Transport Physics. With a focus on thermal transport physics at the micro- and nano-scale, this lab investigates the engineering of materials at nanoscale for energy conversion and storage applications.

Advanced Solar Cells. This research group investigates novel nanoarchitectures for enhanced solar cells.

Advanced Fuels using Modified Zeolites. The development of new catalysts and sorbents for the production of clean energy and biofuels is the focus of this lab.

Process Design Simulation and Optimization. This lab uses model-assisted experimental design and process scaling to research processes that address the growing energy crisis and the environmental impact of energy production.

Computational Atmospheric Chemistry and Exposure. Addressing problems related to air pollution and atmospheric chemistry, this lab's overarching mission is to bridge the gap between basic scientific knowledge of atmospheric pollutants and the tools policy makers rely on to develop air pollution strategies.

Process Systems and Operations Research. This lab uses modeling, simulations, and control algorithms to develop novel solutions to emerging problems in a wide array of industry applications ranging from water treatment and desalination to renewable energy and personalized medicine.

Membrane Separations. Researchers here are developing innovative materials and processes to advance technologies for water treatment, desalination, and reuse.

Read more about progress on constructing the building: Work to Start Soon on New Engineering Complex; Construction Begins on New Engineering and Science Building; UConn Marks Construction Milestone for New Engineering Complex.


Strength of hair inspires new materials for body armor – ScienceBlog.com (blog)

In a new study, researchers at the University of California San Diego investigate why hair is incredibly strong and resistant to breaking. The findings could lead to the development of new materials for body armor and help cosmetic manufacturers create better hair care products.

"Hair has a strength-to-weight ratio comparable to steel. It can be stretched up to one and a half times its original length before breaking. We wanted to understand the mechanism behind this extraordinary property," said Yang (Daniel) Yu, a nanoengineering Ph.D. student at UC San Diego and the first author of the study.

"Nature creates a variety of interesting materials and architectures in very ingenious ways. We're interested in understanding the correlation between the structure and the properties of biological materials to develop synthetic materials and designs based on nature that have better performance than existing ones," said Marc Meyers, a professor of mechanical engineering at the UC San Diego Jacobs School of Engineering and the lead author of the study.

In a study published online in December in the journal Materials Science and Engineering C, researchers examined at the nanoscale level how a strand of human hair behaves when it is deformed, or stretched. The team found that hair behaves differently depending on how fast or slow it is stretched: the faster hair is stretched, the stronger it is. "Think of a highly viscous substance like honey," Meyers explained. "If you deform it fast it becomes stiff, but if you deform it slowly it readily pours."

Hair consists of two main parts: the cortex, which is made up of parallel fibrils, and the matrix, which has an amorphous (random) structure. The matrix is sensitive to the speed at which hair is deformed, while the cortex is not. The combination of these two components, Yu explained, is what gives hair the ability to withstand high stress and strain.

And as hair is stretched, its structure changes in a particular way. At the nanoscale, the cortex fibrils in hair are each made up of thousands of coiled, spiral-shaped chains of molecules called alpha helix chains. As hair is deformed, the alpha helix chains uncoil and become pleated sheet structures known as beta sheets. This structural change allows hair to handle a large amount of deformation without breaking.

This structural transformation is partially reversible. When hair is stretched under a small amount of strain, it can recover its original shape; stretched further, the structural transformation becomes irreversible. "This is the first time evidence for this transformation has been discovered," Yu said.

"Hair is such a common material with many fascinating properties," said Bin Wang, a UC San Diego Ph.D. alumna and co-author on the paper. Wang is now at the Shenzhen Institutes of Advanced Technology in China continuing research on hair.

The team also conducted stretching tests on hair at different humidity levels and temperatures. At higher humidity levels, hair can withstand up to 70 to 80 percent deformation before breaking. Water essentially softens hair: it enters the matrix and breaks the sulfur bonds connecting the filaments inside a strand of hair. Researchers also found that hair starts to undergo permanent damage at 60 degrees Celsius (140 degrees Fahrenheit). Beyond this temperature, hair breaks faster at lower stress and strain.

"Since I was a child I always wondered why hair is so strong. Now I know why," said Wen Yang, a former postdoctoral researcher in Meyers' research group and co-author on the paper.

The team is currently conducting further studies on the effects of water on the properties of human hair. Moving forward, the team is investigating the detailed mechanism of how washing hair causes it to return to its original shape.


Tatas Learn Key Lesson As Nano Heads For Sunset: Indians Want … – Swarajya

After eight years and a sustained failure to set hearts racing, the Tata Nano, it seems, is set to drive into the sunset. A Times of India report says that Tata Motors will phase out the Nano in three to four years so that it can cut the number of its car platforms from the current six to just two.

If this happens, it will be both a vindication of ousted Tata Sons chairman Cyrus Mistry and a partial rejection of his stand that the Nano was being kept alive only for emotional reasons. His reference was to the fact that the Nano was Ratan Tata's pet Rs 1 lakh car project, a car that was supposed to upgrade millions from two-wheelers to four-wheelers.

A day after he was ousted, Mistry said in a note leaked to the media that the Nano had consistently lost money, with losses peaking at Rs 1,000 crore: "As there is no line of profitability for the Nano, any turnaround strategy for the company (Tata Motors) requires to shut it down. Emotional reasons alone have kept us away from this crucial decision."

But he has been proved wrong in his assumption that emotional reasons would keep the Nano running, as the decision by the Tata Motors management to phase it out along with the Sumo shows.

The failure of the Nano, unveiled with much fanfare amid a global spotlight, can primarily be put down to Tata's mistake in presuming that price was crucial to weaning people away from two-wheelers toward cheap cars.

This is a mistake many marketers make: they confuse the average Indian's need for affordability with a willingness to buy products that come cheap.

Far from it. As Dheeraj Sinha wrote in his book India Reloaded, the average Indian's idea of a car was built around the roomy Ambassador. He may not be able to afford a car, but his idea of a car is not something with all the essentials removed from it. Quite the contrary: he wants the addition of desirable features. A car is a status enhancer, and the last thing Indians want is to look cheap. A second-hand car that is cheaper than the Nano would work better for most Indians than a car that has "cheap" written all over it. The Nano was tom-tommed as the world's cheapest car, and so the Indian lumped it.

Consider the contrast with Renault's Kwid, another car inspired by the idea of frugal engineering. Far from looking cheap, it tries to resemble a mini SUV. And, after selling over 100,000 Kwids, Renault is making money on it.

Phasing the Nano out shows that Ratan Tata has learnt to bite the bullet. The new Tata cars, built around style and better performance, are doing much better than the old models.

Tata Motors has outgrown Nano thinking.


Bendable Phone Advances With New Flexible, Ultrafast Memory – Android Headlines

The University of Exeter has been working on new multilevel, flexible, ultrafast memory devices, which would be a significant advance in the development of products such as bendable phones, televisions, and even smart clothing. Its engineers have detailed small but high-capacity memories that will be ideal for flexible devices, including smartphones. Additionally, these new transparent memory devices will be both eco-friendly and cheap to produce, so they could be a credible and more affordable alternative to the flash memory currently used in graphics cards, memory cards, and USB drives.

Research about this endeavor has been published in ACS Nano, a scientific journal. The new development concerns a nano-scale, non-volatile fusion of graphene oxide and titanium oxide, and the team behind the new memory devices suggests that it signifies an evolution for flexible electronics with improved power, speed, and endurance. Lead author of the research paper, Professor David Wright, described the new GO-based memory option: it is capable of being written to and read from in less than five nanoseconds, and is just eight nanometers thick and 50 nanometers long. Discussing the results, the research paper says the work will help transform the way in which we view the potential and possibilities for GO memory device development and applications. If this type of new memory could be produced in high enough yields, it could mean the end of flash memory in electronic devices.

It's not the first time graphene oxide has been used in the production of memory devices. However, previous results had been slow and cumbersome, and thus more suited to the economy end of the device market. The research is in its early stages, and it could be quite some time before the ultrafast, flexible memory is ready for mass production. In recent years many smartphone companies, including LG, Microsoft, and Samsung, have invested resources into flexible devices, and these and other manufacturers are likely to be following further developments involving flexible memory very closely. As far as the much-rumored Samsung Galaxy X foldable phone is concerned, it was previously thought that production might begin in Q3 or Q4 this year. However, recent reports indicated there were production and technical issues, and that the company's first foldable smartphone launch is more likely to occur next year.


AI, machine learning will shatter Moore’s Law in rapid-fire pace of innovation – Healthcare IT News

Artificial intelligence: Savvy hospitals are deploying AI and its technological brethren, cognitive computing and machine learning, in specific use cases at this point, while industry luminaries are predicting that their advancement will soon start happening more quickly than previously anticipated.

"I've never in my career seen the acceleration of technology as fast as what we've witnessed in machine learning during the last two years," said Dale Sanders, executive vice president at Health Catalyst.

Sanders, it's worth noting, has a U.S. Air Force background working on stacked neural networks and fuzzy logic, techniques now known as deep learning, as well as experience serving as the CIO of both Northwestern University and the national health system of the Cayman Islands.

"The rate of improvement happening in machine learning," Sanders added, "is way beyond what Moore's Law is to chips."

Hospitals already deploying AI

As the next generation of both patients and caregivers (including clinicians, doctors, nurses, specialists, even executives and administrators) starts gaining a foothold in the healthcare workforce, hospitals looking for a first-mover advantage already know that AI is on the verge of becoming a critical component across the entire organization, not just IT.

"AI and machine learning are exciting opportunities for us to accelerate," Carolinas HealthCare Chief Information and Analytics Officer Craig Richardville said. "To be successful you have to understand how that will fit within your market and your patient population, and you have to be knowledgeable about how to use it."


Today, that means picking opportunities akin to low-hanging fruit for modern AI capabilities. Carolinas, for its part, is working to develop self-service applications that provide patients with tools for self-diagnosis and self-treatment in very targeted scenarios where the science enables clinicians to understand what the right methods are, Richardville said.

The hospital is also eyeing AI to capture patient information and bring it into data lakes or warehouses, which is paramount because Richardville said that only 20 percent of relevant patient information for Carolinas clinicians resides in its EHRs.

"Applying more intelligence to that data continues our transition from the art of medicine to the science of medicine," Richardville added.

The revenue cycle is another area ripe for machine learning, according to Stuart Hanson, senior vice president of Change Healthcare.

"Healthcare organizations have started to become more information-centric, and the next level of that is taking a personalized view," Hanson said.

Hanson cited two examples: the ability to predict what is relevant for a particular patient and deliver smart messaging, such as wellness and prevention tips and price transparency, as well as the opportunity to drive down costs associated with useless billing by better understanding how patients interact with various types of payment statements.

"There's clear ROI in the revenue cycle for physicians and hospitals," Hanson said.

While such work among payers and at Carolinas and other leading hospitals is admittedly cutting-edge, Health Catalyst's Sanders is hardly alone in believing AI, cognitive computing and machine learning will outpace the processing power advances that Moore's Law illuminated.


Beyond the futuristic hypothetical

Intel co-founder Gordon Moore predicted in 1965 that the then-current pace of computer chips doubling in power every year would continue into the future; Moore's Law was amended a decade later, as processing power was doubling every two years. And depending on whom you ask, that rate pretty much held steady for the better part of 50 years.

Indeed, a lot has happened during the last five decades, genomics advances not least of all. When a company named 454 sequenced DNA co-discoverer James Watson's genome in 2007, it cost $2 million, which was down just a dram from the $3 billion the Human Genome Project spent on its first sequencing in 2003.

"Now we're down to $1,000, and we'll get to an era of $100 per genome," said Bryce Olson, global marketing director of health and life sciences at Intel. "It's going to become the next big thing."

AI is exploding quickly. Healthcare providers will be able to diagnose disease by DNA in the near future, Olson said, and the industry is on the verge of making the technologies faster, better and cheaper.

"We're seeing it right now in the genomic space and with machine learning algorithms," Olson added. "It's a lot faster than Moore's Law."



Moore’s New Law: Put your Chips on What’s Possible. – Huffington Post

When I first discovered Moore's law in 1983, I realized I could use it as one of my tools to accurately predict the future of technological change. At the time, few were paying much attention to Moore's law. Over the decades, the press has declared the death of Moore's law, usually stating that it is impossible for scientists to make processors smaller and more powerful at the same exponential rate. This news usually comes from a tech conference where industry executives share their frustration in going to the next level. We have recently seen major news reports of this kind. I am always reminded of a great quote: "The reports of my death have been greatly exaggerated."

Although that iconic remark attributed to Mark Twain is, in reality, a misquotation, it does aptly summarize the recent rebirth of Moore's law.

But to my mind, the purported phoenix-like rise from the ashes of one of technology's best-known principles really misses the mark so far as anticipatory thinking is concerned. We need to be asking more pertinent questions and looking at bigger issues that command greater attention.

At the risk of explaining a concept that's already widely understood, Moore's law (named after Gordon E. Moore, co-founder of Intel and Fairchild Semiconductor) deals with processing power, the speed at which a machine can perform a particular task. In 1965, Moore published a paper in which he observed that, between 1958 and 1965, the number of transistors on an integrated circuit had doubled every 18 to 24 months. At the same time, Moore noted, the price of those integrated circuits dropped by half.
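
To make that compounding concrete, here is a minimal Python sketch of the relationship Moore described; the starting transistor count and the two-year doubling period are illustrative assumptions, not figures taken from this article.

```python
# Illustrative sketch of Moore's observation: transistor counts double,
# and unit cost roughly halves, once per doubling period.
# Starting values are hypothetical placeholders, not data from the article.

def moore_projection(start_count, start_cost, years, period_years=2.0):
    """Project transistor count and relative unit cost after `years`."""
    doublings = years / period_years
    return start_count * 2 ** doublings, start_cost * 0.5 ** doublings

count, cost = moore_projection(start_count=2_300, start_cost=1.0, years=10)
print(f"After 10 years: ~{count:,.0f} transistors at ~{cost:.3f}x the cost")
# After 10 years: ~73,600 transistors at ~0.031x the cost
```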

Although the formula held true for some 50 years, critics have been quick to point to possible death knells over the past several years. In effect, they argue that transistors can now be made smaller and more powerful only as chips inevitably get more expensive to produce.

Whenever I hear this type of prediction, I write an article reminding us all that using the word "impossible" is a bet against human creativity and ingenuity, and that those making the prediction will be wrong. Case in point: last year IBM proved them all wrong by doing the impossible and introducing a new chipset, keeping Moore's law going, and Intel just did it again by recently unveiling its long-anticipated Cannonlake chipset. The Intel chips are a mere 10 nanometers, down from the 14 nanometers used in currently available chips.

The product debut, announced by Intel CEO Brian Krzanich, underscored the reality that Moore's law was still, in fact, "alive, well and flourishing," as Krzanich put it.

The fact that Intel's announcement, at the very least, waters down the obituaries for Moore's law is certainly good news. The fact that processing chips can continue to be manufactured to increasingly stringent specifications bodes well for anyone who uses technology in some capacity (meaning all of us).

But I also feel very strongly that it keeps us from seeing the bigger picture. Instead, we need to be asking better questions, because the factors encompassed by Moore's law simply no longer matter as much as they once did. We depend less on advances in chip technology because of the exponential growth of the capabilities of the overall ecosystem, of which chips, bandwidth and digital storage are merely one part.

Here's one way to look at it. Not very long ago, a laptop was largely a stand-alone device, as its storage and processing power derived solely from its chips.

Not anymore. For one thing, we now use a smartphone or tablet to access supercomputers in the cloud, allowing us to go far beyond the processing power of the individual chips in our devices. That's how we can use powerful tools such as Apple's Siri, the Amazon Echo and Google Home to tap into the capabilities of the world's supercomputers with just a few spoken words.

Looked at another way, in the recent past we all relied on the power of the chips in our devices, but today we have the computing power of the world in our pockets or on top of a table, and it isn't limited as it once was to the chips inside a device.

All this boils down to the fact that, despite Moore's law's focus on the processing speed of the chip, computing power is no longer limited to the computational brute strength of the individual device. It's more specialized, meaning that overall computing power will continue to improve as functions such as distributed computing, digital storage, advanced bandwidth (wired and wireless), and network processing are more equitably spread out over an ecosystem of computing power.

It also comes down to looking past the surface when seemingly central issues are raised. In this case, whether Moore's law is dead and buried or alive and kicking is, in many ways, less relevant when compared with other advances in technology and structure. And it raises the question: What issues and developments are you and your organization examining at a deeper level to identify game-changing insights and opportunities? Are we all paying sufficient attention to the transformational advances in the whole technology ecosystem, or needlessly focusing on just one or two elements?


There’s more to Moore’s Law than transistor counts – PC World

The implied increase in power is meant to transform into more-meaningful computing


The PC industry has faithfully followed Moore's Law since Gordon Moore first announced in 1965 that the density of transistors on a chip would double every year. What many people don't know is that Moore's law was actually revised in 1975 to state that the density doubles every two years instead. Things have been a bit shaky of late, and this law is stagnating. The announcement of Intel's 8th Gen CPU, which is still built on a 14nm process, effectively means that we've had the same chip density on the market for some five years. So is Moore's law dead? Many say it is. I disagree.

Moore's Law, from a purist's point of view, has always been about computing power. But it isn't just about cramming more transistors into a space. Instead, it's about making computing power affordable for the masses. Take a step back and think about the first man on the moon and colour TVs, then the progression to the personal computer: what is the true meaning of Moore's law?

I interpret these critical points as cost reduction, practical usage, and sub-components working as part of an overall system that is affordable, accessible, usable and purposeful.

Depending on who you ask within the industry, the interpretation of Moore's Law differs. In Moore's reasoning, it is a log-linear relationship between device complexity (higher circuit density at reduced cost) and time. Simply put, it is more-meaningful computing power at affordable prices. This triggers a secondary off-shoot (or complementary law) of Moore's Law, which is Rock's Law, but we can save that one for another time.
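
Written out, that log-linear relationship can be sketched as below; this is the standard formulation, with the period T set to the two years of the 1975 revision.

```latex
% Device complexity N(t) doubles every period T (T ~ 2 years post-1975),
% so log N is linear in time, and unit cost C(t) roughly halves per period:
N(t) = N_0 \cdot 2^{t/T}, \qquad
\log_2 N(t) = \log_2 N_0 + \frac{t}{T}, \qquad
C(t) = C_0 \cdot 2^{-t/T}
```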

So does that mean that Moore's Law is akin to a moving goal post? It isn't about more transistors crammed into a set area; it's a changing set of guidelines for people to make more meaningful systems. After all, what good is a processor on its own anyway?

To better understand this point, let's look quickly to PC history for some clues.

A quick historical recap (Source: Professor Wouter Den Haan, Chair LSE)

1970: mechanical calculators, repetitive retyping, file cards, filing cabinets

1970s: memory typewriters, electronic calculators

1980s: PCs with word processing and spreadsheets

Late 1980s: e-mail, electronic catalogues, T-1 lines, proprietary software

Late 1990s: the web, search engines, e-commerce

2000-05: flat screens, airport check-in kiosks

By 2005 the revolution in business practices was almost over

From 2005 until now, offices have used proprietary information, desktop computers and laptops in pretty much the same way they did post-1994. The major tech companies and trends that we consider recent champions of tech growth are Amazon (1994), Google (1998), Wikipedia (2001), iTunes (2001), BlackBerry (2003), Facebook (2004), the iPhone (2007) and the iPad (2010). The effect of the smartphone boom of 2007 and the tablet boom of 2010, I think, needs to be discussed separately, and I will leave that for another date. It is important to note, however, that neither invention has transformed the way business is done at its core, unlike, say, email, the Internet and the PC, which have.

The smartphone and tablet are improvements on a category that had already done most of its disrupting in the office. The advent of smart watches and Fitbit-like devices can now be seen mostly as a fad that failed to go mainstream, and it is something that I personally was challenged with launching into a few years ago. I would say our team did a great job in bypassing this category at the time (IoT and smart devices still have a place, just not for us right now). We also argued that tablets would go the way of Netbooks (remember them?), and it looks like they are not a category that will be able to stand by themselves. We have an onslaught of 2-in-1 devices coming, and they seem to be a rather logical evolution of the notebook PC, taking the good elements from tablets and becoming a meaningful device for some users. But for now, let's go back to Moore's law.

To put it really simply, Moore's Law means something different to everyone. For me as a computer designer, it means making powerful computers that are affordable and useful. Hence, from that perspective, I think Moore's Law is far from dead. Thus, as a team, we are going to carry on making more powerful computers that can do more, not just because they have double the density of transistors, but because this implied increase in power is meant to transform into more-meaningful computing. At the end of the day, wouldn't having better battery life, a faster hard drive, a better screen and Wi-Fi be meaningless unless it amounted to better performance and a more positive, value-added computing experience? If we could also keep it affordable, then Moore's Law is well and truly alive, and it continues to benefit us all.

A sustainability angle should probably also be added; i.e., we need to think of the full product life-cycle of a device and the flow-on effects of its ecosystem (e.g., cables). After all, Moore's law has changed before, and it can be adjusted again.



Xeon E3: A Lesson In Moore’s Law And Dennard Scaling – The Next Platform

April 6, 2017 Timothy Prickett Morgan

If you want an object lesson in the interplay between Moore's Law, Dennard scaling, and the desire to make money from selling chips, you need look no further than the past several years of Intel's Xeon E3 server chip product lines.

The Xeon E3 chips are illustrative particularly because Intel has kept the core count constant for these processors, which are used in a variety of gear: workstations (remote and local) and entry servers, storage controllers, microservers employed at hyperscalers, and even machines for certain HPC workloads (like Intel's own massive EDA chip design and validation farms). In a sense, it is now the Xeon E3, not the workhorse Xeon E5, that is literally driving Moore's Law in terms of chip design. (Ironic, isn't it?)

In the wake of the recent Kaby Lake Xeon E3 v6 server chip announcement, which we covered here in detail, we decided to take a look at how the Xeon E3 has evolved over time, complete with detailed tables and charts comparing the performance and price/performance of the family of single-socket server chips over their lifetime, and specifically compared to the Nehalem Xeon 5500 processors from March 2009, which represent the resurgence, both economically and technically, of the Xeon platform in the datacenter after a few years of AMD's Opterons gaining considerable share.

To get started, let's just line up the feeds and speeds of the various generations of chips, ranging from the Sandy Bridge chips from 2012 through the Kaby Lake chips this year.

As we have done for past Xeon family comparisons, we have calculated the aggregate and relative oomph of each processor by multiplying the clock speeds by the core counts to give a kind of aggregate peak clocks for each chip. This is called Raw Clocks in our tables, and you can reckon a cost per gigahertz of clock speed to get a very rough relative performance metric. We have also ginned up a more precise relative performance metric, called Rel Perf, that takes into account the instructions per clock (IPC) enhancements from each Xeon core generation, and then scaled this with the clock speed enhancements and core expansion in the Xeon lines. We created this Rel Perf metric for the first time when comparing the Xeon E5 processors from the Nehalem Xeon 5500 processors in 2009 through the Broadwell Xeon E5 v4 processors that came out this time last year. We reckoned the relative performance of each processor SKU across all of the families against the performance of the Nehalem E5540, which was a four-core processor with eight threads that had a 2.53 GHz clock speed. The top-bin Broadwell Xeon E5-2699 v4 processor, which has 22 cores running at 2.2 GHz, for example, has 6.34X the performance of this baseline Nehalem E5540 processor. (Intel did not have the distinction between the E3 for uniprocessor and E5 for dual-socket machines back then.)

The relative performance metric presumes that the workload is not memory capacity or memory bandwidth constrained, of course. Meaning, it fits in a relatively small memory footprint and is not bandwidth sensitive. A lot of workloads are like this, particularly for hyperscalers and HPC shops.

Here is the full lineup of the Kaby Lake Xeons just unveiled last week:

As you can see, the top-bin Kaby Lake chip, which has four cores running at 3.9 GHz, has only 2.18X the performance of that baseline Nehalem E5540 processor. About 54 percent of that 118 percent performance increase comes from clock speeds alone, which have been enabled from the shrink from 45 nanometer to 14 nanometer processes. The rest of that performance bump (and this is really a gauge of integer performance) is due to improvements in IPC in the cores and tweaks in the cache hierarchy. Floating point performance has increased by leaps and bounds over these years, of course, and so has the integrated GPU performance, which can be used to do calculations with OpenCL if you are adventurous.
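
As a rough sketch of how a Rel Perf number like that 2.18X can be reconstructed, consider the following; the cumulative IPC multiplier is an assumed placeholder chosen to land near the article's figure, since the article does not publish its per-generation IPC factors.

```python
# Sketch of the article's metrics. "Raw Clocks" is core count times clock
# speed; "Rel Perf" additionally scales by a cumulative IPC multiplier
# relative to the Nehalem E5540 baseline (4 cores at 2.53 GHz = 1.0).
# The IPC value below is a hypothetical placeholder, not from the article.

BASE_CORES, BASE_GHZ = 4, 2.53  # Nehalem E5540 baseline

def raw_clocks(cores, ghz):
    """Aggregate peak clocks across all cores, in GHz."""
    return cores * ghz

def rel_perf(cores, ghz, ipc_vs_nehalem):
    """Throughput relative to the Nehalem E5540 baseline."""
    return raw_clocks(cores, ghz) * ipc_vs_nehalem / raw_clocks(BASE_CORES, BASE_GHZ)

# Kaby Lake E3-1280 v6: 4 cores at 3.9 GHz. An assumed ~41 percent
# cumulative IPC gain over Nehalem lands near the article's 2.18X:
print(f"Rel Perf: {rel_perf(4, 3.9, 1.41):.2f}X")    # ~2.17X
print(f"Clock-only gain: {3.9 / BASE_GHZ - 1:.0%}")  # ~54% of the 118% total
```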

You will note that the L3 cache sizes on the Xeon E3s do not change that much; the cache is usually 8 MB, sometimes 4 MB or 6 MB in special cases.

With the core count and L3 cache constant, and only IPC and process changing, there is just not the same room to expand performance (as measured by throughput) as there is when you let Moore's Law push the core counts up and the clock speeds down. What is interesting is that the successive process shrinks have allowed Intel to boost the bang for the buck on E3-class chips considerably over the past eight years. The Nehalem E5540, which definitely could be deployed in a single-socket machine, cost $744, or $744 per unit of relative performance as we reckon it, since it is the touchstone in our comparisons. As you can see, the top-bin (and therefore most expensive in terms of performance) Kaby Lake E3-1280 v6 part costs $612, and that yields a rating of $280 per unit of relative performance. That is a factor of 2.65X better bang for the buck. And for the mainstream Kaby Lake Xeon E3 chips (those that have HyperThreading activated on their cores), the price/performance averages around $142 per unit of relative performance, a factor of 5.4X better price/performance compared to that baseline Nehalem Xeon E5540. The Broadwell Xeon E5s have a bang for the buck that ranges from a low of $159 to a high of $649 per unit of relative performance. In other words, those top-bin parts in the Xeon E5 have lots more throughput, but they have lower clocks and they have not shown the same kind of price/performance improvements.
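
The bang-for-the-buck arithmetic above can be checked directly from the quoted prices and Rel Perf figures; a quick sketch:

```python
# Dollars per unit of relative performance, using the article's figures,
# with the Nehalem E5540 (price $744, Rel Perf 1.0) as the baseline.
chips = {
    "Nehalem E5540 (2009)":        (744, 1.00),
    "Kaby Lake E3-1280 v6 (2017)": (612, 2.18),
}

per_perf = {name: price / rp for name, (price, rp) in chips.items()}
for name, value in per_perf.items():
    print(f"{name}: ${value:.0f} per unit of relative performance")

ratio = per_perf["Nehalem E5540 (2009)"] / per_perf["Kaby Lake E3-1280 v6 (2017)"]
print(f"Improvement in price/performance: {ratio:.2f}X")  # ~2.65X, as stated
```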

This is how Intel has really benefited from its manufacturing process prowess. Intel has, we think, been able to wring a lot more profit out of its Xeon E5 parts, and the middle line (operating profits) of the Data Center Group shows it. Our point is this: Intel has profits to burn when and if ARM server chip makers and AMD with its X86 alternatives get aggressive. How it will make up the profits it sacrifices to maintain market share remains to be seen. But it will take some cajoling to keep the server makers of the world in line, and this time around the hyperscalers do exactly and precisely what the hell they want to, unlike in 2003, when the Opteron plan was unveiled, and 2005, when Opterons were shipping in volume and kicking Intel's tail.

Here is how the Skylake Xeon E3 processors, including specialized ones that were announced last June for datacenter and media processing uses and implemented in 14 nanometer processes like Kaby Lake Xeon E3s, line up:

For these specialized parts, the Skylake Xeon E3s were focused more on low power consumption than performance; the more standard Skylake Xeon E3s have higher wattages but not really much higher relative performance.

Things got interesting back in the Broadwell generation, shown above and also using 14 nanometer wafer etching, when Intel did regular single-socket Xeon E3 processors and also kicked out specialized Xeon D system-on-chip designs for Facebook and, we presume, others. The Xeon D chips were single-socket processors, but had an integrated southbridge on the package and a lot more cores. They also had much higher price tags, and Intel charged a hefty premium for low voltage versions that had higher performance. These are not Xeon E3 processors, per se, but they are closer to a Xeon E3 than they are to a Xeon E5.

Here is how the Haswell and Ivy Bridge Xeon E3s, implemented in 22 nanometer processes, stack up:

And finishing up, here are the 32 nanometer Sandy Bridge Xeon E3 parts and the 45 nanometer Nehalem 5500 parts:

That is a lot of tabular data to chew on, so we made some arts and charts to get some general trends. The first is a trend line showing the performance and price/performance of the top bin Xeon E3 parts over time compared to the top bin Nehalem X5570, which had four cores running at 2.93 GHz, and the top bin Xeon D-1581, which had sixteen cores running at 1.9 GHz. As with other Xeon processors, the price/performance curves have flattened out.

And that curve, dear friends, represents Intel's profits. We think Intel has been able to make chips a lot more cheaply over this same period of time, and the company spent the better part of a whole day last week bragging about this very fact, in great and glorious detail. We think Intel always suspected it would eventually get competition again in datacenter compute, and it has been making hay (tons and tons of it) while the sun was shining on its fields alone.

This is really smart, even right up to the moment that it encourages intense competition that causes a compute price war, which we think is coming this year. It is better to make the money between 2010 and 2017 than not make it, that is for sure.

As you can see from the top bin chart, the Xeon D really sticks out, and it offers about the same bang for the buck as the real Haswell and Broadwell Xeon E3 parts. Over the past eight years, performance has gradually trended up, but again, it has only slightly more than doubled for the real four-core Xeon E3 variants. (FYI: We have picked the Xeon E3 chips without a graphics processor in the package or on the die wherever possible to get the rawest X86 compute comparison.)

Now, let's step back from the top bin and see how it looks:

As you can see, the bang for the buck for these chips has fallen lower, but the performance and price/performance curves are not that different. And the Xeon D does not stick out so much like a sore thumb, either. (It is also interesting that there is not a Skylake or Kaby Lake Xeon D. Hmmmm...)

And at the low end of the Xeon E3 lineup, the performance gains are more choppy and so is the price/performance:

In fact, after the Nehalems, Intel kept the price/performance dead steady, except for the Broadwell E3-1265L and the Skylake E3-1265L, both of which are specialized low voltage parts that fit the performance profile we were looking at. You could draw any number of charts from this data, and you have our full permission to do so. Have fun.


Scientists Say They’ve Identified a Gene Linked to Anorexia – Mental Floss

People with anorexia nervosa have a distorted body image and severely restrict their food to the point of emaciation and sometimes death. It's long been treated as a psychological disorder, but that approach has had limited results; the condition has one of the highest mortality rates among psychiatric conditions. But recently, neuroscience researchers at the UC San Diego School of Medicine who study the genetic underpinnings of psychiatric disorders have identified a possible gene that appears to contribute to the onset of the disease, giving scientists a new tool in the effort to understand the molecular and cellular mechanisms of the illness.

The study, published in Translational Psychiatry, was led by UC San Diego's Alysson Muotri, a professor in the School of Medicine's departments of pediatrics and cellular and molecular medicine and associate co-director of the UCSD Stem Cell Program. His team took skin cells known as fibroblasts from seven young women with anorexia nervosa who were receiving treatment at UCSD's outpatient Eating Disorders Treatment and Research Center, as well as from four healthy young women (the study's controls). Then the team induced the cells to become induced pluripotent stem cells (iPSCs).

The technique, which won researcher Shinya Yamanaka the Nobel Prize in 2012, takes any nonreproductive cell in the body and reprograms it by activating genes on those cells. "You can push the cells back into the development stage by capturing the entire genome in a pluripotent stem cell state, similar to embryonic stem cells," Muotri tells mental_floss. Like natural stem cells, iPSCs have the unique ability to develop into many different types of cells.

Once the fibroblasts were induced into stem cells, the team differentiated the stem cells to become neurons. This is the most effective way to study the genetics of any disorder without doing an invasive brain biopsy, according to Muotri. Also, studying animal brains for this kind of disorder wouldn't have been as effective. "At the genetic level as well as the neural network, our brains are very different from any other animal. We don't see chimpanzees, for example, with anorexia nervosa. These are human-specific disorders," he says.

Once the iPSCs had become neurons, they began to form neural networks and communicate with one another in the dish, similar to the way neurons work inside the brain. "Basically what we have is an avatar of the patient's brain in the lab," Muotri says.

His team then used a genetic analysis process known as whole transcriptome pathway analysis to identify which genes were activated, and which might be associated with the anorexia nervosa disorder specifically.

They found unusual activity in the neurons from the patients with anorexia nervosa, helping them identify a gene known as TACR1, which uses a neurotransmitter pathway called the tachykinin pathway. The pathway has been associated with other psychiatric conditions such as anxiety disorders, but more pertinent to their study, says Muotri, is that tachykinin works on the communication between the brain and the gut, so it seems relevant for an eating disorder, but nobody has really explored that. Prior research on the tachykinin system has shown that it is responsible for the sensation of fat. So if there are misregulations in the fat system, it will inform your brain that your body has a lot of fat.

Indeed, they found that the AN-derived neurons had a greater number of tachykinin receptors on them than the healthy control neurons. "This means they can receive more information from this neurotransmitter system than a normal neuron would," Muotri explains. "We think this is at least partially one of the mechanisms that explains why [those with anorexia] have the wrong sensation that they have enough fat."

In addition, among the misregulated genes, connective tissue growth factor (CTGF), which is crucial for normal ovarian follicle development and ovulation, was decreased in the AN samples. They speculate that this result may explain why many female anorexia patients stop menstruating.

Muotri next wants to understand what he calls the downstream effect of those neurons with too many TACR1 receptors. In other words, how does it affect the neurons at a molecular level, and what information do those neurons receive from the gut? "This link between the brain and the gut is unclear, so we want to follow up on that," he says.

He also wants to look into the potential to design a drug that could compensate for the large number of TACR1 receptors, and the over-regulation of that receptor in the brain, which would be a huge development for the notoriously difficult-to-treat disease.

While Muotri is excited about new avenues of research that can follow from this work, he doesn't see it as a panacea for the disease, but a way to begin to understand it more fully. He says, "It's a good start, but arguably you have to understand what are the other environmental factors that contribute."


IBA Molecular and Mallinckrodt Nuclear Medicine LLC to create a … – PR Newswire (press release)

This new entity delivers diagnostic and therapeutic solutions to over 14 million people from a global network of 21 manufacturing centers, comprising one molybdenum facility, three large SPECT facilities, and close to 40 PET and SPECT radiopharmacies. Our customer base includes over 6,000 public and private hospitals, radiopharmacies and imaging centers in over 70 countries. Curium customers can expect best-in-class products, exceptional service reliability, a large and diverse product portfolio, a relentless pursuit of stable isotope supply, and a commitment to develop and launch new products.

Speaking on behalf of CapVest, the owner of Curium, Kate Briant, CapVest Partner and Chairman of the Board said, "We are excited to launch this dynamic new brand in the marketplace. We believe the expertise, size and scale, and proven track record of the united companies will provide future growth opportunities in this attractive segment."

The Curium name emphasizes two aspects that are critical to us:

Visually, our identity conveys a sense of continuity and advancement along the patient care continuum. Our brand tagline, "Life Forward," sums up our commitment to our customers and the industry we serve by enhancing the quality of health outcomes through patient care, life-saving diagnostics and treatment. "We feel this uniquely positions the expanded business, as we build our second century of progress," says Dehareng.

For additional information on Curium, visit our website at curiumpharma.com

Media Contacts: Janet Ryan Visintine & Ryan Public Relations +1-314-822-8860 or +1-314-614-7408 janet@visintineandryan.com

Priscilla Visintine Visintine & Ryan Public Relations +1-314-422-5646 priscilla@visintineandryan.com

About Nuclear Imaging

With the challenge of aging populations around the world and the rising incidence of diseases, solving diagnostic challenges to ensure patients have better outcomes has never been more important.

Nuclear medicine is a specialized area where 'SPECT' and 'PET' cameras are used to capture emitted particles from radiopharmaceuticals and the technology is used to monitor major disease areas including oncology, cardiology and neurology.

The combination of the radiopharmaceuticals and the advanced imaging technology helps doctors to diagnose diseases earlier and more accurately, making treatments more effective and, as a consequence, reducing the long-term cost of care.

About IBA Molecular

IBA Molecular is a highly diversified global supplier of molecular imaging and other proven technologies in nuclear medicine, mainly SPECT and PET products. The company operates across 18 sites globally, servicing a growing client base of private hospitals and health/imaging clinics in over 70 countries. It produces radioactive tracers used in molecular imaging and therapy to diagnose and monitor a range of common diseases, including cancer and diseases of the heart, brain and bone.

IBA Molecular was created in 2012 following the buy-out of the radiopharmaceutical division of Ion Beam Applications ("IBA") SA, a European-based leader in advanced cancer radiation therapy which is listed on the Euronext pan-European Stock Exchange. In 2016, IBA Molecular was acquired by CapVest. IBA Molecular is today a wholly separate business from IBA SA.

About Mallinckrodt Nuclear Medicine LLC

Mallinckrodt's Nuclear Imaging business is a global producer of the medical isotope molybdenum-99 and its derivative, technetium-99m, which is used in nuclear medicine procedures worldwide. The business has manufacturing operations in the US and the Netherlands, close to critical transport links, and its products are approved for use in many countries. Over two-thirds of its revenues originate in the US.

About CapVest

CapVest, which was established in 1999, is a leading private equity firm with a strong record of success. The firm's investment strategy is focused on identifying and managing investments in companies supplying essential goods and services. A patient investor, CapVest works closely with management to transform the size and scale of its investee companies through a combination of organic and acquisition-led growth.

Notes to Editor

[1] SPECT: Single Photon Emission Computed Tomography is a type of nuclear imaging technique that uses radioactive substances injected into the blood to create 3-D images that help to diagnose a variety of diseases across oncology, cardiology and neurology, among others.

[2] PET: Like SPECT, Positron Emission Tomography is a nuclear imaging technique that uses radioactive material injected into the body to create 3-D images. However, PET imaging typically provides better resolution.

To view the original version on PR Newswire, visit: http://www.prnewswire.com/news-releases/iba-molecular-and-mallinckrodt-nuclear-medicine-llc-to-create-a-new-world-class-radiopharmaceutical-company-curium-300435720.html

SOURCE Curium


Immune-Onc Gains Rights to Cancer Immunotherapies – Genetic Engineering & Biotechnology News

Immune-Onc Therapeutics will acquire exclusive global rights to develop and commercialize novel cancer immunotherapies and other biotherapeutics from The University of Texas Health Science Center at Houston (UTHealth) and The University of Texas Southwestern Medical Center (UTSW), the company and the UT System said today.

In addition, Immune-Onc has launched a multiyear research collaboration with UTHealth and UTSW to discover and develop new biotherapeutics that modulate the immune system under the license agreement, whose value was not disclosed.

The collaboration will use the Cancer Prevention & Research Institute of Texas (CPRIT) Therapeutic Monoclonal Antibody Lead Optimization and Development Core Facility at UTHealth, which aims to provide state-wide support and services to advance lead antibodies from academic laboratories to preclinical development.

"This is an important step in translating our therapeutic antibody from discovery to development," said Zhiqiang An, Ph.D., director of the core facility at UTHealth, where he is director of the Texas Therapeutics Institute at the Brown Foundation Institute of Molecular Medicine, as well as a professor of molecular medicine, and the Robert A. Welch Distinguished University Chair in Chemistry.

The collaboration is the second announced in as many weeks by Immune-Onc with an academic partner. On March 28, the company announced a similar exclusive global licensing agreement with Albert Einstein College of Medicine and Memorial Sloan Kettering Cancer Center to develop and commercialize novel biotherapeutics with applications in cancer immunotherapy and other diseases.

Immune-Onc is a Palo Alto, CA, startup founded last year to develop therapeutic antibodies for cancer treatment, with a focus on immuno-oncology products. On September 1, 2016, the company said it closed a $7 million Series A financing, with major investors that included Fame Mount and CLI Ventures.

Immune-Onc was co-founded by Charlene Liao, Ph.D., who serves as its president and CEO, and Guo-Liang Yu, Ph.D., a serial entrepreneur and industry veteran. Dr. Liao was project team leader at Genentech, a member of the Roche Group, where she spent nearly 14 years leading oncology and immunology drug development programs from preclinical to Phase III. Dr. Liao was a fellow of the Damon Runyon Cancer Research Foundation and a special fellow of the Leukemia and Lymphoma Society.

Dr. Yu was co-founder and CEO of Epitomics, an antibody company acquired by Abcam in 2012 for $170 million. His numerous board and management positions include executive chairman of oncology platform company Crown Bioscience, and venture partner of OrbiMed, a healthcare and life sciences-dedicated investment firm.

Bruce Hamilton: Why science matters to all – Santa Clarita Valley Signal

Growing up in the Santa Clarita Valley, I always felt that the world was wide and mine to explore. Whether hiking in Placerita Canyon, following the Santa Clara River bed, or just marveling at how surface tension can hold water drops in place on the few days it rained, there was always something to see, to observe, to figure out.

Trying to figure things out is, in part, how I became a scientist. With the role of science in our society being debated in some quarters, I want to share a few things about science that should be more widely known.

Science is fun. Some people tell me they don't like science because of a class that made them memorize lists of boring facts. That isn't science (nor, for that matter, good teaching).

Science is a process, a way of finding things out, not a collection of results. Science is how we obtain new knowledge and re-check what we think we know already.

Scientists are more competitive than you might think. Few things are as exhilarating as understanding something new about the world before anybody else or disproving something that was thought to be true.

This is part of how science self-corrects. If the evidence is weak or the conclusion is wrong, another scientist will delight in correcting it.

Science is full of surprises. While basic science supports development of many useful things, and scientific methods are used to make or improve products, science-for-the-sake-of-finding-things-out turns out to be really important for progress.

We don't know where the next real innovation will come from. By pushing the limits of our knowledge, we create the widest possible base for new invention.

Somewhat paradoxically, undirected discovery (basic research) is often the shortest path to goals that have resisted direct attempts based on our prior understanding, precisely because we did not know what pieces of the puzzle we were missing until someone found them in a place no one had looked before.

Editing DNA in cells had been an extremely challenging goal for basic research and therapeutic development before a new tool made it relatively easy. This tool (called CRISPR/Cas9) was found by studying how bacteria fight off bacterial viruses, which led to an unexpected class of enzymes that edit DNA.

Science supports our quality of life. Science is not itself technology, but it is a necessary foundation of technology. Your smartphone, streaming video and health care were all driven by basic science discoveries whose commercial applications were not always obvious.

Basic research in bacterial genetics led to recombinant DNA technology, which in turn allows the production of things like synthetic insulin, life-saving for diabetics.

Development of vaccines for emerging viruses, remarkable new treatments for some cancers (such as Gleevec for certain leukemias), and the promise of personalized medicine are made possible by the basic research that allows scientists to grow viruses in the laboratory, understand how cells divide, and interpret vast amounts of data. And you are unlikely to know anyone who died of smallpox or polio.

Science is an economic engine. Because science provides the basis for new goods and services that people want, economies that invest in science tend to prosper. The impact is not always direct and can be hard to measure fully, but it is real and powerful.

Silicon Valley and two of the three largest biotech industry clusters in the U.S. are in California because of innovations that came out of California universities.

We don't know where the next innovation will come from, but our investment in science has been essential to economic competitiveness in an increasingly technology-dependent world. The technological advantages we gain and the economic value they confer make our investment in science a national security imperative.

Science is non-partisan, but scientists can be politically engaged. Politicians should also engage with science. Science is how we understand the world; politics is how we decide what to do about it.

When policymakers say "I am not a scientist, but" followed by a dismissal of scientific evidence, we should be wary. When someone asserts the value of a policy choice, we should expect to see and evaluate the logic and evidence.

Many scientists are becoming more engaged with public policy because they see partisanship pushing science, the way we discover and test how things work, out of many policy debates. Diminishing the role of science in an increasingly technological world is bad for our security, our economy, our health care, and our lives.

If we disagree in our perceptions of the way things are, science can inform us on the facts. If we disagree on how to respond to new conditions, science can and should inform our options.

Science is not about finding the evidence to support our beliefs. It is about modifying our opinions and actions in response to evidence.

Strength in science is essential to addressing many of our common challenges, and that matters.

The author is a scientist and graduate of Canyon High School, the University of California San Diego, and the California Institute of Technology. He is currently Professor of Cellular and Molecular Medicine and Associate Director of the Institute for Genomic Medicine at the University of California San Diego. Published under Creative Commons CC-BY license from the author.

The Machine of Life – Washington Free Beacon

'Death Comes to the Banquet Table' (detail) by Giovanni Martinelli (1635)

BY: Joseph Bottum April 8, 2017 4:58 am

Here's a new book about how wonderful the next stages of the cyber-revolution are going to be: Heart of the Machine: Our Future in a World of Artificial Emotional Intelligence by Richard Yonck, a contributing editor to The Futurist magazine. And here's another: The Digital Mind: How Science Is Redefining Humanity by Arlindo Oliveira, president of the Instituto Superior Técnico in Lisbon.

Recent months have also brought us Thinking Machines: The Quest for Artificial Intelligence and Where It's Taking Us by the widely published technology writer Luke Dormehl. And What Algorithms Want: Imagination in the Age of Computing by Arizona State University professor Ed Finn. In case that's not enough, you can always go for Homo Deus: A Brief History of Tomorrow by Yuval Noah Harari, a history professor at the Hebrew University of Jerusalem and self-designated cheerleader for modern atheism. And if you get bored with that, you can add in the more worried Data for the People by Andreas Weigend, former chief scientist at Amazon, and The Art of Invisibility by Kevin Mitnick, the convicted-felon hacker, free from prison and wondering where computers are taking us.

Or you can just skip them. The moral reasoning in these books rarely rises above a freshman-level ethics class, and the metaphysical analysis is more like a late-night bull session in the dorm after those freshmen have had a few beers: But, like, Turing said that if you can't tell if you're talking to a computer, then it's a mind, you know? Each of these authors is smart, for certain values of the word smart, especially Oliveira, Dormehl, and Weigend. But even the professional writers among them have a prose that clatters, connecting thoughts like train cars being slammed together. And they all have the kind of intelligence that imagines it can fly because it is so completely ungrounded.

I gave up on Harari, the anti-religion activist, around the point he informed his readers that the name Eve derives from the Hebrew word for snake and thus, you know, Judaism is basically nothing more than a harvest-festival cult. I gave up on Yonck after he insisted that proof of the coming of emotional machines is found in the fact that cavemen had tools before they had language. I gave up on Finn once he found himself incapable of explaining the agency, the final causation, that he ascribes to bits of computer code as he speaks of what algorithms want. In truth, these books are far more interesting in general than they are in particular, and the bulk of them suggests far more compelling thoughts than any one of them manages on its own.

Although the authors tend toward the happy-happy end of futurism (soon we will live like George Jetson!), they begin in outrage. It's outrageous that our bones break and our cells fail. It's outrageous that we have such flimsy bodies. It's especially outrageous that we die. The indignation here is metaphysical, a fury at the human condition, and it has its root down in Francis Bacon's modernity-defining claim that science is born in rejection of the world as unchangeable.

Unfortunately, the new futurists' panangelicum is not Bacon's seventeenth-century New Atlantis, much less Thomas More's sixteenth-century Utopia. Instead of plowing ahead on the path that early modern thinkers pointed out, seeking to ameliorate the shocks that flesh is heir to, the new generations of computer-enamored writers seem to have taken a detour and found themselves looping back to recreate, all unknowingly, the old hatred of the material world taught by the gnostics of late antiquity. If it's outrageous that our bodies fail us, then we should try to eliminate the body. If it's outrageous that we die, then we must become immortal. If it's outrageous that human existence is so sloppy and fragile, then the human parts of us will simply have to go.

So let us become computer programs, you and I. Let us upload our consciousness into the cloud. Let us turn insubstantial, immaterial. Let us be pure spirit, just as the old gnostics wanted. What could possibly go wrong? And it is not just self-improvement that is involved here. Soon robots will be human, fully self-conscious and aware. So we must computerize ourselves in self-defense.

Part of these writers' ungroundedness is their inability to believe that rational thinkers could possibly disagree. Back in 1624, John Donne suggested that "affliction is a treasure, and scarce any man hath enough of it." It's not enough that the new futurists imagine Donne is mistaken. For these modern gnostics, especially the religion-hating futurist Yuval Noah Harari, people like Donne must be either idiots or hypocrites. Only rank stupidity or evil motives could produce a thought so manifestly wrong.

And thus, human sympathy soon follows the human condition down the drain. Richard Yonck, for example, begins with love for the promise of emotional machines, and he ends by insisting that those who are bothered by the idea of robot sex are the exact equivalent of the racist opponents of miscegenation. Luke Dormehl starts with great optimism about humans in the cyber future. "Barring some catastrophic risk," he writes, artificial intelligence "will represent an overall net positive for humanity when it comes to employment." But by the conclusion of Thinking Machines, he suggests that the intellectual advantages of neural nets will compel us to cede them rights, giving them our jobs and forcing us to upload ourselves into computer code.

The other worrisome part of these books is their certainty that the gnostic transformation will happen soon. Years ago, teaching logic to young engineers, I had a student who insisted he could simply take the time to keep following an infinite regress. When I suggested that, if nothing else, death convinces us of our finitude, he had an answer. "I'm not going to die," he explained, "because by the time I get old enough to die, medical science is going to have cured whatever it is that I was going to die of."

I think about that student from time to time, wondering what happened to him when he learned about mortality. The new futurists are all older than my student was, but even in their adulthood they seem to share his sophomoric conviction that never-endingness lies just around the corner. Yuval Noah Harari is already an angry man, but what will the ebullient Richard Yonck do, what rage will possess him, when he discovers that he is born to die? How will Luke Dormehl and Ed Finn take the news? For them that think death's honesty / Won't fall upon them naturally, / Life sometimes must get lonely.

We seem to have some weakness that lures us to think fundamental change is barreling down upon us. As it happens, the utopians and dystopians do share one thing in common: For centuries now, neither group has been much more successful at predicting the future than the gypsy lady who reads palms down on 18th Street. But still we imagine that this time, it's going to be different. This time, the world will change.

The current futurists tend toward happy visions of the world to come, but along the way to their utopias they take our susceptibility for the new and divert it to the old, old belief that there's something ugly and vile, something outrageous, about life in a fragile material body. Why should the new gnostics differ much from the old? Each of them longs to be an animal, a tree, a stone, an angel, a machine, anything but a human being.

Is it possible to fly spaceships with our minds? – The Independent

Computers and brains already talk to each other daily in high-tech labs and they do it better and better. For example, disabled people can now learn to govern robotic limbs by the sheer power of their mind. The hope is that we may one day be able to operate spaceships with our thoughts, upload our brains to computers and, ultimately, create cyborgs.

Now Elon Musk is joining the race. The CEO of Tesla and SpaceX has acquired Neuralink, a company aiming to establish a direct link between the mind and the computer. Musk has already shown how expensive space technology can be run as a private enterprise. But just how feasible is his latest endeavour?

Neurotechnology was born in the 1970s when Jacques Vidal proposed that electroencephalography (EEG), which tracks and records brain-wave patterns via sensors placed on the scalp (electrodes), could be used to create systems that allow people to control external devices directly with their mind. The idea was to use computer algorithms to transform the recorded EEG signals into commands. Since then, interest in the idea has been growing rapidly.
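To make that idea concrete, here is a minimal Python sketch of the kind of pipeline Vidal envisioned: band-pass filter the raw EEG, reduce it to a simple feature, and map that feature to a command. The alpha band, the sampling rate, the threshold, and the command names are all illustrative assumptions for this sketch, not any particular lab's protocol; real systems train a classifier per user rather than using a fixed threshold.

import numpy as np
from scipy.signal import butter, filtfilt

FS = 250  # sampling rate in Hz (an assumption; common for research EEG rigs)

def bandpass(signal, low=8.0, high=13.0, fs=FS, order=4):
    # Isolate the 8-13 Hz alpha rhythm, one of the easiest brain
    # signals to detect with scalp electrodes.
    b, a = butter(order, [low, high], btype="band", fs=fs)
    return filtfilt(b, a, signal)

def decode_command(window, threshold=0.2):
    # Map one window of raw EEG samples to a command: strong alpha power
    # (e.g. eyes closed and relaxed) decodes as "SELECT", otherwise "IDLE".
    # The threshold is arbitrary here; a deployed BCI would learn it.
    alpha = bandpass(np.asarray(window, dtype=float))
    power = float(np.mean(alpha ** 2))
    return "SELECT" if power > threshold else "IDLE"

# Demo on synthetic data standing in for an electrode recording:
# a 10 Hz oscillation buried in noise should decode as "SELECT".
t = np.arange(0, 2.0, 1.0 / FS)
fake_eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)
print(decode_command(fake_eeg))

The hard part, as the rest of the article notes, is that real scalp recordings are noisy and idiosyncratic, which is why decades of work separate a toy like this from a practical interface.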

Indeed, these brain-computer interfaces have driven a revolution in the area of assistive technologies, letting people with quadriplegia feed themselves and even walk again. In the past few years, major investments in brain research from the US (the BRAIN initiative) and the EU (the Human Brain project) have further advanced research on them. This has pushed applications of this technology into the area of human augmentation: using the technology to improve our cognition and other abilities.

The combination of humans and technology could be more powerful than artificial intelligence. For example, when we make decisions based on a combination of perception and reasoning, neurotechnologies could be used to augment our perception. This could help us in situations such as seeing a very blurry image from a security camera and having to decide whether to intervene.

Despite investments, the transition from using the technology in research labs to everyday life is still slow. The EEG hardware is totally safe for the user, but records very noisy signals. Also, research labs have been mainly focused on using it to understand the brain and to propose innovative applications without any follow-up in commercial products. Other very promising initiatives, such as using commercial EEG systems to let people drive a car with their thoughts, have remained isolated.

To try to overcome some of these limitations, several major companies have recently announced investments in research into brain-computer interfaces. Bryan Johnson from human intelligence company Kernel recently acquired the MIT spin-off firm KRS, which is promising to make a data-driven revolution in understanding neurodegenerative diseases. Facebook is hiring a brain-computer interface engineer to work in its secretive hardware division, Building 8.

Musk's company is the latest. Its neural lace technology involves implanting electrodes in the brain to measure signals. This would allow recording neural signals of much better quality than EEG, but it requires surgery. The project is still quite mysterious, although Musk has promised more details about it soon. Last year he stated that brain-computer interfaces are needed to confirm humans' supremacy over artificial intelligence.

The project might seem ambitious, considering the limits of current technology. BCI spellers, which allow people to spell out words by looking at letters on a screen, are still much slower than traditional communication means, which Musk has already described as "incredibly slow." Similar speed limitations apply when using the brain to control a video game.

What we really need to make the technology reliable are more accurate, non-invasive techniques to measure brain activity. We also need to improve our understanding of brain processes and how to decode them. Indeed, the idea of uploading or downloading our thoughts to or from a computer is simply impossible with our current knowledge of the human brain. Many processes related to memory are still not understood by neuroscientists. The most optimistic forecasts say it will be at least 20 years before brain-computer interfaces become technologies that we use in our daily lives.

But that doesn't make Musk's initiative useless. The neural lace could initially be used to study the brain's mechanisms and treat disorders such as epilepsy or major depression. Together with electrodes for reading brain activity, we could also implant electrodes for stimulating the brain, making it possible to detect and halt epileptic seizures.

Brain-computer interfaces also face major ethical issues, especially those based on sensors surgically implanted in the brain. Most people are unlikely to want to have brain surgery, or be fit for it, unless it is vital for their health. This could significantly limit the number of potential users of Musk's neural lace. Kernel's original idea when acquiring the company KRS was also to implant electrodes in people's brains, but the company changed its plans six months later due to difficulties related to invasive technologies.

It's easy for billionaires like Musk to be optimistic about the development of brain-computer interfaces. But, rather than dismissing them, let's remember that these visions are nevertheless crucial. They push the boundaries and help researchers set long-term goals.

There's every reason to be optimistic. Neurotechnology started only a few years after man first set foot on the moon, perhaps reflecting the need for a new big challenge after such a giant leap for mankind. And brain-computer interfaces were indeed pure science fiction at the time.

In 1965, the Sunday comic strip Our New Age stated: "By 2016, man's intelligence and intellect will be able to be increased by drugs and by linking human brains directly to computers!"

We are not there yet, but together we can win the challenge.

Davide Valeriani is a post-doctoral researcher in Brain-Computer Interfaces at the University of Essex

This article was originally published on The Conversation (theconversation.com). Read the original article.

Ghosts and Shells: Is Transhumanism Cartesian? – National Catholic Register (blog)

Blogs | Apr. 2, 2017

Do transhumanists believe in the soul, or in materialistic reductionism? Or could it be both at the same time?

The Cartesian idea of the spirit or soul as a disembodied presence merely using or occupying a body, rather than the two being integrally connected, is a cardinal principle in transhumanism, the ultimate goal of which is to transcend the limitations of corporeal existence through technology.

So I wrote in my recent review of the transhumanist fantasy Ghost in the Shell, starring Scarlett Johansson. In the combox a longtime reader who goes by Pachyderminator challenged this:

Modern transhumanists tend to hold a scientific materialist worldview, which is often concerned specifically to refute Cartesian dualism and replace it with physical reductionism, which holds that any system can in principle be modeled without loss solely with reference to its lowest-level parts.

This is quite true of many (not all) transhumanists, a point I would have noted myself in a piece on transhumanism. Since I didn't, I thank Pachyderminator for highlighting this point.

This is precisely what makes it so odd that, juxtaposed with this penchant for reductionistic materialism, transhumanist imagination also embraces, at least in its more quasi-religious or existential forms, a Cartesian notion of the self as not bound or defined by the material reality supporting the self: a ghost in a shell, as the Japanese franchise, unambiguously an expression of transhumanist imagination, proposes.

The reductionist side of transhumanist thought lies in the notion that the mind, and more fundamentally the self, comprises a system that can be fully replicated, thus becoming equivalent to the original system.

The Cartesian side of transhumanist thought lies in the aspirational hope that replicating the mind and uploading one's memories, thought patterns, etc. can preserve one's identity or self: that the me currently residing in my body can be transferred into a completely different form, and this too will be me, continuous with the me I have always been.

Only last week this fantasy was given imaginative expression in an article on transhumanism in the Guardian:

You are lying on an operating table, fully conscious, but rendered otherwise insensible, otherwise incapable of movement. A humanoid machine appears at your side, bowing to its task with ceremonial formality. With a brisk sequence of motions, the machine removes a large panel of bone from the rear of your cranium, before carefully laying its fingers, fine and delicate as a spider's legs, on the viscid surface of your brain. You may be experiencing some misgivings about the procedure at this point. Put them aside, if you can.

You're in pretty deep with this thing; there's no backing out now. With their high-resolution microscopic receptors, the machine fingers scan the chemical structure of your brain, transferring the data to a powerful computer on the other side of the operating table. They are sinking further into your cerebral matter now, these fingers, scanning deeper and deeper layers of neurons, building a three-dimensional map of their endlessly complex interrelations, all the while creating code to model this activity in the computer's hardware. As the work proceeds, another mechanical appendage, less delicate, less careful, removes the scanned material to a biological waste container for later disposal. This is material you will no longer be needing.

At some point, you become aware that you are no longer present in your body. You observe, with sadness, or horror, or detached curiosity, the diminishing spasms of that body on the operating table, the last useless convulsions of a discontinued meat.

The animal life is over now. The machine life has begun.

You see how this is imagined to work? The piece posits continuity of consciousness (a first-person experience of self, addressed here in the second person) between the you that submits to the operation and the you that at some point become[s] aware that you now exist in another form, leaving behind only discontinued meat. Pure Cartesian imagination.

Crucially, bolstering this mental sleight of hand, the scanning and the consciousness of one's self in the new form are imagined to be simultaneous with a process of destroying what is scanned. If we were to adjust the imaginative scenario so that the scanning process is conceived as non-invasive and non-destructive, you would still have the (imagined) phenomenon of a conscious awareness in a new form, but you would also continue to be conscious and aware in your own body.

This alteration reveals that the consciousness we imagine in the machine is in fact a copy of the consciousness in our minds; if I can continue to exist as me in my own body, side by side with the version of me imagined to be in the computer, then I have not escaped or transcended death at all. In this scenario, I would continue to exist in my body for my natural lifespan and then die like anyone else, and the copy of me in the computer would be like a clone with implanted memories, a new self or consciousness based on me, but not me.

As an aside, Christopher Nolan's The Prestige explores these implications (in a non-transhumanist cultural context) with his customary ruthlessness. To enjoy Star Trek, on the other hand, we are obliged to ignore the reality that if a viable transporter were ever invented, it wouldn't really transport a person from one place to another; it would kill the original person and create a copy in another location. (The Next Generation comes perilously close to admitting this in the episode where Commander Riker is inadvertently duplicated in a transporter accident, with one version stranded on a deserted planet for years and another version going on to a successful Starfleet career.)

To be sure, there are hard-headed transhumanists who will admit all this, at least in principle. The frankest will admit that, on their own reductionist principles, the notion of a continuous self is an illusion; there is no continuous underlying reality uniting what I call me today and what called itself me yesterday or will call itself me tomorrow. In fact, there is no I or self at all; selfhood itself is a chimera.

On this model, memory fools us all. I have inherited the memories of past iterations of me, which, they say, tricks me into feeling as if or believing that some underlying, continuous reality has had all of these experiences. But this is all unreal. There is no survival of the self from death, but then there is no survival from day to day either, or even from hour to hour.

So they say. Yet they generally believe, for example, in keeping their promises, i.e., promises of which they have inherited memories, though presumably they would not feel bound by promises remembered by what they knew or believed to be false, implanted memories.

Even if they were real promises made by someone else and then copied technologically or telepathically into their minds, they would hold the original promise makers, not themselves, responsible for them. Yet on their own principles it's not obvious how the inherited memory of a promise transmitted organically differs from one transmitted from one mind to another.

For that matter, it's not clear how much sense the notion of a promise makes at all. A promise creates what we conceive as an obligation, but for whom? Not for me, for by hypothesis I don't exist at all, and certainly I won't exist at the future date when the obligation is held to apply. That will be some other iteration of me, with memories of what I have done to be sure, but the me that made those promises no longer exists, and it's far from clear why the me that inherits those memories should be obliged by them.

If artificially transmitted promises don't count, then a consciousness into which all my memories and thought patterns had been poured would be no more bound by my promises than a mind that received them via artificial or telepathic means. But that's another way of saying that the copy of me isn't really me, at least as long as they hold that I am bound by my own promises.

At any rate, such hardheaded materialistic reductionism hardly seems to comport with quasi-religious zeal for achieving immortality through mind uploading. Yet this zeal for immortality is not only often found among those who theoretically acknowledge the illusionary nature of the self, it seems to be an important motive, perhaps even the motive, driving much of the enthusiasm for the transhumanist project in all its forms, technological, biological, cyborganic, etc.

Like a ghost in a shell, a Cartesian notion of the self as an actual, intangible thing lurking inside the biological machines of our bodies, a valuable presence that can be saved from organic frailty and given digital eternal life, coexists anomalously with a reductionist-materialist view of our cerebral hardware as nothing more than the sum of its parts.

Transhumanists may or may not say out loud that we have no souls, but this doesn't stop them from hoping for the salvation of their souls in a way fundamentally convergent with believers in conventional religions. The main difference is the nature of the deity and the hoped-for eschaton.

See also Ghost in the Shell (review)

Interkosmos Is A Clever, But Harrowing, Astronaut Adventure For HTC Vive – UploadVR

I don't know about you, but I've had just about enough of space and I haven't even been there yet. I've been on one too many virtual walks around the ISS, and spent more than my fair share of time floating in zero gravity. If you're going to take me back to the dark abyss, then you'll need a really good reason to bring me there. Fortunately, Interkosmos has just that.

This upcoming HTC Vive title from indie developer Ovid Works isn't about escaping some giant, Gravity-esque set piece, nor is it about going for yet another spin around the Earth's orbit. Instead, this is a memory puzzle game of sorts that mixes a splash of comedy with a dab of simulation and then sprinkles on some arcade influences to boot. All of that, and it's set in one of the most detailed, convincing VR environments I've seen in some time. Not bad for a debut effort.

Interkosmos takes place inside a 70s-inspired Soviet re-entry capsule that's been lavishly assembled with help from the European Space Agency. Though the team's Yashar Dehaghani never uses the term simulator as we chat, it's not far from the mind as I explore a maze of buttons, switches and levers, each of which can be pressed, flicked or pulled. You'll need them at specific times, because Interkosmos wants to give you an authentic feeling of attempting to land safely on Earth, though it doesn't take itself as seriously as Apollo 13.

When you start up the game a thick Russian accent comes on the radio, barking instructions at you. Eventually your comrade will entangle himself in an argument with Americans who also gain access to the craft and try to convince you to steer your vessel towards the USA. The game has two branching paths, and Ovid wants to encourage multiple playthroughs, especially as players' first few attempts will likely result in death.

Getting home is easier said than done. For the purposes of the demo, switches I need are highlighted at the right time (I'd have been lost without them), but when the game is available for everyone to take at their own pace, players will also have a mode that expects them to memorize the entire layout of the cockpit, which I suspect is where the real fun comes from.

Even with the guidance, though, Interkosmos can be a frantic thrill. You'll need to keep tabs on your oxygen and other meters as you busy yourself with other tasks; let them fall too low and you'll die. Fill them up too much and, guess what, you'll probably die. Or just start a fire, in which case you'll very likely die a bit later. The game is rightly punishing in that regard, as it wants to push you, though I do wonder if everyone will take to the insistence on memorization very well. It could come off as a bit of a chore.

The capsule takes full advantage of VR, as you're never really sure where to look and which of the many screens is the one you should be reading. At one point I'm using a lever to steer the capsule towards Earth, while the next I'm trying to put out a small fire that's broken out because I've been neglecting other duties.

With a successful playthrough said to take around 30 minutes, I'm going to be interested to see how people take to Interkosmos' unforgiving brand of survival. I had a great time scrambling around my cockpit desperately looking for the right buttons to push, and I hope that hardcore element resonates with the VR community.

Ovid is planning to launch Interkosmos on the HTC Vive towards the end of April for approximately 4.99.

Is neuroscience rediscovering the soul? – Minnesota Public Radio News

The idea that neuroscience is rediscovering the soul is, to most scientists and philosophers, nothing short of outrageous. Of course it is not.

But the widespread, adverse, knee-jerk attitude presupposes the old-fashioned definition of the soul: the ethereal, immaterial entity that somehow encapsulates your essence. Surely, this kind of supernatural mumbo-jumbo has no place in modern science. And I agree. The Cartesian separation of body and soul, the res extensa (matter stuff) vs. res cogitans (mind stuff), has long been discarded as untenable in a strictly materialistic description of natural phenomena.

After all, how would something immaterial interact with something material without any exchange of energy? And how would something immaterial (whatever that means) somehow maintain the essence of who you are beyond your bodily existence?

So, this kind of immaterial soul really presents problems for science, although, as pointed out here recently by Adam Frank, the scientific understanding of matter is not without its challenges.

But what if we revisit the definition of soul, abandoning its canonical meaning as the "spiritual or immaterial part of a human being or animal, regarded as immortal" for something more modern? What if we consider your soul as the sum total of your neurocognitive essence, your very specific brain signature, the unique neuronal connections, synapses, and flow of neurotransmitters that makes you you?

Just as we have unique fingerprints, our brains, their "connectome," are also unique. Surely, all brains are made of the same stuff, but wired in very individual ways. Recall that our brains are plastic, and mold themselves according to environmental and emotional inputs: the stories of our lives. To this, we must add our bodies and their relation to our brains. For the mind is embodied, the self not an isolated property of what's inside your cranium but an emergent property of your whole mind-body integration as mapped through the complex highways of nerves interlocking all of you.

Consider, then, the modern soul as the unique neuronal-synaptic signature integrating brain and body through a complex electrochemical flow of neurotransmitters. Each person has one, and they are all different. That is, or can be considered, your essence from a materialist perspective.

Once we have this definition of the soul, the next question is inevitable. Can all this be reduced to information, such as to be replicated or uploaded into other-than-you substrates? That is, can we obtain sufficient information about this brain-body map so as to replicate it in other devices, be they machines or cloned biological replicas of your body? This would be, if technologically possible, the scientific equivalent of reincarnation, or of the long-sought redemption from the flesh, an idea that is at least as old as organized religions in the East and West (as Mark O'Connell remarked in his book To Be a Machine, reviewed here).

Well, depending on who you talk to, this final transcendence of human into information is either around the corner (a logical step in our evolution) or an impossibility (a mad dream of the transhumanist crowd, people who can't accept the inevitability of death).

Silicon Valley is taking very seriously the possibility that aging is a technological problem that can be hacked. For example, the website of Google's company Calico states right upfront that its mission is to tackle "aging, one of life's greatest mysteries." The company's approach is more one of prolonging life than of uploading yourself somewhere else, but in the end the key word that unites the different approaches is information. If life is a code written genetically, it can be dealt with, including the instructions for aging. Another Google company, DeepMind, is bent on cracking AI: "Solve intelligence to make the world a better place." Google is approaching the problem of death from both a genetic and a computational perspective. They clearly complement one another. Google is not alone, of course. There are many other companies working on similar projects and research. The race is on.

What to make of this? It's inevitable that science will be at the forefront of the quest to prolong or upload life. This is not a bad thing, per se, given that the knowledge this research will surely produce will open new pathways to healthier, longer lives. Accepting death is a hard pill to swallow, the hardest. As I wrote elsewhere, referring to my family in this context: "Every day I have to love them is one less day I have to love them."

However, the possibility of extending life indefinitely also raises all sorts of moral and social questions, and possibly a lot of pain and loss. The curse of the immortal is to lose everyone he loves. Unless everyone jumps in. But how reasonable is this assumption? Who will benefit from these technologies? The very wealthy? The select few that have access to them? What of the rest of society? Would we end up creating a dual species of beings, humans and transhuman demi-gods? Would there be mutual tolerance and respect? I can imagine all sorts of sci-fi scenarios unfolding, utopic and dystopic.

Meanwhile, while the quest for immortality continues, what we can do is eat well, exercise, and try to live a life of meaning, leaving the world a better place than how we found it. Or, perhaps, for some in the future, never leaving it at all.

Marcelo Gleiser is a theoretical physicist and writer and a professor of natural philosophy, physics and astronomy at Dartmouth College. He is the director of the Institute for Cross-Disciplinary Engagement at Dartmouth, co-founder of 13.7 and an active promoter of science to the general public. His latest book is The Simple Beauty of the Unexpected: A Natural Philosopher's Quest for Trout and the Meaning of Everything. You can keep up with Marcelo on Facebook and Twitter: @mgleiser

A Rancor In Cloud City: Behind VR’s Best April Fools’ Prank Yet – UploadVR

If you pulled on your Vive on April Fools' Day and booted it up, chances are you found yourself in your normal home space and didn't think much of it. However, if you're a bit of a Star Wars fan, enough to decorate your space after The Empire Strikes Back, then you may have been in for a shock.

Last year Kent Sunde created one of the better Star Wars VR tributes: a Vive home space set inside the iconic Cloud City from the series' most celebrated chapter. The space authentically recreates the wind-swept catwalk scene where a certain Dark Lord relieves a certain son of a certain hand. Thousands of tiny lights surround you, seemingly stretching on forever both above and below. You can stand on the edge of the catwalk and imagine dropping all the way to the bottom like a desperate Luke Skywalker did, or walk to the end and picture Darth Vader urging you to join him and rule the galaxy. Sunde did an excellent job of making a space that's fun to simply exist in.

But on April 1st he had other ideas.

Even knowing what was to come, I still jumped out of my skin. As the screen flickers to life you'll find a huge, monstrous set of claws just inches away from your face. It's enough to make you scramble backwards in surprise, convinced for a brief few seconds that you've fallen into a nightmare. If you dare allow yourself to turn your head to the left a little, you'll find what the hand is attached to: the Rancor from Return of the Jedi.

"I'm kind of a VR evangelist, and the idea for the Rancor had come from one of my common rants about how VR will revolutionize game playing once we get past this wave-based shooter phase," Sunde tells me when I catch up with him following the brilliantly cruel prank. As a 3D artist passionate about VR, Sunde wants to focus on two things within the medium.

The first is the sense of scale VR provides, which is actually why he'd made the Cloud City environment in the first place. He also loves experiences that really root the player in the virtual space they're standing in.

"We've all gotten that sense of scale and presence with the whale encounter [theBlu], which is the first thing I show people who haven't tried VR yet," Sunde says. "When the whale comes up to the user I quite regularly see them back up to give that whale space as it enters the player's area, and that to me is presence, and something right now that game designers really need to play with in a narrative sense."

Sunde, who now teaches modelling and texturing at Capilano University, was working on his portfolio with these thoughts in mind. He wanted to create something large and introduce it into an environment in which the user was contained. Obviously, he'd already done the legwork on one of those ideas, and 20,000 people already had it installed.

"I thought why not go dark side and play an April Fools' prank?" he says. "First, it was a fantastic excuse to fix up some of the lighting and texturing in that scene. However, I almost didn't go through with it because I thought the Rancor would be too big, and it wouldn't make sense, but after blocking him in and prototyping I thought it's a joke too and he's fitting in okay."

So he set about sculpting, posing and painting the beast all within the space of two days. By the end, he had to make some optimizations to the scene itself to fit it in there.

"I think the goals of the project were achieved by watching my students and other colleagues going into the environment," he explains. Given the fact I dare not touch the Rancor even knowing it wouldn't move, I'd say he did a pretty good job too.

April Fools' Day usually means an amusing, if throwaway, prank story or product render mock-up. Sunde, however, used VR in a brilliant way to play one of the best tricks on people I've seen in years.

Preventive medicine at the state level: Shadowing Dr. Braund – American Medical Association (blog)

As a medical student, do you ever wonder what it's like to specialize in preventive medicine? Meet Wendy E. Braund, MD, MPH, the state health officer for Wyoming and a featured physician in the AMA Wire Shadow Me Specialty Series, which offers advice directly from physicians about life in their specialties. Check out her insights to help determine whether a career in preventive medicine might be a good fit for you, and compare her responses with those of two other physicians in this specialty, Daniel Blumenthal, MD, and Robert Carr, MD, MPH.

Shadowing Dr. Braund

Specialty: General preventive medicine and public health

Practice setting: State health department

Employment type: Government

Years in practice: 10

A typical day and week in my practice:

As the state health officer for Wyoming, I have broad jurisdiction over public health events that occur in Wyoming. We get a surprising number of inquiries from residents looking for answers on things covered within the public health statute, from rodent infestations in empty lots to ownership rights on common graves. My staff and I also respond to any public health emergencies that arise, such as communicable disease outbreaks, floods and fires.

The most challenging and rewarding aspects of caring for preventive medicine patients: Everyone in Wyoming is my patient, which poses some unique challenges and opportunities. Lack of funding and inability to hire staff with formal public health training and expertise are chronic issues. Many of the public health problems we are addressing have long-term outcomes, so determining appropriate proxy measures to determine the impact of our programs and initiatives in the short term is challenging but necessary.

It is a tremendous privilege to be the state health officer and to have the opportunity to set the public health agenda for the state. Everyone within this enterprise knows they are working for the public good, which is very rewarding, especially when we see people getting healthier and living longer, better lives because of it.

Three adjectives to describe the typical preventive medicine specialist: Dedicated, resourceful and data-driven.

How my lifestyle matches, or differs from, what I had envisioned in medical school: Like most medical students, I envisioned a life of seeing individual patients, but now populations are my patients. I do much more administrative work than I envisioned, but like many other specialists, I'm on call.

Skills every physician in training should have for preventive medicine but won't be tested for on the board exam: Leadership, systems thinking and financial management. Also, if you're going to practice governmental public health, you absolutely have to be politically savvy, because getting things accomplished, particularly from the legislative perspective, requires navigating the system. You have to be able to put public health issues in terms that are understandable to decision-makers and also know which battles to choose and how to frame them.

One question physicians in training should ask themselves before pursuing this specialty: Are you OK with not seeing patients on a regular basis?

Books every medical student in preventive medicine should be reading: Anything by Abraham Verghese, MD, and Oliver Sacks, MD, as well as A Chancellor's Tale: Transforming Academic Medicine, by Ralph Snyderman, MD.

The online resource students interested in my specialty should follow: The Community Guide.

Quick insights I would give students who are considering preventive medicine: Do a rotation in preventive medicine. Also, talk with preventive medicine doctors in multiple settings. Preventive medicine physicians have very broad skill sets, including clinical preventive medicine, occupational medicine, health policy, health systems and health administration, and there is huge variability in their practices, from public health to academic medicine to clinical preventive and lifestyle medicine.

Is medical marijuana right for you? Concerns about the medicine addressed – WEAR

Interest in medical marijuana in Northwest Florida skyrocketed this week after the first dispensary opened its doors.

Dr. Michelle Beasley at the Medical Cannabis Clinic of Florida said many people have concerns about using the drug to treat their illnesses.

One of her patients is Channel 3 producer Brett Haskell.

Haskell was diagnosed with Hodgkin's Lymphoma in November and has plenty of questions about how medical marijuana could help him.

He's following up at the clinic 90 days after first asking about medical marijuana.

Under the law, he had to wait that amount of time to build a relationship with Dr. Beasley before she could prescribe the medicine for the first time.

Haskell said, "I was kind of concerned with the 90 days and my time period of getting the cancer cured and going through chemo."

He's gone through chemo and researched medical cannabis to see if it would help with the side effects of his treatment.

"They have other drugs out there that they have for me right now, which is dealing with the nausea and it gives your drowsiness," Haskell said. "There was another one I actually had an allergic reaction to."

Dr. Beasley said many people are worried about taking the medicine and going through their normal everyday lives. She said cannabis use is very patient specific.

The same dosage doesn't work for everyone.

"Some strains are more, make you sleepy. Other ones are more energizing so depending on the type of illness, the age of the patient, their exposure to cannabis in the past, that can all change how much medical cannabis I would start using," Dr. Beasley said.

There's a wide range of medical cannabis, from non-euphoric cannabidiol (CBD) to strains containing high levels of tetrahydrocannabinol (THC) that can make you high.

She said prescribing both is important because they work well together.

Dr. Beasley said, "Having CBD around you gets your own medical benefits from the CBD, but CBD actually helps keep the THC in check so patients can benefit from the medical properties of THC without having to have the side effects."

The end goal is to help patients like Haskell live better lives without pain or suffering.

"All the patients I've seen, their goal is to be more functional in their life," Dr Beasley said.

Haskell is waiting for a required registry card before he can actually go buy medical marijuana.

Dr. Beasley has over 100 patients on the registry and at least double that currently in the 90-day waiting period.
