Stryker Expands Its Interventional Spine Segment – Market Realist

Stryker's Recent Developments Strengthen Its Market Position PART 3 OF 7

Stryker (SYK) has registered strong growth in recent years. In 1Q17, Stryker reported YoY (year-over-year) growth of ~18.5%. The Neurotechnology & Spine segment contributes the least to Stryker's revenues of all the company's business segments, but it presents immense growth potential. The spine side of the segment has performed weakly in recent quarters due to supply issues in the US.

The company expects strong spine sales in 2Q17 and the rest of the year due to easing supply issues as well as new product launches in its interventional spine business and its 3D-printing and titanium platform. Medtronic (MDT), NuVasive (NUVA), and Johnson & Johnson (JNJ) are some of the other leading companies in the neurotechnology and spine market. Investors can get diversified exposure to Stryker by investing in the Vanguard Total Stock Market ETF (VTI). Stryker accounts for ~0.20% of VTI's total holdings.

On July 11, 2017, Stryker announced that it had received FDA 510(k) clearance for its MultiGen 2 RF (radiofrequency) generator. The company claims the device's launch marks the arrival of the next generation of radiofrequency ablation. The product is expected to help physicians perform radiofrequency ablation, a minimally invasive procedure for facet joint pain, more efficiently and with greater reliability and control.

Facet joint pain is pain at the joints between adjacent vertebrae that enable the spine to bend and twist. According to the National Center for Health Statistics, lower back pain affects around 70 million Americans at any given time.

The MultiGen 2 RF generator provides a customizable procedure platform based on the physician's preferences and the patient's needs. It has flexible stimulation controls and delivers twice the industry-standard power, so the target temperature is reached faster with minimal errors. Physicians can start a procedure at the push of a single button, create strip lesions without removing electrodes, and resolve errors without stopping the procedure, all of which shortens procedure times.

Next, we'll look at the company's recent product launch in the orthopedics segment.

Read the original post:

Stryker Expands Its Interventional Spine Segment - Market Realist

BRAIN center gathers to ponder future, direction – Arizona State University

July 19, 2017

For all its resiliency and creativity, the human brain is equally fragile and prone to disease. Millions around the world are affected by neurological and neurodegenerative diseases. In fact, a World Health Organization study found eight out of 10 disorders in the three highest disability classes are linked to neurological problems, a figure likely to increase, as the global elderly population is expected to double by 2050.

In response to this growing need, a new collaboration between Arizona State University, the University of Houston and industry members has formed to develop and test new neurotechnologies. Above: From left to right, Professors Jose L. Contreras-Vidal and Marco Santello pose for a photo with Deans Joseph W. Tedesco and Kyle Squires, of the University of Houston's Cullen College of Engineering and ASU's Ira A. Fulton Schools of Engineering, respectively, at Old Main on the Tempe campus, June 29. Santello and Contreras-Vidal lead the ASU and UH sites for the new National Science Foundation-funded Building Reliable Advancements in Neurotechnology, or BRAIN, an Industry-University Cooperative Research Center. Photo by Jessica Hochreiter/ASU

Building Reliable Advancements in Neurotechnology, or BRAIN, is an Industry-University Cooperative Research Center dedicated to bringing new neurotechnologies and treatments to market. The center was officially funded earlier this year with a $1.5 million grant from the National Science Foundation and has already attracted nine industry partners.

BRAIN held its first industry advisory board meeting June 29-30 on ASU's Tempe campus, bringing together stakeholders to begin charting the course of the collaboration.

"Neurodegenerative diseases are one of the biggest challenges society faces today," said Professor Marco Santello at the outset of the meeting. An aim of the center is not only to develop new devices and strategies in the realm of neurotechnology but also to validate existing ones.

Santello and Professor Jose L. Contreras-Vidal, directors of the respective ASU and UH BRAIN sites, will lead the center, which includes more than 40 faculty members from ASU's Ira A. Fulton Schools of Engineering and UH's Cullen College of Engineering.

The pair defined the center's five main research areas as neurological clinical research, mobility assessment and clinical intervention, invasive neurotechnology, noninvasive neurotechnology and neurorehabilitation technology.

Santello, who also serves as the director of the School of Biological and Health Systems Engineering, said BRAIN's areas of interest are intentionally broad so the center can fully investigate all potential solutions, approaches and outcomes related to neurotechnology.

Contreras-Vidal, who also leads UH's Laboratory for Non-invasive Brain-Machine Interface Systems, noted the unique faculty resources that UH and ASU bring together, with research expertise encompassing neuroscience, invasive and noninvasive interfaces and neuromodulation, neuroimaging, rehabilitation technologies, big data and bioinformatics, as well as regulatory science, law and neuroethics.

Though a stable of researchers firmly rooted in neurology, data, device development and clinical trials is essential to BRAIN's success, the inclusion of regulatory law experts is equally important. To this end, Contreras-Vidal is leading a Research Collaborative Agreement between UH and the Food and Drug Administration.

"Brain activity measurements, such as scalp electroencephalography, have both diagnostic value in and of themselves, and also value as objective endpoints for measuring the efficacy of other medical devices. However, despite their growing importance, very little is known about the constancy and variability of these measurements in real complex settings in healthy individuals and in the patient population. Nevertheless, the efficacy and safety of EEG-based diagnostics and therapeutics depend on such scientific understanding," Contreras-Vidal said. "Thus, understanding of the population distribution of EEG-based biometrics is regulatory science that contributes to personalized medicine and to the development of better biomedical devices."

Professor Barbara Evans of UH, whose background includes engineering, earth science and law, will serve as a resource for regulatory processes, issues and strategy, noting it's sometimes necessary to think five or 10 years ahead.

"This type of work is going to take careful thought about how to address the FDA and work out regulatory solutions," said Evans, who is also the director of the Center on Biotechnology and Law at UH. "The burden of neurocognitive diseases is a pressing problem. While there are pharmaceutical solutions that have promise, there is even greater promise in terms of the research at BRAIN, and I believe we have to attack these diseases on every front. The main thing I hope to do is help translate wonderful technology to market and help people."

The nine industry partners include companies such as Medtronic, the CORE Institute, Indus Instruments and Brain Vision LLC, as well as medical institutions such as Phoenix Children's Hospital and The Institute for Rehabilitation and Research Memorial Hermann Hospital.

Eric Maas, a Medtronic representative, said his company was drawn to the immense talent pool contained within BRAIN.

"This partnership not only benefits Medtronic, but the world," Maas said. "Big companies like ours like to go after big problems, but a center like this opens up paths to solve smaller, sometimes overlooked illnesses that deserve attention."

For Dr. David Adelson, director of the Barrow Neurological Institute and chief of pediatric neurosurgery at Phoenix Children's Hospital, BRAIN has been a long time coming. Adelson has long been an advocate for bringing cutting-edge research to clinical care and has pushed for a center like BRAIN for some time.

"So much of medicine is focused on adults and not children, and so much of it is applicable to pediatric care," said Adelson, noting that traumatic brain injury is the leading cause of disability and death in children and adolescents in the U.S.

United with invested industry partners, the multifaceted, transdisciplinary research approach of ASU and UH caught the interest of the National Science Foundation as a way to address the big picture challenges of brain research.

"The technical expertise of both ASU and UH goes without saying, but both universities did well in bringing together industry members to get this center off the ground," said Dmitri Perkins, director of the NSF's IUCRC program. "Brain research is in general an area of great national interest. The NSF looks for centers with potential to deliver great impact in their areas of study as well as the possibility to work with other IUCRCs, universities and industries, and we see that here."

Visit BRAIN online for more information about the center, or contact Santello and Contreras-Vidal to discuss partnership opportunities.

See the article here:

BRAIN center gathers to ponder future, direction - Arizona State University

Preserving the Right to Cognitive Liberty – Scientific American

The idea of the human mind as the domain of absolute protection from external intrusion has persisted for centuries. Today, however, this presumption might no longer hold. Sophisticated neuroimaging machines and brain-computer interfaces detect the electrical activity of neurons, enabling us to decode and even alter the nervous system signals that accompany mental processes. Whereas these advances have great potential for research and medicine, they pose a fundamental ethical, legal and social challenge: determining whether or under what conditions it is legitimate to gain access to or interfere with another person's neural activity.

This question has special social relevance because many neurotechnologies have moved away from a medical setting and into the commercial domain. Attempts to decode mental information via imaging are also occurring in court cases, sometimes in a scientifically questionable way. For example, in 2008 a woman in India was convicted of murder and sentenced to life imprisonment on the basis of a brain scan showing, according to the judge, experiential knowledge about the crime. The potential use of neural technology as a lie detector for interrogation purposes has garnered particular attention. In spite of experts' skepticism, commercial companies are marketing the use of functional MRI- and electroencephalography-based technology to ascertain truth and falsehood. The military is also testing monitoring techniques for another reason: to use brain stimulation to increase a fighter's alertness and attention.

Brain-reading technology can be seen as just another unavoidable trend that erodes a bit more of our personal space in the digital world. But given the sanctity of our mental privacy, we might not be so willing to accept this intrusion. People could, in fact, look at this technology as something that requires the reconceptualization of basic human rights and even the creation of neurospecific rights.

Lawyers are already talking about a right to cognitive liberty. It would entitle people to make free and competent decisions regarding the use of technology that can affect their thoughts. A right to mental privacy would protect individuals against unconsented-to intrusion by third parties into their brain data, as well as against the unauthorized collection of those data. Breaches of privacy at the neural level could be more dangerous than conventional ones because they can bypass the level of conscious reasoning, leaving us without protections from having our mind read involuntarily. This risk applies not only to predatory marketing studies or to courts using such technology excessively but also to applications that would affect general consumers. This last category is growing. Recently Facebook unveiled a plan to create a speech-to-text interface to translate thoughts directly from brain to computer. Similar attempts are being made by companies such as Samsung and Netflix. In the future, brain control could replace the keyboard and speech recognition as the primary way to interact with computers.

If brain-scanning tools become ubiquitous, novel possibilities for misuse will arise, cybersecurity breaches included. Medical devices connected to the brain are vulnerable to sabotage, and neuroscientists at the University of Oxford suggest that the same vulnerability applies to brain implants, leading to the possibility of a phenomenon called brainjacking. Such potential for misuse might prompt us to reconceptualize the right to mental integrity, already recognized as a fundamental human right to mental health. This new understanding would not only protect people from being denied access to treatment for mental illness but would also protect all of us from harmful manipulations of our neural activity through the misuse of technology.

Finally, a right to psychological continuity might preserve people's mental life from external alteration by third parties. The same kind of brain interventions being explored to reduce the need for sleep in the military could be adapted to make soldiers more belligerent or fearless. Neurotechnology brings benefits, but to minimize unintended risks, we need an open debate involving neuroscientists, legal experts, ethicists and general citizens.

See the original post here:

Preserving the Right to Cognitive Liberty - Scientific American

Biomimetic Underwater Robot Program

We are developing neurotechnology based on the neurophysiology and behavior of animal models. We have developed two classes of biomimetic autonomous underwater vehicles (see above). The first is an 8-legged ambulatory vehicle, based on the lobster, intended for autonomous remote-sensing operations in rivers and/or the littoral-zone ocean bottom, with robust adaptations to irregular bottom contours, current and surge. The second is an undulatory vehicle, based on the lamprey, intended for remote-sensing operations in the water column, with robust depth/altitude control and high maneuverability. Both vehicles share a common biomimetic control, actuator and sensor architecture that features highly modularized components and low cost per vehicle. Operating in concert, they can conduct autonomous investigation of both the bottom and the water column of the littoral zone or rivers. These systems represent a new class of autonomous underwater vehicles that may be adapted to operations in a variety of habitats.
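
As an illustration of the undulatory control described above, here is a minimal sketch of a lamprey-style swimming controller modeled as a chain of coupled phase oscillators, a common central-pattern-generator abstraction. The segment count, frequency, coupling strength and amplitude are assumptions for illustration; this is not the program's actual control code.

```python
import numpy as np

# Minimal sketch of a lamprey-style undulatory controller, modeled as a chain
# of coupled phase oscillators. Illustrative assumption, not the lab's code.

N_SEGMENTS = 10          # body segments along the undulatory vehicle (assumed)
FREQ_HZ = 1.0            # swimming frequency (assumed)
PHASE_LAG = 2 * np.pi / N_SEGMENTS   # constant lag yields a traveling wave
COUPLING = 4.0           # coupling strength between neighboring segments
DT = 0.01

def step(phases):
    """Advance the oscillator chain one time step (Kuramoto-style coupling)."""
    dphi = np.full(N_SEGMENTS, 2 * np.pi * FREQ_HZ)
    for i in range(N_SEGMENTS):
        if i > 0:
            dphi[i] += COUPLING * np.sin(phases[i - 1] - phases[i] + PHASE_LAG)
        if i < N_SEGMENTS - 1:
            dphi[i] += COUPLING * np.sin(phases[i + 1] - phases[i] - PHASE_LAG)
    return phases + DT * dphi

phases = np.zeros(N_SEGMENTS)
for _ in range(1000):
    phases = step(phases)

# Joint (actuator) commands: a traveling sinusoidal wave down the body.
joint_angles = 0.3 * np.sin(phases)      # amplitude in radians (assumed)
print(np.round(joint_angles, 3))
```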

We are collaborating with investigators at the University of California, the University of Alabama and Newcastle University to apply principles of synthetic biology to the integration of a hybrid microbot. The aim of this research is to construct Cyberplasm, a micro-scale robot integrating microelectronics with cells in which sensor and actuator genes have been inserted and expressed. This will be accomplished using a combination of cellular device integration, advanced microelectronics and biomimicry, an approach that mimics animal models; in this case we will imitate some of the behavior of the sea lamprey, a marine animal. Synthetic muscle will generate undulatory movements to propel the robot through the water. Synthetic sensors derived from yeast cells will report signals from the immediate environment. These signals will be processed by an electronic nervous system. The electronic brain will, in turn, generate signals to drive the muscle cells, which will use glucose for energy. All electronic components will be powered by a microbial fuel cell integrated into the robot body.

This research aims to harness the power of synthetic biology at the cellular level by integrating specific gene parts into bacteria, yeast and mammalian cells to carry out device-like functions. Moreover, this approach will allow the cells and bacteria to be simplified so that the input/output (I/O) requirements of device integration can be addressed. In particular, we plan to use visual receptors to couple electronics to both sensation and actuation through light signals. In addition, synthetic biology will be carried out at the systems level by interfacing multiple cellular/bacterial devices together and connecting them to an electronic brain, in effect creating a multi-cellular biohybrid micro-robot. Motile function will be achieved by engineering muscle cells to have the minimal cellular machinery required for excitation/contraction coupling and contractile function. The muscle will be powered by mitochondrial conversion of glucose to ATP, the energy currency of biological cells, hence combining power generation with actuation.

We are also developing neuronal circuit based controllers for both robots and neurorehabilitative devices. These controllers are based on


Continued here:

Biomimetic Underwater Robot Program

Comparing Uroplasty (UPI) and Stryker Corporation (NYSE:SYK) – The Cerbat Gem


The Company offers a range of medical technologies, including orthopedic, medical and surgical, and neurotechnology and spine products. The Company's segments include Orthopaedics; MedSurg; Neurotechnology and Spine, and Corporate and Other.

See the article here:

Comparing Uroplasty (UPI) and Stryker Corporation (NYSE:SYK) - The Cerbat Gem

Insider Activity Stryker Corporation (NYSE:SYK) – Highlight Press


Here is the rundown on market activity for Stryker Corporation (NYSE:SYK). David Floyd, Group President, Orthopaedics, sold 19,305 shares at an average price of $144.56 on Wednesday the 12th; Floyd now owns $1,238,590 of stock, per an SEC filing yesterday. Floyd also sold $1,068,031 worth of shares at an average price of $144.70 on Monday the 5th, bringing his holdings to $4,033,223 as reported to the SEC.

Timothy J. Scannell, Group President, sold $1,810,327 worth of shares at an average price of $135.89 on May 2nd. That brings Scannell's holdings to $15,576,119 as recorded in a recent Form 4 SEC filing.

Stryker Corporation (Stryker), launched on February 20, 1946, is a medical technology company. The Company offers a range of medical technologies, including orthopedic, medical and surgical, and neurotechnology and spine products. The Company's segments include Orthopaedics; MedSurg; Neurotechnology and Spine, and Corporate and Other. The Orthopaedics segment includes reconstructive (hip and knee) and trauma implant systems and other related products. The MedSurg segment consists of instruments, endoscopy, medical and sustainability products. The Neurotechnology and Spine segment includes neurovascular products, spinal implant systems and other related products.

These funds have also shifted positions in SYK. Eqis Capital Management, Inc. added to its stake by buying 1,575 shares, an increase of 10.3%, as of 06/30/2017; it now owns 16,877 shares valued at $2,342,000, and the value of the position overall is up 16.3%. As of the end of the quarter, Old National Bancorp had sold a total of 241 shares, trimming its holdings by 4.9%. The value of its investment in SYK went from $647,000 to $649,000, an increase of 0.3% quarter over quarter.

As of quarter end, Lejeune Puetz Investment Counsel LLC had disposed of 240 shares, trimming its position by 6.1%. The value of its investment in SYK decreased from $516,000 to $511,000, a change of 1.0% quarter to quarter. As of the end of the quarter, Central Trust Co had sold a total of 200 shares, trimming its stake by 5.6%. The value of its investment in Stryker Corporation decreased from $467,000 to $465,000, a change of $2,000 since the last quarter.

Cantor Fitzgerald added SYK to its research portfolio with a rating of Neutral. On May 16, analysts at Goldman Sachs started covering SYK, giving it an initial rating of Neutral.

On December 15, the stock was upgraded to Buy in a statement from UBS. On November 1, the company was upgraded from Underperform to Market Perform in a report from BMO Capital.

SunTrust Robinson Humphrey issued its first research report on the stock, setting a rating of Buy. On June 9, 2016, Guggenheim Securities initiated coverage on SYK with an initial rating of Buy.

The company is so far trading up from yesterday's close of $143.20. Additionally, Stryker Corporation announced a dividend that will be paid on Monday, July 31, 2017. The dividend payment will be $0.425 per share for the quarter, or $1.70 on an annualized basis. This represents a yield of about 1.18%, the annualized dividend as a percentage of the current share price. The ex-dividend date will be Wednesday, June 28, 2017.

Shares of the company are trading at $145.40, above the 50-day moving average of $140.23 and well above the 200-day moving average of $130.49. The 50-day moving average went up by 3.68% and the 200-day average was up $14.91.

The company currently has a P/E ratio of 32.67, and its market capitalization is $54.35 billion. In the latest earnings report, EPS was $4.45 and is projected to be $6.43 for the current year, with 373,765,000 shares outstanding. Next quarter's EPS is forecast to be $1.52, with next year's EPS anticipated to be $7.05.
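
For readers who want to check how these headline figures relate, here is a quick back-of-the-envelope calculation using the article's own numbers (a sketch only; small differences come from rounding and from which closing price is used):

```python
# Sanity check of the figures quoted above, using the article's own inputs.

price = 145.40                 # current share price
trailing_eps = 4.45            # EPS from the latest earnings report
quarterly_dividend = 0.425     # declared quarterly dividend per share
shares_outstanding = 373_765_000

pe_ratio = price / trailing_eps
annual_dividend = 4 * quarterly_dividend
dividend_yield = annual_dividend / price * 100     # as a percentage
market_cap = price * shares_outstanding

print(f"P/E ratio:        {pe_ratio:.2f}")          # ~32.67
print(f"Annual dividend:  ${annual_dividend:.2f}")  # $1.70
print(f"Dividend yield:   {dividend_yield:.2f}%")   # ~1.17%
print(f"Market cap:       ${market_cap / 1e9:.2f}B")  # ~$54.35B
```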

Link:

Insider Activity Stryker Corporation (NYSE:SYK) - Highlight Press

Infinitely Flexible 3D Printing with Ultrasonic Manipulation? – ENGINEERING.com

3D printing is an exciting technology in its own right, but, as it works today, it is normally used to fabricate individual components and not functional objects. At most, hundreds of parts in an assembly can be consolidated into a single 3D-printed item, but that item still cannot function on its own.

Progress is being made to change additive manufacturing (AM) technology into something even more powerful, however. In the future, it may be possible to fabricate complete functional objects in a single manufacturing process. Think of it: your smartphone could be produced in one piece in one automatic process.

One company has demonstrated a possible route to that ideal future. Using a unique ultrasonic technique, Neurotechnology, based out of Lithuania, may be able to 3D print a wide variety of objects, including circuits. ENGINEERING.com spoke to Osvaldas Putkis, research engineer and project lead for the company's Ultrasound Research Group, to learn more.

Neurotechnology is focused on developing algorithms and software for biometric applications, such as fingerprint, face, eye and voice recognition. Since launching its first fingerprint identification system in 1991, Neurotechnology has begun exploring other technologies, beginning research into artificial intelligence (AI), computer vision and autonomous robotics in 2004.

"While Neurotechnology's core business is in the fields of biometry, computer vision and AI, it is always looking for opportunities to research and develop new technologies that sometimes can be outside the main company's focus," Putkis said. "Ultrasonic manipulation seemed an exciting research area with an unused potential and, with the hiring of key personnel who have expertise in ultrasound, an Ultrasound Research Group was created three years ago."

Ultrasonic manipulation? No, it's not a sleazy method for picking up strangers at a bar from the dirt bags that brought you those pickup artist guides. It involves using ultrasonic waves to grab and move objects.

A rendering of Neurotechnology's ultrasonic manipulation technique. (Image courtesy of Neurotechnology/YouTube.)

Typically, according to Putkis, most of the research and development in ultrasonic manipulation has been dedicated to liquid media, for cell sorting, cell patterning and single-cell manipulation. Applied research on manipulation in air, Putkis said, concentrates on container-less processing and analysis of chemical substances by levitating the samples.

After establishing the Ultrasound Research Group in 2014, the company developed a working prototype, finally releasing footage of its ultrasonic manipulation technique this past June. The process uses a computer with computer vision and an array of ultrasonic transducers, each of which can be controlled individually to grab, move and rotate components by changing the ultrasonic waves they emit.

In the demonstration video embedded above, the system has been set up to position and solder electronic components on a printed circuit board (PCB). Soldering is performed using an onboard laser that fuses the pieces onto the PCB, and is guided by the vision system. Altogether, there is no physical contact made with the objects being moved and soldered, opening up a number of possibilities.

Neurotechnology's ultrasonic manipulation prototype 3D printer. (Image courtesy of Neurotechnology/YouTube.)

"Ultrasonic manipulation can handle a very large range of different materials, including metals, plastics and even liquids," Putkis said."Not only can it manipulate material particles, it can also handle components of various shapes. Other noncontact methods, like the ones based on magnetic or electrostatic forces, can't offer such versatility."

This range of material manipulation, not seen with other technologies like magnetic or electrostatic techniques, means that the technology can print with elements that have a variety of shapes and mechanical properties. This includes liquids, such as conductive ink, and solids, like electronic components. These elements can range from a couple of millimeters in size to submillimeter particles. And ultrasonic manipulation can do this without causing any damage to the elements or introducing electrostatic forces into the process.

Ultrasonic manipulation can control a wide variety of substances, shapes and sizes. (Images courtesy of Neurotechnology/YouTube.)

By altering the ultrasonic profile of the process, the precision of object movement and placement can become highly refined. With ultrasonic waves of 40 kHz, it's possible to attain accuracies within tens of microns. Higher frequencies result in even more precise movement.

Putkis explained that there may be weight restrictions with the ultrasonic transducers, but that this may not always be the case when the density of the elements is taken into consideration. "[Pa]rticle dimensions should be in a sub-wavelength region of the ultrasonic waves used," Putkis said. "In terms of weight, it is usually the density of the material that is the determining factor. You will need to create very similar pressure amplitude in order to levitate a 1-millimeter diameter or a 2-millimeter diameter plastic sphere. While the gravity force is bigger for a larger sphere, a larger sphere also has a larger surface area, increasing pressure force respectively. With our semisphere levitator shown in the video, we can levitate materials as dense as solder metal (approx. 8,000 kg/m3)."
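
To put the sub-wavelength constraint in perspective, a rough calculation is shown below. It assumes sound travels at about 343 m/s in room-temperature air, an assumption not stated in the article.

```python
# Rough numbers behind the "sub-wavelength" constraint mentioned above.

SPEED_OF_SOUND_AIR = 343.0      # m/s, assumed room-temperature value
frequency = 40_000.0            # 40 kHz transducers, as cited in the article

wavelength = SPEED_OF_SOUND_AIR / frequency
print(f"Wavelength at 40 kHz: {wavelength * 1000:.2f} mm")   # ~8.6 mm

# Particles should sit in the sub-wavelength regime, so the millimeter-scale
# spheres in Putkis's example comfortably qualify.
for diameter_mm in (0.5, 1.0, 2.0):
    ok = diameter_mm / 1000 < wavelength
    print(f"{diameter_mm} mm sphere: {'sub-wavelength' if ok else 'too large'}")
```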

The technology is also already fairly automated. The camera is capable of determining the PCB's position and orientation, making it possible to know where a component should be positioned. The circuits used in the company's demonstration are not overly complex and do not have many elements. Therefore, the trajectories can easily be calculated, according to Putkis.

Neurotechnology has already filed a patent for the technology and is continuing to develop its capabilities. At the moment, the system can only assemble simple electronics, so the Ultrasound Research Group intends to expand the platform.

"[O]ur plans now are to develop and demonstrate capabilities of the technology to print/deposit other materials or components," Putkis explained. "As our main expertise is in ultrasound, we are willing to cooperate with companies from the 3D printing industry in order to incorporate the technology in 3D printing systems."

"If we are successful in adding the capability of printing plastics and improving the current prototype for electronic assembly, it would already be a powerful printer that can print some of the electronic devices," Putkis added. "Another application could be to use ultrasonic manipulation just for component handling and integrate it to existing printing technologies of plastics or metal, in this way also creating a more universal printer."

To make the platform as flexible as possible, Putkis noted one specific challenge. "The biggest challenges are finding methods for dispensing and soldering material and components that can work for a wide range of different components and materials in order to make full use of the handling versatility of ultrasonic manipulation," he said.

It would be interesting to see Neurotechnology partner with 3D printing companies already focused on electronics 3D printing. Two immediately come to mind: Voxel8 and Nano Dimension. Voxel8 has developed a fused deposition modeling desktop 3D printer that is capable of printing plastic parts with conductive silver ink traces, making it possible to manually embed electronic components to create functional objects. Nano Dimension, in contrast, relies on an inkjet printhead and photocurable resin to produce PCBs.

In both cases, electronic components must be manually inserted. It's not impossible to imagine incorporating an array of ultrasonic transducers into either platform in order to automatically move the components throughout the printbed as the fabrication process is taking place.

Facebook also recently scooped up a company, Nascent Objects, that was using EnvisionTEC's digital light processing technology to 3D print functional electronic goods. Although we haven't heard from the company in some time, the acquisition is an indicator that this field is a potentially highly valuable one. We may still be years away from being able to 3D print a complete cell phone in a single printing process, but even the steps along the way will be exciting ones, as Putkis's research shows.

To learn more about Neurotechnology, visit the company website.

Read the original:

Infinitely Flexible 3D Printing with Ultrasonic Manipulation? - ENGINEERING.com

DARPA invests further in neurotechnology – SD Times – SDTimes.com

The Defense Advanced Research Projects Agency (DARPA) wants to expand neurotechnology capabilities and create a high-resolution neural interface. The agency announced it is awarding contracts to five research organizations and one company as part of its Neural Engineering System Design (NESD) program.

DARPA announced NESD in January of 2016. The program was created to provide a connection between the brain and digital world.

"DARPA has invested hundreds of millions of dollars transitioning neuroscience into neurotechnology with a series of cumulatively more advanced research programs that expand the frontiers of what is possible in this enormously difficult domain. We've laid the groundwork for a future in which advanced brain interface technologies will transform how people live and work, and the agency will continue to operate at the forward edge of this space to understand how national security might be affected as new players and even more powerful technologies emerge," said Justin Sanchez, director of DARPA's Biological Technologies Office.

The contracts will go to: Brown University; Columbia University; Fondation Voir et Entendre (The Seeing and Hearing Foundation); John B. Pierce Laboratory; Paradromics, Inc.; and the University of California, Berkeley.

The organizations will form teams dedicated to creating working systems that support sensory restoration. According to the agency, four of the teams will focus on vision while two will focus on hearing and speech.

"Significant technical challenges lie ahead, but the teams we assembled have formulated feasible plans to deliver coordinated breakthroughs across a range of disciplines and integrate those efforts into end-to-end systems," said Phillip Alvelda, the founding NESD program manager.

The program's first year will focus on breakthroughs in hardware, software, and neuroscience. The second phase of the program will look into properly testing newly developed devices. "Achieving the program's ambitious goals and ensuring that the envisioned devices will have the potential to be practical outside of a research setting will require integrated breakthroughs across numerous disciplines including neuroscience, synthetic biology, low-power electronics, photonics, medical device packaging and manufacturing, systems engineering, and clinical testing," according to NESD's website.

See the original post here:

DARPA invests further in neurotechnology - SD Times - SDTimes.com

HIRREM Neurotechnology Better Than Placebo for Insomnia – Sleep Review

A clinical trial has found that HIRREM [high-resolution, relational, resonance-based, electroencephalic mirroring] closed-loop neurotechnology is more effective than placebo at reducing symptoms of insomnia and has additional benefits for heart rate and blood pressure regulation. Findings were presented in Boston at SLEEP 2017.

Developed by Brain State Technologies (BST), HIRREM is a noninvasive acoustic stimulation neurotechnology that applies software algorithms for real-time analysis of critical brain frequencies. The algorithms guide production of changing sequences of audible tones, which support brain oscillations to re-organize toward more optimal patterns of symmetry and frequency ratios.
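
To make the closed-loop idea concrete, here is a toy sketch of how a dominant EEG frequency could be mapped to an audible tone in real time. This is a conceptual illustration only; the sampling rate, mapping factor and analysis method are assumptions and this is not Brain State Technologies' proprietary HIRREM algorithm.

```python
import numpy as np

# Toy closed-loop mapping: dominant EEG frequency -> audible tone.
# Conceptual sketch only, not the HIRREM algorithm.

SAMPLE_RATE = 256        # EEG sampling rate in Hz (assumed)
TONE_SCALE = 20          # brain-frequency Hz to audible Hz (assumed factor)

def dominant_frequency(eeg_window):
    """Return the strongest frequency component of one EEG window."""
    spectrum = np.abs(np.fft.rfft(eeg_window))
    freqs = np.fft.rfftfreq(len(eeg_window), d=1.0 / SAMPLE_RATE)
    return freqs[1:][np.argmax(spectrum[1:])]   # skip the DC component

def tone_for(eeg_window):
    """Choose an audible tone frequency mirroring the dominant EEG rhythm."""
    return TONE_SCALE * dominant_frequency(eeg_window)

# Example: a noisy 10 Hz (alpha-band) signal maps to a ~200 Hz tone.
t = np.arange(0, 2, 1.0 / SAMPLE_RATE)
eeg = np.sin(2 * np.pi * 10 * t) + 0.3 * np.random.randn(len(t))
print(f"Dominant EEG frequency: {dominant_frequency(eeg):.1f} Hz")
print(f"Audible tone frequency: {tone_for(eeg):.1f} Hz")
```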

The 3-year study enrolled 107 adults with insomnia and randomly assigned them to receive 10 sessions of either HIRREM or a placebo intervention, which consisted of tones produced by a random generator. Subjects were blinded to their group assignment, and they received equal levels of social interaction during the 2-week treatment period. The trial was conducted at Wake Forest School of Medicine, Department of Neurology (Winston-Salem, North Carolina), by Charles Tegeler, MD.

At the predetermined endpoint 2 months after their sessions, those who received HIRREM reported significantly greater reduction in insomnia symptoms than those who received placebo. Moreover, the HIRREM group showed marked improvements in heart rate variability and baroreflex sensitivity, whereas the placebo group showed no physiological changes. Ninety-four percent of the enrolled subjects completed all sessions and follow-up visits as scheduled, and there were no adverse events in either group.

Lee Gerdes, founder and CEO of BST, says in a release, "We are thrilled that our noninvasive strategy showed highly practical benefits, in an easily tolerable way without side effects, for a problem that affects up to half the US population." He further notes that Brain State Technologies is continuing innovations on HIRREM and other products for well-being, above and beyond the methodology evaluated in this study.

According to Sung Lee, MD, MSc, director of research at BST, "The brain is the organ of central command. This study shows that HIRREM benefits sleep, and also helps the brain to fine-tune its regulation of heart rate and blood pressure in response to changing stress levels." He says closed-loop neural interventions such as HIRREM have the advantage of precision guidance based on real-time physiological dynamics, in contrast to reliance on symptom changes or clinical assessments.

Continue reading here:

HIRREM Neurotechnology Better Than Placebo for Insomnia - Sleep Review

Mind-blowing ultrasonic ‘printer’ uses lasers and high-frequency sound to assemble electronics – Digital Trends


Why it matters to you

Ultrasonic assembly device would change what we think of as a 3D printer -- and make additive manufacturing far more versatile in the process.

Neurotechnology, a Lithuanian software development company, wants to rethink 3D printing using ultrasonic particle manipulation. That might sound pretty far-out and futuristic but with that goal in mind, the company has developed a radically new kind of printer, capable of printing just about anything you can imagine.

According to its creators, this technology could enable even something as complex as a smartphone to be 3D printed using a single machine: right from the outer casing to the printed electronic circuit boards that make it run. As well as your standard metals and plastics, it can also manipulate liquids with precision.

"The apparatus uses an array of ultrasonic transducers that emit ultrasonic waves," lead researcher Osvaldas Putkis told Digital Trends. "By having individual control of each transducer, it is possible to create desired pressure profiles that can trap, rotate and move particles and components without touch. The non-contact nature of ultrasonic manipulation offers a few important advantages when compared to mechanical handling. It can handle a wide range of materials having very different mechanical properties, from plastics and metals down to even liquids. It can [also] handle sensitive materials and small components, avoiding the parasitic electrostatic forces."
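
For a sense of what "individual control of each transducer" involves, here is a minimal sketch of the classic phased-array calculation: choosing a phase offset per transducer so that all emitted waves arrive in phase at a chosen focal point. The array geometry and parameters are illustrative assumptions, not Neurotechnology's design.

```python
import numpy as np

# Per-transducer phase delays that focus a 40 kHz array at a target point.
# Illustrative geometry and parameters only.

SPEED_OF_SOUND = 343.0          # m/s in air (assumed)
FREQUENCY = 40_000.0            # 40 kHz transducers
WAVELENGTH = SPEED_OF_SOUND / FREQUENCY

# An 8 x 8 flat array with half-wavelength pitch, centered at the origin.
pitch = WAVELENGTH / 2
coords = np.array([(x, y, 0.0)
                   for x in (np.arange(8) - 3.5) * pitch
                   for y in (np.arange(8) - 3.5) * pitch])

def focus_phases(target):
    """Phase offset per transducer so all waves arrive in phase at target."""
    distances = np.linalg.norm(coords - np.asarray(target), axis=1)
    # Negative propagation phase, wrapped into [0, 2*pi).
    return (-2 * np.pi * distances / WAVELENGTH) % (2 * np.pi)

phases = focus_phases((0.0, 0.0, 0.05))   # focus 5 cm above the array center
print(np.round(phases[:8], 2))
```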

The physics behind the machine are pretty darn complex. However, if it works as well as the demo seen in the above video, you should be able to manipulate a wide range of particles in such a way that your created object forms together like the reassembling liquid-metal T-1000 from Terminator 2. The company claims that its accuracy in moving objects is in the range of just a few microns.

At present, Putkis says his team has developed an early prototype, capable of assembling simple electronic circuits on a printed circuit board. To do this, it employs non-contact ultrasonic manipulation technology for positioning of the different electronic components, as well as a laser to solder them in place. To coordinate the process, calibrate the laser, and detect the various components, it uses an on-board camera.

"At this stage it is very hard to say when such a printer will be available as an end-user product as there still needs a lot of research and development to be done," Putkis said. "We are seeking partnerships that could potentially help speed up the developments and application of this printing method."

In other words, it could be a bit of a wait until you're printing off the new iPhone at home, rather than queuing to pick one up from your local Apple store. If Neurotechnology's research pays off, though, this could be a serious game-changer even in an industry that's bursting at the seams with high-quality 3D printers.

See the rest here:

Mind-blowing ultrasonic 'printer' uses lasers and high-frequency sound to assemble electronics - Digital Trends

Neurotechnology Develops 3D Printing Method with Non-Contact Ultrasonic Manipulation Technology – 3DPrint.com

If you've ever had the feeling that everything you touch turns to, ah, the opposite of gold, a newly developed 3D printing technology emerging from Lithuania might just be the one for you. The Ultrasound Research Group at Neurotechnology has announced a new 3D printing method using ultrasonic manipulation technology that's totally hands-off. It's not just human hands either; the new method has totally non-contact tech behind it, allowing for the manipulation of parts and particles, down to the submillimeter range, without causing damage to sensitive components.

"Ultrasonic manipulation can handle a very large range of different materials, including metals, plastics and even liquids. Not only can it manipulate material particles, it can also handle components of various shapes. Other non-contact methods, like the ones based on magnetic or electrostatic forces, can't offer such versatility," explained research engineer Dr. Osvaldas Putkis, project lead for Neurotechnology's Ultrasound Research Group.

Neurotechnology is a Vilnius-based company, founded in 1990 under the name Neurotechnologija, that released its first technology, a fingerprint identification system, in 1991 and has been developing and updating new technologies since, releasing more than 130 products and version upgrades throughout its history, including work with 3D modeling. The company's Ultrasound Research Group began work on ultrasonic 3D printing products in 2014 and today announced its new technology, which according to the company is set to enable 3D printing and assembly of almost any type of object using a wide range of different materials and components.

Dr. Osvaldas Putkis with the prototype 3D printer

If you're thinking that it sounds like a good idea to bring sound into 3D printing, you (and Neurotechnology) are not alone; Fabrisonic incorporates sound waves into its patented metal 3D printing process, welding layers together via Ultrasonic Additive Manufacturing (UAM) in a hybrid subtractive/additive manufacturing process. Sound waves have additionally been incorporated into more artistic endeavors, as Dutch artists brought vibrations into 3D printed clay creations and 3D printing came into play with work in acoustic manipulation.

Because the work from the Ultrasound Research Group represents a new technological application, Neurotechnology has filed a patent on their system. Neurotechnology describes ultrasonic manipulation as a non-contact material handling method which uses ultrasonic waves to trap and move small particles and components.

The company has shared a video to demonstrate the hands-off capabilities allowed for via ultrasonic manipulation, as their prototype printer can assemble a simple printed circuit board (PCB):

Ultrasonic transducers are arranged in this demonstration in an array used to position electronic components in the creation of a PCB, utilizing a camera to detect accurate positioning. Continuing on with the hands-off theme, a laser solders the PCB components after their non-contact manipulation into placement.

The prototype 3D printer

Important components of the system as described include the ultrasonic array, the camera and the soldering laser.

Curious about what Neurotechnology is working on? We are, too, and we'll be hearing directly from the company with additional details and insights into their new ultrasonic-based 3D printing technology soon.

We do know now that the company's 3D printing apparatus and method of ultrasonic manipulation are patent pending, and that Neurotechnology is looking to collaborate with interested companies toward furthering the development of and applications for the new technology.

3D printing and PCB manufacture are increasingly coming together, as advanced technologies benefit the creation of devices in electronics, including via 3D printed workstations for PCBs. The 3D printer we hear about most often in conjunction with PCBs is of course the DragonFly 2020 from Nano Dimension, which creates, not just assembles, PCBs, but it is by no means the only 3D printing player in the electronics space, as others are also looking to change things up and offer additional options in this growing application. As Neurotechnology notes that its method works with all kinds of materials, we can expect to see additional applications beyond PCB assembly, and we look forward to sharing more details soon regarding the development and capabilities of this as-yet-unnamed 3D printing technology.

Read the original post:

Neurotechnology Develops 3D Printing Method with Non-Contact Ultrasonic Manipulation Technology - 3DPrint.com

Neurotechnology makes a number of updates to the MegaMatcher product line – Biometric Update

June 22, 2017 –

Neurotechnology has announced the availability of MegaMatcher 10, the latest update to the MegaMatcher multi-biometric product line.

MegaMatcher 10 provides a number of significant updates across the MegaMatcher line, which includes MegaMatcher SDK, a multi-biometric SDK for large-scale systems; the MegaMatcher Accelerator biometric matching engine; and the MegaMatcher ABIS turnkey solution. Each biometric modality can be used alone or in any combination to meet the needs of small and large-scale biometric identification projects.
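
As a generic illustration of using modalities "alone or in any combination," here is a minimal weighted-sum score-fusion sketch. It is a textbook approach with assumed weights and threshold, not MegaMatcher's actual API or algorithm.

```python
# Generic multi-biometric score fusion; weights and threshold are assumptions.

FUSION_WEIGHTS = {"fingerprint": 0.4, "face": 0.3, "iris": 0.2, "voice": 0.1}
ACCEPT_THRESHOLD = 0.7   # assumed decision threshold on the fused score

def fuse(scores: dict[str, float]) -> float:
    """Weighted-sum fusion over whichever modalities are present (scores in [0, 1])."""
    used = {m: w for m, w in FUSION_WEIGHTS.items() if m in scores}
    total_weight = sum(used.values())
    return sum(scores[m] * w for m, w in used.items()) / total_weight

def decide(scores: dict[str, float]) -> bool:
    return fuse(scores) >= ACCEPT_THRESHOLD

# A fingerprint-plus-face check; iris and voice are simply omitted.
print(decide({"fingerprint": 0.92, "face": 0.81}))   # True
print(decide({"fingerprint": 0.40, "face": 0.55}))   # False
```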

"This version provides increased accuracy in multiple biometric modalities, as confirmed by third-party independent tests," explained Dr. Justas Kranauskas, R&D manager for Neurotechnology. "Together with the fastest biometric engine algorithms and great standards support, these updates enable our clients to create better products in every respect."

The MegaMatcher 10 update also includes a new version of the MegaMatcher Automated Biometric Identification System (ABIS).

Earlier this month Neurotechnology added a new Extreme edition to its MegaMatcher Accelerator solution.


Read the original post:

Neurotechnology makes a number of updates to the MegaMatcher product line - Biometric Update

Neurotechnology adds face recognition, tracking to video surveillance systems; researchers win competition – Biometric Update

June 19, 2017 –

Neurotechnology has released SentiVeillance Server, a ready-to-use solution that integrates with surveillance video management systems (VMS).

SentiVeillance Server is based on the company's deep neural network technology for facial recognition from surveillance camera video, giving a VMS advanced capabilities, including the ability to quickly and accurately recognize faces in video streams and trigger analytical event notifications whenever the system detects an authorized, unauthorized or unknown individual.

The new capabilities significantly improve the workflow of VMS operators, who can quickly respond to evolving situations, easily view video of past events and filter them by gender, age or person ID.
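
Conceptually, the authorized/unauthorized/unknown decision described above is a watchlist match on face embeddings. Here is a generic sketch of that idea; the embedding size, similarity metric and threshold are assumptions, not SentiVeillance internals.

```python
import numpy as np

# Generic watchlist matching on face embeddings; parameters are assumptions.

EMBEDDING_DIM = 128
MATCH_THRESHOLD = 0.6          # cosine-similarity threshold (assumed)

authorized = {                  # person ID -> enrolled embedding
    "employee_42": np.random.randn(EMBEDDING_DIM),
}

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def classify(face_embedding):
    """Return (label, best score) for one detected face."""
    best_id, best_score = None, -1.0
    for person_id, enrolled in authorized.items():
        score = cosine(face_embedding, enrolled)
        if score > best_score:
            best_id, best_score = person_id, score
    if best_score >= MATCH_THRESHOLD:
        return best_id, best_score
    return "unknown", best_score      # would trigger an alert notification

label, score = classify(np.random.randn(EMBEDDING_DIM))
print(label, round(score, 3))
```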

"SentiVeillance Server enables advanced analytics in many video management systems where it was too complex or too expensive before," said Aurimas Juska, Neurotechnology software development team lead. "Users can benefit from an enhanced surveillance system with only a small amount of configuration and no need for programming."

The solution supports a range of video management systems including Milestone XProtect VMS and Luxriot Evo, Evo S and Evo Global.

SentiVeillance Server can process in real time up to 10 video streams from multiple video management systems.

The solution is equipped with Neurotechnology's latest deep neural-network-based facial detection and recognition algorithm, which greatly improves identification accuracy and speed.

The technology is included in other Neurotechnology products including the VeriLook and MegaMatcher software development kits (SDK), which have millions of deployments worldwide.

In addition, the SentiVeillance SDK allows developers to create solutions using facial identification and object recognition from surveillance video.

In a separate announcement, Neurotechnology revealed that the company's deep neural network researchers won first place in a Kaggle competition that sought AI solutions for fisheries monitoring.

For their winning solution in The Nature Conservancy Fisheries Monitoring competition, the team of researchers won a first place prize of $50,000.

The team beat out the competing 2,292 submitted algorithms for the identification of fish and other marine species from video streams. The algorithms were evaluated based on an unseen test set that mimicked a real-life scenario.

Illegal, unreported and unregulated fishing practices are degrading marine ecosystems, global seafood supplies and local livelihoods, according to The Nature Conservancy.

The Neurotechnology employees, who entered the competition independently under the name Towards Robust-Optimal Learning of Learning, used advanced deep neural networks to solve this issue.

The Fisheries Monitoring competition was one of the biggest competitions for Kaggle, a learning, sharing and development site for data, code, research and process.

"This was one of the first Kaggle competitions that was comprised of two stages, which means that models developed during the first stage were frozen and evaluated on unseen data that was made available during the second stage," said Gediminas Peksys from the Towards Robust-Optimal Learning of Learning team. "In such a setting, it is very easy for a team's models to overfit the data by using too many trainable parameters. We were able to utilize our team's experience using deep neural networks to come up with a robust model that performed a lot closer to the original estimate from stage one and generalized in a predictable manner on unseen data."

Previously reported, Neurotechnology added a new Extreme edition to its MegaMatcher Accelerator line of multi-biometric identification solutions for national-scale projects.


View post:

Neurotechnology adds face recognition, tracking to video surveillance systems; researchers win competition - Biometric Update

Neurotechnology Announces MegaMatcher 10 – findBIOMETRICS

Posted on June 21, 2017

Lithuania-based Neurotechnology has announced a new upgrade to its MegaMatcher multimodal biometric platform.

MegaMatcher 10 offers several improvements over its previous iteration. The company says its fingerprint algorithms offer enhanced accuracy on lower-quality images and improved interoperability with other vendors' technology; its face scanning offers better age detection; and its iris scanning has been improved to enable the capture of eyes from various angles and in the visible light spectrum. Voice recognition also now offers greater accuracy.

Other improvements include updated standards support, such as for ICAO; new liveness detection capabilities for Android applications; and a new version of the Automated Biometric Identification System with various improvements.

The update arrives hot on the heels of the launch of SentiVeillance Server, Neurotechnology's new facial recognition solution for video surveillance, and very soon after last month's announcement of a new version of its MegaMatcher Accelerator large-scale biometric matching system.

June 21, 2017 by Alex Perala

See original here:

Neurotechnology Announces MegaMatcher 10 - findBIOMETRICS

SentiVeillance Server – Face Recognition and Analytics to Video Management Systems – Officer.com (press release) (registration) (blog)

SentiVeillance Server is a ready-to-use solution that integrates with surveillance video management systems (VMS). Based on the company's deep neural network technology for facial recognition from surveillance camera video, SentiVeillance Server enhances VMS with advanced capabilities, such as the ability to quickly and accurately recognize faces in video streams and trigger analytical event notifications whenever an authorized, unauthorized or unknown person is detected. This greatly improves the workflow of VMS operators, allowing them to quickly react to changing situations and to easily view video of past events and filter them by gender, age or person ID.

"SentiVeillance Server enables advanced analytics in many video management systems where it was too complex or too expensive before," said Aurimas Juska, Neurotechnology software development team lead. "Users can benefit from an enhanced surveillance system with only a small amount of configuration and no need for programming."

SentiVeillance Server supports most popular video management systems: Milestone XProtect VMS and Luxriot Evo, Evo S and Evo Global. SentiVeillance Server can process up to 10 video streams from multiple video management systems, all in real time.

SentiVeillance Server includes Neurotechnology's latest deep neural-network-based facial detection and recognition algorithm, which significantly improves identification accuracy and speed. The algorithm is based on more than 13 years of development and research and has been tested in the NIST Face Recognition Vendor Test (FRVT) Ongoing. It is also included in other Neurotechnology products, such as the VeriLook and MegaMatcher software development kits (SDK), which have millions of deployments worldwide.

Neurotechnology also offers the SentiVeillance SDK for development of solutions using facial identification and object recognition from surveillance video.

SentiVeillance Server and the SDKs noted above are all available through Neurotechnology or from distributors worldwide.

For more information and a trial version, go to: www.neurotechnology.com.

Read this article:

SentiVeillance Server - Face Recognition and Analytics to Video Management Systems - Officer.com (press release) (registration) (blog)

Neurotechnology Announces SentiVeillance Server Facial Recognition Solution – findBIOMETRICS

Posted on June 19, 2017

Neurotechnology has announced SentiVeillance Server, a new facial recognition solution designed for easy deployment on video surveillance systems.

It's compatible with the video management systems Evo Global, Evo S, Luxriot Evo, and Milestone XProtect VMS, enabling users to quickly identify faces in video streams and to configure automatic alert notifications when certain faces or unknown faces are spotted. It also enables users to filter video by the age, gender, or face of individuals in the feed.

In a statement announcing the solution, Neurotechnology head of software development Aurimas Juska said SentiVeillance Server offers "an enhanced surveillance system with only a small amount of configuration and no need for programming."

In keeping with Neurotechnology's recently upgraded SentiVeillance SDK, the new solution allows for up to ten different video feeds to be scanned simultaneously. A trial version is available now from Neurotechnology and the company's distributors.

June 19, 2017 by Alex Perala

Link:

Neurotechnology Announces SentiVeillance Server Facial Recognition Solution - findBIOMETRICS

New SentiVeillance Server from Neurotechnology Adds Face Recognition and Analytics to Video Management Systems – PR Newswire (press release)

SentiVeillance Server supports most popular video management systems: Milestone XProtect VMS and Luxriot Evo, Evo S and Evo Global. SentiVeillance Server can process up to 10 video streams from multiple video management systems, all in real time.

SentiVeillance Server includes Neurotechnology's latest deep neural-network-based facial detection and recognition algorithm which significantly improves identification accuracy and speed. The algorithm is based on more than 13 years of development and research and has been tested in the NIST Face Recognition Vendor Test (FRVT) Ongoing. It is also included in other Neurotechnology products, such as the VeriLook and MegaMatcher software development kits (SDK), which have millions of deployments worldwide.

Neurotechnology also offers the SentiVeillance SDK for development of solutions using facial identification and object recognition from surveillance video.

SentiVeillance Server and the SDKs noted above are all available through Neurotechnology or from distributors worldwide. For more information and trial version, go to: http://www.neurotechnology.com.

About Neurotechnology

Neurotechnology is a developer of high-precision algorithms and software based on deep neural network (DNN) and other AI-related technologies. The company offers a range of products for biometric fingerprint, face, iris, palmprint and voice identification as well as AI, computer vision, object recognition and robotics. Drawing from years of academic research in the fields of neuroinformatics, image processing and pattern recognition, Neurotechnology was founded in 1990 in Vilnius, Lithuania and released its first fingerprint identification system in 1991. Since that time the company has released more than 130 products and version upgrades. More than 3000 system integrators, security companies and hardware providers integrate Neurotechnology's algorithms into their products, with millions of customer installations worldwide. Neurotechnology's algorithms also achieved top results in independent technology evaluations including NIST MINEX and IREX.

Media Contact: Jennifer Allen Newton, Bluehouse Consulting Group, Inc., +1-503-805-7540, jennifer(at)bluehousecg(dot)com

To view the original version on PR Newswire, visit: http://www.prnewswire.com/news-releases/new-sentiveillance-server-from-neurotechnology-adds-face-recognition-and-analytics-to-video-management-systems-300475097.html

SOURCE Neurotechnology

http://www.neurotechnology.com

See more here:

New SentiVeillance Server from Neurotechnology Adds Face Recognition and Analytics to Video Management Systems - PR Newswire (press release)

The Funded: Justin Kan’s latest startup gets backing from more than 100 investors – Silicon Valley Business Journal


Rythm, San Francisco, $22 million: The neurotechnology company raised money from investors that include MAIF and angel investors Xavier Niel and Dr. Laurent Alexandre. Culture Amp, San Francisco, $20 million: Sapphire Ventures led the Series C ...

Read more:

The Funded: Justin Kan's latest startup gets backing from more than 100 investors - Silicon Valley Business Journal

Accuray (ARAY) versus Stryker Corporation (SYK) Head-To-Head Review – The Cerbat Gem


The Company offers a range of medical technologies, including orthopedic, medical and surgical, and neurotechnology and spine products. The Company's segments include Orthopaedics; MedSurg; Neurotechnology and Spine; and Corporate and Other.

Read more:

Accuray (ARAY) versus Stryker Corporation (SYK) Head-To-Head Review - The Cerbat Gem

Helping or Hacking? Engineers, Ethicists Must Work Together on Brain-Computer Interface Technology – Government Technology

In the 1995 film Batman Forever, the Riddler used 3-D television to secretly access viewers' most personal thoughts in his hunt for Batman's true identity. By 2011, the metrics company Nielsen had acquired Neurofocus and had created a consumer neuroscience division that uses integrated conscious and unconscious data to track customer decision-making habits. What was once a nefarious scheme in a Hollywood blockbuster seems poised to become a reality.

Recent announcements by Elon Musk and Facebook about brain-computer interface (BCI) technology are just the latest headlines in an ongoing science-fiction-becomes-reality story.

BCIs use brain signals to control objects in the outside world. They're a potentially world-changing innovation: imagine being paralyzed but able to reach for something with a prosthetic arm just by thinking about it. But the revolutionary technology also raises concerns. Here at the University of Washington's Center for Sensorimotor Neural Engineering (CSNE), we and our colleagues are researching BCI technology, and a crucial part of that work involves issues such as neuroethics and neural security. Ethicists and engineers are working together to understand and quantify risks and develop ways to protect the public now.

All BCI technology relies on being able to collect information from a brain that a device can then use or act on in some way. There are numerous places from which signals can be recorded, as well as infinite ways the data can be analyzed, so there are many possibilities for how a BCI can be used.

Some BCI researchers zero in on one particular kind of regularly occurring brain signal that alerts us to important changes in our environment. Neuroscientists call these signals event-related potentials. In the lab, they help us identify a reaction to a stimulus.

Examples of event-related potentials (ERPs), electrical signals produced by the brain in response to a stimulus. Tamara Bonaci, CC BY-ND

In particular, we capitalize on one of these specific signals, called the P300. It's a positive peak of electricity that occurs toward the back of the head about 300 milliseconds after the stimulus is shown. The P300 alerts the rest of your brain to an oddball that stands out from the rest of what's around you.

For example, you don't stop and stare at each person's face when you're searching for your friend at the park. Instead, if we were recording your brain signals as you scanned the crowd, there would be a detectable P300 response when you saw someone who could be your friend. The P300 carries an unconscious message alerting you to something important that deserves attention. These signals are part of a still unknown brain pathway that aids in detection and in focusing attention.

P300s reliably occur any time you notice something rare or disjointed, like when you find the shirt you were looking for in your closet or your car in a parking lot. Researchers can use the P300 in an experimental setting to determine what is important or relevant to you. That's led to the creation of devices like spellers that allow paralyzed individuals to type using their thoughts, one character at a time.
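As a rough illustration of how a P300 is usually isolated, the short Python sketch below (synthetic data, with an assumed 250 Hz sampling rate) averages a set of time-locked EEG epochs so that random noise cancels, then looks for the positive peak in a window roughly 250-400 milliseconds after the stimulus.

# Minimal P300 sketch on synthetic data -- not a production analysis pipeline.
import numpy as np

FS = 250                        # assumed sampling rate in Hz
N_EPOCHS, N_SAMPLES = 40, 200   # 40 trials, 800 ms per epoch

rng = np.random.default_rng(0)
t = np.arange(N_SAMPLES) / FS                               # time axis, seconds
p300 = 5e-6 * np.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2))   # bump near 300 ms
epochs = p300 + 20e-6 * rng.standard_normal((N_EPOCHS, N_SAMPLES))

erp = epochs.mean(axis=0)            # averaging suppresses uncorrelated noise
window = (t >= 0.25) & (t <= 0.40)   # search window for the P300
peak_latency_ms = 1000 * t[window][np.argmax(erp[window])]
print(f"Peak at {peak_latency_ms:.0f} ms, amplitude {erp[window].max():.2e} V")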

It can also be used to determine what you know, in what's called a guilty knowledge test. In the lab, subjects are asked to choose an item to steal or hide, and are then repeatedly shown images of both related and unrelated items. For instance, subjects choose between a watch and a necklace, and are then shown typical items from a jewelry box; a P300 appears when the subject is presented with an image of the item he took.
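A toy version of that comparison looks like the following, again on synthetic data: the average amplitude in the P300 window is computed for each item, and the probe (the item the subject took) stands out from the irrelevant items. The item names and noise levels are illustrative assumptions, not values from any actual study.

# Toy guilty-knowledge comparison on synthetic data.
import numpy as np

rng = np.random.default_rng(1)
FS, N_SAMPLES = 250, 200
t = np.arange(N_SAMPLES) / FS
window = (t >= 0.25) & (t <= 0.40)
p300 = 5e-6 * np.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2))

def mean_p300_amplitude(epochs):
    # Average the epochs into an ERP, then average within the P300 window.
    return epochs.mean(axis=0)[window].mean()

# 30 presentations per item; only the probe carries an added P300 component.
items = {
    "watch (probe)": p300 + 20e-6 * rng.standard_normal((30, N_SAMPLES)),
    "necklace":      20e-6 * rng.standard_normal((30, N_SAMPLES)),
    "earrings":      20e-6 * rng.standard_normal((30, N_SAMPLES)),
}
for name, epochs in items.items():
    print(f"{name:15s} mean P300-window amplitude: {mean_p300_amplitude(epochs):.2e} V")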

Everyone's P300 is unique. In order to know what they're looking for, researchers need training data: previously obtained brain signal recordings that researchers are confident contain P300s, which are then used to calibrate the system. Since the test measures an unconscious neural signal that you don't even know you have, can you fool it? Maybe, if you know that you're being probed and what the stimuli are.
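Calibration in this sense is essentially supervised learning on a subject's own labelled recordings. The sketch below (synthetic data, scikit-learn's linear discriminant analysis) trains on epochs known to contain or not contain a P300 and then labels new epochs from the same subject; real systems use more careful feature extraction and validation, so treat this only as a sketch of the idea.

# Sketch of subject-specific calibration with a simple classifier.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(2)
FS, N_SAMPLES = 250, 200
t = np.arange(N_SAMPLES) / FS
p300 = 5e-6 * np.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2))

def make_epochs(n, with_p300):
    noise = 20e-6 * rng.standard_normal((n, N_SAMPLES))
    return noise + (p300 if with_p300 else 0.0)

# Calibration data, recorded while the stimuli (and thus the labels) are known.
X_train = np.vstack([make_epochs(100, True), make_epochs(100, False)])
y_train = np.array([1] * 100 + [0] * 100)
clf = LinearDiscriminantAnalysis().fit(X_train, y_train)

# New epochs from the same subject: predict which ones contain a P300.
X_new = np.vstack([make_epochs(10, True), make_epochs(10, False)])
print("Predicted labels:", clf.predict(X_new))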

Techniques like these are still considered unreliable and unproven, and thus U.S. courts have resisted admitting P300 data as evidence.

For now, most BCI technology relies on somewhat cumbersome EEG hardware that is definitely not stealthy. Mark Stone, University of Washington, CC BY-ND

Imagine that instead of using a P300 signal to solve the mystery of a stolen item in the lab, someone used this technology to extract information about what month you were born or which bank you use, without your telling them. Our research group has collected data suggesting this is possible. Just using an individual's brain activity (specifically, their P300 response), we could determine a subject's preferences for things like favorite coffee brand or favorite sports.

But we could do it only when subject-specific training data were available. What if we could figure out someone's preferences without previous knowledge of their brain signal patterns? Without the need for training, users could simply put on a device and go, skipping the step of loading a personal training profile or spending time in calibration. Research on trained and untrained devices is the subject of continuing experiments at the University of Washington and elsewhere.

It's when the technology is able to read the mind of someone who isn't actively cooperating that ethical issues become particularly pressing. After all, we willingly trade bits of our privacy all the time when we open our mouths to have conversations or use GPS devices that allow companies to collect data about us. But in these cases we consent to sharing what's in our minds. The difference with next-generation P300 technology under development is that the protection consent gives us may be bypassed altogether.

What if it's possible to decode what you're thinking or planning without you even knowing? Will you feel violated? Will you feel a loss of control? Privacy implications may be wide-ranging. Maybe advertisers could learn your preferred brands and send you personalized ads, which may be convenient or creepy. Or maybe malicious entities could determine where you bank and your account's PIN, which would be alarming.

The potential ability to determine individuals' preferences and personal information using their own brain signals has spawned a number of difficult but pressing questions: Should we be able to keep our neural signals private? That is, should neural security be a human right? How do we adequately protect and store all the neural data being recorded for research, and soon for leisure? How do consumers know whether any protective or anonymization measures are being applied to their neural data? As of now, neural data collected for commercial uses are not subject to the same legal protections covering biomedical research or health care. Should neural data be treated differently?

Neuroethicists from the UW Philosophy department discuss issues related to neural implants. Mark Stone, University of Washington, CC BY-ND

These are the kinds of conundrums that are best addressed by neural engineers and ethicists working together. Putting ethicists in labs alongside engineers, as we have done at the CSNE, is one way to ensure that the privacy and security risks of neurotechnology, as well as other ethically important issues, are an active part of the research process instead of an afterthought. For instance, Tim Brown, an ethicist at the CSNE, is housed within a neural engineering research lab, allowing him to have daily conversations with researchers about ethical concerns. He's also easily able to interact with, and in fact interview, research subjects about their ethical concerns regarding brain research.

There are important ethical and legal lessons to be drawn about technology and privacy from other areas, such as genetics and neuromarketing. But there seems to be something important and different about reading neural data. They're more intimately connected to the mind and to who we take ourselves to be. As such, ethical issues raised by BCI demand special attention.

As we wrestle with how to address these privacy and security issues, there are two features of current P300 technology that will buy us time.

First, most commercial devices available use dry electrodes, which rely solely on skin contact to conduct electrical signals. This technology is prone to a low signal-to-noise ratio, meaning that we can extract only relatively basic forms of information from users. The brain signals we record are known to be highly variable (even for the same person) due to things like electrode movement and the constantly changing nature of brain signals themselves. Second, electrodes are not always in ideal locations to record.
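The signal-to-noise point can be made numerically: averaging N epochs shrinks the residual noise only by roughly the square root of N, which is why pulling information out of a single noisy epoch from a consumer headset is far harder than from a long, cooperative lab session. A small synthetic check (illustrative amplitudes only):

# How averaging improves an assumed 5-microvolt signal buried in 20-microvolt noise.
import numpy as np

rng = np.random.default_rng(3)
signal_amp, noise_amp = 5e-6, 20e-6

for n_trials in (1, 10, 100):
    # Residual noise left after averaging n_trials epochs of pure noise.
    residual = (noise_amp * rng.standard_normal((n_trials, 200))).mean(axis=0)
    print(f"{n_trials:4d} trials -> approximate SNR {signal_amp / residual.std():.1f}")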

Altogether, this inherent lack of reliability means that BCI devices are not nearly as ubiquitous today as they may be in the future. As electrode hardware and signal processing continue to improve, it will become easier to use devices like these continuously, and also easier to extract personal information from an unknowing individual. The safest advice would be not to use these devices at all.

The goal should be that ethical standards and the technology mature together to ensure future BCI users are confident their privacy is being protected as they use these kinds of devices. It's a rare opportunity for scientists, engineers, ethicists and eventually regulators to work together to create even better products than were originally dreamed of in science fiction.

Eran Klein, Adjunct Assistant Professor of Neurology at Oregon Health & Science University and Affiliate Assistant Professor of Philosophy, University of Washington, and Katherine Pratt, Ph.D. Student in Electrical Engineering, University of Washington

This article was originally published on The Conversation. Read the original article.

See original here:

Helping or Hacking? Engineers, Ethicists Must Work Together on Brain-Computer Interface Technology - Government Technology